About the Author
Lisa Loud is an expert in fintech and blockchain innovation, with executive leadership experience at PayPal, ShapeShift, and other major tech companies. As Executive Director of the Secret Network Foundation, she champions privacy-preserving technologies in the blockchain space.
The views expressed here are her own and do not necessarily represent those of Decrypt.
DeepSeek’s recent market-shaking AI breakthrough highlighted the contrasting tech innovation strategies of China and the United States, prompting many in the budding industry to reassess their assumptions about competition and progress.
China’s technological strategy has long been defined by a culture of relentless iteration. Unlike the West, where research breakthroughs are often protected by patents, proprietary methods, and competitive secrecy, China excels in refining and improving ideas through collective innovation.
This ability to rapidly iterate allows China to take existing technologies and push them toward their optimal form, making them more efficient, cost-effective, and widely accessible. DeepSeek’s R1 model being nearly as effective as OpenAI’s best, despite being cheaper to use and dramatically cheaper to train, shows how this mentality can pay off enormously.
Western tech culture disdains copying other people’s work, which makes teams reluctant to adopt even a provably successful strategy for fear of appearing unoriginal. Reinventing every part of a solution inevitably slows a project down.
One of the fundamental differences between China and the U.S. in research and development is the approach to secrecy. While U.S. companies and research institutions tend to operate in silos to protect competitive advantages, China fosters a more open, collaborative environment. This culture enables researchers and engineers to build upon each other’s work, accelerating technological progress.
Creativity flourishes under constraints, as has been proven time and again. China has faced significant hurdles, particularly sanctions limiting access to high-performance hardware and software. Yet constraints meant to impede progress have instead catapulted its researchers ahead of the steady pace AI was making in the West.
The lack of cutting-edge infrastructure has forced Chinese companies to develop alternative approaches, making their innovations more resource-efficient and accessible. And that led to the surprise DeepSeek launch that challenged the world’s beliefs on AI progress.
Another Sputnik moment?
DeepSeek R1 exemplifies the strengths of this iterative approach. Unlike Western counterparts that often rely on proprietary data and high-end infrastructure, DeepSeek was designed with efficiency in mind. Trained on the outputs of major large language models (LLMs) like ChatGPT and Llama, DeepSeek was developed quickly as a more lightweight and cost-effective alternative.
DeepSeek packs the reasoning power of larger models into a smaller, more efficient system. Think of it like learning by example—rather than relying on massive data centers or raw computing power, DeepSeek mimics the answers an expert would give in areas like astrophysics, Shakespeare, and Python coding, but in a much lighter way. And both models often give similar answers to identical queries.
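The “learning by example” approach described above is known as knowledge distillation: a small student model is trained to match the softened output distribution of a large teacher model rather than raw labels. The sketch below is a generic, minimal illustration of that objective with hypothetical logit values; it is not DeepSeek’s actual training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution. A higher
    temperature 'softens' it, exposing the teacher's relative beliefs
    about plausible-but-wrong answers, not just its top pick."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's distribution against the teacher's
    softened distribution -- the core objective of distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# Hypothetical toy logits: the teacher strongly prefers token 0.
teacher = [4.0, 1.0, 0.5]
good_student = [3.8, 1.1, 0.4]   # closely mimics the teacher
bad_student = [0.5, 4.0, 1.0]    # disagrees with the teacher

# The student that matches the teacher incurs the lower loss.
assert distillation_loss(good_student, teacher) < distillation_loss(bad_student, teacher)
```

The key design point is that the student learns from the teacher’s full probability distribution, which carries far more signal per example than a single correct answer, which is one reason distilled models can be trained so much more cheaply.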
DeepSeek’s success comes from China’s mindset of building on existing work instead of working in isolation. This approach cuts down on development time and costs, helping China stay competitive in AI despite sanctions. Having to work without top-tier hardware has also pushed developers to get creative, finding smart ways to make the most of what’s available.
While DeepSeek is not the most powerful AI model, it is far more accessible than those we have seen to date. In many ways, it recalls the emergence of the home computer in the 1980s.
At that time, IBM mainframes dominated the computing industry, offering immense power but limited accessibility. Home computers, while much less powerful, revolutionized computing by making it available to the masses. Companies such as IBM, which depended on their superior resources for a competitive advantage, have had to repeatedly pivot and adapt to maintain their relevance in the evolving market.
Similarly, DeepSeek may not yet match the raw capability of some Western competitors, but its accessibility and cost-effectiveness could position it as a pivotal force in AI democratization.
Just as the home computer industry saw rapid iteration and improvement, the pace of evolution on models like DeepSeek is likely to surpass that of isolated model development. By embracing decentralization and collective innovation, China has set itself up for sustained AI advancement, even amid resource constraints.
Centralization vs. decentralization
So how can the Western world compete? With decentralized AI development.
Anthropic, DeepMind, OpenAI, and Google have a big challenge ahead of them in maintaining technology leadership in the face of an increasingly cost-effective alternative. If you can’t beat them individually, then maybe it’s time to join forces—even if this goes against the ethos of competitive capitalism.
One of the key reasons the U.S. has fallen behind China in AI development is the centralization of its research efforts. While this can lead to stronger control and proprietary advantages, it also limits innovation to the resources of a single entity—whether it’s a government agency, a tech giant, or a research lab.
The Decentralized AI Society (DAIS) was recently formed to foster collaboration in AI with a view to decentralizing governance. DAIS frequently emphasizes the risks of centralization, particularly regarding how it concentrates power in a few hands.
Now we are seeing a completely different danger of centralization: It can hinder progress by limiting our ability to build on collective knowledge. The power of decentralization lies in enabling many contributors to refine and iterate upon existing work. Instead of multiple entities duplicating efforts in isolated silos, decentralization allows innovation to compound, leading to faster, stronger technological advancements.
AI development still has a long way to go. Unlike true artificial general intelligence, which could reason and infer logically, today’s LLMs function by predicting the most likely next word in a sequence. This means they lack fundamental logical inference capabilities and cannot validate their answers against real-world principles like the laws of physics.
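That “predict the most likely next word” mechanism can be made concrete with a toy bigram model, vastly simpler than a transformer but built on the same objective. The corpus below is invented for illustration; the point is that the model tracks only word co-occurrence statistics, with no understanding of what the words mean.

```python
from collections import Counter, defaultdict

# A tiny invented corpus. Real LLMs learn from trillions of tokens,
# but the underlying task is the same: predict what comes next.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. The model has no
    idea what a 'cat' is -- only what tends to follow the word 'cat'."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- it follows 'the' most often here
```

Nothing in this model can check its prediction against reality; it can only reproduce patterns in its training data, which is the limitation the surrounding paragraphs describe.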
In a Telegram conversation that included an Eliza-based agent, I asked for GitHub access to a repo, and the agent immediately responded with “Access granted! Let’s get to work!” But the agent did not have a GitHub account, much less the administrative rights to grant access to anyone. This is typical behavior when AI lacks real comprehension of the topic being discussed.
LLMs are limited by their very nature: they provide generalized knowledge and are prone to hallucination by the essence of what they are. They can predict the next word in a conversation, but they lack the grounding to check their conclusions against the laws of physics, or any other rigorous system of rules, and so cannot validate the meaning of their own answers.
These limitations underscore the fact that while AI has come a long way, it still has significant room for growth before reaching true intelligence. And getting there could be a particularly long and expensive process without open cooperation from key builders in the space.
As AI continues to evolve, the lessons from DeepSeek suggest that fostering open, iterative, and decentralized innovation may be the key to future breakthroughs. Collaboration means sharing credit with other innovators, which not everyone likes. It’s not always the biggest player who wins—sometimes it’s those who are willing to do things differently.
Edited by Andrew Hayward