What We’ve Learned from the DeepSeek AI Shock, a Year Later

One year has passed since the Chinese AI lab DeepSeek launched its first AI reasoning model, R1. Its release shocked financial markets and spurred fears that America’s AI lead might disappear. Roughly $1 trillion was wiped from U.S. public markets. Nvidia’s stock slumped 17% in one day.
Nvidia and equity markets have since rallied far higher. Investors had good reason to see through the hype: every metric suggests that U.S. firms still lead on AI chips, model quality, and sales.
The primary fear that drove the market selloff was DeepSeek’s claim that R1 required far fewer chips to train or deploy its high-level AI capabilities. But that was a distortion of reality. Costs for deploying AI were collapsing across the industry; DeepSeek’s cost reductions were one datapoint in a much broader trend.
It is true that the “reasoning” capabilities in R1 represented a paradigm shift in AI models. Before last year, most AI firms focused on pretraining, the use of vast data sets to teach models. Reasoning models showed that if models spent more time thinking about a question, they could deliver better answers, even without being trained on ever-larger clusters of chips. This raised questions about whether the scaling laws of AI, the theory that better AI requires more chips, still held. If not, American firms’ bet on vast AI datacenters would be under threat.
A year on, all the evidence suggests that better AI still requires more chips. For one thing, reasoning is itself very compute-intensive. For another, it has been layered on top of ever-larger pretraining runs. The recent releases of Google’s Gemini 3 and Anthropic’s Opus 4.5 show that pouring even more compute into training keeps delivering better capabilities. Because of this, there has been no slowdown in demand for American AI chips or for datacenter construction.
In other words, DeepSeek produced a media storm, but no changes in investment trends. U.S. hyperscalers allocated roughly $340 billion in capital expenditures for AI last year, according to JPMorgan.
DeepSeek also attracted attention by claiming it could offer its models for free because they were open source. Many people understandably worried that DeepSeek’s free product would take market share from American AI leaders such as OpenAI, Anthropic, and Google, which charge fees to use their advanced models. Soon, other Chinese firms, including Alibaba, began releasing open-source models of their own. Yet U.S. AI leaders have retained their market share, for several reasons.
Free models aren’t exactly free to use; they still require computing power to operate. The smallest models can run efficiently on a laptop, but bigger models tackling harder problems need access to expensive data centers. And the most demanding users, who still perceive a quality gap between open and closed models, are willing to pay for higher quality. Casual users of AI may not notice a difference in the quality of sonnets composed or recipes recommended. Companies relying on AI for complex workflows or for coding evidently do.
Data on model usage is sparse but suggests that closed-source U.S. models still dominate the market. A recent study from OpenRouter found that, measured per token (that is, per unit of AI output), closed-source models have retained roughly 75% market share since DeepSeek’s launch. China’s open models have mostly won share from non-Chinese open models. But even that may change soon, now that OpenAI has started releasing open models of its own. Nathan Lambert, a researcher who studies the open-source ecosystem, has calculated that OpenAI’s open model, released in August, has been downloaded more rapidly than any model before it, outstripping any Chinese competitor.
Anecdotes from Silicon Valley suggest something similar. Venture capitalist Martin Casado, of Andreessen Horowitz, has said that 70-80% of start-ups he sees use closed models. In a survey of start-ups, Menlo Ventures reached similar conclusions.
The most striking datapoint on the state of the competition comes not from Silicon Valley but from DeepSeek. On Dec. 2, the company released its V3.2 model. An accompanying paper noted a “distinct divergence” emerging.
“While the open-source community continues to make strides, the performance trajectory of closed-source proprietary models has accelerated at a significantly steeper rate,” DeepSeek explained.
What explains the divergence? Not the quality of Chinese labs’ innovations, which remains impressive, driven by DeepSeek’s highly skilled team. Instead, it was “lower training FLOPs”, that is, fewer chips, that accounted for at least a portion of the difference.
This has implications for the U.S. political debate around limits on chip sales to China, which the Trump administration eased last week. It also suggests that China’s open source AI is unlikely to overtake the big U.S. leaders in quality. A year after the DeepSeek shock, U.S. tech firms have retained their lead by every available metric.
