Has the Chinese government figured out the fact that GPUs ≠ AGI?
And what does it mean if they figured this out before the US?
As anticipated here yesterday, Trump’s decision to allow Nvidia to sell H200s to China hasn’t gone over well. Here, for example, is AI analyst Zvi Mowshowitz’s take:
The H200 move is not playing well on the political right, either. I just did a lengthy interview with Daniel Horowitz on The Conservative Review, on The Blaze’s Podcast Network, and the host there wasn’t thrilled with the decision. (Like me, he also worries about the overall coherence of current policy, and about the US’s large investments in AI infrastructure that may not pay off.)
As noted yesterday, the H200 move is hardly popular with either party. Same goes for the proposed preemption of states’ rights to regulate AI. Some (as noted in an update to my piece yesterday) think the H200 move is just a (costly!) effort to prop up Nvidia and hence the stock market. At best, the motivations for the sale remain unclear. But, plot twist: what if Trump licensed Nvidia to sell fancy H200s to China and … China didn’t want them?
In unexpected news just reported by the FT (as summarized below), that seems in fact to be happening to some degree.
Part of Beijing’s response is no doubt about protectionism: China wants homegrown companies like Huawei to prosper, just as Trump wants Nvidia to prosper. And maybe China has already covered its needs with black- and gray-market chips, or figured out how to make its own H200 clones. Or maybe Beijing is worried about backdoors in Nvidia chips (a claim that Nvidia denies).
But maybe, maybe, just maybe, China’s reticence to stock up on H200s is also a sign that China has realized that GPUs aren’t the royal road to AGI, and that loading up on AI infrastructure likely to depreciate rapidly might be premature: not the massive competitive or economic advantage many people once thought it was.
The first country to really appreciate all this may gain a huge competitive advantage in whatever comes next, after LLMs.
P.S. The above-mentioned Conservative Review podcast/interview was really terrific, a broad tour through the magical thinking that has afflicted the last few years, touching on investment fallacies, policy, and education. When the host asked, “What is writing with ChatGPT doing to college education?”, I explained that writing with ChatGPT amounts to outsourcing one’s education:

A friend teaches physics at a university. He said homework grades and exam grades used to be tightly correlated; now students get the homework right and fail the exam. They are taking out big loans to fake their way through school. I see PhD students struggling to use NotebookLM to write their lit reviews. Why get a PhD if you aren’t more interested in reading the literature than in learning a tool that will change in six months? Everything is a short-term transaction. We need long-term thinking.
Friends here are running Nvidia Spark boxes with DeepSeek R1 locally. They pay for the box, the software is free, and the results are 90–95% good enough, given how poorly shrink-wrapped AI delivers for its high ongoing dollar cost. This was bound to happen someday, and Nvidia was smart enough to build the sidecar appliance to do it.