How China might crush the West in the race to AGI if we don’t up our regulatory game
Regulation doesn’t always stifle innovation
The conventional wisdom these days seems to be that AI regulation will stifle innovation, and that it should be avoided at all costs.
What if that conventional wisdom were wrong? What if instead the world were facing one of the great multi-layered ironies in history? Some thoughts, with premises numbered for convenience, so that people can respond with precision in the comments.
Regulation is NOT always bad for technology. Environmental regulation of cars, for example, has spurred advances in electric vehicles; rising fuel-economy standards have had a positive impact as well. A 1966 US Army restriction on fixed-wing aircraft for airlifts presumably inspired innovation in helicopters, and so on.
GPT-5 will not bring us to AGI. I see it; Yann LeCun sees it; now even Sam Altman sees it.
Getting GPT-5 first will therefore not directly bring China or the US to AGI first, per Premise 2.
Truthfulness is one of the major gaps in current systems, as evidenced by hallucinations, weird defamatory accusations etc. Another major challenge lies in forcing LLMs to consistently align with (any) set of ethical values.
Neurosymbolic AI might offer a better way of resolving these issues, since it traffics in facts and explicit reasoning, but neurosymbolic AI is out of fashion, and neither well-developed nor strongly supported by the VC world.
At present, however, there is no legal requirement in the West that AI companies resolve the truthfulness or alignment issues. Market pressures (e.g., on search) may or may not suffice.
There is a fair amount of resistance to any substantive regulation in the US, particularly in the (powerful) libertarian strands of the tech community, e.g. remarks like “the whole point of regulation is to slow innovation”. (To me, this seems like an overgeneralization from the true statement that some regulation is bad to the untrue statement that all regulation is bad.)
China’s leaders are largely free to impose whatever regulatory regime they like, and seem keen to impose strong regulatory requirements.
One of those requirements is for, putting it politely, strong alignment with Communist Party perspectives; less politely, we are talking about censorship. Bots that don’t toe the party line will be banned.
This regulation may force the Chinese tech community to solve a version of the alignment problem, perhaps initially by using tons of underpaid manual labour but ultimately by investing in neurosymbolic AI.
Investments in that problem could thus spur China to leapfrog the West in AI.
In short, the regulatory pressure of mandatory LLM censorship could induce Chinese tech companies to overtake their Western counterparts in the development of AGI; more regulation might spark more innovation, and something that is anathema to Western observers (including myself) might prove the ironic catalyst.
How might we in the West counter all that? In part by carefully choosing the right innovation-encouraging regulations of our own.
My best candidate? Placing very high standards around truth, with strong penalties for LLM-induced defamation and the wholesale spread of harmful misinformation. As Justice Gorsuch seemed to hint recently, Section 230 should not protect platforms from misinformation that their own tools generate. Regulations that spur greater accuracy might actually spur greater innovation.
If AGI is to emerge at all, I hope that it will emerge from the noble act of trying to make a better world, and not as an accidental byproduct of censorship and propaganda. If getting to the right place requires regulation, rather than just the kindness of corporations with a nominal attachment to “responsible AI”, so be it.
Let’s not leave innovation entirely to chance.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is deeply, deeply concerned about current AI but really hoping that we might do better.
Watch for his new podcast, Humans versus Machines, debuting April 25th, wherever you get your podcasts.
Thanks for reading The Road to AI We Can Trust! Subscribe for free to receive new posts.