How China might crush the West in the race to AGI if we don’t up our regulatory game
Regulation doesn’t always stifle innovation
The conventional wisdom these days seems to be that AI regulation will stifle innovation, and that it should be avoided at all costs.
What if that conventional wisdom were wrong? What if instead the world were facing one of the great multi-layered ironies in history? Some thoughts, with premises numbered for convenience (so that people can respond with precision in the comments).
Regulation is NOT always bad for technology, e.g., regulation around the environmental impact of cars has spurred advances in electric cars; rising fuel standards have had a positive impact as well. A 1966 US Army restriction on fixed-wing aircraft for airlifts presumably inspired innovation in helicopters, etc.
GPT-5 will not bring us to AGI. I see it; Yann LeCun sees it; now even Sam Altman sees it.
Getting GPT-5 first will therefore not directly bring China or the US to AGI first, per Premise 2.
Truthfulness is one of the major gaps in current systems, as evidenced by hallucinations, weird defamatory accusations etc. Another major challenge lies in forcing LLMs to consistently align with (any) set of ethical values.
Neurosymbolic AI might offer a better way of resolving these issues, since it traffics in facts and explicit reasoning, but neurosymbolic AI is out of fashion, and neither well-developed nor strongly supported by the VC world.
At present, however, there is no legal requirement in the west that AI companies resolve the truthfulness or alignment issues. Market pressures (e.g. on search) may or may not suffice.
There is a fair amount of resistance to any substantive regulation in the US, particularly in the (powerful) libertarian strands of the tech community, e.g. remarks like “the whole point of regulation is to slow innovation”. (To me, this seems like an overgeneralization from the true statement that some regulation is bad to the untrue statement that all regulation is bad.)
China’s leaders are largely free to impose whatever regulatory regime they like, and seem keen to impose strong regulatory requirements.
One of those requirements is for, putting it politely, strong alignment with Communist Party perspectives; less politely, we are talking about censorship. Bots that don’t toe the party line will be banned.
This regulation may force the Chinese tech community to solve a version of the alignment problem, perhaps initially by using tons of underpaid manual labour but ultimately by investing in neurosymbolic AI.
Investments in that problem could thus spur China to leapfrog the West in AI.
In short, the regulatory pressure of mandatory censorship of LLMs could induce Chinese tech companies to overtake their Western counterparts in the development of AGI; more regulation might spark more innovation, and something that is anathema to Western observers (including myself) might be the ironic catalyst.
How might we in the West counter all that? In part by carefully choosing the right innovation-encouraging regulations of our own.
My best candidate? Placing very high standards around truth, with strong penalties for LLM-induced defamation and the wholesale spread of harmful misinformation. As Justice Gorsuch seemed to hint recently, Section 230 should not protect platforms from misinformation that their own tools generate. Regulations that spur greater accuracy might actually spur greater innovation.
If AGI is to emerge at all, I hope that it will emerge from the noble act of trying to make a better world, and not as an accidental byproduct of censorship and propaganda. If getting to the right place requires regulation, rather than just the kindness of corporations with a nominal attachment to “responsible AI”, so be it.
Let’s not leave innovation entirely to chance.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is deeply, deeply concerned about current AI but really hoping that we might do better.
Watch for his new podcast, Humans versus Machines, debuting April 25th, wherever you get your podcasts.
"...Section 230 should not protect platforms from misinformation that their own tools generate. Regulations that spur greater accuracy might actually spur greater innovation."
As always, the question becomes: Who decides what is accurate and what is misinformation? The regulators? What shields them from industry capture, or from pursuing their own perverse incentives and political motives? Who watches the watchmen?
Also, since next-token predictions are based on training sources, will the regulators be picking and choosing those sources? If so, how transparent would that process be? What if something is considered misinformation one day and found to be accurate the next (e.g., Hunter Biden's laptop)? What would the retraining process look like? Suppose there isn't always a clear line between facts and lies (spoilers: there isn't)?
If the ultimate goal is to make GPT and its ilk abandonware, maybe such censorship routines would do the trick, but at the cost of rigging up yet another powerful and unaccountable bureaucracy: a Ministry of Truth by any other name, which would assuredly follow the same pattern of cancerous growth and mission creep as the rest.
Point #1: The conventional wisdom these days seems to be that AI regulation will stifle innovation, and that it should be avoided at all costs.
AI is yet another 55-gallon drum of jet fuel about to be poured on an already overheated knowledge explosion. Nobody in AI land seems capable of looking past the details of one particular technology to the larger picture in which all emerging technologies reside.
The "more is better" conventional wisdom you refer to is at least a century out of date. The technical brilliance of these innovators is obscuring the fact that they are backward looking philosophical children.
Simple common sense, available to high school kids, should be sufficient to inform us that if we insist on pushing the knowledge explosion forward faster and faster without limit, it's only a matter of time until society at large won't be able to successfully adapt to the changes.
Nuclear weapons have been my generation's inexcusable crime. AI will be yours.