Give me compromise, or give me chaos
The EU is about to make a decision that could have enormous, lasting repercussions.
The EU needs to cut a deal. Its pioneering, world-leading AI Act is on the ropes. In the worst case, after five years of negotiation, we could wind up with nothing, leaving the citizens of the world more or less[1] entirely on the hook for any negative externalities that arise from generative AI, from misinformation to cybercrime to new bioweapons to bias; runaway AI, if it’s really a thing, would also not be covered.
Big tech wants the world to revolve around its new plaything, generative AI (combined total revenue so far: a few billion dollars, a tiny, tiny fraction of the economy), and to exempt it from any regulation.
We should not allow that to stand.
§
Over the last few days, a bunch of us, including Yoshua Bengio, Geoff Hinton, Stuart Russell, Marietje Schaake, and myself, have signed a pair of open letters urging policymakers to seek a compromise: one to the German government (released today), the other, released a few days ago, addressed more broadly to the world.
As the letter to the German government puts it:
[I]t is vital that risks are addressed at the foundation model level. Only the providers of foundation models are in a position to comprehensively address their inherent risks. They exclusively have access to and knowledge of the models’ training data, guardrail design, likely vulnerabilities, and other core properties. If severe risks from foundation models aren’t mitigated at the foundation model level, they won’t be mitigated at all, potentially threatening the safety of millions of people.
We understand some voices support addressing risks from foundation models through a system of self-regulation. We strongly advise against this. Self-regulation is likely to dramatically fall short of the standards required for foundation model safety. Since even a single unsafe model could cause risks to public safety, a vulnerable consensus on self-regulation does not ensure EU citizens’ safety. The safety of foundation models must be ensured by law.
I would hope that all of this is obvious.
§
In the words of the other letter:
Foundation models differ significantly from traditional AI. Their generality, cost of development, and ability to act as a single point of failure for thousands of downstream applications mean they carry a distinct risk profile – one that is systemic, not yet fully understood, and affecting substantially all sectors of society (and hundreds of millions of European citizens). We must assess and manage these risks comprehensively along the value chain, with responsibility lying in the hands of those with the capacity and efficacy to address them. Given the lack of technical feasibility and accessibility to modify underlying flaws in a foundation model when it is deployed and being adapted to an application, there is no other reasonable approach to risk management than putting some responsibility on the technology provided by the upstream model developers.
Far from being a burden for the European industry, the regulation applied to the technology of foundation models offers essential protection that will benefit the EU industry and the emerging AI ecosystem. The vast resources needed to train high-impact models limit the number of developers, so the scope of such regulation would be narrow: fewer than 20 regulated entities in the world, capitalised at more than 100 million dollars, compared to the thousands of potential EU deployers. These large developers can and should bear risk management responsibility on current powerful models if the Act aims to minimise burdens across the broader EU ecosystem. Requirements for large upstream developers provide transparency and trust to numerous smaller downstream actors. Otherwise, European citizens are exposed to many risks that downstream deployers and SMEs, in particular, can’t possibly manage technically: lack of robustness, explainability and trustworthiness. Model cards and voluntary – and therefore not enforceable – codes of conduct won’t suffice. EU companies deploying these models would become liability magnets. Regulation of foundation models is an important safety shield for EU industry and citizens.
§
The sticking point is that some considered the original drafts of rules around Foundation Models (generative AI) too burdensome for smaller companies to meet; nobody wants to lock them out.
So the obvious and correct compromise, which the Spanish government has been trying to push, is a “tiered approach” to try to put the greatest burden on the largest companies.
An excellent website tied to the first open letter, https://www.tieredapproach.eu, goes into a bit more detail about the proposed compromise, with a useful FAQ; one important excerpt is its answer to the question of “Why?”
Crucially, only a tiny number of massively well-heeled companies (which could easily afford the costs) would be subject to the largest regulatory burdens, exactly as it should be. As Yoshua Bengio put it in an email, the specific criterion is imperfect and should be refined over time, but a compromise with some criterion is far, far better than having no coverage of foundation models whatsoever.
§
I hope that some form of compromise survives, placing genuine requirements on (at least) the most capable foundation models, perhaps empowering a scientific committee to adjust the criterion over time, factoring in size, changes in technology, newly identified risks, the nature of deployment, and so forth.
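To make the idea of a tiered criterion a little more concrete, here is a minimal, purely illustrative sketch of how such a rule might be expressed. The thresholds, attribute names, and tier obligations below are assumptions for illustration only, not the Act’s or the open letters’ actual criteria; the 100-million-dollar figure echoes the letter quoted above, while the compute figure is a placeholder of the sort a scientific committee could revise over time.

```python
from dataclasses import dataclass

@dataclass
class FoundationModel:
    # Hypothetical attributes a regulator might weigh; names and units are illustrative.
    training_compute_flops: float          # total compute used to train the model
    developer_capitalisation_usd: float    # resources of the developing organisation
    high_risk_deployment: bool             # e.g. used in hiring, credit, or medical settings

def regulatory_tier(model: FoundationModel) -> str:
    """Assign a hypothetical tier; thresholds are placeholders, not the EU's actual criteria."""
    if (model.training_compute_flops >= 1e25
            or model.developer_capitalisation_usd >= 100_000_000):
        return "tier 1: full obligations (risk assessment, red-teaming, incident reporting)"
    if model.high_risk_deployment:
        return "tier 2: transparency and documentation obligations"
    return "tier 3: minimal obligations"

# A frontier model from a well-capitalised developer lands in tier 1;
# a small model from a startup, outside high-risk uses, faces only minimal obligations.
print(regulatory_tier(FoundationModel(3e25, 5_000_000_000, True)))
print(regulatory_tier(FoundationModel(1e22, 2_000_000, False)))
```

The point of the sketch is simply that the heaviest burdens would attach to a handful of well-resourced developers, while the criteria themselves remain adjustable as technology and evidence of risk evolve.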
If the EU does not find a compromise, we are all in trouble; the only beneficiaries will be the largest companies who are least in need of help. The rest of us will be SOL.
Gary Marcus has had it with strawperson arguments about how regulating AI will ruin everything. Regulation has given us safer food, safer medicine, safer airplanes, and safer, more ecological cars. The fact that some regulation is bad doesn’t magically mean we should exempt the AI industry; it means we should craft our regulation wisely.
[1] The “less” is that some existing regulations cover some tiny fraction of the conceivable negative externalities. But virtually all existing laws were written before AI was really on the scene, and so very few really envision how AI changes things.
Agree on not polarizing into camps that can't compromise. Compromise is essential for most great endeavors.
As a former regulator (of IT risk in finance), I’d note that keeping regulation targeted in scope, instead of over-broad, can keep it from swamping small companies that can't afford compliance costs and won't (and shouldn't!) be training novel frontier foundation models.
For now, the ability to train a frontier foundation model (i.e., to afford the compute for training) is a key bottleneck/chokepoint. Bottlenecks like this are the best places to apply regulations in a targeted way to a smaller number of better-resourced entities.
And if compute costs fall precipitously in the future, it would probably be good to have regulation in place on frontier foundation models specifically, since it could help ensure frontier models aren't developed by tiny companies without sufficient controls.
Very strong economic arguments here, thank you for laying them out so clearly! I was previously a bit more ambivalent about this whole topic, but all of your points make 100% sense, as presented. Could hardly agree more now.
This whole approach is very akin to the “shift left” approach in modern computer security: it is much easier (and cheaper!) to treat a problem at its source than to deal with downstream consequences.