Two models of AI oversight - and how things could go deeply wrong
It’s good that governments are stepping up, but some of the signals are deeply worrisome
The Senate hearing that I participated in a few weeks ago was in many ways the highlight of my career. I was thrilled by what I saw of the Senate that day: genuine interest, and genuine humility. Senators acknowledged that they had been too slow to figure out what to do about social media, that mistakes were made then, and that there was now a sense of urgency. I am profoundly grateful to Senator Blumenthal’s office for allowing me to participate, and tremendously heartened that there was far more bipartisan consensus around regulation than I had anticipated. Things have moved in a positive direction since then.
But we haven’t landed the plane yet.
§
Just a few weeks earlier, I had been writing in this Substack and in the Economist (with Anka Reuel) about the need for an international agency for AI. To my great surprise, OpenAI CEO Sam Altman told me before the proceedings began that he was supportive of the idea. Taken off guard, I shot back, “Terrific, you should tell the Senate,” never expecting that he would. To my amazement, he did, interjecting, after I raised the notion of global AI governance, that he “wanted to echo support for what Mr. Marcus said.”
Things have in many ways moved quickly since then, far faster than I might have ever dreamed. In 2017, I proposed a CERN for AI in the New York Times, to relatively little response. This time, things (at least nominally) are moving at breakneck speed. Earlier this week, British Prime Minister Rishi Sunak explicitly called for a CERN for AI as well as something like an IAEA for AI, all very much in line with what I and others have hoped for. And earlier today, President Biden and Prime Minister Sunak publicly agreed “to work together on A.I. safety.”
All that is incredibly gratifying. And yet … I am still worried. Really, really worried.
§
What I am worried about is regulatory capture: governments making rules that entrench the incumbents whilst doing too little for humanity.
The realistic possibility of this scenario was captured viscerally in a sharp tweet earlier today from British technology expert Rachel Coldicutt:
I had a similar pit-of-my-stomach feeling in May, after Vice President Kamala Harris met with some tech executives, with scientists scarcely mentioned:
§
Putting it bluntly: if we have the right regulation, things could go well. If we have the wrong regulation, things could go badly. If big tech writes the rules, without outside input, we are unlikely to wind up with the right rules.
In a talk I gave earlier today to the IMF, I painted two scenarios, one positive, one negative:
§
We still have agency here; we can still, I think, build a very positive AI future.
But much depends on how much the government stands up to big tech, and a lot of that depends on having independent voices – scientists, ethicists, and representatives of civil society – at the table. Press releases and photo opportunities that highlight governments hanging out with the tech moguls they seek to regulate, without independent voices in the room, send entirely the wrong message.
The rubber meets the road in implementation. We have, for example, Microsoft declaring right now that transparency and safety are key. But their current, actual products are definitely not transparent, and at least in some ways, demonstrably not safe.
Bing relies on GPT-4, and we (e.g., in the scientific community) don’t have access to how GPT-4 works, and we don’t have access to what data it’s trained on (vital, since we know that systems can bias, e.g., political thought and hiring decisions, based on those undisclosed data). That’s about as far away from transparency as we could be.
We also know, for example, that Bing has defamed people, and that it has misread articles as saying the opposite of what they actually say, in service of doing so. Recommending that Kevin Roose get a divorce wasn’t exactly competent, either. Meanwhile, ChatGPT plugins (produced by OpenAI, with which Microsoft has close ties) open up a wide range of security problems: those plugins can access the internet, read and write files, and impersonate people (e.g., to phish for credentials), all of which should alarm any security professional. I don’t see any reason to think these plugins are in fact safe. (They are far less sandboxed and less rigorously controlled than Apple App Store apps.)
This is where the government needs to step up and say “transparency and safety are indeed requirements; you’ve flouted them; we won’t let you do that anymore.”
We don’t need more photo opportunities; we need regulation, with teeth.
§
More broadly, at an absolute minimum, governments need to establish an approval process for any AI that is deployed at large scale, requiring a showing that the benefits outweigh the risks, and to mandate post-release auditing, by independent outsiders, of any large-scale deployments. Governments should demand that systems only use copyrighted content from content providers that opt in, and that all machine-generated content be labeled as such. And governments need to make sure that there are strong liability laws in place, to ensure that if the big tech companies cause harm with their products, they are held responsible.
Letting the companies set the rules on their own is unlikely to get us to any of these places.
§
In the aftermath of the Senate hearings, a popular sport has been to ask, “Is Sam Altman sincere when he asks for government regulation of AI?”
A lot of people doubted him. Having sat three feet away from him throughout the testimony and watched his body language, I actually think that he is at least in part sincere: that it is not just a ploy to keep the incumbents in and small competitors out, and that he is genuinely worried about the risks (ranging from misinformation to serious physical harm to humanity). I said as much to the Senate, for what it’s worth.
But it doesn’t matter whether Sam is sincere or not. He is not the only actor in this play; Microsoft, for example, has access (as I understand it, according to rumor) to all of OpenAI’s models, and can do as it pleases with them. If Sam is worried but Nadella wants to race forward, Nadella has that right. Nadella has said he wants to make Google dance, and he has.
What really matters is what governments around the world come up with by way of regulation.
We would never leave the pharmaceutical industry entirely to regulate itself, and we shouldn’t leave the AI industry to do so, either. It doesn’t matter what Microsoft or OpenAI or Google says. It matters what the government says.
Either governments stand up to big tech, or they don’t; the fate of humanity may very well hang in the balance.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, deeply concerned about current AI but really hoping that we might do better. He spoke to the US Senate on May 16, and is the co-author of the award-winning book Rebooting AI, as well as host of the new podcast Humans versus Machines.
The safety-critical world (nuclear, railways, aerospace etc) would be a good starting point. Anyone who develops a safety-critical system is required to produce an evidence-based safety case for that system, in order for that system to be certified against technical standards. Only a system that has been so certified may be deployed. (See for example the UK Safety Critical Systems Club: https://scsc.uk). Any AI system sufficiently powerful to cause harm (either to individuals, or to society, e.g. democracy) is effectively a safety-critical system, and should be required to be certified against strict technical standards prior to deployment.

Given that we don't really understand how complex neural-net-based systems even work, I very much doubt that any NN-based system (such as an LLM) would meet the requirements for safety-critical certification. Which immediately means that anyone proposing such regulation is going to be accused of "stifling innovation" (i.e. wealth generation / tax dollars) at the expense of "us" (the US, UK) vs "them" (China, Russia, etc). It's a classic Molochian Trap, where every actor behaves according to their own short-term self-interest, thereby leading to an endgame that is massively sub-optimal for everyone. The real AI problem is not the technology per se, but the global coordination problem.
I am (very) glad to hear of your success in reaching possible regulators. It is stunning how many people are talking about AI without any knowledge of the real and often subtle issues. Your leadoff for the Bleak Future identifies one cause: the abyss between those concerned with Safety versus Ethics hinders and limits public understanding. I think the numerous possibilities for harm need to be made concrete in as many ways as possible. Your illustration of the overt and visible development of calamity in the bleak-future scenario is a good example of what will help people grasp the risks. There can also be cryptic risks, and those need story-telling as well. I took a stab at illustrating how instrumental AI goals of persuasiveness could lead quite stealthily to human loss of control: https://tedwade.substack.com/p/artificial-persuasion I wish it had more exposure.