Six months after the Pause letter, AI is still erratic, and still something we should be deeply concerned about
Friday will mark six months since the infamous “pause letter”, signed by Yoshua Bengio, myself, Steve Wozniak, Rachel Bronson, Viktoria Krakovna, Tristan Harris, Gillian Hadfield, Ian Hogarth, Elon Musk, and tens of thousands of other people. Since then everything has changed, and nothing has changed.
Well, not really; that last sentence just sounded nice. In reality, only a few things have changed; most have not.
What has changed
We have vastly more attention on AI policy, throughout the world, and I definitely give the pause letter (which I signed but didn’t write) some credit for that. Governments across the globe have stepped up their game, holding public hearings (great!) and closed-door meetings (hmm) trying to figure out what we should do about AI.
The field has continued to make advances. DALL-E 3 looks genuinely interesting; Google’s latest Bard, according to the New York Times, is improved, but still erratic.
Ironically, GPT-5, which is essentially the only thing the pause letter (if you read it carefully) actually proposed to pause, does seem to be on pause, if Sam Altman’s May Senate testimony meant what I think it did. He said, and I quote, “We are not currently training what will be GPT-5. We don’t have plans to do it in the next six months.” (Why not? Altman didn’t say. My own guess is that between the release of GPT-4 in March 2023 and the May 2023 testimony, OpenAI ran some preliminary tests and decided that the system would not meet expectations. Given that the cost of training would perhaps be measured in the hundreds of millions of dollars, they decided to hold off pending new ideas for improvements.)
Some bona fide AI legislation has been crafted, including, in the US, a bipartisan bill led by Ted Lieu calling for an AI Commission, and a bipartisan bill by Hawley and Blumenthal that gratifyingly looks a lot like the things I called for in my testimony before their committee, emphasizing transparency, auditing and an FDA-like licensing process.
The White House worked together with industry leaders to craft a set of voluntary guidelines.
The UN is taking global AI policy very seriously (something I emphasized in my TED Talk), building a high-level advisory body at the request of Secretary General Guterres.
What has not changed, and what we have not seen
At least in the United States, policy around AI is still largely notional, not actual. Voluntary guidelines aren’t laws, and they aren’t enforcement mechanisms, either; companies can still do as they please. The Hawley-Blumenthal proposal largely got lost in the press coverage of the much less specific Schumer-led closed-door meeting that followed it by a day; maybe it will become law, maybe it will not. The EU AI Act is on its way, but it is still being negotiated and, again, not yet actual law. The UN has called for action, but has not yet made clear what that action might be.
Large language models continue to be unreliable. None of the basic issues that I have been harping on for the last 20 years has actually been solved. Large language models still make stuff up (“hallucinate”), still can’t be trusted in their reasoning, and remain erratic. That Times headline yesterday about Google’s latest Bard edition says it all: “Google’s Bard Just Got More Powerful. It’s Still Erratic.” A good part of my own concern about AI revolves around that erratic nature; society is placing more and more trust in a technology that simply has not yet earned that trust.
As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks. Current AI is pretty dumb in many respects, but what would we do if superintelligent AI really were at some point imminent, and posed some sort of genuine threat, eg around a new form of bioweapon attack? We have far too little machinery in place to surveil or address such threats.
The big companies have all pledged fealty to transparency, but none of them will actually tell us what’s in their data sets. (The voluntary guidelines didn’t address this.) If we don’t know what is in the data sets, we can’t know what biases these systems will spread through society if they are widely used, we can’t know the extent to which individual content creators are being exploited, and, worse, any outside efforts to mitigate risks are greatly undermined. We can’t properly assess the ability of these systems to generalize without understanding the relation between their training data and their output. Without that, addressing risks is like trying to dry a camp full of dishes with a single rag, slopping water around.
§
I have some mild regret about having signed the pause letter, not because of anything it actually said, but because of how it has repeatedly been politicized and misinterpreted. I have been accused of trying to hype AI (certainly not my intention), people have misconstrued the letter as banning all AI research (which is not what it called for, and not something I would support), and so on. But I sure am glad it opened the conversation.
We are obviously not going to have a moratorium anytime in the near term. The apparent economics of AI (we will see whether they come to pass as imagined) are too seductive to too many: to people building AI companies, to people investing in AI companies, to people getting paid six- and even seven-figure salaries, and to people expecting campaign donations from big tech companies.
The question is, if we don’t pause AI, what will we do to mitigate the many risks – from accidental misinformation (defamation, medical misinformation) and wholesale deliberate disinformation (which may manipulate our elections and our markets) to increases in cybercrime and malware, to AI-generated bioweapons, and so forth – and who will bear the costs?
Next time: what should we do?
Gary Marcus testified about all this before the US Senate in May. He’s cautiously optimistic that we might see actual legislation pass in the next six months, but still pretty worried about whether whatever Congress comes up with will be up to the job.
“Society is placing more and more trust in a technology that simply has not yet earned that trust.”
Society does not place trust; society simply gives way, abandons itself to this technology, because of the fundamental weaknesses underlying our society: the greed of companies, the indolence of authorities, and users’ preference for less effort and more comfort. I am afraid that, globally, our society will very quickly be happy with AI and will not want to hear about the critical threats. Companies will make big money, governments will have the ultimate tool for controlling their populations, and ordinary people will feel supported and smarter with it. People will be pleased by AI systems, will get used to them, and will come to depend on them. The game already seems to be over.
"As a society, we still don’t really have any plan whatsoever about what we might do to mitigate long-term risks. Current AI is pretty dumb in many respects, but what would we do if superintelligent AI really were at some point imminent, and posed some sort of genuine threat, eg around a new form of bioweapon attack? We have far too little machinery in place to surveil or address such threats."
This is the real existential threat, in my opinion. It’s impossible to detect that a superintelligent AI is imminent; scientific breakthroughs do not announce their arrival in advance. It is also possible that some maverick genius working alone, or some private group, has clandestinely solved AGI unbeknownst to the AI research community at large and to the regulatory agencies. A highly distributed AGI in the cloud would be impossible to recognize. I lose sleep over this.