Scientists, governments, and corporations urgently need to work together to mitigate AI risk
“It is hard to see how you can prevent the bad actors from using it for bad things” -- but we must try
Regular readers of this Substack will know that Geoff Hinton and I disagree about a lot; I love symbols; he hates them. He thinks neural networks “understand” the world; I think they do not. He probably thinks we are closer to AGI (artificial general intelligence) than I do.
But we are both really, deeply worried about AI, and seem to be converging on a common idea about what to do about it.
Most of our concerns are shared. I have been writing with urgency about the contributions of large language models to misinformation, and about how bad actors might misuse AI; in my essay AI risk ≠ AGI risk, I argued that we should worry about both near-term and long-term risks.
In endorsing the “pause letter” (despite expressing some concerns about the details), I was saying that we need to slow down, and to focus on the kind of research that the pause letter emphasized, viz., work on making sure that AI systems are trustworthy and reliable. (This was also the major thrust of my 2019 book with Ernest Davis, which was subtitled Building AI We Can Trust; the point of the book was that current approaches were not in fact getting us to such trust.)
Hinton has heretofore been fairly quiet about AI risk, aside from a hint in a recent CBS News interview in March, in which he said rather cryptically that it was “not inconceivable” that AI could wipe out humanity. In the last few days he left Google, and he spoke more freely with Cade Metz, in a must-read article in The New York Times. Metz reports that Hinton expressed worries about misinformation (“His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will ‘not be able to know what is true anymore’”), misuse of AI (“It is hard to see how you can prevent the bad actors from using it for bad things”), and the difficulty of controlling unpredictable machines (“he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze”).
I agree with every word. And I independently made each of these points a little less than two weeks ago, when I spoke at TED. (Rumor has it that my talk will be released in the next couple of weeks.)
The question is what we should do about it.
§
At TED, and in a companion op-ed that I co-wrote in The Economist, I urged the formation of an International Agency for AI:
We called for
the immediate development of a global, neutral, non-profit International Agency for AI (IAAI), with guidance and buy-in from governments, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding governance and technical solutions to promote safe, secure and peaceful AI technologies.
What struck me most about Hinton’s interview is that he has, on his own, converged on a very similar place. Quoting Metz in the Times:
The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Let’s get on it.
§
I have spent all my time since TED gathering a crew of interested collaborators, speaking to various leaders in government, business, and science, and inviting community input. Philanthropists, we need your help.
Anyone who wants to help can reach out to me here.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is deeply concerned about current AI but really hoping that we might do better. He is the co-author of Rebooting AI and host of Humans versus Machines.