In just a few short months, the race to regulate AI, possibly at an international level, has gone from barely on the table to heated. The urgency of a coherent, international plan to regulate AI has been widely recognized, and some world leaders have come out in favor of the idea.
But we all know that crafting legislation, let alone international agreements, is wicked hard, and can often take many years. Many of us have also become worried about the possibility of regulatory capture, and whether shrewd executives at the major tech companies might induce governments to alight on regulation that ultimately does more harm than good, keeping out smaller players, and entrenching Big Tech’s own status, while doing little to address the underlying risks, such as around misinformation and cybercrime.
Throughout the time in which I have been advocating for international AI governance, I have emphasized the need to have scientists and civil society more generally at the table. The photo ops I have seen from governments so far don’t give me huge confidence that that will happen, at least not in a sufficiently empowered way.
Which has gotten me to thinking. Is there a way that a third party - philanthropists - could catalyze something that was both faster and more independent than what governments working together with tech companies might do on their own?
§
For the last two months, I, along with my collaborators Anka Reuel (Stanford) and Karen Bakker (UBC/Harvard), have been working to develop an alternative that we are calling “governance-in-a-box”; yesterday at my keynote at the AI for Good Global Summit in Geneva we announced that we are launching CATAI - the Center for the Advancement of Trustworthy AI - and announced our collaboration with our first philanthropic partner in this adventure, Omidyar Network.
The response was amazing; we expect that we will have other partnerships to announce before long.
The idea of governance-in-a-box, in a nutshell, is to create turnkey tools, training, consulting, and best practices for AI regulation. Ideally, we would give that package away for minimal cost (supported in part by philanthropy) to countries that lack the expertise to develop their own regulatory regimes for AI, presumably customizable to their individual needs.
The virtue in this is twofold: first, there is safety in numbers. If a large number of countries can alight on common procedures and practices, the big tech companies will be obliged to take those procedures and practices seriously; what’s more, it’s actually in the interest of the individual companies to have common standards, both for interoperability and for climate change reasons (nobody should want to train 193 models for 193 countries, with all the emissions that are associated with each training or retraining).
Second, there is speed; virtually everyone recognizes that large language models carry risks as well as opportunities, and that there is an urgent need to start addressing those risks. In our new organization, the Center for the Advancement of Trustworthy AI, we can move swiftly. If we can produce the right package of tools and expertise for auditing, prerelease examination, and so on, addressing questions of bias, reliability, misinformation, and the like, and work together, of course, with existing agencies that have been aiming to craft related standards, adoption may be swift. The general principles around transparency, privacy, and accountability are well known; our hope is to help countries put teeth into those expectations.
We certainly don’t see this approach as in fundamental conflict with governments building their own frameworks. But we hope that it can contribute to, and even catalyze, a collective endeavor in which government will play an important role, and in which companies will step up as well.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, deeply concerned about current AI but really hoping that we might do better. He spoke to the US Senate on May 16, is the co-author of the award-winning book Rebooting AI, as well as host of the new podcast Humans versus Machines.
It's hard to have much confidence in governance schemes when, as best I can tell, such discussions never seem to mention what could be the most important threat presented by AI: further acceleration of the knowledge explosion.
Let's imagine for a moment that AI is made perfectly safe. Safe AI would still accelerate the knowledge explosion, just as computers and the Internet have. The real threat may come less from AI itself than from the ever more, ever larger powers which emerge from an AI accelerated knowledge explosion.
The knowledge explosion has already produced at least three powers of vast scale, which we basically have little to no idea how to make safe.
1) Nuclear weapons
2) Artificial intelligence
3) Genetic engineering
Instead of learning from this, we're using tools like AI to pour even more fuel on the knowledge explosion, which will almost certainly result in even more powers of significant scale that we will also struggle to make safe. As the emergence of AI illustrates, this process is feeding back on itself, leading to ever further acceleration.
Experts are playing a losing game in trying to address threats one by one as they emerge from the knowledge explosion assembly line, because that accelerating process is going to produce new threats faster than we can figure out how to defeat existing ones. Nuclear weapons were invented in 1945, before almost all of us were born, and we still have no clue how to get rid of them.
What we need are experts who are holistic thinkers. We need experts who will focus on the knowledge explosion assembly line which is producing all the emerging threats.
Taking control of the knowledge explosion so that it produces new powers at a rate which we can successfully manage is not optional. It's a do or die mission. Experts can declare this goal impossible all they want, but it will still remain a do or die mission.
The knowledge explosion has created a revolutionary new environment. Nature's primary rule is that creatures who can't adapt to changing conditions must die.
This is an exciting initiative! Building the technical basis for auditing and regulation is the highest priority. We are starting to see some papers on this, but we have a long way to go.