Jumpstarting AI Governance
Customizable governance-in-a-box, catalyzed by philanthropy
In just a few short months, the race to regulate AI, possibly at an international level, has gone from barely on the table to heated. The urgency of a coherent, international plan to regulate AI has been widely recognized, and some world leaders have come out in favor of the idea.
But we all know that crafting legislation, let alone international agreements, is wicked hard, and can often take many years. Many of us have also become worried about the possibility of regulatory capture, and whether shrewd executives at the major tech companies might induce governments to alight on regulation that ultimately does more harm than good, keeping out smaller players and entrenching Big Tech’s own status, while doing little to address the underlying risks, such as misinformation and cybercrime.
Throughout the time in which I have been advocating for international AI governance, I have emphasized the need to have scientists and civil society more generally at the table. The photo ops I have seen from governments so far don’t give me huge confidence that that will happen, at least not in a sufficiently empowered way.
Which has gotten me thinking. Is there a way that a third party - philanthropists - could catalyze something that was both faster and more independent than what governments working together with tech companies might do on their own?
§
For the last two months, I have been working along with my collaborators Anka Reuel (Stanford) and Karen Bakker (UBC/Harvard) to develop an alternative that we are calling “governance-in-a-box”; yesterday in my keynote at the AI for Good Global Summit in Geneva, we announced that we are launching CATAI - the Center for the Advancement of Trustworthy AI - and announced our collaboration with our first philanthropic partner in this adventure, Omidyar Network.
The response was amazing; we expect that we will have other partnerships to announce before long.
The idea of governance-in-a-box, in a nutshell, is to create turnkey tools, training, consulting, and best practices for AI regulation. Ideally, we would give that package away for minimal cost (supported in part by philanthropy) to countries that lack the expertise to develop their own regulatory regimes for AI, customizable to their individual needs.
The virtue in this is twofold: first, there is safety in numbers. If a large number of countries can alight on common procedures and practices, the big tech companies will be obliged to take those procedures and practices seriously; what’s more, it’s actually in the interest of the individual companies to have common standards, both for interoperability and for climate change reasons (nobody should want to train 193 models for 193 countries, with all the emissions that are associated with each training or retraining).
Second, there is speed; virtually everyone recognizes that large language models carry risks as well as opportunities, and that there is an urgent need to start addressing those risks. In our new organization, the Center for the Advancement of Trustworthy AI, we can move swiftly. If we can produce the right package of tools and expertise for auditing, prerelease examination, and so on - addressing questions of bias, reliability, misinformation, and so forth, working together of course with existing agencies that have been aiming to craft related standards - adoption may be swift. The general principles around transparency, privacy, and accountability are well known; our hope is to help countries put teeth into those expectations.
We certainly don’t see this approach as in fundamental conflict with governments building their own frameworks. But we hope that it can contribute to - and even catalyze - a collective endeavor in which government will play an important role, and in which companies will step up as well.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, deeply concerned about current AI but really hoping that we might do better. He spoke to the US Senate on May 16, is the co-author of the award-winning book Rebooting AI, and is the host of the new podcast Humans versus Machines.


This is an exciting initiative! Building the technical basis for auditing and regulation is the highest priority. We are starting to see some papers on this, but we have a long way to go.
I must say that my personal reaction was surprise at the boldness of the proposal - and a sense that its chances of success are negligible. Not trusting my personal perspective, I used GPT-4 to investigate; it concluded: “that while Marcus's proposal is innovative and necessary, there are significant concerns about its implementation. Transparency, inclusivity, and adaptability are crucial, as well as a keen awareness of cultural, ethical, and security implications. Despite these challenges, the consensus is that the idea has potential and could, with thoughtful execution, help expedite the development of comprehensive, international AI regulations.”
https://chat.openai.com/share/5f379a49-800e-428a-8a11-5e6de5697e52