89 Comments
Jun 8, 2023 · Liked by Gary Marcus

The safety-critical world (nuclear, railways, aerospace etc) would be a good starting point. Anyone who develops a safety-critical system is required to produce an evidence-based safety case for that system, in order for that system to be certified against technical standards. Only a system that has been so certified may be deployed. (See for example the UK Safety Critical Systems Club: https://scsc.uk). Any AI system sufficiently powerful to cause harm (either to individuals, or to society, e.g. democracy) is effectively a safety-critical system, and should be required to be certified against strict technical standards prior to deployment. Given that we don't really understand how complex neural-net-based systems even work, I very much doubt that any NN-based system (such as an LLM) would meet the requirements for safety-critical certification. Which immediately means that anyone proposing such regulation is going to be accused of "stifling innovation" (i.e. wealth generation / tax dollars) at the expense of "us" (the US, UK) vs "them" (China, Russia, etc). It's a classic Molochian Trap, where every actor behaves according to their own short-term self-interest, thereby leading to an endgame that is massively sub-optimal for everyone. The real AI problem is not the technology per se, but the global coordination problem.


I am (very) glad to hear of your success in reaching possible regulators. It is stunning how many people are talking about AI without any knowledge of the real and often subtle issues. Your leadoff for the Bleak Future identifies one cause: the abyss between those concerned with Safety and those concerned with Ethics hinders and limits public understanding. I think the numerous possibilities for harm need to be made concrete in as many ways as possible. Your illustration of how an overt and visible calamity develops in the bleak future scenario is a good example of what will help people grasp the risks. There can also be cryptic risks, and those need story-telling as well. I took a stab at illustrating how instrumental AI goals of persuasiveness could lead quite stealthily to human loss of control. https://tedwade.substack.com/p/artificial-persuasion I wish it had more exposure.


Gary, the concerns you express and outline here are the very reason why in May of 2014 we started a blog, SocializingAI.com, in what proved to be a failed attempt to engage the tech world about these very issues. We branded the blog "Socializing AI – Where coders don't know to go". As ML started to explode, we sensed that there was both great opportunity and potential for AI, as well as grave danger. Ultimately, we did connect with high-level people at Microsoft, Intel, IBM (Watson and other divisions) and, to a lesser degree, Google and Google Brain, VCs (one famous VC engaged us in 50+ emails but would not meet in person, and after a couple of years ended the engagement by saying he thought we had something but didn't have the time to think it through) and others.

But we found that we were speaking an alien language to them; no one we talked to could comprehend the meaning of what we were saying. To a very large degree this inability to see the problem we were highlighting was due to their binary mindset, reinforced by their mechanistic capitalist mental model of the world. These were fundamentally good people, and even though we proposed and demonstrated both technology and mental models that could be used to address these issues, approaches that many found engaging and of some limited interest, they literally could not grasp the need for them. The models we shared were adjacent, not replacement, tech/mental models, but they did not serve the goals of the tech world's existing tech/mental models of command-and-control, dominance, and power. Models which they believe are completely validated by the inconceivable monetary success the tech world is experiencing, which to them confirms the 'rightness' of their work and approaches. We stopped posting in the blog in 2019.


Gary, you're a smart person, WTF did you think would happen when asking gov'ts for regulation? Do you really think things would proceed as your idealized world would have it? Do you really believe all of this to be neutral technology for the benefit of our collective kumbaya? I'll try to avoid calling you naïve, but when you trust in gov't to deliver us from evil, you are simply marching into the dragon's lair. Happy to hear you got your moment in the Senate's sun, and that they appeared as interested as you'd hoped, but they do this with an eye to addressing their interests (and those of the people who support them), not yours. This will be regulatory capture not because they'll anoint just anyone, but because only the deepest pockets will be able to afford the cost of entry. Don't forget, while there's lots of evil out there, our gov'ts are the devils we know, and we should always be wary of them 😉


The chances of regulation producing more bad effects than good are extremely high. And regulation gets worse over time. That will be even more true for fast-moving AI. I prefer Marc Andreessen's approach: let AI fly and tackle issues as they arise.


I don't disagree with anything said in this piece - and only want to point out that I think the geopolitical situation makes the efficacy of regulation a little dicey. We not only have the capitalistic motivation of profits, we are also in an AI arms race with some of our more contentious global neighbors. Not staying at pace with, or ahead of, nation-states which would very much like to weaponize AI against us (more - because they already have) has very real national security ramifications, and could threaten the well-being of free people the world over. Threat actors, nation-states and otherwise, are already trying to weaponize AI. We've already seen upticks in 'small' cybercrime - more effective phishing campaigns written by AI. Coupled with the fact that the line between "cutting-edge" GenAI and not-cutting-edge is a very slim margin, it's going to be exceptionally hard for us to defend against state-of-the-art AI without our own to support us.

That's not to say we shouldn't strive for regulation, even global regulation to govern the use of AI, only that we should be aware of how the geopolitical situation will influence our appetite for regulation when we know our adversaries are carelessly sprinting ahead.

Jun 8, 2023 · edited Jun 8, 2023

re: "We also know, for example, that Bing has defamed people, and it has misread articles as saying they opposite of what they actually say, in service of doing so."

Anyone who takes what it says as "truth" should be viewed the same way as someone who believes the Babylon Bee or The Onion. Someone should patiently explain to them how adults grasp that not all sources of information are accurate.

People should be free to use flawed tools if they wish to. Adults are free to impair their judgement with alcohol, and are held responsible if they drive or have an accident while doing so. We no longer ban alcohol, and we don't hold the alcohol companies responsible for the actions of their users.

Some people, though, apparently share the mindset of alcohol prohibitionists, who assumed they somehow should have the right to protect people from themselves, whether they want it or not.


Let's see... our government passes regulations on burning coal so that we'll burn less of it (or none) and not raise the temperature. Naturally, that means that all other countries around the globe are doing exactly the same thing.

Likewise, we pass regulations about AI so that we only do good things with it. Ergo, parallel to the coal business, all other governments--following in our hallowed footsteps--will do the same and check with us to make sure they're doing it right.

This is kinda like the definition of "hate": it's like taking poison hoping it will kill the person we hate. Is there really a viable alternative to being the meanest sonofabitch in the valley?


If we are depending on politicians and bureaucrats to steer us away from an AI debacle, then all hope is already lost, if government regulation in industries such as healthcare, agriculture, and finance is anything to go by.


Sorry to write this, but the idea that the government(s) will do anything in a correct way is pretty naive. Of course they will consult the tech giants, but not the scientists, as the latter can't do anything against them while the former can. Nobody will do anything on the global scale - rather, on the regional one. Why should the US allow any participants from outside, when all of the tech giants are sitting right next to them? And even if regulation does get set up - if it's done the way pharma is regulated, then we have a serious problem.

Although I don't believe in the worst-case scenario, that we end in anarchy and AI wars, I don't believe in the positive one either. But time will tell.


TTRC is a good name for your project: Trust in Technology Research Center. Because you're undoubtedly going to be addressing technologies which don't necessarily fit under the category of AI. Doesn't matter that it's not catchy, just don't give it a crappy logo that looks like some northwest Luddite movement, lol. Give it a bold and serious logo like NATO has, but combine it with the lyrical humanism of the CFR (Council on Foreign Relations) logo. The combination of blue shades in that logo is a good start. I wish you luck.


We need an FDA for AI and we need it now.


I couldn't agree with you more Gary. It seems that we need to demand that governments give AI public interest groups a seat at the table too. Are there any that you think would be a good fit for public advocacy? A quick Google search turns up https://publicinterest.ai

Also, https://link.springer.com/article/10.1007/s00146-022-01480-5 seems like a good article on this topic - I will have to make some time to read it in the next few days.


“This is where the government needs to step up and say “transparency and safety are indeed requirements; you’ve flouted them; we won’t let you do that anymore.””

Gary - I can’t tell if you are just naive or being purposefully obtuse.

Belief that any government working with any group of industry leaders will come up with the best future is a view devoid of historical perspective.

And belief that a historically non-transparent government will somehow create regulations that ensure transparency is Pollyannaish.

With the world on the cusp of quantum computers combining with AI technologies, your mission of playing Cassandra is futile.


Gary, your bifurcated AI future as presented to the IMF is spot on: clearly and concisely articulated "poles". Obviously reality will probably fall somewhere in between... and articulating these "edge" scenarios (which aren't, imho, so "edge" - more like <15% probability) is an excellent framing of the present-day challenge and of potential future outcomes / consequences. Thank you for your continued voice in this space!

Jun 9, 2023 · edited Jun 9, 2023

There is talk of regulation, but so far it has not been proven that "AI" (let's just say Machine Learning so as not to be so pompous) is more dangerous than everyday programming. So, should we regulate programming too?

The only thing I see is strong incentives to put regulations on an area that anyone can copy (because it really has nothing special about it, and is even very basic scientifically speaking), and thus protect the current players (i.e. regulatory capture). Players such as Altman, who, in my personal opinion, besides not being an expert in Machine Learning himself, is only interested in accumulating power and influence, nothing more.

I would take the issue of regulations a little more seriously if the following conditions were met:

1. Multidisciplinary teams are formed and the scientific method is applied to evaluate the true capacity of current Machine Learning models. The process should be transparent so that anyone can replicate the results.

2. Critical areas of human activity are identified, and regulations are established that specify the conditions that must be met to provide services in them through Machine Learning models, or directly prohibit the use of models.

For example: if a bank uses a model to make a credit decision, the model must be able to explain why it made that decision; if a search engine uses a model, the model must be able to cite the sources from which it extracts the information it presents; if a doctor uses a model for a diagnosis, the model must be able to explain that diagnosis, and it must be the doctor who has the final word and approves it; autonomous weapons would be prohibited; and so on.
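Purely as an illustration of the first of those requirements (a credit model that can report why it decided as it did), here is a minimal Python sketch. The feature names, the toy data, and the choice of a simple logistic regression are all assumptions made for the example, not anything a regulator or lender actually prescribes:

```python
# Illustrative only: an interpretable credit model that reports per-applicant
# reasons for its decision. Features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "late_payments", "years_employed"]

# Toy training data standing in for a real, audited dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] - X[:, 2] + 0.5 * X[:, 3]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

def explain_decision(applicant: np.ndarray) -> None:
    """Print the decision plus each feature's signed contribution to it."""
    decision = "approve" if model.predict(applicant.reshape(1, -1))[0] else "decline"
    # Each feature's contribution to the model's log-odds (ignoring the intercept).
    contributions = model.coef_[0] * applicant
    print(f"Decision: {decision}")
    for name, contrib in sorted(zip(feature_names, contributions),
                                key=lambda t: abs(t[1]), reverse=True):
        print(f"  {name}: {'+' if contrib >= 0 else ''}{contrib:.2f}")

explain_decision(np.array([1.2, -0.5, -1.0, 0.3]))
```

The point of the sketch is only that per-decision reasons are technically cheap to surface when the model is simple; whether the same can be done credibly for large opaque models is exactly what the proposed evaluations in point 1 would have to establish.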

It's strange that none of that is what is being done, is it because that hurts the current players?
