55 Comments
Mar 28, 2023 · Liked by Gary Marcus

This seems exactly right. The risk right now is not malevolent AI but malevolent humans using AI.


Very well said!

And while there is a growing recognition of the potential harm, damage, or even destruction from misused LLMs or other forms of AI, there is already immediate and growing harm that will lead to much pain, suffering, and many deaths. Though, as you have said, causality will be difficult to prove. The US, and much of the rest of the world, is experiencing a mental health crisis. It is very complicated, with profound confusion about how to address it. This has created an explosion of billions in funding for mental health and well-being apps (sixty-seven percent of which were developed without any guidance from a healthcare professional), some of which are already using LLMs and GPT, and we know the results in advance. It is a bit like social media: it has many benefits, but many years later society at large is just starting to understand the significant harm, especially to children and the vulnerable, but also to society as a whole. This current form of AI is doing real harm today, harm that will grow, even if the AI is not malevolent and does not eventually lead to the end of civilization.


I believe that we are about to enter a multi-decade era of "Artificial Stupidity", i.e. a world of AI/AGI systems that (a) have genuine utility (translating into significant wealth for both their owners and users), but nevertheless (b) are nowhere near super-intelligent, (c) will be perceived by many or most lay people as far more intelligent and reliable than they actually are, and consequently (d) will be deployed at scale in many situations for which they are not entirely competent, and furthermore (e) will be abused in many ways by many malicious actors. As a result of (d) and (e), these systems will (in parallel with the utility and wealth they generate due to (a)) inflict massive societal harm, globally and at scale, over the coming years and decades, and it will likely take society (including policymakers) one or two decades to start to understand the extent of this harm. From where I'm standing (in the AGI world), the AI/AGI-fueled harm that we are about to inflict upon ourselves is entirely foreseeable: it's like watching a train wreck in slow motion.


Additionally, my greatest concern is that we as a community have invested very little in using these methods on problems that actually are the biggest needs of humanity.

We could be studying how to make education more accessible to communities around the world. We could be learning how to use AI algorithms to build healthier communities, or weather prediction to support precision agriculture. These problems are hard. When people think of AI, they think of intelligent systems that help make our lives easier, safer, more productive, and more enjoyable. But, as a scientific community, we haven't even begun to investigate how to build intelligent solutions for these problems.

It disappoints me greatly that after billions of USD in investment, what we have is a tool that can generate large amounts of misinformation. All because we have been chasing some mythical AGI instead of iterating on real problems with well-defined experiments and impact measurement. And there doesn't seem to be any space for doing so either. Each such project is painful and will not lead to AGI, so there is no interest from funders or managers.

It feels like we failed humanity.

Mar 28, 2023 · Liked by Gary Marcus

One thing we might start with is a sort of 'Hippocratic oath' for IT engineers.

Mar 28, 2023 · Liked by Gary Marcus

https://xkcd.com/1289/


Applause from here! Doomer posts like this, especially when coming from a variety of intellectual elites, give me hope and cheer me up. Thanks, Marcus! I hope to contribute to such a constructive dialogue with the following thought experiment.

THOUGHT EXPERIMENT: Imagine for a moment that AI, nuclear weapons, genetic engineering and all other technologies of vast scale which we have grave concerns about were to magically vanish.

Then what? Does this solve the problem? No, it doesn't, because the knowledge-explosion machinery that created these threats is still in place and continuing to generate new powers at what seems an accelerating rate. Getting rid of existing threats would certainly buy us some much-needed time, but it wouldn't make us safe, because new and potentially even bigger threats would soon emerge, and then we'd be right back where we are now.

THE PROBLEM: The challenge we face is not fundamentally technical, it's philosophical. We are trying to navigate the 21st century with a "more is better" relationship with knowledge left over from the past.

This "more is better" knowledge philosophy was entirely rational in the long era of knowledge scarcity. What we're failing to fully grasp (while pretending that we do) is that we no longer live in the old knowledge scarcity era, but in a radically different new era where knowledge is exploding in every direction at an accelerating pace. We're failing to update our knowledge philosophy to meet the new environment the spectacular success of science has created.

It's simple. We're failing to adapt to a changing environment. And as has always been true for every creature on the planet, the price imposed for a failure to adapt to changing conditions is death.

If we insist on pushing knowledge development forward as fast as possible, that is going to require us to change the way we think in a radical manner too. We can't have one without the other. There is no "cake and eat it too" solution here.

As one example, if we intend to continue releasing vast new powers into the human environment, we can no longer afford violent men. That's over. They have to go. Or they have to be fundamentally changed by some as-yet-unknown biological mechanism. Or maybe we have to get rid of all men. Something, something that no one wants to talk about, and that no one considers possible, and that lots and lots of people are going to angrily reject, and that the "experts" will all dismiss with a lazy wave of their hands, has to happen with violent men.

An accelerating knowledge explosion and violent men are incompatible. Revolutionary new technology requires revolutionary new thinking. It's not optional.

Here's what a mature knowledge philosophy adapted to our times would look like. The same thing we do with our kids. We don't buy our six year old son a shotgun for his birthday. We realistically recognize that he's not ready for that yet. This is just common sense that everyone accepts as an obvious given.

We just need to apply this very same common sense to ourselves. Some things we are ready for, and some things we are not. When will we be ready for AI? When we've gotten rid of nuclear weapons. When we've proven that we can fix our mistakes.


What I don’t understand is that people like Musk tweet about these dangers, yet he, more than almost anyone, has the resources to go after the issues full force and has done very little aside from investing in one of the problems (OpenAI). If the risk is really there, why is the action so minuscule? Why are folks like LeCun working for companies like Meta, which have shown over and over that the good of humanity is not their priority? I know there are groups working on AI risk, but their efforts are lilliputian compared to the fanboys of AI attacking anyone who dares to question “progress.”


"Within 10 years computers won't even keep us as pets."

― Marvin Minsky (1927-2016)


XKCD answered the question once and for all: https://xkcd.com/1289/


Between the long-term “existential” risk and the short-term “criminal” risk, there is a mid-term societal “ethical” threat. The latter will come with more advanced next-generation AI systems, not yet at “superintelligence” level but low-level AGIs capable of processing complex problems rationally (data collection, processing, optimization, extrapolation, design, or decision making), which will likely be available in a decade. These systems will thus be able to perform intellectual work, to do “reasoning”, the prerogative and pride of educated people and the daily task of engineers, managers, doctors, etc. This prerogative will be taken from us, and we are not ready to handle that as a society. The concern is not only about jobs and the economy, but also about human leadership, human standing, and self-esteem in a society where two types of “intelligence” will coexist. Not only the “market” value of human intellect but also people’s social status is at stake. When manual, wearing work is taken over by a robot, it does not have the same societal effect; it does not have the same impact on people as when interesting intellectual work is taken over by an electronic brain. The widespread idyllic concept of AI analyzing and proposing while human intelligence finally decides will become a kind of fiction in many domains, because human intelligence will not be able to critically check and correct the content generated by the artificial one. This societal issue also needs anticipation and regulation.


Why do you say the human extinction threat is overblown when you fuel just that?

"Maybe humans would not literally be “wiped from the earth,” but things could get very bad indeed."

Then don't use the word “extinction” or the phrase “wiped from the earth.”

Moreover, this is largely not a problem of AI regulation, but of things like biosafety measures, access control, laboratory equipment, and so on. An additional tool for misinfo is quite unlikely to raise risks significantly towards extinction.

More points at the "Mitigating the risk of extinction from AI should be a global priority alongside risks such as pandemics and nuclear war" arguments map https://www.kialo.com/mitigating-the-risk-of-extinction-from-ai-should-be-a-global-priority-alongside--risks-such-as-pandemics-and-nuclear-war-63178

You're very inconsistent and are more a cause of the perception problem that you address in this post's very title.


The important question is:

Can AI help reduce the risks above?

The risk of AI vs. the reduced risk of other threats.


"The real issue is control." No, it is not. Oh, sure, it is concerning that varied oligarchies, politburos and/or tech elites may use these new powers to exacerbate the already vexing disinformation/agitprop sphere. But that has been a problem in every human civilization going back 6000 years. We need to look at which methods ever allowed civilization to evade such traps. And only one has ever worked - flat-competitive reciprocal accountability, enabled by general transparency. It is THE method Pericles spoke of and that Adam Smith and the US Founders. I speak of it elsewhere: (https://www.blogger.com/comment.g?blogID=8587336&postID=1163004943576110681&page=1&token=1680392360425) and it works by siccing elies upon each other... as we might sic AIs competitively on each other. It is the only thing that possibly CAN work. And (alas) it is almost never discussed.

-- David Brin, author of The Postman and The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom?


I dunno. I can certainly see LLMs making it way easier to do phishing and other low-level scams. But they're already pretty easy, and people are already pretty wary of them. If the e-mails get really persuasive, I imagine people will adapt very quickly: I can't see all of humanity just passively waiting to be fleeced of all its savings by Nigerian lottery-winner e-mails. Presumably it will become virtually unknown for anyone to believe e-mail lacking some certification or other that it's from someone you trust. (Maybe the PGP people will finally be right that everyone uses digital signatures.)
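
To make the digital-signature aside concrete, here is a minimal sketch of the sign-and-verify step such email certification would rest on. It uses Python's `cryptography` package and Ed25519 keys purely as an illustrative choice; the comment itself doesn't name any particular scheme.

```python
# Minimal sketch: sign a message and verify it came from the expected key holder.
# Illustrative only; real email signing (PGP/S/MIME) adds key distribution and trust.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # sender's secret key
public_key = private_key.public_key()        # shared with recipients

message = b"Please wire the funds today."
signature = private_key.sign(message)        # attached to the outgoing mail

try:
    public_key.verify(signature, message)    # raises if message or signature was altered
    print("Signature valid: sender holds the expected key.")
except InvalidSignature:
    print("Signature invalid: treat the mail as untrusted.")
```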

But I'm having a hard time seeing how LLMs accelerate any risk of rogue nuclear-weapons launches, misuses of CRISPR, et cetera. These are all very, very high-impact events, and so bad actors wishing to pursue them can already spend all the money they want and hire anyone they need. SBF didn't need an LLM to persuade people to invest their savings in FTX; he could hire celebrities to pitch them in Super Bowl ads. It's going to be a long time before any chatbot is as persuasive as Gisele Bündchen cooing 'you don't want to miss this, handsome!' from the 40" bigscreen TV after half a six-pack.

So I'm not seeing any significant *extra* leverage the black hats are going to acquire from LLMs, which mimic human conversation, since they can *already* hire as much human conversational talent as they want to pursue any of these very high-value targets.

Secondarily, since this is a known risk, all the high-value targets are also, of course, already hardened against these kinds of attacks. The KGB used to run honey-trap attacks all the freaking time; it was kind of their specialty, so it's not like military C&C apparatus isn't already very awake to the possibility of key personnel being persuaded or fooled into betraying their role by some appeal to their human weaknesses. If the defensive measures we have in place have served reasonably well against attack by human agents, I'm not seeing any reason to think attacks by agents mimicking humans, and using the same tactics of persuasion, are going to succeed qualitatively better.

I'm not saying disruptive technology isn't, well, disruptive. Some people will lose their jobs, industries will shift, certain demographics prosper and others wither, there will be new forms of crime and new ways to ruin your life through inattention and bad decisions. But this seems like an appurtenance of essentially all technology. What is sufficiently different about *this* one to justify any extra level of worry, above what the GM wrench-turner feared from robots on the assembly line?


The tech is here to stay. Criminals will still get access to it even if it is prohibited by law; just look at how successful the war on drugs has been. A prohibition would only keep good people from doing good things with AI.
