55 Comments

This seems exactly right. The risk right now is not malevolent AI but malevolent humans using AI.

Well said. I should have said that last sentence in the essay!

Not just malevolent people: even careless adoption can already be catastrophic. Kids having low attention spans or hating their bodies because of social media is the result of neither malevolent AI nor malevolent people, and large-scale adoption of AI technologies will have stronger and worse consequences than this, with no way of "pulling the plug" once it's woven into society.

Good start, and then you should write about 100 more essays on that topic. :-)

True that. The overwhelming majority of violence all over the world, at every level of society, is committed by a small fraction of the human population: violent men.

https://www.tannytalk.com/p/world-peace-table-of-contents

Malevolent humans already have access to guns, knives, telephones, the internet, cars, laser printers, and in some cases whole armies. Do you think chatbots are where the limit is?

That's always been the case with new technologies. The genie is out of the bottle. Attempting to restrict development only stops the rule-followers. Malevolent actors will not care about regulations.

That's what it looks like.

Unfortunately, at this point the most effective response is to develop a strengthened immune system, adding an extra level of skepticism toward digitally sourced information. And that's an individual endeavor. There's no way to confer strong critical thinking skills on someone else as a passive procedure, where they just hold out their arm and get an injection of info-antibodies to lower their gullibility index. Critical thinking is a skill set that's strengthened by exercise. By all indications, those skills will need to become even more attuned.

I remember when fake copied eBay listing pages for particularly valuable items began showing up on the site as phishing scams, back around 2002. It took some years for eBay to get a handle on that problem. I never went for an eBay listing that looked too good to be true, but I've read some reports by people who did.

Now it looks like the entire Internet is going to be vulnerable to a much more diffuse array of mockups. I'm skeptical that can be prevented. I anticipate that countermeasures will eventually be crafted in order to winnow out most of the abuses, but they will probably need to evolve after the fact, because the active applications have to show up in order to figure out how to effectively grapple with them and guard against them.

Meanwhile, the problems will need to be dealt with one at a time. It will be educational to learn how that plays out. Consider how many people have gone for QAnon. QAnon's appeal relies on a handful of elementary fallacies, but in my observation most people (most Americans, anyway) have no practice at debugging even the most basic informal logic challenges. Lacking acquaintance with the detection of logical fallacies, Americans tend to prefer to adopt a pose of blanket cynicism. QAnon actually feeds on cynicism, eventually converting it into paranoia.

Moreover, many of the people who are currently laughing at QAnon dupes don't have such great critical thinking skills either, and they do little to reflectively examine their own ideas for weaknesses of bias. The vulnerability toward wanting to believe, to gravitate toward a preconceived narrative, is rampant. It was clear to me 25 years ago that the schools needed to initiate a curriculum in media criticism and awareness in the elementary grades. "Media Criticism and Awareness" sounds like a pretty advanced subject to teach to 9-year-olds, but that's only because of the pretentious title. Kids are able to pick up the basics at an early age, and they NEED to do it as soon as possible, nowadays.

Same with informal logic and detection of verbal fallacies, although that's a little more sophisticated. Institutional schools have always been allergic to teaching those skills, sadly. I've heard of "critical thinking" being taught in high schools, but a lot of it frankly looks like very narrowly channeled and directed criticism: specific critique, rather than general principles. Critical thinking loses its power if it isn't applied as impartially as possible, according to its precepts. This society is rife with people who are capable of incisively picking apart the flaws in opposing positions without applying the same acuity to finding the flaws in the positions they favor.

Very well said!

And while there is growing recognition of the potential harm, damage, or even destruction from misused LLMs or other forms of AI, there is already immediate and growing harm that will lead to much pain, suffering, and many deaths, though, as you have said, causality will be difficult to prove. The US, and much of the rest of the world, is experiencing a mental health crisis. It's very complicated, with profound confusion about how to address it. This has created an explosion of billions in funding for mental health and well-being apps (sixty-seven percent of which were developed without any guidance from a healthcare professional), some of which are already using LLMs and GPT, and we can predict the results. It is a bit like social media: it has many benefits, but years later society at large is just starting to understand the significant harm, especially to children and the vulnerable, but also to society as a whole. This current form of AI is doing real harm today, harm that will grow, even if the AI is not malevolent and does not eventually lead to the end of civilization.

All technologies come with benefits, even nuclear weapons. But as the scale of emerging technologies grows, the room for error shrinks. If enough technologies of vast scale emerge fast enough, sooner or later the room for error vanishes.

I believe that we are about to enter a multi-decade era of "Artificial Stupidity", i.e. a world of AI/AGI systems that (a) have genuine utility (translating into significant wealth for both their owners and users), but nevertheless (b) are nowhere near super-intelligent, (c) will be perceived by many/most lay people as being far more intelligent/reliable than they actually are, and consequently (d) will be deployed at scale in many situations for which they are not entirely competent, and furthermore (e) will be abused in many ways by many malicious actors. As a result of (d) and (e), these systems will (simultaneously with, and in parallel to, the utility and wealth that they generate due to (a)) inflict massive societal harm (globally and at scale) over the coming years and decades, and it will likely take society (including policymakers etc.) one or two decades to start to understand the extent of this harm. From where I'm standing (in the AGI world) the AI/AGI-fueled harm that we are about to inflict upon ourselves is entirely foreseeable: it's like watching a train wreck in slow motion.

Additionally, my greatest concern is that we as a community have invested very little in applying these methods to the problems that are actually humanity's biggest needs.

We could be studying how to make education more accessible to communities around the world. We could be learning how to use AI algorithms to build healthier communities, or how to use weather prediction to support precision agriculture. These problems are hard. When people think of AI, they think of intelligent systems that help make our lives easier, safer, more productive, and more enjoyable. But as a scientific community, we haven't even begun to investigate how to build intelligent solutions for these problems.

It disappoints me greatly that after billions of USD in investment, what we have is a tool that can generate large amounts of misinformation. All because we have been chasing some mythical AGI instead of iterating over real problems with well-defined experiments and impact measurement. And there doesn't seem to be any space for doing so, either. Each such project is painful and will not lead to AGI, so there's no interest from funders or managers.

It feels like we failed humanity.

Well said!

One thing we might start with is a sort of 'Hippocratic oath' for IT engineers.

Regarding your clever chart: If there were a technology that could rid the world of violent men, that would lead to a radical reduction in violence, and vast resources would be liberated for constructive purposes. It wouldn't be a perfect utopia, but such a future could fairly be described as world peace.

We typically take violent men to be an obvious given that we can do little about. If we're going to dive headlong into revolutionary new technologies, maybe it's time for that assumption to change.

Applause from here! Doomer posts like this, especially when coming from a variety of intellectual elites, give me hope and cheer me up. Thanks Marcus! I hope to make a contribution to such a constructive dialog with the following thought experiment.

THOUGHT EXPERIMENT: Imagine for a moment that AI, nuclear weapons, genetic engineering and all other technologies of vast scale which we have grave concerns about were to magically vanish.

Then what? Does this solve the problem? No, it doesn't solve the problem, because the knowledge explosion machinery which created these threats is still in place, and continuing to generate new powers at what seems an accelerating rate. Getting rid of existing threats would certainly buy us some much needed time, but it wouldn't make us safe, because new and potentially even bigger threats would soon emerge, and then we'd be right back where we are now.

THE PROBLEM: The challenge we face is not fundamentally technical, it's philosophical. We are trying to navigate the 21st century with a "more is better" relationship with knowledge left over from the past.

This "more is better" knowledge philosophy was entirely rational in the long era of knowledge scarcity. What we're failing to fully grasp (while pretending that we do) is that we no longer live in the old knowledge scarcity era, but in a radically different new era where knowledge is exploding in every direction at an accelerating pace. We're failing to update our knowledge philosophy to meet the new environment the spectacular success of science has created.

It's simple. We're failing to adapt to a changing environment. And as has always been true for every creature on the planet, the price imposed for a failure to adapt to changing conditions is death.

If we insist on pushing knowledge development forward as fast as possible, that is going to require us to change the way we think in a radical manner too. We can't have one without the other. There is no "cake and eat it too" solution here.

As one example, if we intend to continue releasing vast new powers into the human environment, we can no longer afford violent men. That's over. They have to go. Or they have to be fundamentally changed by some as yet unknown biological mechanism. Or maybe we have to get rid of all men. Something, something that no one wants to talk about, and that no one considers possible, and that lots and lots of people are going to angrily reject, and that the "experts" will all dismiss with a lazy wave of their hands, has to happen with violent men.

An accelerating knowledge explosion and violent men are incompatible. Revolutionary new technology requires revolutionary new thinking. It's not optional.

Here's what a mature knowledge philosophy adapted to our times would look like. The same thing we do with our kids. We don't buy our six year old son a shotgun for his birthday. We realistically recognize that he's not ready for that yet. This is just common sense that everyone accepts as an obvious given.

We just need to apply this very same common sense to ourselves. Some things we are ready for, and some things we are not. When will we be ready for AI? When we've gotten rid of nuclear weapons. When we've proven that we can fix our mistakes.

Our failure to adapt to change stems from the fact that our current world has been created by humans programmed with an operating system based on views and ideas hundreds of years old, a mental model that does not acknowledge, understand, or facilitate adaptation to new and changing circumstances.

One of the reasons our mental models are not evolving sufficiently is that those we look to as experts have a built-in bias for the status quo. They're on top of the culture, and so have little incentive to embrace real change.

As an example, it's not in the financial interest of the science community to argue that we should be doing less science, or slowing science down. And any scientist who raises their voice above the groupthink of that community is likely to be punished.

For other intellectual elites the bias is for making every issue as complicated and sophisticated as possible, because it is by doing so that they maintain their elite status. And so the public tunes out.

This is such a great take. Human incompatibility with AI.

What I don't understand is that people like Musk tweet about these dangers, yet he, more than almost anyone, has the resources to go after the issues full force and has done very little aside from investing in one of the problems (OpenAI). If the risk is really there, why is the action so minuscule? Why are folks like LeCun working for companies like Meta, which have shown over and over that the good of humanity is not their priority? I know there are groups working on AI risk, but their efforts are lilliputian compared to the fanboys of AI attacking anyone who dares to question “progress.”

Highly educated elites continually warn us of the climate change crisis as they fly around the world in CO2-emitting jets to conferences they could have attended over the Internet. I recently saw a video of one such conference in Munich, where experts from around the world sat in the audience watching speeches on a big screen. The irony was rich, if distressing. I asked one high-level attendee about this, a person I respect. Nice guy, with nothing credible to say about it, imho.

I don’t understand this either.

We'd need AI to generate the full list of all that we don't understand. :-)

"Within 10 years computers won't even keep us as pets."

― Marvin Minsky (1927-2016)

XKCD answered the question once and for all: https://xkcd.com/1289/

Between the long-term “existential” risk and the short-term “criminal” risk, there is a mid-term societal “ethical” threat. The latter will come with more advanced next-generation AI systems, not yet at the “superintelligence” level, but low-level AGIs capable of processing rationally complex problems (data collection, processing, optimization, extrapolation, design, or decision making), which will likely be available in a decade. These systems will thus be able to perform intellectual work, to do “reasoning”, the prerogative and pride of educated people and the daily task of engineers, managers, doctors, etc. This prerogative will be taken from us, and we are not ready to handle that as a society. The concern is not only about jobs and the economy, but also about human leadership, human position, and self-esteem in a society where two types of “intelligence” will coexist. Not only the “market” value of human intellect but also people's social status is at stake. When wearying manual work is taken over by a robot, it does not have the same societal effect; it does not have the same impact on people as when interesting intellectual work is taken over by an electronic brain. The widespread idyllic concept of AI analyzing and proposing while human intelligence finally decides will become a kind of fiction in many domains, because human intelligence will not be able to critically check and correct the content generated by the artificial one. This societal issue also needs anticipation and regulation.

Why do you say the human extinction threat is overblown when you fuel just that?

"Maybe humans would not literally be “wiped from the earth,” but things could get very bad indeed."

Then don't use the word extinction or the phrase wiped from the earth.

Moreover, this is largely not a problem of AI regulation, but of things like biosafety measures, access control, laboratory equipment, and so on. An additional tool for misinfo is quite unlikely to raise risks significantly towards extinction.

More points at the "Mitigating the risk of extinction from AI should be a global priority alongside risks such as pandemics and nuclear war" arguments map https://www.kialo.com/mitigating-the-risk-of-extinction-from-ai-should-be-a-global-priority-alongside--risks-such-as-pandemics-and-nuclear-war-63178

You're very inconsistent and are more a cause of the perception problem that you address in this post's very title.

The important question is:

Can AI help reduce the risks above?

The risk of AI vs. the other risks it might reduce.

"The real issue is control." No, it is not. Oh, sure, it is concerning that varied oligarchies, politburos and/or tech elites may use these new powers to exacerbate the already vexing disinformation/agitprop sphere. But that has been a problem in every human civilization going back 6000 years. We need to look at which methods ever allowed civilization to evade such traps. And only one has ever worked - flat-competitive reciprocal accountability, enabled by general transparency. It is THE method Pericles spoke of and that Adam Smith and the US Founders. I speak of it elsewhere: (https://www.blogger.com/comment.g?blogID=8587336&postID=1163004943576110681&page=1&token=1680392360425) and it works by siccing elies upon each other... as we might sic AIs competitively on each other. It is the only thing that possibly CAN work. And (alas) it is almost never discussed.

-- David Brin, author of The Postman and The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom?

I dunno. I can certainly see LLMs making it way easier to do phishing and other low-level scams. But they're already pretty easy, and people are already pretty wary of them. If the e-mails get really persuasive, I imagine people will adapt very quickly; I can't see all of humanity just passively waiting to be fleeced of all its savings by Nigerian lottery winner e-mails. Presumably it will become virtually unknown for anyone to believe e-mail lacking some certification or other that it's from someone you trust. (Maybe the PGP people will finally be right that everyone uses digital signatures.)
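For what it's worth, here's a minimal sketch of the mechanics that kind of certification would rest on, assuming Python and the third-party cryptography package; it illustrates digital signing in general, not any mail provider's actual scheme:

```python
# Illustrative only: Ed25519 signing/verification with the "cryptography" package.
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sender generates a keypair once; the public key is published or shared.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Please wire the funds today."
signature = private_key.sign(message)

# A recipient's mail client would verify the signature before trusting the sender's identity.
try:
    public_key.verify(signature, message)
    print("Signature checks out: message came from the key holder.")
except InvalidSignature:
    print("Signature invalid: treat the message as untrusted.")
```

The hard part has never been the math, of course; it's getting ordinary users to manage keys at all, which is exactly where PGP adoption stalled.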

But I'm having a hard time seeing how LLMs accelerate any risk of rogue nuclear weapons launches, misuses of CRISPR, et cetera. These are all very, very high-impact events, and so bad actors wishing to pursue them can already spend all the money they want and hire anyone they need. SBF didn't need an LLM to persuade people to invest their savings in FTX; he could hire celebrities to pitch them in Super Bowl ads. It's going to be a long time before any chatbot is as persuasive as Gisele Bündchen cooing 'you don't want to miss this, handsome!' from the 40" bigscreen TV after half a six-pack.

So I'm not seeing any significant *extra* leverage the black hats are going to acquire with LLMs, which mimic human conversation, since they can *already* hire as much human conversational talent as they want to pursue any of these very high value targets.

Secondarily, since this is a known risk, all the high-value targets are also, of course, already hardened against these kinds of attacks. The KGB used to do honey-trap attacks all the freaking time, it was kind of their specialty, so it's not like the military C&C apparatus isn't already very awake to the possibility of key personnel being persuaded or fooled into betraying their role by some appeal to their human weaknesses. If the defensive measures we have in place have served reasonably well against attack by human agents, I'm not seeing any reason to think attacks by agents mimicking humans, using the same tactics of persuasion, are going to succeed qualitatively better.

I'm not saying disruptive technology isn't, well, disruptive. Some people will lose their jobs, industries will shift, certain demographics prosper and others wither, there will be new forms of crime and new ways to ruin your life through inattention and bad decisions. But this seems like an appurtenance of essentially all technology. What is sufficiently different about *this* one to justify any extra level of worry, above what the GM wrench-turner feared from robots on the assembly line?

The tech is here to stay. Criminals will still get access to it even if it is prohibited by law. Just check how successful the war on drugs has been. A prohibition would only prevent good people from doing good things with AI.

So I guess we should make murder legal, too?

Again with this fallacious argument. Are you thus proposing to make AI illegal? If not, why bring the "legal" argument up at all? And how does AI compare to murder?

I think he is comparing AI with murder in the sense that you can compare all pointy objects with murder: ban all kitchen knives, pencils, tools, saws, etc., because they can be used for murder.

Playing it extra safe will just kill innovation. What if AI is the answer to surviving other mega-threats like asteroids, severe climate change, etc.?
