This seems exactly right. The risk right now is not malevolent AI but malevolent humans using AI.
well said. i should have said that last sentence in the essay!
Not just malevolent people; careless adoption alone can already be catastrophic. Kids having low attention spans or hating their bodies because of social media is the result of neither malevolent AI nor malevolent people, and large-scale adoption of AI technologies will have stronger and worse consequences than this, with no way of "pulling the plug" and removing it from society.
Malevolent humans already have access to guns, knives, telephones, the internet, cars, laser printers, and in some cases whole armies. Do you really think chatbots are where the limit is?
That's always been the case with new technologies. The genie is out of the bottle. Attempting to restrict development only stops the rule-followers. Malevolent actors will not care about regulations.
Very well said!
And while there is a growing recognition of the potential harm, damage, or even destruction from misused LLMs or other forms of AI, there is already immediate and growing harm that will lead to much pain, suffering, and death, many deaths, though as you have said, causality will be difficult to prove. The US, and much of the rest of the world, is experiencing a mental health crisis. It is very complicated, with profound confusion about how to address it. This has created an explosion of billions in funding for mental health and well-being apps (sixty-seven percent of which were developed without any guidance from a healthcare professional), some of which are already using LLMs and GPT, and we know the results in advance. It is a bit like social media: it has many benefits, but many years later society at large is only just starting to understand the significant harm, especially to children and the vulnerable, but also to society as a whole. This current form of AI is doing real harm today, harm that will grow much greater, even if the AI is not malevolent and does not eventually lead to the end of civilization.
I believe that we are about to enter a multi-decade era of "Artificial Stupidity", i.e. a world of AI/AGI systems that (a) have genuine utility (translating into significant wealth for both their owners and users), but nevertheless (b) are nowhere near super-intelligent, (c) will be perceived by many/most lay people as being far more intelligent/reliable than they actually are, and consequently (d) will be deployed at scale in many situations for which they are not entirely competent, and furthermore (e) will be abused in many ways by many malicious actors. As a result of (d) and (e), these systems will (simultaneously to and in parallel with the utility and wealth that they generate due to (a)) inflict massive societal harm (globally and at scale) over the coming years and decades, and it will likely take society (including policymakers etc.) one or two decades to start to understand the extent of this harm. From where I'm standing (in the AGI world) the AI/AGI-fueled harm that we are about to inflict upon ourselves is entirely foreseeable - it's like watching a train wreck in slow motion.
Additionally, my greatest concern is that we as a community have invested very little in applying these methods to the problems that are actually the biggest needs of humanity.
We could be studying how to make education more accessible to communities around the world. We could be learning how to use AI algorithms to build healthier communities, or how to use weather prediction to support precision agriculture. These problems are hard. When people think of AI, they think of intelligent systems that help make our lives easier, safer, more productive, and more enjoyable. But, as a scientific community, we haven't even begun to investigate how to build intelligent solutions for these problems.
It disappoints me greatly that after billions of USD in investment, what we have is a tool that can generate large amounts of misinformation. All because we have been chasing some mythical AGI instead of iterating over real problems with well-defined experiments and impact measurement. And there doesn't seem to be any space for doing so either. Each such project is painful and will not lead to AGI - so no interest from funders or managers.
It feels like we failed humanity.
One thing we might start with is a sort of 'Hippocratic oath' for IT engineers.
https://xkcd.com/1289/
What I don’t understand is that people like Musk tweet about these dangers, yet he, more than almost anyone, has the resources to go after these issues full force and has done very little aside from investing in one of the problems (OpenAI). If the risk is really there, why is the action so minuscule? Why are folks like LeCun working for companies like Meta, which have shown over and over that the good of humanity is not their priority? I know there are groups working on AI risk, but their efforts are Lilliputian compared to the fanboys of AI attacking anyone who dares to question “progress.”
I don’t understand this either.
"Within 10 years computers won't even keep us as pets."
― Marvin Minsky (1927-2016)
XKCD answered the question once and for all: https://xkcd.com/1289/
Between the long-term “existential” risk and the short-term “criminal” risk, there is a mid-term societal “ethical” threat. It will come with more advanced, next-generation AI systems, not yet at the “superintelligence” level, but low-level AGIs capable of rationally processing complex problems (data collection, processing, optimization, extrapolation, design, or decision making), which will likely be available in a decade. These systems will thus be able to perform intellectual work, to do “reasoning”, the prerogative and pride of educated people and the daily task of engineers, managers, doctors, etc. This prerogative will be taken from us, and we are not ready to handle that as a society. The concern is not only about jobs and the economy, but also about human leadership, human position, and self-esteem in a society where two types of “intelligence” will coexist. Not only the “market” value of human intellect but also people's social status is at stake. When manual, wearying work is taken over by a robot it does not have the same societal effect; it does not have the same impact on people as when interesting intellectual work is taken over by an electronic brain. The widespread idyllic concept of AI analyzing and proposing while human intelligence ultimately decides will become a kind of fiction in many domains, because human intelligence will not be able to critically check and correct the content generated by the artificial one. This societal issue also needs anticipation and regulation.
Why do you say the human extinction threat is overblown when you fuel just that?
"Maybe humans would not literally be “wiped from the earth,” but things could get very bad indeed."
Then don't use the word extinction or the phrase wiped from the earth.
Moreover, this is largely not a problem of AI regulation, but of things like biosafety measures, access control, laboratory equipment, and so on. An additional tool for misinfo is quite unlikely to raise risks significantly towards extinction.
More points at the "Mitigating the risk of extinction from AI should be a global priority alongside risks such as pandemics and nuclear war" arguments map https://www.kialo.com/mitigating-the-risk-of-extinction-from-ai-should-be-a-global-priority-alongside--risks-such-as-pandemics-and-nuclear-war-63178
You're very inconsistent and are more a cause of the perception problem that you address in this post's very title.
The important question is:
Can AI help reduce the risks above?
Risk of AI vs. reduced risk of other threats.
I dunno. I can certainly see LLMs making it way easier to do phishing and other low-level scams. But they're already pretty easy, and people are already pretty wary of them. If the e-mails get really persuasive, I imagine people will adapt very quickly -- I can't see all of humanity just passively waiting to be fleeced of all its savings by Nigerian lottery winner e-mails. Presumably it will become virtually unknown for anyone to believe e-mail lacking some certification or other that it's from someone you trust. (Maybe the PGP people will finally be right that everyone uses digital signatures.)
But I'm having a hard time seeing how LLMs accelerate any risk of rogue nuclear weapons launches, misuses of CRISPR, et cetera. These are all very high-impact events, and so bad actors wishing to pursue them can already spend all the money they want and hire anyone they need. SBF didn't need an LLM to persuade people to invest their savings in FTX; he could hire celebrities to pitch them in Super Bowl ads. It's going to be a long time before any chatbot is as persuasive as Gisele Bündchen cooing 'you don't want to miss this, handsome!' from the 40" bigscreen TV after half a sixpack.
So I'm not seeing any significant *extra* leverage the black hats are going to acquire with LLMs, which mimic human conversation, since they can *already* hire as much human conversational talent as they want to pursue any of these very high value targets.
Secondarily, since this is a known risk, all the high-value targets are also of course already hardened against these kinds of attacks. The KGB used to do honey-trap attacks all the freaking time, it was kind of their specialty, so it's not like military C&C apparatus isn't already very awake to the possibility of key personnel being persuaded or fooled into betraying their role by some appeal to their human weaknesses. If the defensive measures we have in place have served reasonably well against attack by human agents, I'm not seeing any reason to think attack by agents mimicking humans, using the same tactics of persuasion, is going to succeed qualitatively better.
I'm not saying disruptive technology isn't, well, disruptive. Some people will lose their jobs, industries will shift, certain demographics prosper and others wither, there will be new forms of crime and new ways to ruin your life through inattention and bad decisions. But this seems like an appurtenance of essentially all technology. What is sufficiently different about *this* one to justify any extra level of worry, above what the GM wrench-turner feared from robots on the assembly line?
The tech is here to stay. Criminals will still get access to it even if it is prohibited by law. Just check how successful the war on drugs has been. A prohibition would only prevent good people from doing good things with AI.
so I guess we should make murder legal, too?
Again with this fallacious argument. Are you thus proposing to make AI illegal? If not, why bring the "legal" argument up at all? And how does AI compare to murder?
I think he is comparing AI with murder in the sense that you can compare all pointy objects with murder. Ban all kitchen knives, pencils, tools, saws, etc., because they can be used for murder.
Playing it extra safe will just kill innovation. What if AI is the answer to surviving other mega-threats like asteroids, severe climate change, etc.?
Agree. I see three immediate threats. First, a human agent wielding this enormous power in a nefarious manner. Second, in a not-too-distant future, 99.999% of all content we come across, whether it be writing, audio, video, or other media, will be AI-generated, with little, perhaps no, human input. What does it mean when only a small fraction of everything out there has been produced by humans? And what will be the nature of the content spawned by AI? Third, by outsourcing creativity and critical thinking to AI, those critical traits will atrophy amongst humans, just the way we have lost the ability to perform other skills we have automated over the centuries. What is a human that cannot think or create, but mainly feel?
Isn't this all a bit overwrought? Most of you live in a country that doesn't care about guns in the hands of criminals, but you think chatbots are a step too far? You all realise that Lex Luthor is fictional, right?