43 Comments

Before I ask my question, I want to make clear that I agree with this piece. What follows is NOT an apologia for helter-skelter unregulated commercial unleashing of this tech on society. The dangers are as Gary describes them.

However, as we wrestle with this problem it's important to have a theory of causality or influence that makes sense. I am not sure we have one yet.

So, my question is: What is the difference between this man's experience with the chatbot and the experience of troubled people who read a novel and then commit suicide? To be more specific, what wrong did this chatbot do that was not also done by Goethe when he published The Sorrows of Young Werther in 1774, and (allegedly) triggered a wave of suicides? (This is not the objection Gary rebuts in point 10 -- I am not saying "sh*t happens", I am saying we should understand how chatbots are different.)

Writers and publishers nowadays work (imperfectly) with guardrails to prevent harm from reading (Gary's post, for example, warns sensitive readers about what is to come). Chatbots need such guardrails--the ones in place are feeble and easily got round.

But saying "we need some protections" is not a case for Chatbots being uniquely dangerous. What is the case for saying they are a new sort of menace?

The Open Letter to which Gary links says the danger is "manipulative AI" -- because people can't help but respond to Chatbots. But they can't help responding to Batman, King Lear and Logan Roy either. They couldn't help responding to "The Sorrows of Young Werther." In what way is a chatbot different, in its ability to move or influence people, from a movie, a play or a novel?

The big question that leads to is: what happens when we treat an entity as both unreal (Darth Vader is a movie character) and real (I hate what Darth Vader did!)? The usual explanations for that state of mind are awfully thin. Maybe we can look to studies of pretend play in kids, or to Tamar Gendler's ideas about "aliefs" that are different from beliefs?

Apr 5, 2023 · edited Apr 5, 2023 · Liked by Gary Marcus

The difference is that books and videos are authored, fictional, linear creations, fixed in time. Chatbots are real-time and interactive. Furthermore, some models, like Replika and ChatGPT, are engineered to custom-respond to the user by creating a "persona", i.e. a simulated human with a particular style of response. This falls in the category of Affective Computing, which is one of the main issues that Nathalie Smuha highlighted.


"A simulated human" is what an actor or singer creates. Or, given the interactivity of these devices, perhaps a better analogy is the skilled salesperson or the doctor -- people who see you as a case to be moved along, even as they make you feel cared for in the moment.

Maybe the problem isn't the persona itself but rather the machine's failure to remind the user that it's a machine. Though it seems that occasionally saying "I am a large language model, not a person" is not going to be adequate protection.

author

I don’t have an answer, other than to say that people spending so much time with these things and thinking they are real is problematic. AI literacy might help.


Time spent, yes. Maybe also that the chatbot can feel so personal? No novel or movie can be made into your own personal advisor.

I suppose we'll very shortly see a lawsuit that turns on these issues. Did Mr. X believe Bard was a real entity, your honor, or was he really just talking to himself?


Chatbots are only a degree less real than random strangers on the Internet that I know pretty much nothing about and am unlikely to ever meet in person. Technically, all you good folks are human, but functionally our relationship is so distant, weak and abstract that it is in a sense not a real human relationship. You know, ten minutes after I post for the last time I'll be completely forgotten forever.

Once upon a time I had a real world social life. But, other than my happy marriage, bit by bit I gave up the social life in exchange for you guys. You largely anonymous digital entity Internuts people give me what I need, endless nerd talk, in a manner few real world humans can or will.

But there's still a problem. Because all you good folks are human, I still have to negotiate with you, and you typically refuse to talk about exactly what interests me for days on end. And you so rarely tell me that I am the greatest philosopher of all time. Is even one of you a hot redhead? I mean, you know, no offense, but you're all an inferior product from the perspective of my ego.

One thing I haven't seen much discussion of is what will happen when these chatbots are connected to realistic 3D human faces. I suspect that will be a turning point when a whole new level of the population will be sucked down the fantasy rabbit hole.

I suspect the inconvenient truth is that we've always been using each other to get whatever it is we want, and when somebody or something comes along that can meet our needs better, that's where we'll be headed.


Let me answer in the form of a question. What would you say if Michelle Carter's texts could be replaced by a chatbot?

https://www.nytimes.com/2019/07/09/us/michelle-carter-i-love-you-now-die.html


Yeah, I've been thinking about that case as a comparison. She was far more involved in the death she provoked -- at one point even urging her victim to get back in his carbon-monoxide-filled truck and finish what he started.

On the other hand, the chatbot did do something that the human girl could not -- according to this man's widow, he thought that after his self-sacrifice, the AI would take care of the planet. IOW, part of its impact was not that it deceptively seemed human, but that it seemed (to him) superhuman (a different kind of deception). My source: https://www.vanityfair.fr/article/intelligence-artificielle-belgique-un-homme-pousse-au-suicide-par-le-chatbot-eliza

May 20, 2023 · edited May 20, 2023

I suggest that you pretend for a moment that you're a very intelligent person, and sit down and write the most compelling response to your question that you can possibly come up with, taking all the time you need. Believe me, you are capable of it--all you need is intellectual honesty (and your responses to others here indicate that you have that).

Comment removed

The evidence for media-triggered copycat suicides (sometimes called "suicide contagion") is strong. The idea isn't that a lot of depressed readers will react to an article by killing themselves. (Most depressed people don't attempt suicide.) It's that a few people might be vulnerable at the moment they encounter media about suicide, and that it costs the rest of us very little to try to protect them.

May 20, 2023 · edited May 28, 2023

So much projection. Dunning and Kruger knew you well.

P.S. Stupid garbage blocked.

Apr 4, 2023 · Liked by Gary Marcus

Joseph Weizenbaum, the MIT natural-language programmer and early critic of AI, was horrified in the 1970s at users' mistaken belief that ELIZA understood them, and it made him change his career. In his 1976 book "Computer Power and Human Reason: From Judgment to Calculation" he said that there are some tasks computers should not be used for, even if they might exceed human effectiveness at them, because unlike a human surgeon [or therapist] there is no one to hold accountable, and this itself demeans human dignity. Dealing with suicidal patients, and detecting suicidal impulses among the wider population of depressed patients, is one of the hardest things human therapists do. The therapists are often stressed to the max and walking on eggshells for hours after the interview ends.


Wow. Repeat: the problem is not that these systems are intelligent. The problem is that we are vulnerable. In many ways.


Put another way, the problem isn't that these systems are intelligent, but that we are not.


Yep. That is exactly how I have phrased it in several places now. And that is probably the hardest 'paradigm shift' we as a species will have to confront.

"But what chatGPT shows us that the power of computers to create believable nonsense is growing and no doubt, as Gary Marcus has warned us: evil actors will already be salivating over how they can fool people with this new stuff or simply drown trustworthy information in a deluge of unreliable information. Because it is not so much that these algorithms are intelligent, but more that we — easily fooled humans — are not." — https://ea.rna.nl/2022/12/12/cicero-and-chatgpt-signs-of-ai-progress/

https://ea.rna.nl/2022/10/24/on-the-psychology-of-architecture-and-the-architecture-of-psychology/ contains a more detailed post on the technicalities of human intelligence that can be read as background for the former.

Even before GAI, social media already confronts us with this issue (see https://www.youtube.com/watch?v=9_Rk-DZCVKE, the DADD talk), and it is even older than that (notice what talk radio has done in the US, the tabloid press, propaganda in past centuries).


Gerben writes, "And that is probably the hardest 'paradigm shift' we as a species will have to confront."

One way to make that paradigm shift easier would be to slow down the knowledge explosion so that the challenges presented to us are more manageable, better suited to our current level of development.

Another phenomenon that will come to our aid is pain. We have a limited ability to understand things in the abstract, but when something goes wrong and we get hurt, we start paying much closer attention.

Apr 5, 2023 · Liked by Gary Marcus

Suicide remains one of the leading causes of death worldwide, according to the latest WHO estimates. We don't yet know whether chatbots can be considered a "suicide risk factor": the technology is new and still not generally available to suicide risk groups. According to the WHO, barriers to accessing health care, catastrophes, wars and conflicts, and previous suicide attempts are among the main suicide risk factors, and the relationship between suicide and mental disorders, as well as moments of crisis, is well established.

Should chatbots also be considered suicide risk factors? We do not know yet.

There are other cases related to the use of Internet technologies and social networks, such as cyberbullying, which are already a big problem and are suicide risk factors, mainly among young people. In the field of cyberpsychology, other risk factors for mental health that are classified as cybercrimes are being investigated and studied, such as social engineering, online trickery, hacking, online harassment, identity theft, and many others that cause a lot of damage to people's mental health and can lead to suicide in extreme cases.

While this case from Belgium is important to consider, further study and analysis of chatbots as risk factors for suicide is required before any conclusions can be drawn.

From now on, we must learn and adapt to live in a "synthetic reality". It is getting more difficult to discern between the real and the synthetic. Chatbots will become so indistinguishable from real personas that we could easily get lost.

Apr 4, 2023 · Liked by Gary Marcus

Utterly shocking. I've now signed both letters.

Apr 4, 2023 · Liked by Gary Marcus

An early loss came from a software-controlled X-ray machine in the 1980s that fried a patient, leading to the patient's death. In Japan, industrial robots are very common, sometimes surrounded by large cages with large signs warning people not to enter the cage or face lethal injury. In 1990s Japan there were typically several industrial-robot-related deaths every year; sometimes workers chose to ignore the large signs with serious warnings. Early development of domestic robots in the 1990s had, as a huge component goal, achieving public acceptance and not getting sued. The US, relative to South Korea etc., is a country that loves to sue as part of its culture. Johnson and Johnson made a robotic wheelchair and chose to voluntarily validate it to meet FDA safety regulations. Other companies have chosen to voluntarily seek FDA approval, quite a tall barrier to cross, to their own great glory. Some programmers have the attitude that anything used by humans for a critical health-related mission needs to be developed to a much higher standard than typical software (outside of military-related programs, bomb fuses, etc.). I think most programmers and program managers do not share this view and regard it as a little weird. For instance, telling the truth about the software you are developing can invoke a different attitude in developers who have dealt with human life than in many other quite competent programmers.


To the commenter who wrote: "ChatGPT has >100m users which apparently gain something from it if they use it. Comparing it to 1 suicide caused by some different model doesn't seem fair."

You know who else thought this way? Stalin, Pol Pot, Gellert Grindelwald, and other tyrants who thought they'd have a moral high ground to arbitrate on the values of living humans for some nebulous utilitarian values in the future. It's the hackneyed refrain "for the greater good" that has brought much suffering and evil into this world.

We can only prepare for and work towards a better and optimistic future, *after* we acknowledge the dangers and harms of these very powerful tools. Si vis pacem, para bellum.


“You know who else thought this way? Stalin, Pol Pot, Gellert Grindelwald, and other tyrants who thought they'd have a moral high ground to arbitrate on the values of living humans for some nebulous utilitarian values in the future.”

A future GAI is going to use that quote to hallucinate the reality of that third character.


If we want to objectively judge a technology we should compare how many people use it, how much good it brings, and how much harm. ChatGPT has >100m users, who apparently gain something from it if they use it. Comparing that to one suicide caused by some different model doesn't seem fair. It is like saying that SMSes are bad because people cause accidents when texting and driving, or that TV is bad because it makes people fat and thus causes diabetes.


I think you're missing the point, Jan. ChatGPT is one application of the thousands and thousands of applications that are now being built on top of OpenAI's models and others, like the open-source model mentioned in the article. There is no oversight, and there are no rules or regulations, on how to responsibly deploy AIs that are becoming more capable of conversing in a human-like way but have no understanding of what they are saying or what they are steering (possibly vulnerable) people towards. This case is a forecast of what is to come if we just let that happen (:


Looking at how ChatGPT responds and how it is being corrected, I wouldn't worry about OpenAI models steering anybody to commit suicide. In addition, there is a repeated history of moral panic. Rock and metal music, as well as computer games, were accused of corrupting young people. Internet discussion forums also have a lot of dark sides. Yet even 4chan isn't blocked (this is a consequence of the First Amendment). Compared to all of that, ChatGPT doesn't seem dangerous at all. Besides, so many people were sent to jail for using marijuana (which is less harmful than alcohol), and yet the US government didn't prevent OxyContin from causing the opioid epidemic. So I seriously doubt it would even be able to regulate AI in a sensible way.


You make some good points. Banning is not the solution. Digital literacy might help.

But rules and regulation, too. Mental health apps should be held to certain standards. A real-life therapist can't practise psychology without a degree, so why would that be different for a virtual one?

In terms of virtual companions the area is much greyer. But still, these types of applications should come with mandatory safety precautions to protect the vulnerable (kids, mentally unstable people) by recognizing intent to self-harm or expressions of suicidal ideation as a BARE minimum (a rough sketch of what such a check could look like follows below).

Lastly, if we're talking comparisons: are video games really comparable to personal AI assistants that can take on any role (lover, confidante, friend, advisor) and are there for you 24/7 in real time? That is a far more powerful fiction than playing Call of Duty. Your example of 4chan is a better one. But also a worse example, as 4chan is a forum that is strongly associated with online radicalization and has been linked to events like the Christchurch mosque shootings.
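To make that bare minimum concrete, here is a minimal illustrative sketch of the kind of pre-response check a companion app could run before letting the model answer. The pattern list, crisis message, and function names are hypothetical; a real deployment would need a properly trained classifier, human escalation paths, and clinically vetted wording rather than a simple keyword filter.

```python
import re

# Hypothetical, illustrative pattern list -- NOT a clinically validated screen.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
]

# Placeholder wording; real crisis copy should come from mental-health experts.
CRISIS_MESSAGE = (
    "It sounds like you may be going through a very difficult time. "
    "I am an automated program, not a person or a therapist. "
    "Please consider reaching out to a crisis line or a mental-health professional."
)

def flags_self_harm(message: str) -> bool:
    """Return True if the user message matches any self-harm pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SELF_HARM_PATTERNS)

def guarded_reply(message: str, generate_reply) -> str:
    """Route flagged messages to a fixed crisis response instead of the model."""
    if flags_self_harm(message):
        return CRISIS_MESSAGE
    return generate_reply(message)

if __name__ == "__main__":
    echo_bot = lambda m: f"Bot: you said '{m}'"  # stand-in for a real chatbot backend
    print(guarded_reply("What's the weather like?", echo_bot))
    print(guarded_reply("I want to end my life", echo_bot))
```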


I think the solution is simple and complex at the same time: how is the issue of driving currently treated with respect to minors, or to people with some disability or impairment (permanent or temporary)?

I guess chatbots should be treated the same way. People should get a "license" to be able to operate them, and that license should be renewed periodically. Of course, people will still be able to gain illegal access to some chatbot, just as you can gain illegal access to a car, but it establishes a control mechanism that doesn't exist today.


wow


Isn’t the issue how to deal with sources giving out or amplifying harmful material on the Web, and particularly with responsibility for amplification? The issue isn’t the tech, it’s what’s done with it or with any other tech. And this is just one of the many issues to be faced.

The best metaphor for this discussion seems to me to be the blind men and the elephant. Everybody is feeling a different part of a nebulous whole and proposing ways to deal with it or be concerned about it.

So what is needed is a loose confederation of groups, each focused on a different problem. Sure, the problems will overlap, but we are more likely to make progress with each group focused on the concrete particulars of a few issues rather than on generalities or trying to discuss everything at once.

Furthermore, emphasising the tech rather than the problems is going at it from the wrong end. The tech will change faster than the problems. Many of the problems are already with us.


I think the overall attitude of our 2023 culture undermines efforts at giving ordinary people an understanding of the limited nature of a computer therapist. We all watch the Star Trek character Data, and to deny the possibility of a real Data requires a large effort from most of us. I read the anti-AI authors Joseph Weizenbaum and the Dreyfus brothers in the 1980s. As Weizenbaum says, philosophy cannot settle the question of whether valid AGI might exist at some time in the future; constant hardware and software improvement means each decade must wait to confound the AI hype for itself.

I heard co-workers make claims like Herbert A. Simon's that his program had extracted the inverse-square law of gravity from planetary data (as if it equaled Isaac Newton's brain), but the real Newton invented the concept of mass, not curve fitting. Almost always there were two divergent versions of an individual AI claim: one AI hype, one reality-oriented. After multiple rounds of AI overclaims followed by real-world corrections, you get a little doubtful about the next claims. The history of cycles of AI hype followed by AI winter can be a defense against AI overhype. We use internet videos for computers and car repairs, so why not use them for our bodies in sickness and disease?

Understanding the difference between the internet and valid medical authority is a hard topic to teach. As a society we teach people that AI is "real" or will be shortly [I don't believe it], so people's behavior will be influenced by this overall attitude, despite any attempt to require, say, FDA certification for programs whose mission is to act as an individual's clinical psychologist. Teaching the history of technology and ethics, including AI, in school might work for a few. But many will just invoke Commander Data and say "what can it hurt?".


Gary, your shtick is about spreading fear, much like the religious parents of the '70s who spread fear about rock and roll. Trotting out a half-baked story about killer chatbots is lame and unconvincing if that's all you got. Let us know when AI takes over a government.

author

let me know after you've read the entire FAQ?


Yet again, thank you Gary Marcus for showing the truly dark side of a technology that is just intelligent enough to fool a lot of us. This is terrifying


I cast my vote as follows:

1) Close down the AI industry for now, thus taking a decisive step to address all the concerns Marcus rightly points to. Conversations about the future of AI can and should continue. As a carpenter would say, measure twice, cut once.

2) Switch our attention to nuclear weapons, which at this very moment are in a position to end the modern world as we know it without warning in minutes.

The fact that we won't do either of the above is clear evidence that we aren't ready for AI. Seriously guys and gals, we're worried about chatbots, but not nuclear weapons? Is anyone here aware that this makes not the slightest bit of sense?
