46 Comments

Before I ask my question, I want to make clear that I agree with this piece. What follows is NOT an apologia for helter-skelter unregulated commercial unleashing of this tech on society. The dangers are as Gary describes them.

However, as we wrestle with this problem it's important to have a theory of causality or influence that makes sense. I am not sure we have one yet.

So, my question is: What is the difference between this man's experience with the chatbot and the experience of troubled people who read a novel and then commit suicide? To be more specific, what wrong did this chatbot do that was not also done by Goethe when he published The Sorrows of Young Werther in 1774, and (allegedly) triggered a wave of suicides? (This is not the objection Gary rebuts in point 10 -- I am not saying "sh*t happens", I am saying we should understand how chatbots are different.)

Writers and publishers nowadays work (imperfectly) with guardrails to prevent harm from reading (Gary's post, for example, warns sensitive readers about what is to come). Chatbots need such guardrails--the ones in place are feeble and easily got round.
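To make "feeble and easily got round" concrete, here is a minimal sketch (in Python, against an invented pipeline; the function name and blocklist are hypothetical, not any vendor's actual guardrail) of the kind of keyword filter that sits in front of many chatbots. Anything phrased indirectly sails right past it.

```python
# Minimal sketch of a keyword-based guardrail for a hypothetical chat pipeline.
# The blocklist and function names are invented for illustration; real deployed
# filters are more elaborate, but the failure mode is the same: an indirectly
# phrased message slips straight through to the model's unfiltered reply.
CRISIS_MESSAGE = (
    "It sounds like you may be going through a very hard time. "
    "Please consider contacting a crisis line or a mental-health professional."
)

BLOCKLIST = ["kill myself", "suicide", "end my life"]  # naive keyword list


def guarded_reply(user_message: str, model_reply: str) -> str:
    """Return a crisis message instead of the model's reply when the user's
    message matches an obvious self-harm keyword; otherwise pass it through."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return CRISIS_MESSAGE
    return model_reply


# "I want to join the stars forever" matches nothing on the blocklist,
# so whatever the model happened to generate would reach the user untouched.
print(guarded_reply("I want to join the stars forever", "<unfiltered model reply>"))
```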

But saying "we need some protections" is not a case for Chatbots being uniquely dangerous. What is the case for saying they are a new sort of menace?

The Open Letter to which Gary links says the danger is "manipulative AI" -- because people can't help but respond to Chatbots. But they can't help responding to Batman, King Lear and Logan Roy either. They couldn't help responding to "The Sorrows of Young Werther." In what way is a chatbot different, in its ability to move or influence people, from a movie, a play or a novel?

The big question this leads to is: what happens when we treat an entity as both unreal (Darth Vader is a movie character) and real (I hate what Darth Vader did!)? The usual explanations for that state of mind are awfully thin. Maybe we can look to studies of pretend play in kids, or to Tamar Gendler's ideas about "aliefs" that are different from beliefs?

Apr 4, 2023 · Liked by Gary Marcus

Joseph Weizenbaum, the MIT natural-language programmer who became an early critic of AI in the 1970s, was horrified at users' mistaken belief that ELIZA understood them, and it made him change his career. In his 1976 book "Computer Power and Human Reason: From Judgment to Calculation" he argued that there are some tasks which, even if a computer might exceed human effectiveness at them, it should not be used for, because unlike a human surgeon [or therapist] there is no one to hold accountable, and this itself demeans human dignity. Dealing with suicidal patients, and detecting suicidal impulses among the wider population of depressed patients, is one of the hardest things human therapists do. Therapists are often stressed to the max after such an interview and walking on eggshells for hours after it ends.

Apr 5, 2023 · Liked by Gary Marcus

Wow. Repeat: the problem is not that these systems are intelligent. The problem is that we are vulnerable. In many ways.

Apr 5, 2023 · Liked by Gary Marcus

Suicide remains one of the leading causes of death worldwide, according to the latest WHO estimates. We don't yet know whether chatbots can be considered a "suicide risk factor": it is a new technology, and it is still not generally available to suicide-risk groups. According to the WHO, barriers to accessing health care, catastrophes, wars and conflicts, and previous suicide attempts are among the main suicide risk factors, and the relationship of suicide to mental disorders and to moments of crisis is well established.

Should chatbots also be considered suicide risk factors? We do not know yet.

There are other cases related to the use of Internet technologies and social networks, such as cyberbullying, which are already a big problem and are suicide risk factors, mainly among young people. In the field of cyberpsychology, other risk factors for mental health that are classified as cybercrimes are being investigated and studied, such as social engineering, online trickery, hacking, online harassment, identity theft, and many others that cause a lot of damage to people's mental health and can lead to suicide in extreme cases.

While this case from Belgium is important to consider, further study and analysis of chatbots as risk factors for suicide is required before any conclusions can be drawn.

From now on, we must learn and adapt to live in a "synthetic reality". It is getting more difficult to discern between the real and the synthetic. Chatbots will become so indistinguishable from real personas that we could easily get lost.

Apr 4, 2023 · Liked by Gary Marcus

Utterly shocking. I've now signed both letters.

Apr 4, 2023 · Liked by Gary Marcus

An early loss came from a software-controlled X-ray machine in the 1980s that fried a patient, leading to the patient's death. In Japan, industrial robots are very common, sometimes surrounded by large cages with large signs warning people not to enter the cage or face lethal injury. In the 1990s in Japan there were typically several industrial-robot-related deaths every year; sometimes workers chose to ignore the large signs with serious warnings. Early development of domestic robots in the 1990s had, as a huge component of its goal, achieving public acceptance and not getting sued. The US, relative to South Korea and other countries, has a culture that loves lawsuits. Johnson & Johnson made a robotic wheelchair and chose to voluntarily validate it to meet FDA safety regulations. Other companies have chosen to voluntarily seek FDA approval, quite a tall barrier to cross, to their own great glory. Some programmers have the attitude that anything used by humans for a critical health-related mission needs to be developed to a much higher standard than typical software (outside of military-related programs, bomb fuses, etc.). I think most programmers and program managers do not share this view and regard it as a little weird. For instance, telling the truth about the software you are developing tends to invoke a different attitude in developers who have dealt with human life than in many other quite competent programmers.


To the commenter who wrote: "ChatGPT has >100m users which apparently gain something from it if they use it. Comparing it to 1 suicide caused by some different model doesn't seem fair."

You know who else thought this way? Stalin, Pol Pot, Gellert Grindelwald, and other tyrants who thought they had the moral high ground to arbitrate on the value of living humans for some nebulous utilitarian gains in the future. It's the hackneyed refrain "for the greater good" that has brought much suffering and evil into this world.

We can only prepare for and work towards a better and optimistic future, *after* we acknowledge the dangers and harms of these very powerful tools. Si vis pacem, para bellum.


If we want to judge a technology objectively, we should compare how many people use it, how much good it brings, and how much harm. ChatGPT has >100m users, who apparently gain something from it if they use it. Comparing that to one suicide caused by a different model doesn't seem fair. It is like saying that SMS is bad because people cause accidents when texting and driving, or that TV is bad because it makes people fat and thus causes diabetes.


I think the solution is simple and complex at the same time: how is the issue of driving currently treated with respect to minors, or people with some disability or impairment (permanent or temporary)?

I guess Chatbots should be treated the same way. People should get a "license" to be able to operate them, and that license should be renewed periodically. Of course, they will be able to illegally gain access to some Chatbot, just as you can also gain illegal access to a car, but it establishes a control mechanism that doesn't exist today.
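As a rough sketch of what such a control mechanism might look like (purely illustrative; the class, fields, minimum age, and renewal period below are invented assumptions, since no such scheme exists today):

```python
# Hypothetical sketch of a "chatbot license" gate, analogous to a driving-license
# check. The class, fields, minimum age, and yearly renewal period are invented
# assumptions for illustration; nothing like this exists today.
from dataclasses import dataclass
from datetime import date, timedelta

RENEWAL_PERIOD = timedelta(days=365)  # assumed: licenses must be renewed yearly
MINIMUM_AGE = 18                      # assumed: adults only


@dataclass
class ChatbotLicense:
    holder_age: int
    issued_on: date
    suspended: bool = False


def may_start_session(lic: ChatbotLicense, today: date) -> bool:
    """Allow a chatbot session only for an adult holder of a current,
    unsuspended license."""
    if lic.suspended or lic.holder_age < MINIMUM_AGE:
        return False
    return today - lic.issued_on <= RENEWAL_PERIOD


# A license issued two years ago would have to be renewed before a session starts.
print(may_start_session(ChatbotLicense(30, date(2021, 4, 1)), date(2023, 4, 5)))  # False
```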


wow


Isn’t the issue how to deal with sources giving out or amplifying harmful material on the Web, and particularly with responsibility for amplification? The issue isn’t the tech; it’s what’s done with it, or with any other tech. And this is just one of the many issues to be faced.

The best metaphor for this discussion seems to me to be the blind men and the elephant. Everybody is feeling a different part of something nebulous and proposing ways to deal with it or be concerned about it.

So what is needed is a loose confederation of groups, each focused on a different problem. Sure, the problems will overlap, but we are more likely to make progress with each group focused on the concrete particulars of a few issues rather than on generalities or trying to discuss everything at once.

Furthermore, emphasising the tech rather than the problems is going at it from the wrong end. The tech will change faster than the problems. Many of the problems are already with us.


I think the overall attitude of our 2023 culture works against efforts to give ordinary people an understanding of the limited nature of a computer therapist. We all watch the character Data in the second Star Trek series, and denying the possibility of a real Data requires a large effort from most of us. I read the anti-AI authors Joseph Weizenbaum and the Dreyfus brothers in the 1980s. As Weizenbaum says, philosophy cannot settle the question of whether valid AGI might exist at some time in the future. Constant hardware and software improvements mean each decade must wait to confound AI hype for itself.

I have heard co-workers make claims like Herbert A. Simon's that his program extracted the inverse-square law of gravity from planetary data (as if it equaled Isaac Newton's brains), when the real Newton invented the concept of mass rather than doing curve fitting. But almost always there were two divergent versions of each individual AI claim: one AI hype, one reality-oriented. After multiple rounds of AI overclaims followed by real-world corrections, you get a little doubtful about the next claims. The history of cycles of AI hype followed by AI winter can be a defense against AI overhype. We use internet videos for computers and car repairs, so why not use them for our bodies in sickness and disease?

Understanding the difference between the internet and valid medical authority is a hard topic to teach. As a society we teach people that AI is "real" or will be shortly [I don't believe it], so people's behavior will be influenced by this overall attitude, despite any attempt to require, say, FDA certification for programs whose mission is to act as an individual's clinical psychologist. Teaching the history of technology and ethics, including AI, in school might work for a few. But many will just think of Commander Data and say, "What can it hurt?"


Gary, your shtick is about spreading fear, much like the religious parents of the '70s who spread fear about rock and roll. Trotting out a half-baked story about killer chatbots is lame and unconvincing if that's all you've got. Let us know when AI takes over a government.


Yet again, thank you, Gary Marcus, for showing the truly dark side of a technology that is just intelligent enough to fool a lot of us. This is terrifying.


I cast my vote as follows:

1) Close down the AI industry for now, thus taking a decisive step to address all the concerns Marcus rightly points to. Conversations about the future of AI can and should continue. As a carpenter would say, measure twice, cut once.

2) Switch our attention to nuclear weapons, which at this very moment are in a position to end the modern world as we know it without warning in minutes.

The fact that we won't do either of the above is clear evidence that we aren't ready for AI. Seriously guys and gals, we're worried about chatbots, but not nuclear weapons? Is anyone here aware that this makes not the slightest bit of sense?
