Discussion about this post

David Berreby:

Before I ask my question, I want to make clear that I agree with this piece. What follows is NOT an apologia for helter-skelter unregulated commercial unleashing of this tech on society. The dangers are as Gary describes them.

However, as we wrestle with this problem it's important to have a theory of causality or influence that makes sense. I am not sure we have one yet.

So, my question is: What is the difference between this man's experience with the chatbot and the experience of troubled people who read a novel and then commit suicide? To be more specific, what wrong did this chatbot do that was not also done by Goethe when he published The Sorrows of Young Werther in 1774, and (allegedly) triggered a wave of suicides? (This is not the objection Gary rebuts in point 10 -- I am not saying "sh*t happens", I am saying we should understand how chatbots are different.)

Writers and publishers nowadays work (imperfectly) with guardrails to prevent harm from reading (Gary's post, for example, warns sensitive readers about what is to come). Chatbots need such guardrails--the ones in place are feeble and easily circumvented.

But saying "we need some protections" is not a case for chatbots being uniquely dangerous. What is the case for saying they are a new sort of menace?

The Open Letter to which Gary links says the danger is "manipulative AI" -- because people can't help but respond to chatbots. But they can't help responding to Batman, King Lear and Logan Roy either. They couldn't help responding to "The Sorrows of Young Werther." In what way is a chatbot different, in its ability to move or influence people, from a movie, a play or a novel?

The big question this leads to is: what happens when we treat an entity as both unreal (Darth Vader is a movie character) and real (I hate what Darth Vader did!)? The usual explanations for that state of mind are awfully thin. Maybe we can look to studies of pretend play in kids, or to Tamar Gendler's ideas about "aliefs" that are different from beliefs?

Robert W Murphree:

Joseph Weizenbaum, the MIT natural-language programmer and early critic of AI, was so horrified in the 1970s by users' mistaken belief that ELIZA understood them that he changed his career. In his 1976 book "Computer Power and Human Reason: From Judgment to Calculation" he argued that there are some tasks at which a computer might exceed human effectiveness but for which it should not be used, because unlike a human surgeon [or therapist] there is no one to hold accountable, and this itself demeans human dignity. Dealing with suicidal patients, and detecting suicidal impulses among depressed patients more broadly, is one of the hardest things human therapists do. After such an interview, therapists are often stressed to the max and walking on eggshells for hours.

