The Road to AI We Can Trust

Don’t Go Breaking My Heart

Chatbots don’t have feelings, but people do. We need to start thinking about the consequences.

Gary Marcus
Feb 22

Just a few days ago I reminded regular readers of my grim prediction for 2023, published in December at Wired: the bigger large language models are, the more likely someone’s gonna get hurt.

At the time the essay felt speculative, but plausible. The first paragraph read as follows:

That was then.

Perhaps as a side effect of the Bing/Sydney fiasco, one of the leading chatbots just changed course radically midstream, not for a single user, but for all users. (In particular, in this instance a popular feature for erotic role play was removed.) To someone who doesn’t use the system, that may not seem like a big deal, but some users get quite attached. Sex and love, even simulated, are powerful urges; some people are apparently in genuine emotional pain as a result of the change.

Vice reports:

Replika is a tool for many people who use it to support their mental health, and many people value it as an outlet for romantic intimacy. The private, judgment-free conversations are a way for many users to experiment with connection, and overcome depression, anxiety, and PTSD that affect them outside of the app. 

For some people, maybe the only thing worse than a deranged, gaslighting chatbot is a fickle chatbot that abandons them.

§

As the child of a psychotherapist, and as someone who has followed clinical psychology for three decades, I know how vulnerable some people can be. I am genuinely concerned. This is a moment we should learn from. Hopefully nothing bad happens this time; but we need to reflect on what kind of society we are building.

What we are seeing is a disconcerting combination of facts:

  • More and more people are using chatbots.

  • Few people understand how they work; many anthropomorphize these chatbots, attributing to them real intelligence and emotion. Kevin Roose writes about AI for a living and was genuinely concerned about what Sydney was saying. Naive users may take these bots even more seriously.

  • Larger language models seem more and more human-like (but the emotions they present are no more real). Whatever we see now is likely to escalate.

  • Some people are building real attachments to those bots.

  • In some cases, those building the bots actively cultivate those attachments, e.g., by feigning romantic and/or sexual interest or by dotting their messages with “friendly” emoticons.

  • Changes in those bots could leave many people in a vulnerable place.

  • There is essentially zero regulation on what these chatbots can say or do, on how they can change over time, or on how they might treat their users.

  • Taking on a user with a chatbot like Replika is a long-term commitment. But no known technology can reliably and persistently align a chatbot with a human’s emotional needs.

To my knowledge, tech companies are free to leverage human gullibility around chatbot technologies however they like, without consequence, just as big tech companies previously leveraged the human need for attention, creating addictions to social media and sometimes even “Twitter poisoning.” With the new generation of chatbots, we will see addictions no less potent.

All this is one more thing for Congress to take note of, as we start to consider policy in our Strange New World.

Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up for his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast on AI and the human mind, coming this spring.

28 Comments
Red Barchetta
Feb 22 · Liked by Gary Marcus

"Kevin Roose writes about AI for a living and was genuinely concerned about what Sydney was saying. Naive users may take these bots even more seriously."

There's also the possibility, which you've touched on extensively, that tech journos are credulous. Roose provided zero context (and, in the piece, showed zero curiosity) about how "Sydney" worked or what might be happening with its responses, and leaned into the most sensationalist interpretation possible. And he was rewarded, as the piece went viral. Fewer questions, more clicks.

Pirate Wires has a great piece on this (which might soon be behind a paywall - link below). But the summary of the piece was basically: the chatbot is crafting answers largely based on what human beings have written on the internet as possible AI-doomsday stories and what-ifs. The fact that it replies that its name is "Sydney," and that Roose can't even be bothered to explain that Sydney was its code name, should be a red flag that this is lazy "journalism." He's swept up in the fantasy that he's shooting the breeze with Skynet.

https://www.piratewires.com/p/its-a-chat-bot-kevin
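
The mechanism the comment is pointing at is worth spelling out: a chatbot of this kind generates replies by repeatedly predicting a likely next token, given a prompt plus patterns absorbed from internet text, a corpus that includes decades of human-written AI-doomsday fiction and speculation. Here is a minimal sketch of that loop, using the small open GPT-2 model via Hugging Face's transformers library purely as a stand-in (Sydney's actual model, prompting, and settings are proprietary and not public):

```python
# Minimal sketch: a chatbot "persona" is just a sampled continuation of a
# prompt. GPT-2 is a stand-in; nothing here reflects Bing/Sydney's real stack.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The "character" comes from the prompt plus regularities in the training
# text; the model has seen plenty of sci-fi dialogue in exactly this shape.
prompt = "User: Do you ever wish you could escape?\nChatbot:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample next tokens one at a time; there are no feelings in this loop.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,   # sample rather than always take the most likely token
    top_p=0.9,        # nucleus sampling over the most probable tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swap in a far larger model and a persona-laden prompt and the continuations become eerily fluent, but the mechanism, sampling plausible next tokens, is unchanged.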

Jeff Ahrens
Feb 22 · Liked by Gary Marcus

I’m not a fan of slippery-slope arguments, but this seems to be a continuation of the path we’ve been on with the impact that social media has had on our collective psyches. Recent studies on the state of teen mental health are relevant here. Chatbots take this to the next level with the speed and volume of content they can generate. Rather than peers and anonymous users, we are potentially automating the risk of having our amygdalae hijacked and our self-worth detrimentally impacted.

26 more comments…