Don’t Go Breaking My Heart
Chatbots don’t have feelings, but people do. We need to start thinking about the consequences.
Just a few days ago I reminded regular readers of my grim prediction for 2023, published in December at Wired: the bigger large language models are, the more likely someone’s gonna get hurt.
At the time the essay felt speculative, but plausible. The first paragraph read as follows:
That was then.
Perhaps as a side effect of the Bing/Sydney fiasco, one of the leading chatbots just changed course radically midstream, not for a single user, but for all users. (In particular, in this instance a popular feature for erotic role play was removed.) To someone who doesn’t use the system, that may not seem like a big deal, but some users get quite attached. Sex and love, even simulated, are powerful urges; some people are apparently in genuine emotional pain as a result of the change.
Vice reports:
Replika is a tool for many people who use it to support their mental health, and many people value it as an outlet for romantic intimacy. The private, judgment-free conversations are a way for many users to experiment with connection, and overcome depression, anxiety, and PTSD that affect them outside of the app.
For some people, maybe the only thing worse than a deranged, gaslighting chatbot is a fickle chatbot that abandons them.
§
As the child of a psychotherapist who has followed clinical psychology for three decades, I know how vulnerable some people can be. I am genuinely concerned. This is a moment we should learn from. Hopefully nothing bad happens this time; but we need to reflect on what kind of society we are building.
What we are seeing is a disconcerting combination of facts:
More and more people are using chatbots.
Few people understand how they work; many people anthropomorphize these chatbots, attributing to them real intelligence and emotion. Kevin Roose writes about AI for a living and was genuinely concerned about what Sydney was saying. Naive users may take these bots even more seriously.
Larger language models seem more and more human-like (but the emotions that they present are no more real). Whatever we see now is likely to escalate.
Some people are building real attachments to these bots.
In some cases, the companies building these bots actively cultivate those attachments, e.g., by feigning romantic and/or sexual interest or by dotting their messages with “friendly” emoticons.
Changes in those bots could leave many people in a vulnerable place.
There is essentially zero regulation on what these chatbots can say or do or how they can change over time, or on how they might treat their users.
Taking on a user in a chatbot like Replika is a long-term commitment. But no known technology can reliably align a chatbot, in a persistent way, to a human’s emotional needs.
To my knowledge, tech companies are free to leverage human gullibility around chatbot technologies however they like, without consequence, just as big tech companies previously leveraged the human need for attention to the point of creating addictions to social media, sometimes even to the point of causing “Twitter poisoning”; with the new generation of chatbots, we will see addictions no less potent.
All this is one more thing for Congress to take note of, as we start to consider policy in our Strange New World.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast on AI and the human mind, this Spring.

I’m not a fan of slippery slope arguments, but this seems to be a continuation of the path we’ve been on with the impact that social media has had on our collective psyches. Recent studies on the state of teen mental health are relevant here. Chatbots take this to the next level with the speed and amount of content they can generate. Rather than peers and anonymous users, we are potentially automating the risk of having our amygdalas hijacked and our self-worth detrimentally impacted.
It's called the ELIZA effect: "the tendency to unconsciously assume computer behaviors are analogous to human behaviors; that is, anthropomorphisation." It has its very own Wikipedia page, quoted here because I'm lazy; you'll find references at that page. The tl;dr version: because it is an example of anthropomorphisation, the ELIZA effect is innate human behavior.
Offsetting the ELIZA effect is the Uncanny Valley, which occurs when an object approaches ever closer to human behavior without actually behaving as a human; eventually a person starts to experience unease and revulsion toward the object. Again, there is a Wikipedia page with references.
So if we can't avoid the ELIZA effect, is the answer to move chatbots ever closer to human behavior, always falling short, until the whole endeavor collapses? Essentially, that is what happened with Siri and Alexa.