Don’t Go Breaking My Heart
Chatbots don’t have feelings, but people do. We need to start thinking about the consequences.
Just a few days ago I reminded regular readers of my grim prediction for 2023, published in December at Wired: the bigger large language models are, the more likely someone’s gonna get hurt.
At the time the essay felt speculative, but plausible. The first paragraph read as follows:
That was then.
Perhaps as a side effect of the Bing/Sydney fiasco, one of the leading chatbots just changed course radically midstream, not for a single user, but for all users. (In this instance, a popular feature for erotic role play was removed.) To someone who doesn’t use the system, that may not seem like a big deal, but some users get quite attached. Sex and love, even simulated, are powerful urges; some people are apparently in genuine emotional pain as a result of the change.
Vice reports:
Replika is a tool for many people who use it to support their mental health, and many people value it as an outlet for romantic intimacy. The private, judgment-free conversations are a way for many users to experiment with connection, and overcome depression, anxiety, and PTSD that affect them outside of the app.
For some people, maybe the only thing worse than a deranged, gaslighting chatbot is a fickle chatbot that abandons them.
§
As the child of a psychotherapist, and having followed clinical psychology for three decades, I know how vulnerable some people can be. I am genuinely concerned. This is a moment we should learn from. Hopefully nothing bad happens this time, but we need to reflect on what kind of society we are building.
What we are seeing is a disconcerting combination of facts:
More and more people are using chatbots.
Few people understand how they work; many people anthropomorphize these chatbots, attributing real intelligence and emotion to them. Kevin Roose writes about AI for a living and was genuinely concerned about what Sydney was saying. Naive users may take these bots even more seriously.
Larger language models seem more and more human-like (but the emotions they present are no more real). Whatever we see now is likely to escalate.
Some people are building real attachments to those bots.
In some cases, those building the bots actively cultivate those attachments, e.g., by feigning romantic and/or sexual interest or by dotting their messages with “friendly” emoticons.
Changes in those bots could leave many people in a vulnerable place.
There is essentially zero regulation on what these chatbots can say or do or how they can change over time, or on how they might treat their users.
Taking on a user in a chatbot like Replika is a long-term commitment. But no known technology can reliably align a chatbot in a persistent way to a human’s emotional needs.
To my knowledge, tech companies are free to leverage human gullibility around chatbot technologies however they like, without consequence, just as big tech companies previously leveraged the human need for attention to the point of creating addictions to social media, sometimes even causing “Twitter poisoning”; with the new generation of chatbots, we will see addictions no less potent.
All this is one more thing for Congress to take note of, as we start to consider policy in our Strange New World.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast on AI and the human mind, this Spring.
I’m not a fan of slippery slope arguments, but this seems to be a continuation of the path we’ve been on with the impact that social media has had on our collective psyches. Recent studies on the state of teen mental health are relevant here. Chatbots take this to the next level with the speed and volume of content they can generate. Rather than peers and anonymous users, we are potentially automating the risk of having our amygdalas hijacked and our self-worth detrimentally impacted.
The underlying problem is that we're changing our environment faster than we can adapt. This phenomenon is far more than just an AI issue; it's pretty much the theme of the modern world.
One way to look at this is to compare two data streams: knowledge and wisdom. Knowledge (and thus power) can be developed far faster than the wisdom needed to serve as a governing mechanism. And so the gap between power and wisdom is rapidly widening.
https://www.tannytalk.com/p/knowledge-knowledge-and-wisdom
As a species we are ever more like a group of teenage boys who have just gotten their hands on the keys to the car, a case of booze, and a loaded handgun. Our teenage-minded culture has just hopped into the car called AI, slammed down the accelerator, and is yelling to its pals, "LET'S SEE HOW FAST THIS BABY WILL GO!!!! WOO HOO!"
Can you guess what happens next?
The source of this madness is that we're trying to run the 21st century on an outdated 19th-century philosophy whose premise is that more knowledge is always better. Technically we're racing forward, while philosophically we're at least a century behind the curve.
A "more is better" relationship with knowledge made perfect sense in the long era of knowledge scarcity. But we no longer live in that old scarcity era, but in a new very different era characterized by knowledge exploding in every direction at an accelerating rate. So, the environment we inhabit is changing rapidly, while we cling to the old ways of thinking, and refuse to adapt. Nature has a solution for a failure to adapt to a changing environment. It's called extinction.
https://www.tannytalk.com/p/our-relationship-with-knowledge
Given that this is at heart a philosophical problem, I've spent years now trying to engage philosophy professionals on this topic. They couldn't be less interested.
The "more is better" relationship with knowledge is a "one true way" holy dogma of the science community, so don't expect help from them. Been there, done that, a dead end.
And of course corporations are interested only in profits.
So, yes, of course. Chatbots are going to create a new emotional landscape that many people will not be able to adapt to, a tiny fragment of a much larger picture.