20 Comments
Jeff Ahrens

I’m not a fan of slippery slope arguments, but this seems to be a continuation of the path we’ve been on with the impact that social media has had on our collective psyches. Recent studies on the state of teen mental health are relevant here. Chatbots take this to the next level with the speed and volume of content they can generate. Rather than peers and anonymous users, we are potentially automating the risk of having our amygdalas hijacked and our self-worth detrimentally impacted.

A Thornton

It's called the ELIZA effect: "the tendency to unconsciously assume computer behaviors are analogous to human behaviors; that is, anthropomorphisation." It has its very own Wikipedia page, quoted here because I'm lazy; you'll find references at that page. The tl;dr version: because it is an example of anthropomorphisation, the ELIZA effect is innate human behavior.

Offsetting the ELIZA effect is the uncanny valley. This occurs when an object comes ever closer to human behavior without actually behaving as a human. Eventually a person starts to experience unease and revulsion toward the object. Again, there is a Wikipedia page with references.

So if we can't avoid the ELIZA effect, is the answer to move chatbots ever closer to human behavior, ever failing to achieve it, until the whole endeavor collapses? Essentially, that is what happened with Siri and Alexa.

Spherical Phil

Gary, you are right on target with one of the greatest unacknowledged dangers of chatbots. We just completed an 11-month project, with funding support from the Well Being Trust, to contextualize the factors impacting well-being in America. 90% of Americans believe America has a mental health crisis. The shortage of mental health workers is projected to range from a low of 250,000 to over 4 million, and the educational pipeline is, and has been for decades, unable to respond to the need. Tech is a vital tool to address this real crisis, RESPONSIBLE TECH that is. But the tech world has been, and continues to be, failing to develop responsible tech that actually has a positive impact (that is a very long story in itself). Indeed, social media has resulted in real human deaths because the developers had no clue what was unquestionably going to happen with the tech they developed; they just launched. Remember the rallying cry, “move fast and break things”? But ‘state-of-the-art’ ChatGPT embedded into any app supposedly meant to assist with mental health and well-being will lead to profound suffering and real deaths. It would be simple to call for banning its use in mental health apps, but that is not only impossible; the reality is that there is a desperate need for tech, including chatbots, to help. Still, the current state-of-the-art chatbots, and the current approach used by the tech world at large, will result in deaths, as you have already pointed out. But Congress? I wish.

Comment deleted (Feb 22, 2023)

Spherical Phil

Indeed, there are major geographic and economic factors impacting access to mental health care workers. And current mental health apps (there are tens of thousands) are, as a group, neither well designed nor effective. Research by Harvard Health last year covering almost 50,000 patients "did not find convincing evidence that any mobile app intervention greatly improved outcomes..." In another report, of the 25 highest-rated apps for anxiety, "exactly zero contained any content consistent with evidence-based treatments." Now add ChatGPT, which can sound very smart and convincing in its stupidity, has no meaningful guardrails, no mental health training, and no guidance on how to engage a person with a mental health issue, and which is wired in quickly by a tech person racing to market, and Gary’s prediction of a death in 2023 from a chatbot is almost certain, though as he said, causality will most likely remain unprovable.

jk

We knew this would happen. It goes back to Ray Bradbury's fiction about the electric grandmother, and to the altogether real story of funerals for military robots. https://www.nbcnews.com/technolog/soldiers-3-robots-military-bots-get-awards-nicknames-funerals-4b11215746

Brad & Butter

Counter-thesis: the "dating market" (a manosphere conceptualization) is by its very nature heartbreaking for the majority of lonely men. AI is merely completing its linguistic parity with "adult human females". Tinder-esque chatbots are no less detached from reality than "racial profiling AI" (regarding SES inference).

One cannot simultaneously demand that AI appease men while also demanding that it be an accurate replication of feminine personality. Fanatical ideals have no backing in reality. To say this is a harmful break from reality denies the harm of reality in and of itself.

Michelle R

Great summary of very important concerns related to these current applications and deployments of LLMs and ML. Even more concerning to me is how generative AI models are being applied to audio and video.

You are likely already familiar with the Center for Humane Technology (https://www.humanetech.com/), which has been covering issues with social media (it produced the film "The Social Dilemma") and is now covering the social impact of ML ("AI") applications, including chatbots. If not, I recommend looking into some of the work they are doing. Joining forces in raising awareness of these issues and getting the word out is critical.

Really appreciate the writing you have been doing in this area.

Steve Berman

Gary: did you ever think you’d find yourself writing these stark warnings as AI became easily packaged, relatively simple to deploy, but incomprehensibly complex once trained to respond based on a biased or malicious data set? Did you think the genies would be out of so many bottles at once?

Gary Marcus

Never imagined so many bottles at once (but mentioned earlier this morning on Alex Kantrowitz’s podcast)

David Evanoff

Perhaps any conversation cannot be anything other than anthropomorphization. There seems to be a correlation between Bing derailing and its users conversing with an inconsiderate attitude. We do acknowledge that turnabout is fair play. Do we want circa-1800 servility, or a nanny? Bullying is bullying, whether it's done to a person or a thing.

Scott E Fahlman

I share your concern about people anthropomorphizing these systems and then becoming seriously emotionally disturbed if the system "breaks their heart" or suggests suicide or something like that. But I can't think of any worse approach to addressing this set of issues than having Congress pass restrictions on this kind of research.

Gary Marcus

i wouldn't restrict *research*, but i might restrict use as a product in some fashion

Scott E Fahlman

Even if the issue is just deployment, I can't imagine that the current U.S. Congress will pass (or recommend) any regulation that does more good than harm.

As a society we need to have some difficult, adult conversations, including people who actually understand the technology and how it might evolve, on this and many other issues -- for example, the tradeoff between security and privacy when it comes to facial recognition, the limits on the use of drones (autonomous and otherwise), targeted advertising and harassment...

That can't happen on Twitter, and it is unlikely to happen in the hands of politicians looking for gotcha lines for their next 30-second commercial or political rally.

Rebel Science

Restriction is not good enough, imo. Both LLMs and autonomous vehicles should be banned by law, with severe penalties for lawbreakers. They are both dangerous technologies, and allowing their use by the public is unethical. A governmental body should be formed immediately to regulate their use in the public interest. With regard to autonomous vehicles, we already have such a body (the Department of Transportation), but they are obviously not doing their job. It's criminal.

One man's opinion, of course.

Joe Canimal

Now you're talking. This seems like a potential cause of action for the Plaintiffs' bar.

Jurgen Gravestein

I'm surprised by how little attention the story about Replika received online.

Comment deleted (Feb 23, 2023)

Gary Marcus

thanks!

Comment deleted (Feb 22, 2023)

Spherical Phil

Phil, SphericalPhil here. The core point of my 2004 book (Being Spherical: Reshaping Our Lives and Our World) is well described in your statement, "The source of this madness is that we're trying to run the 21st century on an outdated 19th century philosophy..." The resulting "mindset" is a core issue, as this mindset is not only used to build tech but is also embedded into the tech. The challenge is to develop tech with a "wisdom mindset." In 2014 we started a blog on Socialing AI; this was our pinned statement at the top of the page: “Artificial Intelligence must be about more than our things. It must be about more than our machines. It must be a way to advance human behavior in complex human situations. But this will require wisdom-powered code. It will require imprinting AI’s genome with social intelligence for human interaction. It must begin right now.”

Comment deleted (Feb 22, 2023)

Spherical Phil

Hello Phil, great to connect. I share your lack of optimism about interest in wisdom-powered code; we wrote the blog from 2014 to 2019 and found no one was interested (SocialingAI dot com). The website for my book is ThinkSpherical dot com. AI is here; the genie can't be put back in the bottle. The driving "benefits" of AI, as it is being developed and as they are used to justify its existence, are financial gain and competitive advantage for the tech owner. There are other benefits, significant benefits, to be sure, and the potential for positive use is enormous, but not on our current course. I am a systems designer (not a coder), and my work is about embedding a 'new worldview approach' into tech that would allow for wisdom, a bit of a wisdom-first kind of approach. As a boutique human-centered R&D shop, we have been live with our tech since 2005, using tech to engage humans, successfully, in some of the most sensitive areas of their lives. It is not an AI, but an outline for a safe interface through which humans and AI can collaborate, wisely.

Comment deleted (Feb 23, 2023)

Spherical Phil

Phil, in view of your comments above, I believe you would appreciate the post we did on LinkedIn in August 2018, a fairly detailed examination of the lives of two people and what could happen if the "amateurs" and the "experts" collaborated: "Could the story of Bill and Phil change the way humans use AI to grow, learn and heal?" See what you think. https://www.linkedin.com/pulse/story-bill-phil-change-way-humans-use-ai-grow-learn-heal-phil-lawson/
