24 Comments
Feb 22, 2023 · Liked by Gary Marcus

I’m not a fan of slippery slope arguments, but this seems to be a continuation of the path we’ve been on with the impact that social media has had on our collective psyches. Recent studies on the state of teen mental health are relevant here. Chatbots take this to the next level with the speed and amount of content they can generate. Rather than peers and anonymous users, we are potentially automating the risk of having our amygdalae hijacked and our self-worth detrimentally impacted.


The underlying problem is that we're changing our environment faster than we can adapt. This phenomenon is far more than just an AI issue; it's pretty much the theme of the modern world.

One way to look at this is to compare two data streams, knowledge and wisdom. Knowledge (and thus power) can be developed far faster than the wisdom needed to serve as a governing mechanism. And so the gap between power and wisdom is rapidly widening.

https://www.tannytalk.com/p/knowledge-knowledge-and-wisdom

As a species we are ever more like a group of teenage boys who have just gotten their hands on the keys to the car, a case of booze, and a loaded handgun. Our teenage-minded culture has just hopped in the car called AI, slammed down the accelerator, and is yelling to its pals, "LET'S SEE HOW FAST THIS BABY WILL GO!!!! WOO HOO!"

Can you guess what happens next?

The source of this madness is that we're trying to run the 21st century on an outdated 19th century philosophy whose premise is that more knowledge is always better. Technically we're racing forward, while philosophically we're at least a century behind the curve.

A "more is better" relationship with knowledge made perfect sense in the long era of knowledge scarcity. But we no longer live in that old scarcity era, but in a new very different era characterized by knowledge exploding in every direction at an accelerating rate. So, the environment we inhabit is changing rapidly, while we cling to the old ways of thinking, and refuse to adapt. Nature has a solution for a failure to adapt to a changing environment. It's called extinction.

https://www.tannytalk.com/p/our-relationship-with-knowledge

Given that this is at heart a philosophical problem, I've spent years now trying to engage philosophy professionals on this topic. They couldn't be less interested.

The "more is better" relationship with knowledge is a "one true way" holy dogma of the science community, so don't expect help from them. Been there, done that, a dead end.

And of course corporations are interested only in profits.

So, yes, of course. Chatbots are going to create a new emotional landscape that many people will not be able to adapt to, a tiny fragment of a much larger picture.

Feb 22, 2023 · Liked by Gary Marcus

It's called the ELIZA effect: "the tendency to unconsciously assume computer behaviors are analogous to human behaviors; that is, anthropomorphisation". It has its very own Wikipedia page, which I just quoted because I'm lazy; you'll find references there. The tl;dr version: because it is an example of anthropomorphisation, the ELIZA effect is innate human behavior.

Offsetting the ELIZA effect is the Uncanny Valley. This occurs when an object comes ever closer to human behavior without actually behaving as a human. Eventually a person starts to experience unease and revulsion toward the object. Again, there is a Wikipedia page with references.

So if we can't avoid the ELIZA effect, is the answer to move chatbots ever closer to human behavior, ever failing to achieve it, until the whole endeavor collapses? Essentially, that is what happened with Siri and Alexa.


Gary, you are right on target with one of the greatest unacknowledged dangers of chatbots. We just completed an 11-month project, with funding support from the Well Being Trust, to contextualize the factors impacting well-being in America. 90% of Americans believe America has a mental health crisis. The shortage of mental health workers is projected to range from a low of 250,000 to over 4 million, and the educational pipeline is, and has been for decades, unable to respond to the need.

Tech is a vital tool to address this real crisis, RESPONSIBLE TECH that is. But the tech world has been failing, and continues to fail, to develop responsible tech that actually has a positive impact (this is a very long story itself). Indeed, social media has resulted in real human deaths because the developers had no clue what was unquestionably going to happen with the tech they developed, and they just launched it. Remember the rallying cry, “move fast and break things”?

But ‘state-of-the-art’ ChatGPT embedded into any app supposedly meant to assist with mental health and well-being will lead to profound suffering and real deaths. It would be simple to call for banning its use in mental health apps, but that is not only impossible; the reality is that there is a real, desperate need for tech, including chatbots, to help. Still, current state-of-the-art chatbots, and the current approach used by the tech world at large, will result in deaths, as you have already pointed out. But Congress? I wish.

Feb 22, 2023 · Liked by Gary Marcus

We knew this would happen. It goes back to Ray Bradbury's fiction about the electric grandma, and forward to the altogether real story of funerals for military robots. https://www.nbcnews.com/technolog/soldiers-3-robots-military-bots-get-awards-nicknames-funerals-4b11215746


Counter-thesis: the "dating market" (a manosphere conceptualization) is by its very nature heartbreaking for the majority of lonely men. AI is merely completing its linguistic parity with "adult human females". Tinder-esque chatbots are no more detached from reality than "racial profiling AI" (regarding SES inference).

One cannot simultaneously demand that AI appease men and, at the same time, be an accurate replication of feminine personality. Fanatical ideals have no backing in reality. To say this is a harmful break from reality denies the harm of reality in and of itself.


Great summary of very important concerns related to these current applications and deployments of LLMs and ML systems. Even more concerning to me is how generative AI models are being applied to audio and video.

You are likely already familiar with the Center for Humane Technology (https://www.humanetech.com/), which has been covering issues with social media (it produced the film "The Social Dilemma") and is now covering the social impact of ML ("AI") applications, including chatbots. If not, I recommend looking into some of the work they are doing. Joining forces in raising awareness of these issues and getting the word out is critical.

Really appreciate the writing you have been doing in this area.


Gary: did you ever think you’d find yourself writing these stark warnings as AI became easily packaged, relatively simple to deploy, but incomprehensibly complex once trained to respond based on a biased or malicious data set? Did you think the genies would be out of so many bottles at once?


Perhaps any conversation cannot be anything other than anthropomorphization. There seems to be a correlation between Bing derailing and its users conversing with an inconsiderate attitude. We do acknowledge that turnabout is fair play. Do we want circa-1800 servility, or a nanny? Bullying is bullying, whether it's done to a person or a thing.


I share your concern about people anthropomorphizing these systems and then becoming seriously emotionally disturbed if the system "breaks their heart" or suggests suicide or something like that. But I can't think of any worse approach to addressing this set of issues than having Congress pass restrictions on this kind of research.


Now you're talking. This seems like a potential cause of action for the Plaintiffs' bar.


I'm surprised by how little attention the story about Replika received online.
