Feb 22, 2023 · Liked by Gary Marcus

I’m not a fan of slippery slope arguments, but this seems to be a continuation of the path we’ve been on with the impact that social media has had on our collective psyches. Recent studies on the state of teen mental health are relevant here. Chatbots take this to the next level with the speed and volume of content they can generate. Rather than peers and anonymous users, we are potentially automating the risk of having our amygdalae hijacked and our self-worth detrimentally impacted.


The underlying problem is that we're changing our environment faster than we can adapt. This phenomenon is far more than just an AI issue; it's pretty much the theme of the modern world.

One way to look at this is to compare two data streams: knowledge and wisdom. Knowledge (and thus power) can be developed far faster than the wisdom needed to serve as its governing mechanism. And so the gap between power and wisdom is rapidly widening.

https://www.tannytalk.com/p/knowledge-knowledge-and-wisdom

As a species we are ever more like a group of teenage boys who have just gotten their hands on the keys to the car, a case of booze, and a loaded handgun. Our teenage-minded culture has just hopped in the car called AI, slammed down the accelerator, and is yelling to its pals, "LET'S SEE HOW FAST THIS BABY WILL GO!!!! WOO HOO!"

Can you guess what happens next?

The source of this madness is that we're trying to run the 21st century on an outdated 19th century philosophy whose premise is that more knowledge is always better. Technically we're racing forward, while philosophically we're at least a century behind the curve.

A "more is better" relationship with knowledge made perfect sense in the long era of knowledge scarcity. But we no longer live in that old scarcity era, but in a new very different era characterized by knowledge exploding in every direction at an accelerating rate. So, the environment we inhabit is changing rapidly, while we cling to the old ways of thinking, and refuse to adapt. Nature has a solution for a failure to adapt to a changing environment. It's called extinction.

https://www.tannytalk.com/p/our-relationship-with-knowledge

Given that this is at heart a philosophical problem, I've spent years now trying to engage philosophy professionals on this topic. They couldn't be less interested.

The "more is better" relationship with knowledge is a "one true way" holy dogma of the science community, so don't expect help from them. Been there, done that, a dead end.

And of course corporations are interested only in profits.

So, yes, of course. Chatbots are going to create a new emotional landscape that many people will not be able to adapt to, a tiny fragment of a much larger picture.


Phil, SphericalPhil here. The core point of my 2004 book (Being Spherical: Reshaping Our Lives and Our World) is well described in your statement, "The source of this madness is that we're trying to run the 21st century on an outdated 19th century philosophy..." The resulting "mindset" is a core issue, as this mindset is not only used to build tech but is embedded into the tech. The challenge is to develop tech with a "wisdom mindset." In 2014 we started a blog on Socializing AI; this was our pinned statement at the top of the page: "Artificial Intelligence must be about more than our things. It must be about more than our machines. It must be a way to advance human behavior in complex human situations. But this will require wisdom-powered code. It will require imprinting AI's genome with social intelligence for human interaction. It must begin right now."


Hello fellow Phil! :-) I always welcome any opportunity to discuss these topics. Can you provide a link to your blog? And/or I'd be happy to chat wherever you prefer.

Honestly, I'm not too optimistic about "wisdom-powered code". For example, the biggest producer of AI systems is likely to be the Chinese Communist Party, given that they govern the world's largest country. However, my mind is not closed, and I'm hardly an AI expert, so I'm still willing to learn more.

As I've written here a few times, it's still not clear to me what benefits of AI justify the creation of what could turn out to be yet another existential risk. From my point of view as a former coder, sometimes wisdom-powered code means simply not coding at all.

Anyway, enough ranting for now, tell me more about your project if you wish, and thanks for the reply.


Hello Phil, great to connect. I share your lack of optimism about interest in wisdom-powered code; we wrote the blog from 2014 to 2019 and found no one was interested (SocializingAI dot com). The website for my book is ThinkSpherical dot com.

AI is here; the genie can't be put back in the bottle. The driving "benefit" of AI, as it is being developed and used to justify its existence, is the financial gain and competitive advantage it delivers to the tech owner. There are other benefits, significant benefits, to be sure, and the potential for positive use is enormous, but not as we are going.

I am a systems designer (not a coder), and my work is about embedding a 'new worldview approach' into tech that would allow for wisdom, a bit of a wisdom-first kind of approach. As a boutique human-centered R&D shop, we have had our tech live and working in this area since 2005, successfully using it to engage humans in some of the most sensitive areas of their lives. Not an AI, but an outline for a safe interface through which humans and AI can collaborate, wisely.


I liked this quote from your blog:

http://www.socializingai.com/going-radically-new-ideas/

"What we should be going for, particularly in the basic science conferences, is radically new ideas."

Another way to say this could be:

If conventional thinking, the kind considered realistic, reasonable, normal, and generally accepted as uncontroversial by the group consensus, could solve a problem, that problem would probably already be solved. Thus, the most promising arena for investigation will often be weird ideas. I call this outlook "crackpot philosophy".

Another quote from your blog:

"if you send in a paper that has a radically new idea, there’s no chance in hell it will get accepted"

Those who make their living doing intellectual work can't afford to publicly explore too far outside the group consensus, because doing so can threaten their professional reputations, and that can be lethal to a career. They can't be "crackpot philosophers".

Those who can be crackpot philosophers can do so because they have no cultural authority, and thus nothing to lose. But no matter what ideas they come up with, it won't matter, because no one will listen to those without cultural authority.

Thus, neither the experts nor the amateurs are in a very good position to present radical new ideas. And so we see phenomena like an entire culture clinging to a 19th-century relationship with knowledge.


Phil, in view of your comments above, I believe you would appreciate the post we did on LinkedIn in August 2018, which is a fairly detailed examination of the lives of two people and what could happen if the "amateurs" and the "experts" collaborated: "Could the story of Bill and Phil change the way humans use AI to grow, learn and heal?" See what you think. https://www.linkedin.com/pulse/story-bill-phil-change-way-humans-use-ai-grow-learn-heal-phil-lawson/


I don't doubt emerging technologies like AI and genetic engineering were invented with good intentions, and will deliver many positive benefits. So grow, learn and heal, sure, that will happen.

The problem is that these powerful technologies will also empower people with bad intentions, and people who make honest mistakes. And because of the vast scale of these emerging powers, evildoing and mistakes can have disastrous consequences, with the potential to erase all the benefits.

The obvious example in today's headlines is that Putin has the power to erase much of the positive benefit scientists and engineers have delivered over the last century. One guy, one decision, on one day: game over for the miracle of the modern world. Such is the scale of these emerging powers.

We should have learned all this 75 years ago at Hiroshima. The fact that we didn't says to me that we aren't intelligent enough to be trusted with developing AI.

Feb 22, 2023 · Liked by Gary Marcus

It's called the ELIZA effect: "the tendency to unconsciously assume computer behaviors are analogous to human behaviors; that is, anthropomorphisation." It has its very own Wikipedia page, which I just quoted because I'm lazy; you'll find references on that page. The tl;dr version: because it is an example of anthropomorphisation, the ELIZA effect is innate human behavior.

Offsetting the ELIZA effect is the uncanny valley. This occurs when an object comes ever closer to human behavior without actually behaving as a human; eventually a person starts to experience unease and revulsion toward the object. Again, there is a Wikipedia page with references.

So if we can't avoid the ELIZA effect, the answer is to move chatbots ever closer to human behavior, ever failing to achieve it, until the whole endeavor collapses? Essentially, that is what happened with Siri and Alexa.


Gary, you are right on target about one of the greatest unacknowledged dangers of chatbots. We just completed an 11-month project, with funding support from the Well Being Trust, to contextualize the factors impacting well-being in America. 90% of Americans believe America has a mental health crisis. The shortage of mental health workers is projected to run from a low of 250,000 to over 4 million, and the educational pipeline is, and has been for decades, unable to respond to the need.

Tech is a vital tool to address this real crisis, RESPONSIBLE TECH that is. But the tech world has been failing, and continues to fail, to develop responsible tech that actually has a positive impact (this is a very long story itself). Indeed, social media has resulted in real human deaths because the developers had no clue what was unquestionably going to happen with the tech they developed, and they just launched. Remember the rallying cry, “move fast and break things”?

But ‘state-of-the-art’ ChatGPT embedded into any app or tech supposedly meant to assist with mental health and well-being will lead to profound suffering and real deaths. It would be simple to call for banning its use in mental health apps, but that is not only impossible; the reality is that there is a real, desperate need for tech, including chatbots, to help. Yet the current state-of-the-art chatbots, and the current approach used by the tech world at large, will result in deaths, as you have already pointed out. But Congress? I wish.

Comment deleted

Indeed, there are major geographic and economic factors impacting access to mental health care workers. And current mental health apps (tens of thousands of them) are, as a group, neither well designed nor effective. Research by Harvard Health last year on almost 50,000 patients "did not find convincing evidence that any mobile app intervention greatly improved outcomes..." In another report, of the 25 highest-rated apps for anxiety, "exactly zero contained any content consistent with evidence-based treatments." Now add ChatGPT, which can sound very smart and convincing in its stupidity, has no meaningful guardrails, no mental health training, and no guidance on how to engage a person with a mental health issue, and which is designed quickly by a tech person racing to market, and Gary's prediction of a death from a chatbot in 2023 is almost certain, though, as he said, causality will most likely remain unprovable.

Feb 22, 2023 · Liked by Gary Marcus

We knew this would happen. It goes back to Ray Bradbury's fiction about the electric grandmother, and forward to the altogether real story of funerals for military robots. https://www.nbcnews.com/technolog/soldiers-3-robots-military-bots-get-awards-nicknames-funerals-4b11215746


Counter-thesis: the "dating market" (a manosphere conceptualization) is by its very nature heartbreaking for the majority of lonely men. AI is merely completing its linguistic parity with "adult human females". Tinder-esque chatbots are no less detached from reality than "racial profiling AI" (regarding SES inference).

One cannot simultaneously demand that AI appease men and that it be an accurate replication of feminine personality. Fanatical ideals have no backing in reality. To say this is a harmful break from reality denies the harm of reality in and of itself.


Great summary of very important concerns related to the current applications and deployments of LLMs and ML. Even more concerning to me is how generative AI models are being applied to audio and video.

You are likely already familiar with the Center for Humane Technology (https://www.humanetech.com/), which has been covering issues with social media (it produced the film "The Social Dilemma") and is now covering the social impact of ML ("AI") applications, including chatbots. If not, I recommend looking into some of the work they are doing. Joining forces to raise awareness of these issues and get the word out is critical.

Really appreciate the writing you have been doing in this area.


Gary: did you ever think you’d find yourself writing these stark warnings as AI became easily packaged, relatively simple to deploy, but incomprehensibly complex once trained to respond based on a biased or malicious data set? Did you think the genies would be out of so many bottles at once?

author

Never imagined so many bottles at once (but mentioned earlier this morning on Alex Kantrowitz’s podcast)


Perhaps any conversation cannot be anything other than an anthropomorphization. There seems to be a correlation between Bing derailing and its users conversing with an inconsiderate attitude. We do acknowledge that turnabout is fair play. Do we want circa-1800 servility, or a nanny? Bullying is bullying, whether it's done to a person or a thing.


I share your concern about people anthropomorphizing these systems and then becoming seriously emotionally disturbed if the system "breaks their heart" or suggests suicide or something like that. But I can't think of any worse approach to addressing this set of issues than having Congress pass restrictions on this kind of research.

author

i wouldn't restrict *research*, but i might restrict use as a product in some fashion

Feb 22, 2023 · edited Feb 22, 2023 · Liked by Gary Marcus

Even if the issue is just deployment, I can't imagine that the current U.S. Congress will pass (or recommend) any regulation that does more good than harm.

As a society, we need to have some difficult, adult conversations, including people who actually understand the technology and how it might evolve, on this and many other issues -- for example, the tradeoff between security and privacy when it comes to facial recognition, the limits on the use of drones (autonomous and otherwise), targeted advertising and harassment...

That can't happen on Twitter, and it is unlikely to happen in the hands of politicians looking for gotcha lines for their next 30-second commercial or political rally.


Restriction is not good enough, imo. Both LLMs and autonomous vehicles should be banned by law, and severe penalties should be used against lawbreakers. They are both dangerous technologies, and allowing their use by the public is unethical. A governmental body should be formed immediately to regulate their use in the public interest. With regard to autonomous vehicles, we already have such a body (the Department of Transportation), but they are obviously not doing their job. It's criminal.

One man's opinion, of course.


Now you're talking. This seems like a potential cause of action for the Plaintiffs' bar.


I'm surprised by how little attention the story about Replika received online.

Comment deleted
author

thanks!
