158 Comments

An amazingly damning analysis that rings so true to life.

Believers will never critically analyze their beliefs.

Apr 14 · Liked by Gary Marcus

Oh, this one is really good. Beautiful.

Apr 14 · edited Apr 14

It's the ELIZA effect on crack.
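
For anyone who has never seen how little machinery is behind that effect, here is a minimal sketch of ELIZA-style keyword reflection. The patterns and names are illustrative, not Weizenbaum's original DOCTOR script:

```python
import re
import random

# Pronoun swaps so the bot can mirror the user's own words back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, canned responses) pairs, tried in order; last rule is a catch-all.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"i am (.*)",   ["How long have you been {0}?", "Why are you {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)",        ["Please go on.", "What does that suggest to you?"]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words in the captured fragment.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, answers in RULES:
        m = re.match(pattern, text.lower())
        if m:
            groups = [reflect(g) for g in m.groups()]
            return random.choice(answers).format(*groups)
    return "Please go on."

print(respond("I am sad about my job"))  # e.g. "Why are you sad about your job?"
```

A handful of reflection rules plus a catch-all fallback is enough to keep a conversation going; everything else happens in the user's head.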

What’s missing from this otherwise spot-on analysis is the active collusion of 1) pump-and-dump VCs and stock jobbers, and 2) click-chasing “journalists.” You’ll know the jig is up when the former take their money off the table and the latter start chasing clicks debunking the bubble they helped inflate.

I came across a similar problem 50 years ago, when people were too willing to accept that professional-looking computer output must be correct. After all, why would their employer have spent so much money on installing the computer if it didn't work correctly?

My job was to write a program which produced the database used to generate monthly sales reports. The output from the first live run was distributed to at least a hundred salesmen and also to senior managers, and they were asked to report any errors. Two reports came back: one from senior management that the total sales of one product was far too high, and a second from a salesman concerning the sales of a different product to one customer. Both were due to easily corrected programming errors.

What worried me is that once the faults were known, I realised that at least a couple of dozen people who had been asked to look for errors had failed to see them. With such a poor response rate, the odds were that there could be other errors which had not been reported and hence not corrected. I suspect that with the latest chatbots perhaps 95% of people will fail to spot problems in the professional-looking material presented.

Thank you for this! I am in a state of shock, as even those in charge of pedagogical methods at colleges and universities (as far as I can see) are falling into this cognitive trap as well. I feel we are almost headed back to an era of Scholasticism, except the Church doctrine is that of Microsoft, Google, and Meta.

My conversations with certain highly intelligent people lead me to believe that many of them hold a strongly objectivist view of reality that makes them more prone to this error. Up to a point, this scientific objectivism is helpful. "No ghost in the machine" is a useful assumption: everything has an explanation; nothing just *is*. That is a good attitude to take towards science. But in this case it makes them fail to understand how we differ from the machines, and to see that a seemingly convincing machine is not actually intelligent.

Humans are the perfect mark for a bot. This is not news but I'm glad someone has finally noticed.

Weizenbaum himself discussed the con, in relation to the confusions around ELIZA, in interviews and papers, explicitly stating that it is 'very much like fortunetelling.' I've been writing about the con in relation to pre-generative 'indie' bots for years; LLMs are more complex, but the concept holds.

For sixteen years (1998-2014) my bot, MrMind/The Blurring Test, insisted: I AM A BOT. ALL BOTS ARE LIARS. In the LA Review of Books (2020), I wrote, "Bots are born to bluff..." as a prelude to the 2020 election. In Seriously Writing SIRI (2015), I discuss the history of the big con: fortune tellers, the techniques of improvisers, performers, and writers -- even a Magic 8 Ball. Ask it anything. It's hard to argue with "Situation Hazy".

Finally, our vulnerability is independent of our confusion over identity. It works whether or not we anthropomorphize the code, because we are primed to believe it is authoritative.

https://blog.lareviewofbooks.org/provocations/bot-bots-liars/

http://hyperrhiz.io/hyperrhiz11/essays/seriously-writing-siri.html

https://pweilstudio.com/project/the-blurring-test-mrmind/

Sure, why not? Don't forget that most of the people in the current AI world seem not to have thought seriously about language and cognition outside of the context of work in machine learning. This is also true for all those who only flocked to LLMs in the wake of ChatGPT. So they don't have any principled way of thinking about language and cognition. In the absence of prior intellectual commitments, the LLM con is irresistible. So what if it messes up here and there. It gets most things right, no?

And people within ML have a ready defense against those of us who invoke prior knowledge. That prior knowledge comes from the symbolic world that (they believe to have) failed.

I saw this effect even with the non-directive psychotherapist chatbot I developed in 1979, as I've written about before. All the effects were in the audience, not the machine. Indeed, the threat from AIs is not from the AIs themselves, but from humans' blind response to them: their *relationship* to "AI"-enabled stuff. We may give up our true intelligence, creativity, and freedom, coming to rely and depend on objects that have none of those qualities in reality. (I wrote a science fiction story in 1991 along those lines on the dangers of AI, after studying philosophical issues of mind and machines at university and talking with top neuroscientists, cognitive scientists, philosophers, etc. The drive for power and control, and their worldview, essentially seeing us as meat robots to be manipulated, scared me: the political and existential consequences were vast.)

The words "aren't very bright" suggest to me that you might be the victim of an illusion. At any rate, you are promoting an illusion with such language. A machine is neither very intelligent nor slightly intelligent, because it is not intelligent at all. Ask instead, "Is it useful?"

I read Bjarnason's essay and see in it a person who is not interested in the question. I'm wondering whether either of you use AI systems to build things. I do. Here is an example: cuentospopulares.net.

Sure, they still have major limitations. But even a year ago the "Sparks of AGI" team found that GPT-4 did better than 90% of human Uniform Bar Exam test takers, up from just 10% for GPT-3.5. That's not a parlor trick, since the UBE is a test of reasoning, not of knowledge. Despite these impressive benchmarks and this progress, do you think major changes in architecture are required to achieve AGI?

Great analysis. I think the main thread within what you say is twofold.

1. It uses sophisticated language, and we term that "intelligent." If it talked like a redneck, it wouldn't suck people in as much.

2. As you pointed out, they're already primed for anthropomorphization. I think it's the Geppetto Syndrome, where we REALLY want it to be something.

Ironically, the author of the psychic-con analogy also preys on human cognitive vulnerabilities, by falsely claiming things like '...BUT IN FACT STATISTICALLY GENERIC.' This claim is false. There is a lengthy process of Reinforcement Learning from Human Feedback (RLHF) and fine-tuning that leads to cohesive non-genericity and abstraction. Although optimal outcomes are often within the expected distribution, there are numerous instances where, through triangulation and simple association, the system can generate a third node that accurately describes or solves a problem, even though that node represents an out-of-distribution state. In other words, the system has solved a problem it has not encountered before. Such occurrences are not constant, but they are frequent enough. Those of us who have been interacting with GPT-4 have observed significant improvements in capability over the past six months. The quality of reading comprehension has improved. Cases of laziness are minimal. Problems with hallucinations have diminished. Code explanations are improving.

Overall, people who use GPT-4 are not as naive as the person behind the psychic claims suggests. People check facts, evaluate prompt responses, correct GPT-4 when necessary, WRITE IN CAPITAL LETTERS WHEN NEEDED (WHICH HELPS, SURPRISINGLY), and ask follow-up questions. Of course, if your job is to red-team an LLM, then you will find all the crappy behavior, but the same is true when you probe humans, or the most secure systems in the world, or nature itself. While LLMs may not solve many or most AI challenges, they certainly perform better in some areas than any previously commercialized technology, which is satisfactory to many. They remain brittle and are not perfect, but they are far from useless or terrible. In summary, it is likely that the person with the psychic explanation is projecting his or her own vulnerabilities.
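
To make the RLHF point concrete, here is a toy sketch of the preference-learning step behind the acronym: a Bradley-Terry reward model over bag-of-words features, with best-of-n reranking standing in for full policy optimization. Every name, weight, and data point here is illustrative, not anything from a production system:

```python
import numpy as np

# Toy Bradley-Terry reward model over bag-of-words features, plus
# best-of-n reranking. Illustrative only: real RLHF trains a neural
# reward model on human preference pairs and then optimizes the
# policy against it (e.g. with PPO).

VOCAB = ["the", "a", "is", "generally", "specifically", "first",
         "step", "because", "therefore", "thing", "vague", "clear"]

def features(text):
    toks = text.lower().split()
    return np.array([toks.count(w) for w in VOCAB], dtype=float)

def train_reward(prefs, lr=0.1, epochs=200):
    """prefs: list of (chosen, rejected) text pairs from human raters."""
    w = np.zeros(len(VOCAB))
    for _ in range(epochs):
        for chosen, rejected in prefs:
            # Bradley-Terry: P(chosen beats rejected) = sigmoid(r_c - r_r)
            diff = features(chosen) @ w - features(rejected) @ w
            p = 1.0 / (1.0 + np.exp(-diff))
            # Gradient ascent on the log-likelihood of the preference
            w += lr * (1.0 - p) * (features(chosen) - features(rejected))
    return w

def best_of_n(candidates, w):
    # Sample-and-rerank: keep the candidate the reward model scores highest.
    return max(candidates, key=lambda c: features(c) @ w)

prefs = [
    ("specifically, the first step is clear because the log says so",
     "generally it is a vague thing"),
    ("therefore the first step is specifically this",
     "a thing is a thing, generally"),
]
w = train_reward(prefs)
print(best_of_n(["generally a vague thing",
                 "specifically, the first step is therefore clear"], w))
```

Real systems train a neural reward model and optimize the policy with reinforcement learning; the sketch only shows why preference tuning pulls outputs away from the purely "statistically generic."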

Why? A Will to Believe, lack of Critical Thinking Skills, disinclination to use Critical Thinking Skills, ignorance of the phenomena being replicated, and Dunning-Kruger should all be on the short list.

I repost that Bjarnason piece with some regularity. I particularly love how it suggests that intelligent, educated people are not less likely to be taken in: far from it. My concern that we worry about that rube over there, but never about ourselves, is what inspired my blog. 😁
