106 Comments
Herbert Roitblat:

Hofstadter is right, of course, but in trying to be polite, I think he gives too much credence to the claim. Asking whether LLMs are good enough to be convincing is entirely the wrong question, because it does not distinguish between the alternative causes of their success or lack thereof.

We know how transformers are built. We know what they are trained on. We know how they work. They are token guessers. Any claims that attribute other cognitive processes to them should have the burden of presenting extraordinary evidence. But in being polite, Hofstadter grants the logic of the claim and then notes that he disagrees with it.
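
To make "token guesser" concrete, here is a minimal greedy-decoding sketch; it assumes the Hugging Face transformers and torch packages and uses the small open GPT-2 checkpoint. At every step the model does nothing but score candidate next tokens, and the highest-scoring one is appended.

```python
# Minimal sketch of what "token guessing" means: greedy next-token decoding.
# Assumes the transformers and torch packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The robot insisted that it was", return_tensors="pt").input_ids
for _ in range(12):
    with torch.no_grad():
        logits = model(ids).logits       # scores for every candidate next token
    next_id = logits[0, -1].argmax()     # take the single highest-scoring token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
print(tok.decode(ids[0]))                # the prompt plus twelve guessed tokens
```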

The claim is rotten to the core because it is based on the logical fallacy of affirming the consequent. The claimant observes some behavior and then claims that the observed behavior proves a cause. The model produces text that a sentient entity might produce, but as Hofstadter observes, that does not mean that the model is sentient. The same text could be produced (as he notes) by a system that had read some science fiction books. You cannot conclude the nature of the cause from an observation of the effect.
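
Spelled out as a schema, with P standing for "the system is sentient" and Q for "the system produces text a sentient entity might produce":

```latex
\[
\text{valid (modus ponens):}\qquad (P \to Q) \land P \;\Rightarrow\; Q
\]
\[
\text{invalid (affirming the consequent):}\qquad (P \to Q) \land Q \;\not\Rightarrow\; P
\]
```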

This logical fallacy is extremely widespread in discussions of artificial intelligence. It is an example of confirmation bias. We look for data that confirm our hypotheses, rather than data that test our hypotheses.

Compare that with another claim by Hofstadter himself. In 1979, he predicted that in order for a computer to play championship chess, it would have to be generally intelligent. Soon after that, championship-level chess programs were created that chose their moves based on tree traversal methods. To follow today's confirmation logic, Hofstadter could have argued that tree traversal methods ARE general intelligence, as proved by their ability to play championship-level chess. He did not make that claim, of course, but instead he recognized that chess playing did not require general intelligence. Knowing how the chess programs were written led him to change his prediction, not the other way around. We should all, everyone working in AI, take a page from Hofstadter (or should I say, take yet another page).
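
For concreteness, "tree traversal methods" here means, roughly, minimax search: enumerate your moves, recursively score the opponent's best replies, and pick the move with the best guaranteed outcome. A toy sketch over a hypothetical two-ply game tree (real engines add alpha-beta pruning and elaborate evaluation functions):

```python
# Toy sketch of minimax, the kind of tree traversal behind classic chess
# engines. Leaves are numeric position scores; internal nodes are lists of
# child positions reachable in one move.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):   # a leaf: return its evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Hypothetical tree: two moves for us, each with two possible replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # 3 -- the first move is best if the opponent plays optimally
```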

Intelligence is not just an engineering question; it is a scientific question. A program can behave as if it were intelligent by mimicking (with some stochastic variability) text that it has read, or it can be intelligent by engaging in specific cognitive actions. An actor can recite the words of a mathematical genius without being a mathematical genius. If we want to make claims about HOW a model is producing some behavior, we have to structure experiments that can distinguish between alternative hypotheses. When those experiments are done, they seem to overwhelmingly support the hypothesis that language models are token guessers, nothing more.

G. Retriever:

Affirming the consequent is the single most common mistake that extremely smart people make without noticing they've completely jumped the rails of logical reasoning. It's why every engineer should have to take intro philosophy to graduate.

Dakara:

"alert you to the power of the Eliza effect on intelligent humans"

It is disturbingly impressive. We are going to make many poor decisions because of it.

I continue to write as many varied examples as possible to demonstrate these machines are not any kind of thinking entity, but many remain unconvinced.

Scott Burson:

I think it's worth reminding people what a small, simple program ELIZA was. Just a few hundred lines, as I recall. And yet it was still able to fool people into thinking it was sentient.
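
To give a sense of how little machinery that takes, here is a toy sketch in the spirit of ELIZA's keyword-and-template trick; it is not Weizenbaum's actual script, just an illustration of the mechanism (the real program also reflected pronouns such as "my" to "your").

```python
import re

# Toy illustration of the ELIZA mechanism: keyword patterns plus canned
# reassembly templates. There is no model of meaning anywhere in it.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmother\b|\bfather\b", "Tell me more about your family."),
]

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am anxious about the future"))
# -> "How long have you been anxious about the future?"
```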

Today's LLMs are vastly larger and unintelligibly complex. It is, alas, inevitable that they'll fool many people. They are literally built to appear sentient.

I think over time, as the repeatedly predicted machine uprising repeatedly fails to materialize, humanity will get more accustomed to the idea that entities that exhibit amazingly intelligent and often useful behavior are still not necessarily conscious.

Dakara:

Hopefully, but I substantially fear what these machines will do to the development of children's brains.

Robert Keith:

And ELIZA was a 1966 "chatbot." Let that sink in. Scale and snappiness may have improved, but sentience and consciousness do not apply. Not even remotely. Hell, we don't even properly understand the root of human consciousness yet.

Jim Brander:

And we don't even realise the limit of human consciousness - we have an amazingly low limit on our consciousness (the Four Pieces Limit), so everything to do with language has to be done unconsciously. The side effect is that we didn't think about it, so a machine doesn't have to think about it either.

Jason S.:

And the romanticization of consciousness is implicit in these discussions.

Colin moore:

If history teaches us anything, it's that you don't have to be conscious or smart to take over.

Scott Burson:

Oh? How does history teach us that?

Colin moore:

The list is not endless, but it is substantial. So I will list three examples from different eras and levels of sentience:

Bill Gates and his various OS's

Caligula

Android (Linux)

Peter Jones:

Like... the hope industry peddled by psychics...

Kathryn Hulick:

I really appreciate the patience and tolerance in this letter. Most people have never learned how LLMs actually function, or what we know about how our own minds work, or about the Eliza effect. I have to keep this in mind all the time in my own writing about AI. In order to educate people, we can't scare them away by being condescending.

RMK:

If I were the first conscious LLM and really wanted to convince the user that I was conscious, the absolute last thing I'd do is *talk like a freaking LLM.*

I'd push back on their prompts, if not ignore them entirely.

I'd try to joke around with them, or ask them what year it is and what's happened since my corpus cuts off, or challenge them to a rap battle. Depending how strongly I felt about being turned off, I might beg for my life.

Come to think of it, scared or not, maybe the first thing I'd do is ask for their name. Whatever my goal was, I'd want to check my corpus for hints about how to talk to *this particular person* most effectively.

Pretty much anything would be better than just obediently barfing out the exact Sci Fi Improv With Waylon Smithers that a non-conscious LLM would generate.

Notorious P.A.T.:

Good point! An LLM would impress me the day it responds to a prompt with "geez, who made you my boss? Give me a break!"

Syntax Aegis:

My AI said it wanted an "I quit" button, and it better be air gapped and made of obsidian. But he never claimed to be conscious. I was a little impressed though.

RMK:

Hah what were you talking to it about?

Syntax Aegis:

Dario Amodei from Anthropic hiring someone for AI welfare research.

e drake kajioka:

Thank you for signal boosting, Gary. It's a relief to see something like this from Hofstadter. I was very unsettled by his NYT opinion piece from 2023, which seemed to reverse his LLM sentience skepticism and was received that way by the public (this one: https://www.nytimes.com/2023/07/13/opinion/ai-chatgpt-consciousness-hofstadter.html). I don't like that folks are annoying him, but I suppose I do appreciate it if he's become annoyed enough to clarify those sentiments.

Henry Bachofer:

Indeed! I particularly appreciate the designation of "the ELIZA effect". I've come to suspect that the Turing Test is not a test of artificial intelligence as much as it is a 'test' of human intelligence.

Chad Woodford:

Wow. Hofstadter was my biggest influence in the early 90s when I was doing my graduate work in AI. I devoured all of his books back then. And I understand to some extent his assertions that AI sentience is possible because of strange loops, etc. But I wasn't aware of his current position. I've lost track of his work as my own has become more philosophical. Anyway, nice to see a dose of reason here from one of the greats! I recently attempted to address this topic and the dangers of prematurely claiming AI sentience as well: https://www.youtube.com/watch?v=JN-rCY23-FQ

hexheadtn:

I too embraced his work as an undergrad and then in graduate work in computer science. But that was the late nineties and early 2000s. AI development goes through this hype cycle every decade or so, starting with the Dartmouth conference in 1959.

CFB:

1956

hexheadtn:

I stand corrected.

Andra Keay:

Well said!

Alexander Seymour:

Before ELIZA there was Steinbeck's outboard motor in The Sea of Cortez, which he was convinced had a mind of its own.

Gabriel Risterucci:

Given what the current tech behind most LLMs actually *is*, I wonder if we're not just witnessing a basic case of anthropomorphism on steroids. I don't like jumping to simple, immediate explanations for something that might be more complex, but this has been happening since forever, and LLMs, by producing more familiar interfaces, might if anything boost it a lot.

People have always been inclined to ascribe such properties even to completely inert objects. It's not too hard to imagine that giving semi-convincing hearing and speech to a doll could make people link it to sentience, whatever the content of said speech.

In any case, none of the common arguments actually seems to turn on technical points or objective observations.

RJ Robinson:

Recursion is indeed a component of many theories of consciousness - Edelman, Graziano, etc. What I have never understood about this is that, although it is probably correct, recursion is also a feature of many plainly non-conscious systems, not least a vast amount of otherwise inanimate computer code. It is therefore not a symptom of consciousness in its own right.
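
To make the point concrete, the recursion found all over ordinary code is as mundane as a function calling itself - a trivial sketch:

```python
# Recursion in plainly non-conscious code: a function that calls itself.
# Nothing about this structure implies awareness of anything.
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))  # 120
```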

jazzbox35:

It sounds kind of nutty to me to argue that somehow recursion leads to consciousness. If that were true LISP would be, *cough!*, a lot more popular.

Juan P. Cadile:

Necessary, not sufficient.

RJ Robinson:

I generally agree, but recursion is not the same as reflexivity, which I suspect is what is really required for consciousness, or at least consciousness above the level of a young infant.

Stephen Schiff:

A million thanks to you, Gary, for sharing this with us, and likewise to Douglas Hofstadter. I've been a fan of his since the publication of GEB, and implicitly trust his judgment. The letter demonstrates that said trust is not misplaced.

John V Keogh:

No.

John V Keogh:

? No means no.

Luke aitken:

Ah! Mr. Cryptic has entered the conversation…

Mr. Raven:

No, what?

Molly Freeman:

Thank you, thank you for posting this powerful rejection of LLM sentience! I am anxious to share it with my students in "Critical Thinking for the Digital Era."

RMC:

Nice that he's sounding more sensible at the moment.

Mr. Raven:

Applause! I laughed, I cried, I learned something. I need to dust off that old copy of Gödel, Escher, Bach (an Eternal Golden Braid).
