90 Comments
Paul Topping:

I'm glad you addressed this. It seems AI hypesters have graduated from telling everyone that LLMs think like we do to claiming that humans think like LLMs, or that there's a "mapping" between the two. To make these claims, they are willing to ignore how human brains work and/or how LLMs work. Sometimes it is genuine ignorance, but mostly it is willful ignorance or simple denial of reality.

Joy in HK fiFP:

Or maybe wishful thinking.

Paul Topping:

I think that's a kind of reality denial but, yeah, lots of wishful thinking.

E.R. Flynn:

Good grief, Trome is serving until 2431???

Don't let Trump see this, it'll go straight to his head.

Also, notice how all of the Presidents resemble white car company CEOs, even Obama. Nothing too racist about that, huh? Eeesh.

Amy A:

It feels like anthropomorphizing LLMs is really about dehumanizing people. The same tech executives who tell us to treat language models as coworkers seem downright gleeful about taking work from flesh-and-blood humans.

Joe:

They're anti-human. I'm sick of hearing about how the LLMs are conscious!

Bruce Cohen:

It’s very telling of the arrogance of LLM developers that they think they can easily duplicate or even outperform natural intelligence, when we have never seen it in any sort of system that wasn’t a living, embodied organism, and when the scientists who have spent entire careers studying organic brains, the only examples we have, simply shrug when asked how intelligence manifests in those brains.

Thank you, Gary, for consistently standing up to the flood of hype, disingenuous nonsense, and category errors that has created what amounts to a cult of LLMs. Hopefully the day is near when the world and the AI community can admit that it’s time to broaden our definition of AI to include technologies other than stochastic parrots.

Peter:

Agreed. I remember reading Clancey’s Situated Cognition in the 90s amongst a pile of other books and papers. Embodiment is critical. I feel like this is Expert Systems déjà vu, except at a scale where we are inundated with slop. There is a lot of conflation of ‘models’ and the ‘actual thing’—I mutter to myself ‘Don’t eat the menu’ regularly.

Miriam Malthus:

They train them on written discourse - an idealised model of human language - as if that were the same thing as language itself, and as if language were intelligence itself rather than a product of intelligence, and they assume that this will make the model understand the world. It's a layer beyond confusing the map for the territory: it's confusing a schematic of a map of a small portion of the territory for the whole territory.

Bruce Cohen:

Lying, and typos, and category errors, oh my!

Irish-99:

I’m amazed that even professional scientists still consult LLMs as if they were Newton plus Einstein. The brutal debunking by Professor Emily Bender et al. in 2021, which accurately described them as “stochastic parrots,” should have broken the spell:

https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf

Even stronger was the discovery that while LLMs vacuum up vast amounts of data, they cannot distinguish reliable data from garbage. Their top sources are Reddit, YouTube, Yelp, and Facebook - plus Wikipedia.

https://www.statista.com/statistics/1620335/top-web-domains-cited-by-llms/

This is all quite odd. Perhaps a mania, like the South Sea Bubble.

Joe:

It's stunning that anyone can both consider themselves intelligent and think they can manipulate that garbage information into AGI.

Jay Rooney:

God, I’m so sick of people ripping on em dashes as an AI tell. “Tell me you’re unread without telling me you’re unread.”

khimru:

What's wrong with em dashes? They are entered with “⌥” + “-” on macOS in most editors without any special setup, and, last time I checked, the Mac wasn't exactly an “AI-use only” device…

Jay Rooney:

It’s even better than that: Macs and iOS devices both autocorrect two regular dashes [--] to an em dash [—] as you type. It’s that easy!

People are dumb, people are lazy, people don’t read, but people nevertheless want to feel smart and superior to others, and people (understandably) are sick of slop. So people latch onto em dashes as an indicator of AI generation: the character now shows up more in the wild because chatbots use it more than people do, which makes it an easy way to get a self-righteous dopamine hit without actually engaging with the content. Because… well, see above lol.

It’s the same thing that happened with “delve,” a perfectly fine and useful word that’s been ruined because using it will cause the uncultured masses to shout “dUrR yOu UsEd Ai!!!1!”

Oh wait, is that an em dash in my comment?? Oh well, sorry guys, guess I’m a robot now 🤖🤷🏻‍♂️

Bruce Cohen:

I expect that punctuation in general is becoming more and more the sole province of AIs and literate humans. I’ll bet the semicolon is next on the chopping block. Too bad for those like me who have always seen punctuation as a way to signal pauses and breath control when reading aloud.

Melissa Steinman:

I have always used both em dashes and semicolons quite frequently in my writing, both to express complex ideas and to keep the rhythm interesting and varied. I guess I’ve always written like a chatbot. 😉 But in all seriousness, the whole “you used an em dash, you must be using AI” callout is pure idiocy.

Joe:

That's an en dash. Em dash needs the 'shift' too.
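For anyone who wants to verify the distinction, the two are different Unicode code points; here is a quick Python sketch (purely illustrative, not from the article):

```python
# On macOS, Option+hyphen types an en dash (U+2013);
# Option+Shift+hyphen types an em dash (U+2014).
for name, ch in [("hyphen", "-"), ("en dash", "\u2013"), ("em dash", "\u2014")]:
    print(f"{name:8} {ch!r}  U+{ord(ch):04X}")
# hyphen   '-'  U+002D
# en dash  '–'  U+2013
# em dash  '—'  U+2014
```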

PH:

Oh no, Franklin D. Roseaneh exceeded the term limit by a few hundred years.

Steve:

Maybe if Gary had another trillion data points, he could find the "r" in "William Lamb"?

Dakara:

After the apparent failure of GPT-5 to live up to the hype, I'm not sure what else will convince the remaining believers. I suspect they will hold on until the industry bubble collapses; that is the only option now. Billion-dollar investments weren't made for a hallucinating auto-complete machine.

BTW, here are some additional arguments I've put together for those who believe current AI systems are the same as humans. Some might find them useful.

https://www.mindprison.cc/p/intelligence-is-not-pattern-matching-perceiving-the-difference-llm-ai-probability-heuristics-human

Tim Nguyen:

That's where the overpriced, presumably made-in-China robots come in, still probably remotely and secretly operated by tech support in India pretending to be AI.

Joe:

Looks like a massive bubble to me.

Quality Control:

The LLM hype bubble can't burst soon enough. Thanks for the sanity verification.

Lance Khrome:

Never mind chess games or math problems: I would like to see the latest and greatest AI agent translate a novel from a writer's native language into, well, English. For example, take novels by Dostoevsky or Tolstoy, set GPT-5 to the task, and compare the output with a Constance Garnett translation, as a test of any "humanizing" or linguistic "nuancing" based upon the agent's vast memory/processing "skills". And if the result is comparable, then task the agent with a de novo translation of a Russian work that has no English translation yet, where nothing from that particular author's corpus has been "scraped" for LLM storage.

This is where I would benchmark "success" in AGI.

Tom:

Mother Nature bats last, and AI is the sugar rush before the collapse of the house of cards we call global industrial civilization. Collectively, we are not solving our problems, and the tech elite is giving the masses false hope by suggesting AI will solve them for us. The reality is that the rate of scientific and technological progress has slowed dramatically over the last half century: even with more scientists and engineers than ever, armed with orders-of-magnitude better information technology, disruptive breakthroughs are fewer and further between. These are the diminishing returns on the road to collapse, and the diminishing returns seen in AI are a subset of them.

Bruce Cohen:

I’m not sure we’re that close to a technological plateau. Consider all the recent discoveries in condensed matter physics, all the new methods of manipulating light, electrons, and matter at the nano and micro scale. Or the discoveries in what’s been (IMO) a golden age of astrophysics. Granted, we’re stalled out on basic physics as we try to understand the quantum universe and how it fits in with the other physics we know, but that doesn’t mean everybody is stuck. Also, despite the rapid and insane destruction of the mechanisms of scientific investigation here in the US, science is still alive in Asia, Africa, and Europe.

Lojban Chauvanist:

Loved this comment, Mr. Cohen!

TheAISlop:

This explains how OpenAI made their charts for the GPT-5 launch. The hype quickly turns to dust as real-life use reveals reality. An LLM has only as much humanness as you decide to give it.

Nanstar:

I almost think the second US-presidents example is worse, because of the confidence of the person who sent it to you. When the model is still hallucinating but the hallucinations are harder to detect, it seems like people will trust more, check less, and take more risks. I don't understand how people can downplay hallucinations when predictability seems imperative for any application. Sure, this calculator can multiply 4-digit numbers in a fraction of a second, but 5-10% of the time it will be wrong (though it will seem correct), and we can't tell you when that will happen.
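To make the calculator analogy concrete, here is a toy Python sketch; the 7% error rate and the digit-perturbation scheme are illustrative assumptions, not numbers from the article:

```python
import random

def flaky_multiply(a: int, b: int, error_rate: float = 0.07) -> int:
    """Multiply correctly most of the time; the rest of the time,
    silently return a plausible-looking wrong answer."""
    product = a * b
    if random.random() < error_rate:
        # Perturb one digit so the error is hard to spot at a glance.
        digits = list(str(product))
        i = random.randrange(len(digits))
        digits[i] = str((int(digits[i]) + random.randint(1, 9)) % 10)
        return int("".join(digits))
    return product

# With silent errors, every answer has to be re-checked by hand,
# which erases the point of having the calculator at all.
trials = 10_000
wrong = sum(flaky_multiply(1234, 5678) != 1234 * 5678 for _ in range(trials))
print(f"silently wrong: {wrong}/{trials}")
```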

Paul Jurczak:

"There is a rumor going around that LLM’s are basically just like humans."

Well, LLMs are just like some humans: parroting what they heard somewhere without critical thought.

Tom:

Gary Marcus comes out swinging—mercilessly going for that knockout punch against his staggering, wounded opponent, the LLM paradigm, post-GPT-5.
