I was working on a longer statement along these lines, which I have now posted at: https://www.linkedin.com/pulse/state-thought-genai-herbert-roitblat-kxvmc
Generative AI is still not General AI
TLDR: I go into a lot of detail about the current state of thinking about GenAI and why much of it is nonsense. With the release of GPT-4o and other advancements, the hype train is again accelerating. I argue that the idea that language models could achieve intelligence or any level of cognition is a massive self-deception. There is no plausible theory by which a word-guessing language model would acquire reasoning, intelligence, or any other cognitive process. Claims that scaling alone will produce cognition are the result of a logical fallacy (affirming the consequent) and are not supported by any evidence. These claims are akin to biological theories of spontaneous generation, and they demonstrate a lack of understanding of what intelligence is. If the statistical properties of language patterns were all there were to intelligence, every statement would be true and accurate. Intelligence requires multiple levels of representation: of the world, of the language, and of abstract concepts.
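To make the "word-guessing" framing concrete, here is a minimal toy sketch of next-word prediction driven purely by co-occurrence statistics. This is not how modern LLMs are built (they train a transformer over vast corpora rather than counting bigrams), and the corpus and names below are made up for illustration; it only shows the kind of objective being discussed: pick a statistically likely next word, with no representation of the world behind the words.

```python
from collections import Counter, defaultdict
import random

random.seed(0)

# Toy corpus: the only "knowledge" this model will ever have.
corpus = ("the horse taps its hoof the trainer nods the horse taps again "
          "the crowd applauds the horse").split()

# Bigram statistics: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Guess the next word in proportion to how often it followed `prev` in the corpus."""
    counts = following[prev]
    if not counts:                        # word never had a successor in training
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generate" text: every choice is made from surface statistics alone.
word, output = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```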
The “and then a miracle occurs” analogy is particularly apt in this case, because even the people developing the models really don’t know why scaling works.
Sam Altman calls it a “religious belief”, which is in line with the “miracle” claim.
To call any of this stuff science or engineering is actually very odd.
It really has far more in common with the occult.
Despite the claims coming from their salesmen and saleswomen, LLMs don’t actually understand the real world and are simply recreating superficial patterns present in their training data.
“AI video generators like OpenAI's Sora don't grasp basic physics, study finds”
https://the-decoder.com/ai-video-generators-like-openais-sora-dont-grasp-basic-physics-study-finds/
Hard to see how something like Sora is going to “solve physics” when it has no understanding of even rudimentary physical concepts.
LLMs are reminiscent of Clever Hans, the “mathematical horse” that got correct answers to math problems by picking up on subtle behavioral patterns provided (perhaps unwittingly) by its owner.
And like Clever Hans, “Clever LLMs” have fooled (unwittingly, of course) a lot of intelligent people.
But Clever Hans actually WAS clever (just not in the way everyone thought).
The same cannot be said for LLMs, which are simply outputting patterns based on statistics.
There is an interesting psychological aspect at work: even after the evidence is clear, people don’t wish to admit that they were fooled by a horse, so they will continue to defend the horse manure.
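For anyone who wants the analogy in miniature, here is a toy sketch of a "Clever Hans" predictor. It is my own illustration, not drawn from any particular study: a model scores perfectly by reading an incidental cue that happens to track the answer, and collapses to chance the moment that cue is broken.

```python
import random

random.seed(0)

def make_example(cue_tracks_answer=True):
    """One toy item: a 'real' feature the task is about, plus an incidental cue."""
    answer = random.choice([0, 1])
    cue = answer if cue_tracks_answer else random.choice([0, 1])
    return {"task_feature": random.choice([0, 1]), "cue": cue}, answer

def clever_hans(features):
    # The "model" never looks at the task at all; it just reads the cue.
    return features["cue"]

def accuracy(dataset):
    return sum(clever_hans(f) == y for f, y in dataset) / len(dataset)

train = [make_example(cue_tracks_answer=True) for _ in range(1000)]
probe = [make_example(cue_tracks_answer=False) for _ in range(1000)]

print("accuracy while the cue tracks the answer:", accuracy(train))  # 1.0
print("accuracy once the cue is broken:", accuracy(probe))           # ~0.5
```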
https://bdtechtalks.com/2023/04/20/llm-stories/

Right again.
I just read your article; it is excellent. Thanks for taking the time to write all that out. I agree that the fundamental reason for doubting the big claims of AI is that there's just no good reason to believe intelligence works this way. All the benchmarks in the world are still no substitute for a plausible theory, and right now all we're offered is the magic of emergence.
“When one develops artificial intelligence, either one should have a clear physical model in mind or one should have a rigorous mathematical basis. AI-chemy has neither” — Enrico Fermi
https://m.youtube.com/watch?v=hV41QEKiMlM
And with 175 billion parameters, Johnny von Neumann could make the elephant hallucinate like an LLM.
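Behind the joke is a sober point: with enough free parameters a model can fit almost anything, so a good fit is not evidence of understanding. Here is a minimal sketch (my own toy example, not Von Neumann's actual elephant construction): with as many polynomial coefficients as data points, pure noise can be reproduced exactly, and the perfect fit says nothing about how the data were generated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight points of pure noise: there is nothing here to "understand".
x = np.linspace(0.0, 1.0, 8)
y = rng.normal(size=8)

# Eight coefficients (a degree-7 polynomial) pass through all eight points exactly.
V = np.vander(x, 8)               # Vandermonde matrix, columns x**7 ... x**0
coeffs = np.linalg.solve(V, y)    # one parameter per data point => exact interpolation

print("max error on the fitted points:", np.max(np.abs(V @ coeffs - y)))  # round-off level

# Between the fitted points the same polynomial is free to say anything at all.
x_mid = (x[:-1] + x[1:]) / 2
print("values between the points:", np.round(np.polyval(coeffs, x_mid), 2))
```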