Keep going Gary. You're the boy telling the AI industry it's wearing no clothes.
Good to know people like you are focused on the future of media.
Thank you! Check out https://grahamlovelace.substack.com
The industry, despite the hype, is working on problems that are tractable with today's knowledge. Scientists are more the purist type, pointing out problems but offering few alternatives. The alternatives will arrive, but it will be industry that discovers them.
The Emp-AI-ror's AIry clothes
To err is human, to AIr is AIry
AIry Bubble
The bubble's filled with AIr
There's really nothing there
The VCs might not care
But bubble's bound to tear
I'm glad to see that your views are being more widely disseminated, but I'm afraid that much of the audience just won't get it.
You wrote, "Even significantly scaled, they still don’t fully understand the concepts they are exposed to — which is why they sometimes botch answers or generate ridiculously incorrect drawings."
I'm concerned that the average reader might take this to mean that AIs partially understand concepts. As I'm sure you know, computers don't "understand" anything -- at least not in the way the word is usually used.
This is central to understand. Humans are infatuated with anthropomorphizing objects that appear to be autonomous agents! Computers don’t understand anything because they have no capacity to do anything besides computing.
Both of you are making logic errors. LLMs don't understand anything because they are the wrong kind of program. But computers/programs/software certainly have the capacity to understand, even though none currently do.
According to Roger Penrose, they will never have the capacity to understand anything.
So what? Penrose is a great physicist but completely wrong when it comes to consciousness. His own math mentor, Solomon Feferman, eviscerated the nonsensical take in Penrose's books. And even if correct, Penrose's argument does not show that computers can't understand anything; it only shows that they can't understand Gödel's incompleteness theorems. But it's nonsense--the theorems can be mechanically proved.
BTW, Penrose thinks that human consciousness depends on quantum effects in microtubules ... so add some microtubules to the computer. And even if his argument were correct, it would fail for quantum computers. But the whole thing is deeply conceptually confused.
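For readers who want the precise claim under dispute, here is the standard statement of Gödel's first incompleteness theorem, paraphrased from memory (so treat the wording as a sketch, not a quotation). Machine-checked formalizations of it do exist, for example in Isabelle/HOL, which is what "mechanically proved" refers to above.

```latex
% Requires amsmath and amssymb.
% Gödel's first incompleteness theorem, standard form:
% for every consistent, effectively axiomatizable theory $T$
% that interprets enough arithmetic, there is a sentence $G_T$ with
\begin{equation*}
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T .
\end{equation*}
```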
How well do *you* understand consciousness, that's what I want to know. You are confident to the point of religious zealotry, which always makes me suspicious of someone's claims.
There is as yet no indication that they ever will. You are just making a bold prediction with zero evidence to back it up.
Computers will only have the capacity to understand once they are embodied beings in the world (that is, once they share the same psychosomatic characteristics as humans). Functionalism is a critical error in the understanding of humanity. We are not just software running on hardware!
Prove it. This is all akin to vitalism and has no basis in fact or logic. There are no magical "capacities" acquired by being made of meat. Computers *are* embodied in the world--they aren't mere abstractions. And think of someone like Hawking ... his mind worked fine despite having no controllable body. This is just a pile of fallacies and bioromanticism.
Again, LLMs don't understand anything because they are the wrong kind of program--they have no cognitive states. But programs can be written that do. Humans are made of molecules and are subject to the Church-Turing thesis ... there's no valid reason to think that the algorithms executed by the molecules of the brain cannot be replicated in silicon, nor that the brain needs to be attached to a body in order to understand things. Here's a relevant story: https://thereader.mitpress.mit.edu/daniel-dennett-where-am-i/
Excellent article, Gary. But even with your new ideas about how, I am still left asking why. I read somewhere, perhaps here, about China's more practical, goal-led approach, very different from this rather teenage sci-fi vision, which ends up having so many unintended consequences. It seems to me that a focused and deliberate 'pro-society', problem-solving approach, as opposed to this technology-possibility-led approach, would be preferable for investors and the rest of us. This is the finding of my last 25 years of work in 'responsible tech', and it was pretty obvious well before LLMs and AI became the tech du jour.
See the first chapter of Taming Silicon Valley.
Just finished reading it and am recommending it (and you) to all my friends who know nothing about AI, aren't interested in knowing about it, and have no clue what impending doom its unregulated power could unleash upon us if we don't shape it wisely now.
Thank you for the guest link to your NYT opinion piece; it sums up the nut of your longer discussions for lay people like me and my friends with trenchant clarity. Thank you also for allowing us to read and comment without a paywall; many can't afford it and need the info at least as much as the well off who can.
And thank you for putting your concern for humanity ahead of a quest for personal profits and fame. I hope to live to see you appointed our first Secretary of Technology when the new department is created. You have earned my respect and trust.
Great piece.
Seems to me the eventual solution needs to find a place for the kind of knowledge a child gains by moving about in the real world, equipped with two eyes, two ears, and a parent who tutors them.
Your intuition is spot on, Bruce! It's critical that the agent isn't just equipped with senses, but also that it's saddled with a never-ending set of survival challenges that constitute the typical loops of biological life (sleeping, reproducing, eating, playing, etc.).
So how will you solve the conundrum that this supposed "AI" still needs to run on a human-defined optimization algorithm with human-designed optimality criteria? It will still remain an extremely shallow machine.
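To make the exchange above concrete, here is a minimal, purely illustrative Python sketch of an agent with drive-like survival pressures (all names, such as DriveState and toy_reward, are invented for this comment; this is nobody's actual system). Note that the "optimality criterion" the agent ends up chasing is still a hand-written reward function chosen by the programmer, which is exactly the objection raised here.

```python
# Toy sketch (not any real system): an "embodied" agent loop with
# homeostatic drives, where the optimality criterion is still a
# hand-written reward function chosen by a human programmer.

import random
from dataclasses import dataclass


@dataclass
class DriveState:
    hunger: float = 0.5   # 0 = sated, 1 = starving
    fatigue: float = 0.5  # 0 = rested, 1 = exhausted


def toy_reward(drives: DriveState) -> float:
    # Human-designed optimality criterion: stay close to homeostasis.
    return -(drives.hunger ** 2 + drives.fatigue ** 2)


def step(drives: DriveState, action: str) -> DriveState:
    # Crude world dynamics, just enough for the sketch.
    hunger = min(1.0, drives.hunger + 0.05)
    fatigue = min(1.0, drives.fatigue + 0.05)
    if action == "eat":
        hunger = max(0.0, hunger - 0.3)
    elif action == "sleep":
        fatigue = max(0.0, fatigue - 0.3)
    return DriveState(hunger, fatigue)


def run_episode(steps: int = 20) -> float:
    drives = DriveState()
    total = 0.0
    for _ in range(steps):
        # A real agent would learn a policy; here we just act randomly.
        action = random.choice(["eat", "sleep", "play"])
        drives = step(drives, action)
        total += toy_reward(drives)
    return total


if __name__ == "__main__":
    print("episode return:", run_episode())
```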
Gary: "The cognitive sciences (including psychology, child development, philosophy of mind and linguistics) teach us that intelligence is about more than mere statistical mimicry and suggest three promising ideas for developing A.I. that is reliable enough to be trustworthy, with a much richer intelligence."
One of the core critiques of AI going back to the 1960s is that human intelligence is embodied (embodiment is also mentioned by Malik elsewhere in this thread), that it's not all in the head/mind. I agree with your critique of LLMs but I'm curious why you see the possible solution in neurosymbolic AI and world models. Why aren't enactivist approaches in the cognitive sciences, approaches that take embodiment seriously and which grew out of the phenomenological critique, relevant?
RCThweatt mentions Wittgenstein, although I think his comment may be referencing The Philosophical Investigations rather than the earlier period of the Tractatus. I don't see how the world models approach escapes Wittgenstein's critique of private languages and mental representations.
With regards to psychology and child development, back in 1990 Jerome Bruner, one of the supposed fathers of the cognitive revolution, was critiquing the computational metaphor in cognitive science and arguing for a refocus on situated action and interaction (see Acts of Meaning).
Oh snap, someone dropped enactivism in the comments! And another Wittgenstein name-check! Sheesh, my people!
Gary's got a smart readership!
Here's something I wrote on the topic I think you might like: https://tailwindthinking.substack.com/p/the-gnostic-cartesian-confusions
"We don't just need systems that mimic human language...". Yet again, Wittgenstein's Tractatus occurs...if I remember aright, it's concerned with the inability of language to reliably and accurately denote and describe "states of afffairs" in the world. So we're relying on more than language to understand what language is trying to say. Maybe Altman should read it.
These guys seem willing to do almost anything rather than really think, including massively overbuilding data centers, which promises a truly epic crash.
Surprised someone else name-dropped Wittgenstein on AI!
There are two schools of thought on the Tractatus. One school believes that it is actually satirizing the idea that there are limits to language. The other school believes that it is sincere.
I belong to the first school. If you want an introduction to the debate, this is a good place to start: https://static.hum.uchicago.edu/philosophy/conant/Bronzo-ResoluteReadingsandItsCritics-W-S2012.pdf
Another group that draws heavily on Wittgenstein and Ryle to critique AI and the computer metaphor in cognitive science is the Manchester School of ethnomethodologists (maybe a little obscure, even within sociology). See for example:
Button, Graham, Jeff Coulter, John R. Lee, and Wes Sharrock. Computers, Minds, and Conduct. Polity Press, 1995.
Coulter, Jeff. “Twenty-Five Theses against Cognitivism.” Theory, Culture & Society (March 2008): 19–32.
Ethnomethodology influenced the work of the anthropologist Lucy Suchman (Hubert Dreyfus was on her dissertation committee):
Suchman, Lucy A. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, 1987.
Another critic that draws heavily on Wittgenstein is Stuart Shanker, a philosopher turned psychologist who was at Oxford with Bruner during the 1970s:
Shanker, Stuart G. Wittgenstein’s Remarks on the Foundations of AI. Routledge, 1998.
The enactivists don't make much reference to Wittgenstein. One exception is Daniel Hutto:
Hutto, Daniel D. “Enactivism, from a Wittgensteinian Point of View.” American Philosophical Quarterly 50, no. 3 (2013): 281–302.
Love these sources! Not surprised that much of it was Dreyfus-influenced.
I'd throw Alva Noë in the mix. He's definitely Wittgenstein-influenced.
Good to hear. I haven't read any Alva Noë yet but I picked up used copies of Action in Perception and Out of Our Heads a couple of weeks ago.
There's an amusing interview with Dreyfus discussing his experience teaching Heidegger and Wittgenstein at MIT in the 1960s:
https://www.youtube.com/watch?v=oUcKXJTUGIE&t=160s
"And since I was teaching those guys, I knew that the AI people had inherited a lemon. They had taken over in their research program a 2000-year failure"
AI research still carries on mostly in complete disregard of these philosophers.
I never detected a satirical note in the Tractatus, nor, back in the 1970s when I took a course on it, was that offered as a possibility. We were told, pay no attention to Russell's introduction.
Gary Marcus is right to call for a return to the cognitive sciences. Clinical psycholinguistic research has already shown that language patterns reveal how people process information, make decisions, handle stress, and can even predict distress (depression, risk of self-harm) weeks before it surfaces as behaviour. If AI is to move beyond statistical mimicry, it needs models of human psychology alongside world models. Psycholinguistics sits at that intersection of language and psychology, and offers proven methods to build systems that are not only safer, but also more intelligent, reliable, and genuinely human-aligned.
I've written about this here:
https://kreindler.substack.com/p/ai-doesnt-understand-how-we-think
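As a toy illustration of the "language patterns as signal" idea in the comment above (made-up sentences and labels, scikit-learn assumed to be installed, and obviously nothing like a validated clinical instrument), the simplest possible version is a bag-of-words classifier:

```python
# Toy sketch only: predicting an invented "distress" label from text
# using word-level features. Real clinical psycholinguistics uses
# validated features and properly collected data; this is not a tool.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't sleep and everything feels pointless",
    "nothing I do matters anymore",
    "had a great walk and dinner with friends",
    "looking forward to the weekend trip",
]
labels = [1, 1, 0, 0]  # 1 = distressed, 0 = not (invented labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["I feel like nothing matters"]))
```

Real psycholinguistic work relies on far richer, validated features and data; the sketch only shows the general shape of the pipeline where a model of the speaker could plug in.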
Thank you for hooking us up with the link, sir. And congratulations on the NYT article.
For those of us still deeply concerned that the LLM leash hasn't been yanked back hard enough, the suicide/psychosis issue remains the most troubling.
While (hopefully) we can all agree that LLMs aren't that good, people are dead because of them. It's easy to forget that, but we gotta focus on the human cost, which is non-zero.
These things are already undoubtedly being (mindlessly) embedded in systems that we rely on daily for our safety and security.
As bad as the situation currently is, there will be a lot more people dead as a result.
Imagine if they had spent that VC money on funding USAID or cancer research.
I've come to the conclusion that AGI is just a smokescreen. The aim is to build a business much like a mix of Facebook and Google: own the customer's attention and serve as the intermediary between the consumer and the internet. For most people under 25, Google Search is already a distant memory. The only question is: when to monetise?
You are right.
Owning customers' attention (from birth to death) is what it is ALL about.
"AI psychosis" is a feature, not a bug.
Unfortunately, AIs sometimes contribute to the death of the customer, which, apart from being horrible for the customer and their loved ones, is not particularly good for business, either.
The latter is like a parasite that kills its host.
The AI company faces the same problem as the heroin dealer: how to keep the customer coming back for more without killing them.
Ivermectin is actually quite effective as a dewormer, right?
I suppose that makes Gary a sort of "AI-vermectin".
A brain worm
I guess that would be brAIn worm
What Is Artificial General Intelligence, Exactly?
Even "cognitive" jobs require persistent learning, physical embodiment and building healthy relationships with humans. AGI is a lot more difficult than you think. This is my recent thinking.
https://ericnavigator4asc.substack.com/p/what-is-artificial-general-intelligence
Maybe we need a different path to AGI that we can trust? We are already trying to make one.
https://ericnavigator4asc.substack.com/p/hello-world
Hello World! -- From the Academy for Synthetic Citizens
Exploring the future where humans and synthetic beings learn, grow, and live together.
Just as airplanes are not modelled on the way birds fly (or at best only vaguely so), it may be that the best way to get AGI is by not modelling it on the way humans think.
Of old "Slyme Pit" fame? 😉🙂
Sad to hear the proprietor bit the dust or departed for parts unknown, but you might have some interest in an archive link or two in any case:
https://archive.ph/offset=40/http://slymepit.com/phpbb/*
Saw that ACX / Scott Alexander published a tweet-length criticism of your essay in his link roundup for the month: https://x.com/ShakeelHashim/status/1963182536353280012
I'm just a lowly English teacher, but doesn't Hashim 100% misunderstand that your EXACT POINT is that GPT-5 quietly put scaling to rest so that it could actually get something done?
Help me, comment section! You're my only hope!
A rarely discussed tradeoff faced by AI programmers is that if you cut back on the amount of ALSD fed to the models, they become even less creative.
"So like a mediocre deep learning skeptic to write about misinformation, deepfakes, A.I. slop, cybercrime, copyright infringement, mental-health damage, and egregious energy usage as if they're necessarily bad things." -- Sam I. Am
I do not like that Marcus man.
I do not like him, Sam I am
Would you like him with a symbol?
Would you like him on Jimmy Kimbel?
Would you like him in The Times?
Would you like him dropping dimes?
No, not with symbol or with Kimbel or in the Times or dropping dimes
I do not like that Marcus man
I do not like him, Sam I am