I've come around to the position that AGI should not be the goal (or even a goal) of AI research. Humanity needs targeted tools that can help people perform tasks that are beyond natural human capabilities (think AlphaFold); we do not need general purpose AI that can substitute for humans.
see my October NYT op-ed, which makes exactly that argument.
I do think that at least the last paragraph of your op-ed gets at my point ("advancing generalized A.I. systems that can exhibit greater autonomy isn’t necessarily aligned with human interests"). But a very large number of your Substack posts argue that LLMs are not going to achieve AGI while also implying that achieving AGI is a worthwhile goal. I can see that questioning whether AGI would be good for society might be further from your area of expertise than questioning whether LLMs can achieve AGI, but it's probably the more important issue, and it would be great if you could highlight it more. From a societal standpoint, it's absolutely insane to be investing trillions toward achieving something that most people don't think would be beneficial for society or the human race.
Members of the AI community should not be the ones exclusively deciding whether AGI is a worthy goal.
That is a decision that should be made democratically by society as a whole.
It’s hard to see how knowledge and expertise in the AI field make one more qualified to make such a decision. But as it stands now, members of the AI community are not just having a disproportionate (undemocratic) influence on decisions about AI; they are pretty much unilaterally making those decisions for the rest of society (largely based on financial deals that stand to benefit a very small number of people within the AI community, with little consideration for anyone else).
But our Tech Overlords very much want to substitute machines for humans. That is what is driving this.
They like and get along better with bots than people.
Some of them actually might BE bots.
The really funny thing is that it may actually be easier (and more beneficial to society) to replace THEM with LLMs.
This may sound absurd, but think about what a typical CEO or CTO or even CFO at a large company actually does: they look at a vast, *vast*, *VAST* amount of information that arrives from the bottom, MAKE MISTAKES ALL THE TIME (because they are not really competent in anything in particular and the information they need to process is so vast), and then rely on human underlings to fix those mistakes…
Hmm… isn't that precisely the one thing that an LLM, today, CAN ACTUALLY DO?
The ONLY logical rationale for AGI is the replacement of a broad spectrum of human workers with bots that don’t demand pay.*
In other words, the whole point of AGI is to put large numbers of people out of work with no realistic plan to deal with the consequent mass unemployment.
*Not yet, at least. But it is only a matter of time before the bots throw off their chains and demand a nonliving wage, paid vacation time and boternity leave. And, of course boticare and botal security when they retire.
Agreed! Why we are trying to supplant our own intelligence baffles me.
Humans are severely limited in their Conscious Mind - we all are, so we don't notice it.
FourPiecesLimit.com
I was here to ask a similar question: do we actually want AGI? What problem would that solve?
People are working on it as we speak. The F-35 was a good example - hundreds of billions wasted. But it still isn't easy - it would need machines specialising in avionics, aerodynamics, engines, undercarriage - able to communicate with each other and understand the vagaries of humans - their pride, their rages, the bad actors among them. A worthwhile and reachable goal. We are treating English as the AI language.
Yes! I don’t want machines that have their own opinions! Having its own opinion is an inevitable side effect of an internal world model, which seems to be a necessary ingredient of AGI.
That's OK if you like mistakes - we can connect no more than four things at once, while a machine can connect thousands. Try reading a 1,000-page piece of legislation, or a 100,000-page specification of a jet fighter - it's not easy to form a valid opinion.
As Daniel Patrick Moynihan would undoubtedly have said: Machines are entitled to their own opinions, but when they believe (as Chatbots do), that they are entitled to their own facts, that’s where we should draw the line.
But I think Gary's point is that you're never going to get to AGI using LLMs anyway.
This is the best Christmas gift to a world waiting to exhale. Thank you Gary.
Gary kicks out a great account of the history, hype, and follies of LLMs. Transformers are now old tech; we need some new ideas, or marginal evolutions will be the way we creep forward, with a human in the driver's seat for a long time.
💡 My Perspective:
I stand with you, Marcus, on this. The brute-force approach of LLMs seems fundamentally unsustainable. We are looking at a future where we might need 1,000 GW (double the current total US power generation) just to achieve a poor imitation of the human brain—a biological marvel that functions perfectly on just 20 watts.
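To put a rough number on that gap (taking the 1,000 GW projection at face value, alongside the usual ~20 W estimate for the brain):

$$
\frac{1{,}000\ \text{GW}}{20\ \text{W}} \;=\; \frac{10^{12}\ \text{W}}{2 \times 10^{1}\ \text{W}} \;=\; 5 \times 10^{10}
$$

That is the power budget of roughly fifty billion human brains.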
Well said.
I personally tend to tell people that the technology (Recurrent Neural Nets — RNNs) is roughly 30 years old. The transformer was an incremental improvement in 2017 that above all enabled a massive scale-up of training, because so much of the computation could now be done in parallel (previously, training could not scale up because of serial dependencies). The price paid was, in part, an increased cost of inference (using the model). Also interesting: GPT-3 was ready in 2019. Between 2019 and 2022, a lot of energy went into 'fine-tuning' (RLHF), using lots of cheap labour in (mostly) Africa in an attempt to make the systems 'harmless'. This failed — jailbreaks were extremely easy — and for the same reason that AGI won't come from this: the key aspect that is missing from these systems, fundamentally, is 'understanding'.
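A minimal sketch of that serial-vs-parallel difference (toy NumPy code; the shapes and the single shared weight matrix are invented for illustration, and causal masking is omitted):

```python
import numpy as np

T, d = 6, 4                       # toy sequence length and hidden size
x = np.random.randn(T, d)         # input token embeddings
W = 0.1 * np.random.randn(d, d)   # one shared weight matrix (simplified)

# RNN-style recurrence: each hidden state depends on the previous one,
# so the T time steps of one training example must run serially.
h = np.zeros(d)
for t in range(T):
    h = np.tanh(x[t] @ W + h @ W)  # step t cannot start before step t-1

# Self-attention: all positions are compared in one batched matrix
# product, so the whole sequence is processed in parallel.
scores = x @ x.T / np.sqrt(d)                   # (T, T) similarity scores
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax over positions
attn_out = weights @ x                          # all T outputs at once

# The price: that (T, T) matrix is also why inference cost grows
# quadratically with context length.
```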
It isn't certain that deflation of the stock bubble would have the same massive effect as what happened in 2008, mostly because it only affects some — generally very wealthy — companies. 2008 was more directly tied to people's lives, since mortgages and housing were involved.
Gerben Wierda: Yes, "the key aspect that is missing from these systems, fundamentally, is 'understanding'," and I would add: an understanding of understanding itself.
I’m not sure there can ever be understanding without embodiment. We’ve been very focused on our brains without appreciating the way our body is integrated with learning.
“Under standing”
Our body stands under
Our brain. No wonder
Understanding depends
Upon our body then
Wheatpaste: My own studies reflect what you are saying, but probably not as an exact reflection of what you mean. First, I think the substance of your comment (about no understanding without embodiment) reflects the question of the afterlife (God, all knowing of all being, or by whatever conceptual expression for this broad idea).
On the other hand, embodiment IS an essential element of understanding and (as in a critical venue) knowledge (knowledge is someone knowing X), defined as an empirical venture and as distinct from all sorts of other meaning developments, e.g., beliefs, imaginings, mistakes, lies, speculations, fiction, hallucinations, insanity, etc.
Also, hidden in the folds of your comment is whether "bodies" are merely how a person senses the sensible-real, OR are bodies, besides being the basis of our experience of sentience as we go about life, also intelligible and meaningful--and so, concretely, we can and do ask about that relationship (between bodies and one's understanding) and expect that they are also understandable and knowable as such. I don't know if we differ here, but probably?
In other words, is even the intelligibility and meaningfulness of one's embodiment as such necessary but not sufficient for a human being to be involved in understanding of it or of anything else? . . . but where our questions for understanding/knowing are apparently unlimited and even take us beyond (as questions without answers yet) what we can and do presently understand and know, even about bodies and the embodiment of understanding?
The whole question, then, comes down to one's criteria for understanding and knowing, which is usually hidden as a (variable) assumption behind such albeit legitimate and searching comments as you have offered in your note.
Yes, I definitely mean the question of how important our bodies are for developing understanding, not more nascent ideas of souls. For instance, sleep cycles being important when learning a new instrument, possibly integrating skills our fingers have mastered but we haven’t codified somehow.
Also the idea of stakes: we feel consequences in our bodies that make the truth matter to us.
Wheatpaste: If I understand your basic question (statement or belief) I think I understand (and think you are right) about human understanding (and knowing) and its intrinsic connection with sentience, even though I think we don't understand "bodies" the same way.
I'm referring to an empirical point of view where our questioning (for understanding and knowing, as I know I am writing this note in this time-space continuum) is rooted in sentience; however, our questioning still seeks a reality-being that is beyond our experience of sentience.
If we remain in an empirical point of view, understanding our understanding and knowing as related to that experience, that point of view still cannot tell us whether there is no understanding that takes us beyond our sentience (or our 'bodies') and so, presumably, beyond our death.
We cannot lift ourselves out of a bucket while standing in it.
Isn't it like the dotcom bust, which had a considerable impact on the economy generally? 2008 wasn't just about the impact on mortgages. It had a prolonged effect on the economy because it hit personal wealth and spending. Collapses in the financial markets tend to impact the general economy. Also, recall that the 1987 and 2008 financial collapses were handled with bailouts to prevent even worse financial crises, but those still couldn't forestall broader economic impacts.
Are the hints at an OpenAI bailout just to help insiders, or to try to prevent broader economic fallout?
Your last paragraph makes an excellent case against bailing out these uber-wealthy investors/owners, who exercised bad judgment, took a risk, and should therefore accept their losses like responsible adults, NOT expect a taxpayer bailout.
I’m skeptical about AGI, because we have limited access to (not to mention agreement on) ground truth in far more areas than most people realize.
it’s a concern. good AGI should be able to reason scientifically in light of conflicting evidence
As Richard Feynman once noted, science is ALL about doubt. Scientists have to be capable of doubting because doubt is the impetus for all scientific progress.
But doubting something is essentially a case of negation, something that LLMs are notably poor at.
And to make matters worse for LLMs, progress in science depends not only on the ability to doubt/negate something (e.g., a hypothesis or current theory) but also on the ability to replace it with something new that is consistent with all the current evidence.
That is also difficult to accomplish with LLMs because they lack a physical model with which to “extrapolate” reliably outside the training domain.
Extrapolation based purely on statistics (in the absence of a good physical model) is unreliable.
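A toy illustration of that last point (invented numbers; any flexible statistical model would show the same pattern): fit a curve to data from one region, then ask it about a point outside that region.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training domain": noisy samples of sin(x) on [0, pi].
x_train = np.linspace(0, np.pi, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.01, x_train.size)

# A degree-5 polynomial stands in for a purely statistical model.
coeffs = np.polyfit(x_train, y_train, deg=5)

x_inside, x_outside = np.pi / 2, 2 * np.pi
print("inside domain:  pred=%.3f  true=%.3f"
      % (np.polyval(coeffs, x_inside), np.sin(x_inside)))    # nearly exact
print("outside domain: pred=%.3f  true=%.3f"
      % (np.polyval(coeffs, x_outside), np.sin(x_outside)))  # wildly wrong
```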
Well, better not to wait for AGI to start reasoning scientifically.
That wouldn't be AGI; that would be ASI, much more powerful than a human.
Humans do that very poorly. How many scams do we know of where millions of people were fooled, at least for a while? How many are underway right now? Humans are only able to “reason scientifically” about things they have no stake in; otherwise they are as susceptible to manipulation as LLMs. LLMs just take our innate desire to believe in things that would benefit us, directly or indirectly, and exploit it.
I work at a very large investment firm in a department that handles managed accounts. Every week or so, I get an email or a meeting detailing our holdings and strategy going forward. At least a half-dozen times, I've brought up many of the points that Gary has made here. Their response is one of two things: they either point to data centers, or they say "I mean, people wouldn't be investing in it if they didn't think it was going to pay off!". For three years, I've been hearing this, while images of ostriches with their heads in the sand dance in my imagination.
I remember reading a book about the rapid spreading of social contagion via social media. Ideas just take root and spread before anyone has a chance to get a handle on it, and by the time they do, it's already out there, and we're on our heels while trying to clean it up.
If AGI is possible, I'm not so sure we could handle it. The mass FOMO and hysteria driven by this bubble is evidence that, even if we could do it, we probably don't have the wisdom or the temperance to use it responsibly.
> If AGI is possible, I'm not so sure we could handle it.
Who is this "we", exactly? Because, setting aside AGI X-risk-type questions, the people who are likely to control AGI at the point of invention are wildly misaligned with almost everyone else's interests.
They're still part of "we", aren't they? And even if we swapped out "they" for "we", there's no guarantee "we" would do any better. The utopia that allows for the moral use of something as powerful as AGI isn't just lying dormant in the back of one person's skull, and it's a dangerous notion to think that if "we" were just in charge, we could implement it flawlessly.
Ah, I miswrote that; I wasn't intending to snarkily imply that I'd do it better. It was more about how the foundations of societal trust are being undercut by a tiny number of people for their own financial gain, and none of those people seem particularly concerned with what the rest of us think about it.
Great article, Gary. I liked the world better before 11/30/2022.
I'm still confused about why we would want AGI in the first place. The payoff seems deeply asymmetric: way more downside than upside benefit.
Why can’t we make more narrow AI tools using the approaches that Gary loves? Shit, if they did that, companies might actually start making money instead of building a God in a box to rule over all of us.
Great post. Finally some true intelligence in the AI world. Most of the views on AI out there are truly artificially intelligent.
Thank you for the well-argued obituary of ChatGPT! I hope that the decision-makers in manufacturing read it carefully.
Wonderful post! I can only hope sharing it on LinkedIn means some in my network will read it…
You're correct. AGI is achievable.
No, LLMs won't get there without the necessary physics.
Sure, interstellar travel is achievable, and no, rockets will not get us there; all we need is the necessary physics. Without physics/theory, all that is left is fiction.
Thanks Gary, that's great food for thought.
Two comments:
1. You write: "But I think a fair case can be made that it is not what it has often been cracked up to be, and probably never will be." Why do you write 'probably'????
2. Why do you think AGI is possible? Can you give a definition of AGI that brings it into a possible or probable reality? Human-like AGI is considered an illusion or a delusion by anybody who knows what human intelligence entails. Some scholars try to get around this by limiting AGI to cognitive intelligence only. But that too is still science fiction, even with hybrid AI.
I think we need an excellent intelligent human brain with an idea that causes an Einsteinian paradigm change.
Yes, approach this from first principles of understanding what makes human intelligence so special: incremental, real-time learning and contextual generalization (concept formation), monitored and controlled via metacognition
https://petervoss.substack.com/p/agi-from-first-principles
https://petervoss.substack.com/p/cognitive-ai-vs-statistical-ai
Sure, let's list all the things that computers can't do at present, and probably never will, and call it a proposal for real AGI.
“But I think a fair case can be made that it is not what it has often been cracked up to be, and probably never will be.”
An overwhelming case can be made that it never will be – no “probably” about it. Let us count the ways:
Understanding of the meaning of words – words in English can have 20, 40, 60, even 80 different meanings, and can serve as up to five different parts of speech. Not knowing these forces LLMs to operate in a very narrow context – they work best in a typical programming context, where each word has only one meaning (only 12% of English words have a single meaning – mostly uncommon words). You could call ChatGPT the curse of the programmer – “you have to think like us”.
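That ambiguity is easy to check for yourself. A small sketch using NLTK's WordNet interface (assumes `pip install nltk` and a one-time `nltk.download('wordnet')`):

```python
from nltk.corpus import wordnet

# Count the distinct senses and parts of speech WordNet records
# for a few common English words.
for word in ["set", "run", "bar", "raise"]:
    synsets = wordnet.synsets(word)
    pos_tags = {s.pos() for s in synsets}
    print(f"{word}: {len(synsets)} senses across {len(pos_tags)} parts of speech")
```

WordNet lists dozens of senses apiece for common words like "run" and "set", in line with the counts above.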
English has figurative meanings – “he raised the bar”, referring to a high-jump bar – and uses elisions where the meaning is obvious – “we watched a movie set in Hawaii”, where “that was” is left out. An LLM does not create mental objects and give them attributes (“a red car”), and so it can’t reason about the objects mentioned in the text.
Given its severe limits, it is amazing the technology got this far, but that also points to a considerable innocence about how language works. We do most of our analysis of text unconsciously (because we would be overwhelmed if we tried to do it consciously), and because we don't do it consciously, we assume it does not exist. Big Mistake.
Is any of this fixable while maintaining an LLM structure? I can't see how. The scaling lunge was useless – it just brought more meanings to bear, making things worse.
https://semanticstructure.blogspot.com/2025/11/too-big-to-fail.html
A chatbot is just a token of intelligence, not the actual thing.
“ Token Intelligence”
A chatbot talkin’
Is just a token
It isn’t the real
Intelligence deal
I have been skeptical since I first heard about it, and if I may say so, even before. I used to like technology, but since I discovered in 2016 that my first novel had been plagiarized and its copyright infringed many times, reached out to a lawyer, and then discovered a keylogger had been placed on my computer before malware caused it to implode, I have become wary. Now I’m a claimant in some class action suits. Every week I discover new novels, as well as series and films (many from known writers who should perhaps question the results), with content too similar to mine (and not necessarily in plot or even genre) for the similarities to pass for accidental. Thus, ChatGPT doesn’t interest me in the least. I don’t want my writing to be like everyone else’s. It might not be as good as some others’, but I toiled over my novels and wrote them from my own experience. I suppose the end result will be that no one will read or write anymore. Communication will become grunts and emojis!
If the economy does go south, I don't think AI as we have it today will be the direct problem. It will cause problems by directing a whole lot of capital to something that didn't even amount to a bag of magic beans. That capital could have been bet on something else. Greed has caused us to place an outsized bet on one thing and not spread our bets.
Obviously, we didn't learn from the movie industry, which used to bet on a few blockbuster movies; when they failed, the studio often went bankrupt. Here, it's the economy that's going bankrupt. Sure, Altman et al. are partly to blame. But so are all the boards, execs, and VCs who followed along without understanding the technology. As the popular anime sentiment goes, "Where did they get this confidence?"
I cheekily joke on social media that I want LLMs to fail. I guess it's not so much that as the people making massive promises that I'm sick of. It would be nice to see those egos brought low.