AGI wolf calls have done their job. Anyone who believed CEOs were bearers of wisdom can take a moment, breathe, and guard their wallets until the next time Silicon Valley’s siren songs resurface.
Little Red Writing Hood:
“Grandma, what big AI’s you have.”
Silicon Valley Wolf: “The better to fleece you with, my dear.”
Fleece Navidad
Merry Shipmas
https://futurism.com/the-byte/enron-banker-parallels-openai
Nor will we see it in 2026, 2027, ...
(It's not an incremental issue, it is a fundamental issue. 'Wide' AI will not become 'General' AI.)
Intelligence is not one thing, as in you have it or you don't. It is a giant collection of skills and their seamless integration.
Machinery for high-level approximate synthesis is absolutely necessary, which is what the vendors have now.
The logic will get better in areas where there's profit. Any failures will inspire custom solutions. No fundamental limits in sight.
That depends on how you look at it. The fundamental limit is that no amount of induction (statistics) gets you deduction (symbolic logic), even if you can approximate it closely. The other way around is true too, as we found out 20 to 40 years ago.
To add, AI vendors are not in principle opposed to deductive reasoning or symbolic methods. The problem is that, in practice, those are just a different kind of approximation to the messy reality. They do not solve the problems.
AGI is an immense problem, and what is in our heads is not easy to represent properly. The industry is onto something by focusing on incremental work and leveraging data as much as possible. That will inspire new directions as need be.
Symbolic logic may be seen as part of our brain's efficiency apparatus; for instance, it prevents infinite scaling issues (e.g. the outlier problem, among others).
The symbolic logic people had it the wrong way around. Stuff like emotions doesn't emerge from large amounts of discrete facts and rules (the problem being that 'large' here is in effect 'infinite'); rather, discrete facts and rules are created out of the non-discrete, messy, chaotic stuff below. You can create Q out of R, but not the other way around.
In that sense it may be evolutionarily related to our 'conviction' efficiency apparatus, such as the conviction that 'incremental work' 'will' inspire new directions 😀. Or the conviction that these 'new directions', when mentioned now, are the equivalent of vapourware.
Symbolic logic surely has a place. When people learned to go from daily messy situations to high-level abstract rules, and then apply those rules in other contexts, that made us much smarter.
It is important to note, however, that abstractions alone cannot fix all outliers, just certain categories of them. The real work still happens at the messy detail level, where the rules you know may not apply, or where you need to know how to apply them.
Which is true. And here you are close to why systems built on massive amounts of discrete logic (like the bits and operands of digital computers) always have some trouble in messy reality. What holds for symbolic logic, that scaling it doesn't get you there, is probably true for discrete logic in general. (Just riding a hobby horse 😀.)
Much work people do involves diligently going through steps, and checking what you get as you go. The feedback informs the next steps.
Symbolic and principled methods suffer from the same problems as LLMs unless you are able to validate and model precisely what you are dealing with.
There seems to be a fundamental gap in understanding how *meaning* is produced. AI systems are incapable of producing meaning, and therefore cannot achieve 'AGI' in any capacity no matter how much training data they are subjected to.
There is no “understanding” either by the AIs OR by the folks developing them (who don’t understand — and don’t seem to even care — precisely why they seem so convincing)
The latter fact leads to all the hyperbolic claims that have no grounding in reality.
It’s all very weird for a field that calls itself “science”
Exactly.
Grounded meaning requires no symbols... that's where the disconnect lies, between reality (natural intelligence) and ALL of AI-to-date including the latest dot-product-based "generative" ['computational', really] fakes.
Grounding requires no symbols, indeed. But you have to model things somehow.
If you are arguing that none of the computational models developed over the past century have any bearing on intelligence, that would require a solid argument.
They don't (have any bearing on intelligence). For the past 70 to 75 years, it's all been about mimicking intelligent behavior. Whether symbol processing systems or ANNs, everyone has been and is still developing complex functions, trying to mimic the input-output behavior of thinking humans. And, not surprisingly, all they have to show for it to date is narrow AI. That's why people like Peter Norvig, who should know better, given their stature in the field, are claiming that AGI is already here: because they don't want to believe that all they've been doing their entire careers is developing complex functions, instead of making inroads toward understanding intelligence. If the AI community (or psychologists or neuroscientists) understood the nature of intelligence, no one would continue to develop systems the way they have been and still are (except perhaps to model some function of the mind-brain) if their intention is to achieve AGI.
There are serious limitations to using a neural net to fit inputs to outputs, indeed. We see that in practice. The systems do not understand what they do.
However, I think you are missing the significance of o3, AlphaProof, and upcoming agents. In those systems, neural nets are used only to propose a hypothesis. Then more rigorous tools, including formal verifiers, simulators, and code execution, kick in to keep the system honest.
With such an approach, the AI explores the problem space, with the neural net supplying ideas and some model keeping it on track.
We are very early in this, but the approach is sound. It is like with people. First use your imagination, but then do rigorous work and adjust based on feedback.
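To make that concrete, here is a minimal sketch of the propose-and-verify loop I have in mind. The names `propose`, `verify`, and `solve` are hypothetical stand-ins, not the actual o3 or AlphaProof machinery: in a real system the proposer would be a neural model and the verifier a formal prover, simulator, or code-execution sandbox.

```python
# Toy sketch of a propose-and-verify loop (hypothetical names, not a real API).

def propose(problem: str, attempt: int) -> int:
    # Stand-in for the neural "imagination" step; here we just enumerate guesses.
    return attempt

def verify(problem: str, candidate: int) -> bool:
    # Stand-in for the rigorous check (formal verifier, simulator, test run).
    return candidate * candidate == 49   # e.g. the problem "find x with x^2 = 49"

def solve(problem: str, budget: int = 100) -> int | None:
    for attempt in range(budget):
        candidate = propose(problem, attempt)
        if verify(problem, candidate):
            return candidate              # only verified candidates are accepted
    return None                           # no candidate survived within the budget

print(solve("find x with x^2 = 49"))      # prints 7
```

The point of the sketch is only the division of labour: the generator is allowed to be sloppy, because nothing it proposes is accepted until an independent check passes.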
But as I said in my previous post, what you describe above is still just mimicking intelligent behavior, albeit improved behavior. It is no different in principle from the AI of the latter part of the 20th century, just different processes and representations. Basically, history is repeating itself.
What separates mimicking something from the real thing?
Is it accuracy, predictability?
Or is it some philosophical thing?
It's not about the modeling - it's about conflating the model with the modeled.
What separates the modeling from the modeled? This: all models of phenomena can be undone, in the stack sense - previous states (e.g. of fluid flowing) can be reversed merely by setting the model variables back to past states, all the way to the start. In contrast, reality can never, ever, ever be undone - time doesn't flow backwards.
In our heads we also run models of the world. There is no difference between organic brains processing information and acting intelligently and software-based systems doing the same thing.
True meaning/understanding would require qualitative experience, imo. Meaning for humans is constructed, at least initially, by mapping symbols to qualia and stringing the mapped symbols together to produce concepts.
A camera hooked up to AI won't even do this, as it's just processing raw digital data.
That's why o3 searches around, and AI agents will invoke tools that know about meaning. One has to ground the generation, somehow.
Given the ongoing timeline, these A.I. guys should be part of WWE hype promos! Just give it to us, don't keep telling us about it. I will not be waiting. The human is superior for me until further notice.
I feel like the whole "AGI" debate is moving into absurdity. So, tech bros get to decide for everyone else with brute force? AGI is not real and has no meaning...hell it can't even be defined in contextualized linguistics/language...I think we need Noam Chomsky right now to bring some intellectual clarity and stop this science fiction nonsense.
You don't need a linguist. You need a philosopher. I recommend Robert Brandom, who wrote regarding rationality: "Rational beings are ones that ought to have reasons for what they do, and ought to act as they have reason to. They are subjects of rational obligations, prohibitions, and permissions." Given this definition it is obvious what the problem is: LLMs lack this normative dimension. They are not capable of binding themselves by what they believe or say in the way that any rational being would.
So what do you think about things like this? https://huggingface.co/papers/2501.04519
Looks overwhelming and a little scary. Do you think it's real, or did they just overtrain on that exact benchmark?
It looks really cool, if hacky in the way that all deep learning based attempts at performing deductive logic are hacky. Notice that there is an element of deduction involved: AI-generated proposed solution steps are turned into Python code and then run to see if they actually work.
Ultimately, this is still a probabilistic "does this solution resemble solutions from the training?" approach, but done in a principled manner so as to avoid the usual problems in using language models to answer logic questions.
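For what it's worth, here is a toy illustration of the "turn proposed steps into Python and run them" idea, not the paper's actual pipeline: `generated_code` stands in for model output, and the independent check is made up for the example.

```python
# Toy illustration: execute model-proposed solution code and accept it only if
# an independent check passes. `generated_code` stands in for LLM output.

generated_code = """
def solve():
    # proposed reasoning step: sum the integers 1..100
    return sum(range(1, 101))
"""

namespace = {}
exec(generated_code, namespace)      # run the proposed steps
result = namespace["solve"]()

expected = 100 * 101 // 2            # independent check of the answer
print("verified" if result == expected else "rejected", result)
```

Execution is what supplies the element of deduction: a wrong proposal fails the check regardless of how plausible it sounded as text.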
I'd say this is a good example of a well-established phenomenon: the more narrowly an AI is tailored to solving a specific kind of problem, the better it will perform.
GPT-5 should arrive right around the time those "few simple words" refuting Hubert Dreyfus that McCarthy promised us back in the late 1960s are finally written and published.
And what is the ROI for all this investment?
That is a good question. I think OpenAI has enough clout to sell services and make a buck, eventually. What will happen to Grok, Mistral, and Zuck's vanity project is less clear.
I meant the ROI for all those capital expenditures, not the data center buildouts themselves. I meant the corporations that are buying into all this and choosing vendors because the C-suite requires AI investment so as not to fall behind. Where is their ROI?
I don't think managers are so clueless as to invest in AI just because it is the fad of the day. In some industries, chatbots are useful and augment workers. If vendors like OpenAI offer tools that have value, and some customers buy them and then renew their deals, things will work out.
I have been a manager at Adobe and at a Masa SoftBank company. You are incorrect. They will, and they have.
Zoom has already rebranded as an AI-first company just by incorporating a couple of AI "assistants" into its software. I highly doubt that will be a good long-term decision.
As usual, people who buy in blindly will not do well, and those who fail to take advantage of new functionality will not do well either. Ongoing evaluation of the state of things is likely what they should do.
What’s with the version numbering anyway? Isn’t it mere marketing to call it GPT 5 instead of GPT 4.7, for example? Sorry if this is a dumb question, I’m not a software developer.
Changes in the transformer architecture, I guess. Turbo was 'long context'; .5 may have been dimensional increases, like token dictionary size, embedding size, or parameter count.
The “.5” means it’s half-baked.
Hi Gary - more and more I find articles written with AI tools, such as this one: https://www.sencha.com/blog/how-ui-components-help-developers-create-scalable-and-user-friendly-web-apps/. These articles are dry and devoid of the meaning that articles written with clear intent by humans have. I was thinking of inventing a term for this type of article, or maybe one already exists. Do you know? They are like zombies.
Even worse are YouTube videos whose script is written by an LLM and whose voiceover is produced by AI reading the script. The footage consists of cut-and-paste jobs or even AI-generated video and imagery. Dreadful, dystopian stuff.
AGI will be here when Sam Altman decides it's time to call OpenAI's latest model "AGI". The believers will insist it meets the qualifications, the non-believers will mock it, and the term itself will finally be permitted to show the world its meaninglessness.
Gary, the consummate beat reporter, has put his finger on the matter. The Eureka! moments of random-walk scaling are done. But, as Ilya Sutskever says, now is the time for 'scaling the right thing' - a more deliberative walk. Deliberation has long been fraught, but lessons have been learned in contracting to ease the way forward.
“Gary Marcus wishes the media would hold those who make unrealistic promises to account. Because they *rarely do*....” right?
I just wrapped up a small review of research on AI agents and how they fare in high-level roles, and the challenges are significant. Even if we get LLMs that are much better task planners, there is something about the social level, and the ability of LLMs to operate in that space with information asymmetry, that seems quite a reach right now - so yeah, I would be surprised if AGI showed up this year. https://www.agentsdecoded.com/p/research-roundup-can-ai-agents-actually
Any thoughts about the impact that Titans (from Google) could have?
https://arxiv.org/abs/2501.00663
GPT-5 is more of a cryptid than a product.