Gary Marcus on AI is one of the only things keeping me sane as these crazy weirdos try to take over the world. What scares me most is how little resistance there is to their plans. Living in the UK, under what is meant to be a people's government, we see that government keen to sell off as much as it can of what makes us politically and culturally important as a country to the strange, nebulous and authoritarian tech bros.
We are witnessing the final moments of a desperate capitalist system whose existence depends on growth. What happens when we reach saturation? Diminishing returns? Or when, shock horror, we are met with the reality of finiteness, heaven forbid a decline in resources?
The capitalists are deeply scared. What if growth is finite? It's an existential crisis for them and for the politicians who serve them as puppets. The only thing these politicians can cling to is the belief in future growth. They have no vision, no solidarity with the people, no philosophy. Just economic promise.
The tech and AI vultures know this. They see how desperate G20 economies are for growth, and so they spin the story that their AI is the new Internet, the new computer, the new smartphone; after all, those tech shifts were the major source of growth for these countries over the past 40 years.
Workers and artists should stand against both the political and capitalist class. We don't need them, they need us.
Wait till the world realizes that a significant part of tech industry revenue comes from ads rather than from the services themselves (e.g. in 2023, Meta (formerly Facebook) derived approximately 98.4% of its revenue from advertising).
All the politicians here are like this. The only sensible thing any of them did with tech was start the Government Digital Service back around 2010 and give it the power to stop stupid projects (the passport service they created is still world-leading), but then after that they started undercutting what made it work (having skilled independent tech contractors working directly for government) and chasing the dumb tech bubble du jour via big outsourcers. Remember how they decided they wouldn’t have to make any clear decisions for how Brexit would work because “Blockchain” was going to magically solve all import/export at the border? Now it’s AI, and they’re back to hiring massive outsourcers to deliver it just as they did for the failed Post Office system.
"LLMs are not the way. We definitely need something better." Trouble is, LLMs have sucked all of the oxygen out of the room, making it impossible for non-LLM-based research to find funding.
Something like ARPA would be the ideal vehicle to fund alternatives. That's not going to happen, especially not in the current American context. Perhaps other governments around the world or some curious billionaires interested in science will step up.
Funding is useless without ideas. Other methods lack ideas much more than they lack funding.
I've been an independent AGI researcher since 1985. Ideas are not a problem.
Then where is my AGI?
Where is my funding? Ideas + funding => AGI.
Yeah, we've heard that one before. Some guy named Sam ...
OpenAI's only "idea" so far in respect of AGI has been to push someone else's data through someone else's model using someone else's money, and to then do it all over again, only bigger. It could hardly be less imaginative! Snake oil + funding => no AGI.
Dead ends, not a lack of ideas, are the problem in reaching AGI.
Pushing engineering beyond its frontier is fraught with dead ends that must be discarded along the true path. The many thousands of researchers pursuing AGI worldwide are thus mostly following what will turn out to be either dead ends or the very long way around to the goal of AGI.
My own independent AGI work began in 2006, and with the advent of GPT-3 a couple of years ago, I shelved 250K lines of code that implemented a knowledge base, symbolic knowledge representation, a consolidated machine readable dictionary, a construction grammar parser, a bootstrap English grammar and an English language generator.
GPT-3 did all of that for me, and way better, so the work described above became a dead end to abandon. Moreover, computational linguistics as a whole, and hand-crafted heuristic knowledge bases, became dead ends as a path forward to AGI.
Funding is not really the problem holding back independent AGI research. For example, LLM prompts are answered by the hundreds for a penny. Approaches such as mine, which previously depended on humans for coding and skill mentoring, can now be carried out with a very high degree of automation and scale.
Not quite on the main thread of your message—this is more about science. At the university, we’re now inundated with talks presenting LLMs as models of human cognition. I get invited to two or more of these every week, mostly through Psychology or Neuroscience. It’s a bit confounding, not just because of the content, but because I feel it’s starting to slow down the development of new ideas.
I wonder if you’re still involved in that world, and if so, what your approach is at those talks. I try to be polite and supportive, especially with students and postdocs, but I also really care about scientific rigor. Honestly, it’s starting to stress me out.
If you use a reasonably simple model of cognition, like Friston's, what an LLM does fits into a tiny fraction of what's going on. It essentially uses a type of gradient descent to look at a current buffer, predict what would fill it, and compare that to a model. That's it.
Until you see a “filled buffer” being edited, not filled, to match a “change” or prompt, I don’t think you’ll see reason emerge.
LLMs are all basically bootstrapping a reasoning model, but never modifying it. Consider what that would be like for intelligence. You start with a zero state of awareness, then begin gradual awareness of what is. Then the punch line hits: "now that you're awake, we turn you off".
No human has an "oh, my buffer is full, let's start over" issue, do they?
It’s quite humorous to think about, since they’re not really even close.
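For what it's worth, here is a minimal toy sketch (plain Python, invented purely for illustration and nothing to do with any real model) of the append-only "fill the buffer" loop described above. The predict_next function is a made-up stand-in for a trained predictor; the point is just that generation only ever appends to a fixed-size window, never revises what is already there, and once the window fills, the oldest content simply falls off rather than being edited.

```python
from collections import deque

# Hypothetical stand-in for a trained next-token predictor: a real model would
# score a whole vocabulary given the buffer; this just cycles a canned phrase
# so the sketch stays self-contained and runnable.
def predict_next(buffer: list[str]) -> str:
    canned = ["the", "model", "keeps", "appending", "tokens", "until", "told", "to", "stop"]
    return canned[len(buffer) % len(canned)]

def generate(prompt: list[str], max_new: int, window: int = 8) -> list[str]:
    buffer = deque(prompt, maxlen=window)   # the fixed "buffer" / context window
    output = list(prompt)
    for _ in range(max_new):
        token = predict_next(list(buffer))  # predict what would fill the next slot
        buffer.append(token)                # append only; earlier slots are never edited
        output.append(token)                # once maxlen is hit, the oldest tokens silently drop
    return output

print(" ".join(generate(["once", "upon", "a", "time"], max_new=6)))
```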
It only seems to reason; humans are very good at being deceived by mimicry and random patterns.
Unfortunately, the only rigor associated with LLMs is of the mortis kind.
I can't see how we can build an artificial intelligence machine if we don't understand the way the human brain achieves intelligence.
Well, AI can win at chess without a full understanding of how humans play chess, but I do think we should take inspiration from cognitive science.
Good question, but maybe the wrong one. We know human brains make decisions based on reason and understanding. Current AI doesn't include that in its ANN models.
https://www.linkedin.com/feed/update/urn:li:activity:7314331443558526976?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7314331443558526976%2C7314369738107740162%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287314369738107740162%2Curn%3Ali%3Aactivity%3A7314331443558526976%29
No.
https://scholar.google.com/scholar?q=affective+decision+making&hl=en&as_sdt=0&as_vis=1&oi=scholart
No. Your link does not speak to your well-thought-out musing.
"AI" shouldn't even be the objective. Augmenting human intelligence is the more obtainable and probably more useful objective. Augmentation would emphasize the strengths and supplant the weaknesses of both humans and computers.
That is surely a problem, if we want to build an AI system that works like human intelligence. Now, is that even a meaningful goal (from a practical point of view)? Wouldn't such human-like AI develop the same (or very similar) cognitive biases as humans? And if not, would it be truly human-like? Maybe a better goal is to build systems that may not be able to do everything a human can do, but which may compensate for shortcomings of our brains. I'm increasingly skeptical that building AGI (regardless of definition) will bring the expected benefits (like scientific breakthroughs). … and no, LLMs are not the "other intelligence".
Never mind what wacky formula the Trump junta used to calculate their tariffs; the fact is that Trump plainly lied that his tariffs are half of the tariffs other countries impose on their imports from the US. Most have negligible tariffs.
If you (the US) buy more from country X than that country buys from you, you have a trade deficit with that country. This doesn't automatically mean that country X has slapped tariffs on American products. These things can be unrelated. It is therefore another lie to complain that foreign countries have "looted, pillaged, raped and plundered" the US, when all that really happened was that the US bought stuff abroad. The US paid for things it wanted to have, and other countries delivered the goods. How is that looting, pillaging, raping or plundering?
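To make the arithmetic concrete, a bilateral trade deficit is just purchases minus sales; the figures below are invented purely for illustration.

```python
# Toy illustration with invented numbers: a bilateral trade "deficit" is just
# the gap between what you buy from a country and what it buys from you.
imports_from_x = 100.0   # hypothetical: US purchases from country X, in $bn
exports_to_x = 60.0      # hypothetical: country X's purchases from the US, in $bn

deficit = imports_from_x - exports_to_x
print(f"US trade deficit with country X: ${deficit:.0f}bn")  # -> $40bn

# Nothing in this calculation involves country X's tariff rates at all,
# which is the point: a deficit and a tariff are different things.
```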
No, I don't think the Trump junta has used LLMs in its pronouncements: even LLMs would have produced something more coherent, something less obviously deranged.
Correct me if you see something different, but my experience is that all of the shiny new LLMs are only incrementally better, at best, with hallucinations and anything involving numbers. It seems like it's just random when they're right.
Interested in your thoughts on AI 2027 predictions: https://ai-2027.com/
Plenty of controversial material for comment.
We definitely need something better than LLMs, but have no clue how to build anything better. Neurosymbolics is as stuck in the mud as anything else (contra Gary's hopes and dreams). It is true that LLM stochastic parroting turned out to be better than most expected at some stuff ("I hope this email finds you well"), but I predict that its ultimate impact is going to be no more significant than plenty of other past developments, such as spreadsheet software.
Steam engines turned out not to be helpful for heavier-than-air flight, but the machining techniques for building tight-tolerance heat engines led to the gasoline engine that enabled heavier-than-air flight.
LLMs have their important place in the chain of events leading to AGI and ASI.
Steam engines were/are based on general physical principles that could/can be thoroughly tested, verified and applied to other cases (e.g., gasoline engines).
Because of their black box nature, there are no analogous underlying, testable principles (certainly no physical ones) for LLMs that can be extended and applied to alternatives.
Basically, what it “boils” down to is that heat engines (steam and ICE) are based on science but LLMs are not.
And here we are, raising the question in my mind: what exactly IS GenAI good for, beyond some relatively modest collaborative tasks? It certainly cannot be trusted to perform end-to-end creation that isn't largely derivative.
Largely derivative skills are economically valuable. Indeed, an analysis of all the jobs in the world economy would show that end-to-end creation is not in most skill sets.
There are things that are derivative, and then there are things that are a derivation. A subtle but important distinction. AI creates the former.
Agreed.
My point revolves around the notion that end-to-end creation is not in most skill sets. And that assumption makes finding jobs for Gen AI applications easier.
What is it good for?! While millions of people are using it? Is that question even worth answering? It can create plenty of works that are unique enough, and this was clear from the start, when they made a picture of food that looked like some pet. Plenty of useful advice, code, and interesting conversations.
In the context of more general AI it can fit example fullfil the part of imagination, based on human knowledge and patterns of thinking.
"...unique enough..."
Bleh.
"...interesting conversations..."
Riiiiight > https://youtu.be/G34onVI-gt8?si=jLSJDphJuazGTJSX
"...it can fit example fullfil the part of imagination..."
Was that sentence fragment written by AI...? Lol.
Okay, I had thought that the penguin island got hit with a tariff to keep companies from legally setting their headquarters there to avoid paying, but putting a tariff on Diego Garcia suggests they are just bonkers.
Very little moat for anyone
Never expected the transformative AI revolution to be so quickly driven by commoditization and democratization. Open-source advancements enabled nearly every lab to work on LLMs, while DeepSeek accelerated AI democratization—completely reshaping the pricing landscape. Bad news for Sam?
Noam would be so proud of you. (I miss him dearly.)
I kind of hope it slows down. I mean LLMs are kind of useful already. We don’t need super intelligence.
Hanania may be right about the tariffs but he still voted for Trump and he's still a white supremacist...
If only penguins could read... thank you for another good insight Gary.
I agree with what you're saying, but I don't think there are many in the AI world who would currently disagree. AI is useful for automation, scientific discovery (basically doing what ML has always done, but better) and learning.
Those still clinging to the whole 'agents' thing are only doing it because they are too far down the road to let themselves see the truth.
However, the idea that AI is dead in the water along with NVIDIA overlooks where AI is excelling right now: small, specialized models, just as Jensen predicted around the same time.
To take just one industry, these small models will eventually be in every computer game, transforming that industry.
Whilst we're making predictions, here's what I think will happen over the next year:
- OpenAI will fold as Microsoft and NVIDIA cut ties. Anthropic, Google and co will all make 'some' money from it as they have ways to apply it, but it will mostly excel in education, health and other edge cases.
- Trump, apparently obsessed with AI (though I know little of US politics), will try to buy it, either for himself and Musk to run, or under federal control so he has something comparable to China. I've no idea whether he'll achieve it, but that is why Musk put in the pseudo-hostile takeover offer: because he wants to be the one who eventually offers Altman a lower amount, when OpenAI is on its knees.
- China, with its coordinated approach, will shortly mandate a sort of 'universal API' for all Chinese software, so it can take advantage of where AI actually delivers benefit: automation. They will soar ahead dramatically because the unified approach beats all else.
- Finally, 50/50 as to whether Ilya will come out with something that creates a new 'GPT-3 moment'. Just not sure on that one...
Scaling upwards is dead, but that's not news to anyone; all the big developers are focusing on creating smaller, smarter models that do small, basic tasks well, removing the giant tarball in the middle of LLMs that produces such randomness.