61 Comments
William Finlator's avatar

Gary Marcus on AI is one of the only things keeping me sane as these crazy weirdos try to take over the world. What scares me most is how little resistance there is to their plans. Living in the UK, under what is meant to be a people's government, we see that government keen to sell off as much as it can of what makes us politically and culturally important as a country to the strange, nebulous and authoritarian tech bros.

Runner's avatar

We are witnessing the final moments of a desperate capitalist system whose existence depends on growth. What happens when we reach saturation? Diminishing returns? Or when, shock horror, we are met with the reality of finiteness, heaven forbid a decline in resources?

The capitalists are deeply scared. What if growth is finite? It's an existential crisis for them and for the politicians who serve them as puppets. The only thing these politicians can cling to is the belief in future growth. They have no vision, no solidarity with the people, no philosophy. Just economic promise.

The tech and AI vultures know this. They see how desperate G20 economies are for growth, and so they spin the story that their AI is the new Internet, the new computer, the new smartphone; after all, those tech shifts were the major source of growth for these countries over the past 40 years.

Workers and artists should stand against both the political and capitalist class. We don't need them, they need us.

Ovais Quraishi's avatar

Wait till the world realizes that a significant part of tech industry revenue comes from ads rather than from the services themselves (e.g., in 2023, Meta (formerly Facebook) derived approximately 98.4% of its revenue from advertising).

Promachos's avatar

All the politicians here are like this. The only sensible thing any of them did with tech was to start the Government Digital Service back around 2010 and give it the power to stop stupid projects (the passport service it created is still world-leading), but after that they started undercutting what made it work (having skilled independent tech contractors working directly for government) and chasing the dumb tech bubble du jour via big outsourcers. Remember how they decided they wouldn't have to make any clear decisions about how Brexit would work because "Blockchain" was going to magically solve all import/export at the border? Now it's AI, and they're back to hiring massive outsourcers to deliver it, just as they did for the failed Post Office system.

Aaron Turner's avatar

"LLMs are not the way. We definitely need something better." Trouble is, LLMs have sucked all of the oxygen out of the room, making it impossible for non-LLM-based research to find funding.

Patrick Logan's avatar

Something like ARPA would be the ideal vehicle to fund alternatives. That's not going to happen, especially not in the current American context. Perhaps other governments around the world or some curious billionaires interested in science will step up.

MarkS's avatar

Funding is useless without ideas. Other methods lack ideas much more than they lack funding.

Aaron Turner's avatar

I've been an independent AGI researcher since 1985. Ideas are not a problem.

MarkS's avatar

Then where is my AGI?

Aaron Turner's avatar

Where is my funding? Ideas + funding => AGI.

MarkS's avatar

Yeah, we've heard that one before. Some guy named Sam ...

Aaron Turner's avatar

OpenAI's only "idea" so far in respect of AGI has been to push someone else's data through someone else's model using someone else's money, and to then do it all over again, only bigger. It could hardly be less imaginative! Snake oil + funding => no AGI.

Stephen Reed's avatar

Dead ends, rather than a lack of ideas, are the problem in reaching AGI.

Pushing engineering beyond its frontier is fraught with dead ends that must be discarded along the true path. The many thousands of researchers pursuing AGI worldwide are thus mostly following what will turn out to be either dead ends or the very long way around to the goal of AGI.

My own independent AGI work began in 2006, and with the advent of GPT-3 a couple of years ago, I shelved 250K lines of code that implemented a knowledge base, symbolic knowledge representation, a consolidated machine-readable dictionary, a construction grammar parser, a bootstrap English grammar and an English language generator.

GPT-3 did all of that for me, and way better, so my work described above became a dead end to abandon. Moreover, computational linguistics as a whole, and hand-crafted heuristic knowledge bases as a path forward to AGI, became dead ends.

Funding is not really the problem holding back independent AGI research. For example, LLM prompts are answered at a rate of hundreds to the penny. Approaches such as mine, which previously depended on humans for coding and skill mentoring, can now be carried out with a very high degree of automation and at scale.

Brian Wandell's avatar

Not quite on the main thread of your message—this is more about science. At the university, we’re now inundated with talks presenting LLMs as models of human cognition. I get invited to two or more of these every week, mostly through Psychology or Neuroscience. It’s a bit confounding, not just because of the content, but because I feel it’s starting to slow down the development of new ideas.

I wonder if you’re still involved in that world, and if so, what your approach is at those talks. I try to be polite and supportive, especially with students and postdocs, but I also really care about scientific rigor. Honestly, it’s starting to stress me out.

Sufeitzy's avatar

If you use a reasonably simple model of cognition, like Friston's, the system an LLM uses fits into a tiny fraction of what's going on. It certainly uses a type of gradient descent to look at a current buffer, predict what would fill it, and compare that to a model. That's it.

Until you see a “filled buffer” being edited, not filled, to match a “change” or prompt, I don’t think you’ll see reason emerge.

LLMs are all basically bootstrapping a model of reasoning, but never modifying it. Consider what that would be like for an intelligence. You start with a zero state of awareness, then begin gradual awareness of what is. Then the punch line hits: "now that you're awake, we turn you off".

No human has an "oh, my buffer is full, let's start over" issue, do they?

It’s quite humorous to think about, since they’re not really even close.

It seems to reason because humans are very good at being deceived by mimicry and random patterns.
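A rough sketch of that fill-only loop, assuming a toy window size and with next_token_distribution as a hypothetical stand-in for a real model's forward pass (the dummy probabilities are purely for illustration):

    # Sketch: autoregressive decoding over an append-only buffer with a fixed window.
    from typing import Dict, List

    CONTEXT_WINDOW = 8  # real models use thousands of tokens; toy-sized here

    def next_token_distribution(context: List[str]) -> Dict[str, float]:
        # Hypothetical stand-in for a trained model's forward pass. A real LLM
        # returns probabilities over its whole vocabulary, conditioned only on
        # the tokens currently visible in the buffer.
        last = context[-1] if context else ""
        return {last + ".": 0.6, "and": 0.4}  # dummy numbers for illustration

    def generate(prompt: List[str], steps: int) -> List[str]:
        buffer = list(prompt)
        for _ in range(steps):
            visible = buffer[-CONTEXT_WINDOW:]        # tokens outside the window are simply forgotten
            probs = next_token_distribution(visible)
            buffer.append(max(probs, key=probs.get))  # greedily append the most likely filler
        return buffer                                 # earlier tokens are never edited, only extended

    print(generate(["the", "cat", "sat"], 4))

The buffer is only ever appended to and truncated; nothing in the loop goes back and edits what has already been filled in.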

Larry Jewett's avatar

Unfortunately, the only rigor associated with LLMs is of the mortis kind.

Google Man's avatar

I can't see how we can build an artificial intelligence machine if we don't understand how the human brain achieves intelligence.

Gary Marcus's avatar

well, AI can win at chess without a full understanding of how humans play chess. but i do think we should take inspiration from cognitive science

Patrick Logan's avatar

"AI" shouldn't even be the objective. Augmenting human intelligence is the more obtainable and probably more useful objective. Augmentation would emphasize the strengths and supplant the weaknesses of both humans and computers.

Martin Machacek's avatar

That is surely a problem if we want to build an AI system that works like human intelligence. Now, is that even a meaningful goal (from a practical point of view)? Wouldn't such human-like AI develop the same (or very similar) cognitive biases as humans? And if not, would it be truly human-like? Maybe a better goal is to build systems that may not be able to do everything a human can do, but on the other hand may compensate for shortcomings of our brains. I'm increasingly skeptical that building AGI (regardless of definition) will bring the expected benefits (like scientific breakthroughs). … and no, LLMs are not the "other intelligence".

Jan Steen's avatar

Never mind what wacky formula the Trump junta used to calculate their tariffs; the fact is that Trump plainly lied in claiming that his tariffs are half of what other countries charge on their imports from the US. Most have negligible tariffs.

If you (the US) buy more from country X than that country buys from you, you have a trade deficit with that country. This doesn't automatically mean that country X has slapped tariffs on American products. These things can be unrelated. It is therefore another lie to complain that foreign countries have "looted, pillaged, raped and plundered" the US, when all that really happened was that the US bought stuff abroad. The US paid for things it wanted to have, and other countries delivered the goods. How is that looting, pillaging, raping or plundering?
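For what it's worth, the formula analysts reverse-engineered from the published numbers appears to be nothing more than the bilateral trade deficit divided by imports, halved, with a 10% floor. A rough sketch under that assumption (illustrative figures only, not official data):

    # Sketch of the reported "reciprocal tariff" arithmetic (assumed reconstruction).
    def reciprocal_tariff(us_imports_from: float, us_exports_to: float) -> float:
        deficit = max(us_imports_from - us_exports_to, 0.0)
        implied_rate = deficit / us_imports_from   # presented as the "tariff charged to the USA"
        return max(implied_rate / 2, 0.10)         # "discounted" by half, with a 10% minimum

    # A country the US simply buys more from than it sells to gets a high rate,
    # regardless of the tariffs it actually charges on US goods:
    print(reciprocal_tariff(us_imports_from=100.0, us_exports_to=60.0))  # 0.2, i.e. 20%

Which is exactly the point: a trade deficit, not any actual tariff, is what drives the number.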

No, I don't think the Trump junta has used LLMs in its pronouncements: even LLMs would have produced something more coherent, something less obviously deranged.

Stephen Schiff's avatar

Gary, I was skeptical about the tariff-LLM connection until I saw the vfxgordon post, which definitely made the case. (Not being an LLM user, I had certainly failed to appreciate how deeply the prompt and the response appear to be connected.)

I am wondering whether LLMs may be financially successful even though they are deeply flawed, given the widespread inability to think critically. H.L. Mencken comes to mind.

Promachos's avatar

The emphasis on "learning to prompt" LLMs to get anything vaguely useful out of them has kept my alarm bells ringing almost constantly. An enormous global crowd has been working away to find the "killer app" of LLMs for a few years now, and it still hasn't achieved any kind of much-ballyhooed business application. That hasn't stopped it from proliferating like lice through every bit of SaaS we have. All it seems to do is search for bits of info (fine if you can quickly check it's not a hallucination, like say testing a macro in a spreadsheet; not if you have to spend time fact-checking and wipe out the incremental gain from using it), or rewrite what you've said if you're feeling lazy and disengaged. Everyone seemed to lean heavily on it for annual 360 performance review feedback this year, which kind of undermines the point of peer review, right?

It's the worst and biggest tech bubble I've ever seen, and I remember the dot-com crash, when people were sending the stock of airline/hotel resale sites through the roof. The financiers have lost their minds.

Franklin S.'s avatar

Correct me if you see something different, but my experience is that all of the shiny new LLMs are only incrementally better, at best, with hallucinations and anything involving numbers. It seems like it's just random when they're right.

name12345's avatar

Interested in your thoughts on AI 2027 predictions: https://ai-2027.com/

Stephen Reed's avatar

Plenty of controversial material for comment.

MarkS's avatar

We definitely need something better than LLMs, but have no clue how to build anything better. Neurosymbolics is as stuck in the mud as anything else (contra Gary's hopes and dreams). It is true that LLM stochastic parroting turned out to be better than most expected at some stuff ("I hope this email finds you well"), but I predict that its ultimate impact is going to be no more significant than plenty of other past developments, such as spreadsheet software.

Stephen Reed's avatar

Steam engines turned out not to be helpful for heavier-than-air flight, but the machining techniques developed for building high-tolerance heat engines led to the gasoline engine that enabled heavier-than-air flight.

LLMs have their important place in the chain of events leading to AGI and ASI.

Larry Jewett's avatar

Steam engines were/are based on general physical principles that could/can be thoroughly tested, verified, and applied to other cases (e.g., gasoline engines).

Because of their black box nature, there are no analogous underlying, testable principles (certainly no physical ones) for LLMs that can be extended and applied to alternatives.

Basically, what it “boils” down to is that heat engines (steam and ICE) are based on science but LLMs are not.

Robert Keith's avatar

And here we are. Which raises the question in my mind: what exactly IS GenAI good for, beyond some relatively modest collaborative tasks? It certainly cannot be trusted to perform end-to-end creation that isn't largely derivative.

Stephen Reed's avatar

Largely derivative skills are economically valuable. Indeed, an analysis of all the jobs in the world economy would show that end-to-end creation is not in most skill sets.

Robert Keith's avatar

There are things that are derivative, and then there are things that are a derivation. A subtle but important distinction. AI creates the former.

Stephen Reed's avatar

Agreed.

My point revolves around the notion that end-to-end creation is not in most skill sets. And that assumption makes finding jobs for Gen AI applications easier.

Noido Dev's avatar

What is it good for?! While millions of people are using it? Is that question even worth answering? It can create plenty of works that are unique enough, and this was clear from the start, when they made a picture of food that looked like some pet. Plenty of useful advice, code, and interesting conversations.

In the context of more general AI it can fit example fullfil the part of imagination, based on human knowledge and patterns of thinking.

Robert Keith's avatar

"...unique enough..."

Bleh.

"...interesting conversations..."

Riiiiight > https://youtu.be/G34onVI-gt8?si=jLSJDphJuazGTJSX

"...it can fit example fullfil the part of imagination..."

Was that sentence fragment written by AI...? Lol.

Notorious P.A.T.'s avatar

Okay, I had thought that the penguin island got hit with a tariff to keep companies from legally setting their headquarters there to avoid paying, but putting a tariff on Diego Garcia suggests they are just bonkers.

Hans Jackson's avatar

Very little moat for anyone

Never expected the transformative AI revolution to be so quickly driven by commoditization and democratization. Open-source advancements enabled nearly every lab to work on LLMs, while DeepSeek accelerated AI democratization—completely reshaping the pricing landscape. Bad news for Sam?

Jed Serrano's avatar

Noam would be so proud of you. (I miss him dearly.)

Kevin's avatar
User was temporarily suspended for this comment.
RMC's avatar

What's his grievance exactly? He seems quite pleased with himself to be getting these predictions right, and very reasonably so IMO. There are a huge number of adult scientists who are very skeptical indeed of LLMs but who absolutely do not want to get involved in public. There's no upside. So Marcus is doing some work for us all here, and taking the heat for it in public. I'd say "somebody has to," but actually they don't. If he's not entirely without human frailty or vanity, well, who is? Certainly no one online.

Larry Jewett's avatar

"There are a huge number of adult scientists who are very skeptical indeed of LLMs but who absolutely do not want to get involved in public."

Apart from a very few individuals like Gary Marcus and Ernie Davis, the silence within the computer science community gives me little reason to doubt that.

But if that is true, the actual problem is much much bigger than just LLMs.

RMC's avatar

Is it a big problem? Or even an actual problem? I was referring to psychologists and neuroscientists since that's who I know these days, but I'm sure there's a ton of computer scientists too. I doubt Donald Knuth is hyping LLMs if he's still alive. For that matter I doubt Marvin Minsky would have.

I'm not talking about any sort of suppression of results or difficulty publishing. There's a ton of papers critically analysing LLM results. People like Melanie Mitchell have not reported any problems.

I'm just saying professional people don't really want to engage in this public discussion in the way Gary Marcus does, because it is stressful, time consuming and may be unhelpful to one's career. On the other hand I'm grateful Marcus is investing the time and mental energy and I'm pleased it seems to be going well for him.

The problem is the hype. Lots of random scientists voicing casual skepticism would not necessarily help matters; it's better that someone like Marcus gives it the depth of analysis it needs.

Andrew Cooper's avatar

I kind of hope it slows down. I mean LLMs are kind of useful already. We don’t need super intelligence.

Chaos Goblin's avatar

Hanania may be right about the tariffs but he still voted for Trump and he's still a white supremacist...
