89 Comments

The most important sentence in this post?

"The sooner we stop climbing the hill we are on, and start looking for new paradigms, the better."

Nov 17, 2023 · Liked by Gary Marcus

This is typical late-stage hype denialism: "We never believed what we said we believed."

Be careful, the next stage is: "Look at that Marcus dude, he was such a tool for hyping up LLMs as AGI" :)

Nov 17, 2023 · Liked by Gary Marcus

“But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?”

― Jaron Lanier


Invite them over for Thanksgiving dinner with you - crow for the main course and humble pie for dessert.

Nov 17, 2023 · Liked by Gary Marcus

Well said, Gary, and you should feel vindicated. We need more people to understand that these systems simply cannot reason as humans do.

On that point, and responding to Jan's comments, LLM-based AI systems can perform decently on a broad range of human benchmarks that can be reduced to text -- bar exams, for example. But they cannot apply this knowledge in novel contexts. Perhaps just as tellingly, they struggle with simple tasks that require them to generalize outside their training data -- see the paper Embers of Autoregression for examples. Our experience of the world cannot be reduced to a set of training data -- see the paper AI and the Everything in the Whole Wide World Benchmark for more on this flaw.

Perhaps AGI will be reached one day, but not without further breakthroughs.


It must feel good to be proven right.

I want to thank you as well, you have educated me more on the topic of deep learning and the importance of healthy skepticism toward the narratives that are being sold (quite literally), than all these industry leaders combined.

Nov 17, 2023 · Liked by Gary Marcus

“What do we have to do in *addition to a language model* to make a system that can go discover new physics?”

OpenAI’s LLMs might not be necessary at all though, right?


The provocative title of Gary Marcus’s March 2022 article was “Deep Learning Is Hitting a Wall”. GPT-4 was released about one year later in March 2023, and GPT-4 was a significant advance over GPT-3. Hence, deep learning did not hit a wall in 2022.

Sam Altman does not believe the current strategy has hit a wall yet. Altman said the following at the Cambridge Union: “We can still push on large language models quite a lot, and we will do that. We can take the hill that we're on and keep climbing it, and the peak of that is still pretty far away.”

The key phrase is “still pretty far away”. So Altman believes substantial progress is still possible. Nevertheless, it is true that Altman thinks “another breakthrough” is needed to create an AI system that can accomplish the following demanding task: “make a system that can go discover new physics”.

In his current essay, Gary Marcus states that he “suggested that deep learning might be approaching a wall”. That thesis is more defensible, but it is rather weak because it does not specify the distance to the wall.

Large Language Models trained on human-generated data are probably not enough to achieve the comprehensive superintelligence that AI practitioners dream about. The AI systems that have achieved superhuman performance, such as AlphaGo and AlphaFold, use neurosymbolic techniques: digital neural networks are supplemented with strategies from good old-fashioned AI (GOFAI). Of course, neither system is perfect. Subsequent research has shown flaws in AlphaGo, but it is still superhuman overall.

I think mathematical theorem proving is an area that is ripe for breakthroughs that combine deep learning and GOFAI techniques.
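
As a concrete illustration of that combination, here is a minimal, purely hypothetical sketch: a symbolic best-first search over rewrite rules (standing in for inference rules), with the exploration order decided by a stub heuristic standing in for a learned model. The rules, the heuristic, and the names are all invented for illustration; a real neurosymbolic prover would put a trained network behind heuristic() and actual logic behind RULES.

```python
import heapq

# Toy symbolic system: rewrite rules over strings stand in for inference
# rules; "proving" a statement means finding a rewrite path from an axiom
# to the goal. (The rules are invented for illustration.)
RULES = [
    ("A", "AB"),
    ("B", "BB"),
    ("BBB", "C"),
]

def successors(state):
    """All statements reachable by applying one rule at one position."""
    for lhs, rhs in RULES:
        start = 0
        while (i := state.find(lhs, start)) != -1:
            yield state[:i] + rhs + state[i + len(lhs):]
            start = i + 1

def heuristic(state, goal):
    """Stub for a learned scorer of how promising a proof state is.
    A neurosymbolic system would use a trained network here."""
    return abs(len(state) - len(goal))

def prove(axiom, goal, budget=10_000):
    """Best-first search: symbolic rule application, heuristic ordering."""
    frontier = [(heuristic(axiom, goal), axiom, [axiom])]
    seen = {axiom}
    while frontier and budget > 0:
        budget -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen and len(nxt) <= 2 * len(goal) + 4:
                seen.add(nxt)
                heapq.heappush(
                    frontier, (heuristic(nxt, goal), nxt, path + [nxt])
                )
    return None  # no proof found within budget

print(prove("A", "ABC"))  # ['A', 'AB', 'ABB', 'ABBB', 'AC', 'ABC']
```

The design point is the division of labor: the symbolic layer guarantees every step is a legal rule application, while the learned scorer only decides which states to explore first.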


I'm sorry for the naive comment, but isn't it still fair to say that GPT-4 is pretty, well, awesome at doing lots of stuff and can make our lives plenty easier, even if it isn't the holy grail of AGI? I am in research and use ChatGPT daily to work more efficiently, to entertain and instruct myself and my child, and other things. Sure, ChatGPT isn't giving me entirely novel solutions to problems, but it's still changing my life for the better in a very noticeable way. And I'm just scratching the surface. Sure, it hallucinates but this doesn't affect things all that much. Why should people stop trying to climb this particular hill a bit further to more completely realize the potential here, even if it's not going to get anyone to AGI? Sincere question.

Nov 17, 2023 · edited Nov 17, 2023

I think the field is mightily improving. During the 'symbolic-AI' hype from ~1960 to ~1975, the argument 'it's just a matter of scale' reigned too, and it lingered until the 'big data' era. The 'big names' from that era took a very long time to accept — and some never did — that it wasn't a problem of scale. The fact that it took Sam about a year to publicly accept (still waiting for that blog post on OpenAI's site, though) that it isn't just a matter of 'scaling up' is a sign that the field has improved. Not so much technically, but psychologically.

Whereas these systems are very limited when 'trustworthiness' is required (as they are fundamentally confabulators), they may have uses where correctness is not a strong requirement, e.g. in the creative sector. While they may not deliver the next level of symbolic understanding, they might be fun.

Sam may still have some hopes of getting somewhere in the 'trustworthy' department. I did a quick and dirty calculation in preparation for a talk last month, and from that calculation, I gather the models have to become 10,000 to 100,000 TIMES as large to get in the range of humans (who — let's not forget — aren't paragons of reliability themselves). It's here in the talk: https://youtu.be/9Q3R8G_W0Wc?feature=shared&t=1665
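
The talk has the real numbers; purely as an illustration of how factors in that ballpark can fall out, assume task error declines as a power law in parameter count N, err ∝ N^(−α). The exponents and error gaps below are assumptions picked for illustration, not measurements; α ≈ 0.076 is roughly the loss exponent reported by Kaplan et al. (2020).

```python
# Illustrative back-of-envelope only: the exponents and error gaps are
# assumptions, not measurements. Premise: task error falls as a power law
# in parameter count N, err ~ N**(-alpha). Closing an error gap of factor
# g then requires growing N by g**(1/alpha).

def required_scale_up(error_gap: float, alpha: float) -> float:
    """Factor by which N must grow to shrink error by `error_gap`."""
    return error_gap ** (1.0 / alpha)

for alpha in (0.05, 0.076, 0.10):      # assumed scaling exponents
    for gap in (1.5, 2.0):             # assumed error gap vs. humans
        print(f"alpha={alpha:.3f}, gap={gap:.1f}x -> "
              f"~{required_scale_up(gap, alpha):,.0f}x more parameters")
```

Depending on the assumed exponent and gap, the factor comes out anywhere from tens to millions of times larger; 10,000 to 100,000 sits well inside that spread.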


We have to always keep in mind that all these people have a vested interest in the AI industry, so everything they say has to be considered in the context of the corporate strategy of the tech giant they work for. For example, Yann LeCun most likely "switched sides" because Meta changed its policy towards AI. To paraphrase: "Money corrupts honesty, and big money corrupts honesty big time."


I think Altman realized, after hyping GPT-x to the max and then seeing the pendulum swing a little too far as more and more people started talking about AGI and claiming we are getting close or, in some cases, are already there, that he had better temper expectations. He knows we're not anywhere near it, but having set LLMs in motion, with visions of dollar signs, he would like to avoid an AI winter: a lot of money is already invested, and if that money thinks we're almost there or already there, it will eventually (perhaps sooner rather than later) become very unhappy.


The race to mediocrity should not be scintillating. This is like watching a bunch of pre-teen boys, entirely unaware of what hormones are and what they're doing to their bodies, flail about in the playground during recess. A rather sorry lot supposedly going on about "intelligence" and showing rather little of it. Should the machines become sentient, this lot should get an F.


“But here we are 20 months later and in some core sense not a lot has changed; hallucinations are still rampant, large language models still make a lot of ridiculous errors and so forth.”

No, they are not. Have you spent any time using GPT-4? It is quite factually consistent.


This is exactly what I have been thinking as well. The problem is that LLMs are not actually intelligent; they just mimic human intelligence very well. This is why my goal is to imbue machine learning models with insights derived from neuroscience principles, which I believe is the best path forward to true human-level intelligence in AI.

If you are interested in my work / ideas, you can reach me at jeanmmoorman@gmail.com


This article reads as so wrong: a man desperately searching for self-reinforcement. Most of the examples don't come very close to aligning with each premise presented, and you need to really stretch to tie it together.
