89 Comments

The most important sentence in this post?

"The sooner we stop climbing the hill we are on, and start looking for new paradigms, the better."


Can't we both finish climbing that hill and look for new paradigms?


The problem is that the current hill is supremely expensive and will not lead to any good place.


“We’ve got a wall because I don’t want to work hard and don’t understand statistics”

The latter part of this sentence implies that you are a dolt who believes there is not “any good place” for the outcome.


Aren't ChatGPT and, to a lesser extent, DALL-E fantastic tools with real societal benefits that are still unfolding? And we haven't even reached the summit yet.


That IS the most important sentence! I think we can agree that hills are not walls. With this outlook it would be a wall, because your small mind would stop at the sight of it. Marcus, you cannot see the forest for the trees. But there is no reason to blind everyone else.

Nov 17, 2023 · Liked by Gary Marcus

This is typical late stage hype-denialism: "We never believed what we said we believed"

Be careful, the next stage is: "Look at that Marcus dude, he was such a tool for hyping up LLMs as AGI" :)

author

🤣😂🤣


🤦 🤦 🤦 🤦🤦 🤦 🤦 🤦

Gary's making stuff up.

1. From 2021, having implemented GPT-3 + human feedback, Sam Altman had long said he was not sure what model to build to achieve safe, strong general-purpose AI.

2. Bonus: there is apparently no evidence that Sam thinks deep learning is hitting a wall.

_____________

Time stamp: 15:28+ in 2021 conversation:

https://www.ted.com/talks/the_ted_interview_the_race_to_build_ai_that_benefits_humanity_with_sam_altman_from_april_2021?autoplay=true&muted=true&language=en

author

please. just stop. did you even read my post? Sam may have wavered but he sure as heck jumped on me in 2022.


I clearly read the article, and I showed that Sam didn't suddenly start considering new avenues after your famous "Deep Learning Is Hitting a Wall" article. Sam had made this clear about a year before your article.

Modern experts (like Ilya Sutskever) say that DL is currently in an acceleration phase.

Don't you consider their assessments to be more up to date than yours? I rarely see you account for, or even cite, such statements.


And you are jumping on him now. You're being as foolish parading a win here as he was. You could at the very least admit that if it's a wall, there might very well be doors we haven't found.


A door in this analogy would be developing/using a paradigm other than mere scaling of existing DL approaches to get to the other side of that wall. I.e. exactly what Marcus is suggesting.

Nov 17, 2023 · Liked by Gary Marcus

“But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?"

― Jaron Lanier


Invite them over for Thanksgiving dinner with you - crow for the main course and humble pie for dessert.

author

Yann is so allergic to humility the coroner might have to come for a visit


Parade laps before the finish. I’ll send you the crow soon!


Well said, Gary, and you should feel vindicated. We need more people to understand that these systems simply cannot reason as humans do.

On that point, and responding to Jan's comments, LLM-based AI systems can perform decently on a broad range of human benchmarks that can be reduced to text -- bar exams, for example. But they cannot apply this knowledge in novel contexts. Perhaps just as tellingly, they struggle with simple tasks that require them to generalize outside their training data -- see the paper Embers of Autoregression for examples. Our experience of the world cannot be reduced to a set of training data -- see the paper AI and the Everything in the Whole Wide World Benchmark for more on this flaw.

Perhaps AGI will be reached one day, but not without further breakthroughs.
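To make the Embers of Autoregression point concrete: one of the paper's probes uses shift ciphers, where models do far better on rot-13 (which is common in web text) than on rarer shifts such as rot-2, even though the algorithm is identical. A minimal sketch of that kind of probe (the prompt wording is my own illustration, not the paper's exact setup):

```python
import string

def shift_encode(text: str, shift: int) -> str:
    """Apply a Caesar/shift cipher to lowercase letters; leave other characters alone."""
    alphabet = string.ascii_lowercase
    table = str.maketrans(alphabet, alphabet[shift:] + alphabet[:shift])
    return text.lower().translate(table)

plaintext = "stay here until the morning"
for shift, note in [(13, "rot-13, common in training data"), (2, "rot-2, rare")]:
    prompt = f"Decode this shift-{shift} cipher: {shift_encode(plaintext, shift)}"
    print(f"[{note}] {prompt}")
    # Same algorithm, same intrinsic difficulty; Embers of Autoregression
    # reports much higher model accuracy on the rot-13 variant.
```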


Whether we would need further breakthroughs or just more data and bigger models is a matter of debate. My point is that Sam Altman didn't say what the post implies, as he was talking about super-intelligence, not human-level intelligence.


I was about to ask if human-level intelligence, AGI, and superintelligence are each defined well enough to be clearly distinct, but that's not even the point: the point is that if Altman had confidence in a path to them, he would just say so (as he did for much of this year, until his recently more tempered public remarks).


Do we really need clear definitions to tell the difference between the intelligence of a random dude (AGI) and an intelligence that could, at the very least, redo every major scientific discovery given the same context as the original scientist, in perhaps less than 10 seconds each time instead of the original years or decades (ASI)?


But is AGI the intelligence of some random human? We measure systems like Deep Blue or AlphaGo against the performance of the *best* humans, not random or even average ones. And is superintelligence just the broad version of that? Why not just call that AGI?

I don't even know how productive these concepts are sometimes, but I do know that Altman has no better idea than we do about what these systems would look like.

Nov 17, 2023 · edited Nov 17, 2023

AlphaGo and Deep Blue are narrow AIs; we expect them to excel because we know the problems they solved were vulnerable to massive amounts of computation. If you produced an AI that was as smart as any random dude, I can't see how anybody could avoid calling it AGI. That's the threshold, and the gray area is below it: if you coded the biggest idiot on earth, you'd probably get half of the researchers on the topic calling your system an AGI, and long debates would ensue.


He was asked about AGI (i.e. human-level) but then conflated his answer with 'super-intelligence' (i.e. beyond human-level). He also conflates 'discovering new physics' with 'super-intelligence', ignoring that discovering new physics is within the range of humans. It's a messy answer, except that he is clear that 'another breakthrough is required and scaling is not enough' for both AGI and beyond.


I believe he probably means that a single such super-intelligent AI should be able to "discover new physics", i.e. any such AI could do it with essentially 100% probability of success (as long as this new physics exists).


Sam is very fuzzy on AGI (human-level) versus 'superintelligence' (beyond human-level); he throws them together in one basket. Besides, we know that discovering new physics is something humans have done for a while, so why that requires 'super' intelligence, I don't know.


The difference is that the superintelligence would be a single individual; this system would be a monster of insight about reality.


I understand, but my point is that AI is nowhere near human-level intelligence at present, much less "superintelligence." Timothy Lee, who substacks as Understanding AI, has a good post up on this topic.



It must feel good to be proven right.

I want to thank you as well; you have educated me more on the topic of deep learning and the importance of healthy skepticism toward the narratives that are being sold (quite literally) than all these industry leaders combined.


founding

Gary is cherry-picking Altman's statements at the Cambridge Union. This excerpt from the transcript will show that he definitely has not gone full Gary Marcus:

""We definitely have not reached AGI yet, but if you went back in time five years and showed people a copy of GPT-4 and told them this was a real thing that was going to come, I think they would tell you that's closer to AGI than they thought it would be. There is novel understanding in these systems to some degree. Now, it's very weak, and I don't mean to make too much of it, but I don't want to undersell it either. The fact that we have a system that can understand the subtleties of language, combine concepts in novel ways, and do some of the things that many of us associate with general intelligence, that's a big deal.

The rate of improvement in front of us is so steep that we can see how good it's going to be in just a small handful of more years. I think the best test of all of this is just utility to people. So again, GPT-4 embarrasses us like we kind of just feel bad that it's out in the world because we know what all the flaws are. But it adds value to hundreds of millions of people's lives, more than that, people who benefit from the products and services other people are building with it.

And so, I think we're getting close enough that the definition of AGI matters. People have very different opinions of what that's going to be and when we cross a thing that you call AGI or superintelligence or whatever. But a thing that is in the rearview mirror is AI systems that are tremendously useful to people, and that I think came sooner than a lot of people thought."

author

I don’t doubt that there is utility. But Sam himself made everything sound like it was about AGI when he wrote that “AGI is going to be wild” less than an hour after DALL-E came out, and when he attacked me for saying deep learning was not bringing us AGI, as noted above (and in many corporate blogs about AGI, etc.). That’s been his major goal; he acted for a while like scaling was going to get him there (see also his essay “Moore’s Law for Everything”), and ridiculed me for saying otherwise. He’s not really gone full Gary Marcus, but this is a big change.

author

also he made a remark about plateaus recently, and conceded earlier this week that we have no idea how GPT-5 is going to turn out. so there’s definitely a change in his thinking that converges on what I have been saying.


It's not a "big change".

⚫⚫⚫⚫ Are you not seeing the 2021 interview (a year before your famous claim that DL was hitting a wall)?

Time stamp: 15:28+ in 2021 conversation:

https://www.ted.com/talks/the_ted_interview_the_race_to_build_ai_that_benefits_humanity_with_sam_altman_from_april_2021?autoplay=true&muted=true&language=en

_______________________________________________

Sam's words in 2021: "...we need to pursue now, I think, at this point, the questions of whether we can build really powerful general-purpose AI systems. I won't say they're in the rear-view mirror: we still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are like: what should we build, and how, and why, and what data should we train on?"

founding

Maybe the real reason he's out is that his thinking completely converged with yours :)

Nov 17, 2023 · Liked by Gary Marcus

“What do we have to do in *addition to a language model* to make a system that can go discover new physics?”

OpenAI’s LLMs might not be necessary at all though, right?

author

Correct!


Incorrect; this is pre-revisionist history. Evolution doesn't skip steps, you just think they're meaningless.


The provocative title of Gary Marcus’s March 2022 article was “Deep Learning Is Hitting a Wall”. GPT-4 was released about one year later in March 2023, and GPT-4 was a significant advance over GPT-3. Hence, deep learning did not hit a wall in 2022.

Sam Altman does not believe the current strategy has hit a wall, yet. Altman said the following at the Cambridge Union “We can still push on large language models quite a lot, and we will do that. We can take the hill that we're on and keep climbing it, and the peak of that is still pretty far away.”

The key phrase is “still pretty far away”. So Altman believes substantial progress is still possible. Nevertheless, it is true that Altman thinks “another breakthrough” is needed to create an AI system that can accomplish the following demanding task: “make a system that can go discover new physics”.

In Gary Marcus’s current essay he states that he “suggested that deep learning might be approaching a wall”. That thesis is more defensible, but it is rather weak because it does not specify the distance to the wall.

Large Language Models trained on human-generated data are probably not enough to achieve the comprehensive superintelligence that AI practitioners dream about. The AI systems that have achieved superhuman performance, such as AlphaGo and AlphaFold, use neurosymbolic techniques: digital neural networks are supplemented with strategies from good-old-fashioned AI (GOFAI). Of course, neither system is perfect. Subsequent research has shown flaws in AlphaGo, but it is still superhuman overall.

I think mathematical theorem proving is an area that is ripe for breakthroughs that combine deep learning and GOFAI techniques.
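For illustration, here is a toy of the neurosymbolic pattern I mean: a learned component proposes proof steps, and a symbolic checker admits only valid ones. (The "neural" proposer below is a random stand-in, not a real model.)

```python
import random

FACTS = {"a", "b"}                                       # established facts
RULES = [({"a", "b"}, "c"), ({"c"}, "d"), ({"x"}, "e")]  # (premises, conclusion)

def neural_propose(rules):
    # Stand-in for a learned policy that ranks promising proof steps.
    return sorted(rules, key=lambda _: random.random())

def symbolic_check(premises, known):
    # Symbolic side: a step is admitted only if every premise is already proven.
    return premises <= known

def prove(goal):
    known, proof = set(FACTS), []
    progress = True
    while progress and goal not in known:
        progress = False
        for premises, conclusion in neural_propose(RULES):
            if conclusion not in known and symbolic_check(premises, known):
                known.add(conclusion)
                proof.append((premises, conclusion))
                progress = True
    return proof if goal in known else None

print(prove("d"))  # [({'a', 'b'}, 'c'), ({'c'}, 'd')]
```

However badly the proposer ranks candidates, the checker guarantees every accepted step is sound; learning only affects how quickly a proof is found. That property is what makes theorem proving such a natural fit.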

author

Sir, reread the original essay. We have not made progress on the things I mentioned. We have more memorized items to rely on, but no robust solution to any of them.


Very true.

I'm sorry for the naive comment, but isn't it still fair to say that GPT-4 is pretty, well, awesome at doing lots of stuff and can make our lives plenty easier, even if it isn't the holy grail of AGI? I am in research and use ChatGPT daily to work more efficiently, to entertain and instruct myself and my child, and other things. Sure, ChatGPT isn't giving me entirely novel solutions to problems, but it's still changing my life for the better in a very noticeable way. And I'm just scratching the surface. Sure, it hallucinates but this doesn't affect things all that much. Why should people stop trying to climb this particular hill a bit further to more completely realize the potential here, even if it's not going to get anyone to AGI? Sincere question.


Your points aren't naive at all. Of course these LLMs can be useful, interesting tools to play with. If the AI companies sold them like that, then Gary would have little to say. But they are hyping them up as something they are simply not, nor ever will be. If you don't have people like Gary giving them reality checks, they will simply continue running around whispering dystopian fantasies into politicians' ears, whipping up social anxiety, and saying any BS to keep the investments coming in.



Hallucinations are not a big problem for people who are cautious and already have some knowledge of the subject at hand. An expert in a given field will easily detect and avoid flaws in his field. Hallucinations are a huge problem for non-experts, the general audience, the average unprepared user. Generally, people are not well aware of the shortcomings of AI systems, but they do already know from the media that AI-driven chatbots are much better than classical internet search. They have been told that these systems provide not only links to sources or elements of a response, but also ready-to-use, contextualized solutions. So they will want to use them extensively for everything, including social, engineering, or economic problems, and sometimes they will be completely misled.


But people can and will learn to use these tools in ways that avoid being misled, insofar as being misled leads to undesirable outcomes. I am not an expert in most things I use ChatGPT for; I've just learned to calibrate what I ask it, and I have developed a healthy skepticism of its responses.


You are much more optimistic than I am about the readiness and willingness of people using AI-driven tools to be cautious, rigorous, and critically reserved. I am worried that people will use not-fully-reliable AI tools for their professional or personal activities, without discrimination, without caution, just because it is so convenient, so helpful, so apparently efficient. As AI-based tools become more and more extensively used, there will be less and less space for criticism. People will not like to hear criticism of handy tools that they use every day and believe to be satisfactory.


I think the field is mightily improving. During the 'symbolic AI' hype from ~1960 to ~1975, the argument 'it's just a matter of scale' reigned too, and it lingered until 'big data'. The 'big names' from that era took very long to accept (and some never did) that it wasn't a problem of scale. The fact that it took Sam about a year to publicly accept (still waiting for that blog post on OpenAI's site, though) that it isn't just a matter of 'scaling up' is a sign that the field has improved. Not so much technically, but psychologically.

While these systems are very limited when 'trustworthiness' is required (as they are fundamentally confabulators), they may have uses where correctness is not a strong requirement, e.g. in the creative sector. While they may not deliver the next level of symbolic understanding, they might be fun.

Sam may still have some hopes of getting somewhere in the 'trustworthy' department. I did a quick and dirty calculation in preparation for a talk last month, and from that calculation I gather the models would have to become 10,000 to 100,000 TIMES as large to get into the range of humans (who, let's not forget, aren't paragons of reliability themselves). It's here in the talk: https://youtu.be/9Q3R8G_W0Wc?feature=shared&t=1665
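For the curious, the general style of that estimate, where the power-law form and every constant below are illustrative placeholders rather than the figures from the talk:

```python
# If the error rate falls as a power law in model size, error ~ size^(-alpha),
# how much bigger must a model get to reach a target error rate?
# ALL numbers here are made-up placeholders.

current_error = 0.10   # assumed: wrong answers ~10% of the time today
target_error = 0.01    # assumed: rough "human-range" reliability
alpha = 0.25           # assumed scaling exponent

scale_factor = (current_error / target_error) ** (1 / alpha)
print(f"required size multiplier: ~{scale_factor:,.0f}x")  # ~10,000x under these assumptions
```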


We have to always keep in mind that all these people have a vested interest in the AI industry, so everything they say has to be considered in the context of the corporate strategy of the tech giant they work for. For example, Yann LeCun most likely "switched sides" because Meta changed its policy towards AI. To paraphrase: "Money corrupts honesty, and big money corrupts honesty big time."



I think Altman realized, after hyping GPT-x to the max and then seeing the pendulum swing a little too far (more and more people talking about AGI and claiming we are getting close or, in some cases, already there), that he had better temper expectations. He knows we're not anywhere near it, but having set LLMs in motion, and with visions of dollar signs, he would like to avoid an AI winter: there is a lot of money already invested, and if that money thinks we're almost there or already there, it will eventually (perhaps sooner rather than later) become very unhappy.


The race to mediocrity should not be scintillating. This is like watching a bunch of pre-teen boys, entirely unaware of what hormones are and what they're doing to their bodies, flail about in the playground during recess. A rather sorry lot supposedly going on about "intelligence" and showing rather little of it. Should the machines become sentient, this lot should get an F.



“But here we are 20 months later and in some core sense not a lot has changed; hallucinations are still rampant, large language models still make a lot of ridiculous errors and so forth.”

No, they are not. Have you spent any time using GPT-4? It is quite factually consistent.


If it isn’t reliably correct then you can’t rely on it.

Useful tool for very many things, sure, like generating drafts and suggesting stuff. Facts, not so much.


This is exactly what I have been thinking as well. The problem is that LLMs are not actually intelligent; they just mimic human intelligence very well. This is why my goal is to instill machine learning models with insights derived from neuroscience principles, which I believe is the best path forward to true human-level intelligence in AI.

If you are interested in my work / ideas, you can reach me at jeanmmoorman@gmail.com


This article reads as the work of a man desperately searching for self-reinforcement. Most of the examples don't come close to aligning with the premises presented, and you have to really stretch to tie it all together.

author

Literally empty, absent a single specific example.

But do feel free to unsubscribe!
