Has Sam Altman gone full Gary Marcus?
What a difference a year makes
On March 20, 2022 in Nautilus, in the essay I am perhaps most notorious for, I suggested that deep learning might be approaching a wall — and made a lot of people, some very very important, really really angry.
I had the temerity to argue, right in the middle of an AI explosion, that (i) scale was not everything; (ii) we had no serious solution to problems of compositionality (understanding wholes in terms of their parts); (iii) hallucinations were not going away; (iv) reliability would continue to be a problem; (v) misinformation and factuality were unsolved and unlikely to be solved soon; and (vi) large language models would not get us to AGI, and that we needed a new paradigm.
The reaction from the machine learning community was swift and vitriolic; thousands of people ridiculed me on Twitter. Even famous ones. OpenAI CEO Sam Altman, for example, took a potshot at me in April of 2022.
Notice how his graphics are like mine? (Scroll back up if you missed that easter egg.) So cute!
Altman’s sidekick, Greg Brockman, then OpenAI’s CTO, now President of OpenAI, took a shot at me too, posting a Dall-E 2 take on my title (which, to my eternal amusement, butchered the title):
The slightly less well-known Joscha Bach’s meme was the funniest:
Meta’s AI exec Yann LeCun jumped in, too, posting this on Facebook on May 7, 2022, continuing the months-long chorus of remarks about hitting walls:
(We all know how that turned out).
For good measure, in May 2022, on the very day this blog was conceived (and indeed the impetus for conceiving it), a DeepMind executive declared that AGI had basically been solved:
But here we are, 20 months later, and in some core sense not a lot has changed: hallucinations are still rampant, large language models still make a lot of ridiculous errors, and so forth.
But you know what has changed? The hubris of Spring 2022 has, to some small but detectable extent, diminished. GPT-4 (introduced in the intervening period) is surely better, but people have little by little started to recognize that scoring better on benchmarks is NOT the same as making foundational progress.
Yann LeCun was one of the first of the big tech leaders to switch from cheerleader to skeptic:
But Yann’s just an employee of Mark Zuckerberg, with a model (Galactica) that got trounced by OpenAI’s ChatGPT; some have dismissed his reorientation as sour grapes. And this song is not really about him.
Because the interesting thing is that others are also changing sides. Earlier this month, Bill Gates, who not that long ago said that GPT-4 was a revolution, said that he did not expect GPT-5 to be that much better than GPT-4. Coming from Gates, this is a big deal, since Gates probably has access to drafts of GPT-5, given that he still owns a lot of Microsoft, which owns a lot of OpenAI.
I will confess that I found all of this to be more than a little bit vindicating; I am human after all.
But the sweetest vindication of all came just now, on November 16, 2023, from Sam Altman, who has now gone full Marcus. Here’s a transcript excerpt from a talk he just gave in Cambridge:
Exactly. I couldn’t have said it better.
As the psychiatrist says at the end of Portnoy’s Complaint, “now vee may perhaps to begin”.
The sooner we stop climbing the hill we are on, and start looking for new paradigms, the better.
Gary Marcus is the co-author, with Ernie Davis, of Rebooting AI, a 2019 book that was devoted to asking what a paradigm shift in AI might look like.
If you want to know where AI is headed, and not just where Silicon Valley wishes you to think it is headed, consider becoming a free or paid subscriber.