Important update that somewhat undermines the thesis of this essay: Ray Kurzweil just wrote to me (22 June 2024) with an important clarification of his position:
I have not revised and not redefined my prediction of AGI, still defined as AI that can perform any cognitive task an educated human can. I still believe that will happen by 2029. My comment to Steven Levy was intended to specify that reaching the level of the best human poets is a higher bar for writing than what a "mere" AGI would achieve, and thus might take longer.
Kudos to him for standing by his position.
It was always going to happen; the ludicrously high expectations of the last 18 ChatGPT-drenched months were never going to be met. LLMs are not AGI, and (on their own) never will be; scaling alone was never going to be enough. The only mystery was what would happen when the big players realized that the jig was up, and that scaling was not in fact "All You Need".
Yann LeCun was, to his credit, one of the first off the sinking ship (I of course refused to board in the first place), calling LLMs an "off-ramp" to AGI. But that was only after ChatGPT ate Galactica's lunch; until then he was publicly supportive even if privately skeptical.
Others are bailing now, too. Or at least subtly walking back their positions, committing less to the unrealistic.
Exhibit A: OpenAI's CTO Mira Murati just publicly acknowledged what I have long suspected: there is no mind-blowing GPT-5 behind the scenes as of yet. In an interview with Fortune, she let slip that "inside the labs we have these capable models and they're not that far ahead".
Exhibit B: For years, and as recently as April in his TED talk, Ray Kurzweil famously projected that AGI would arrive in 2029. But in an interview just published in WIRED, Kurzweil (who I believe still works at Alphabet, and hence knows what is immediately afoot) let his prediction slip back, for the first time, to 2032. (He also seemingly shifted the standard for AGI from general intelligence to writing top-notch poetry.)
Expect more revisionism and downsized expectations throughout 2024 and 2025.
You heard it here first.
Gary Marcus is not shocked to see this retrenching.
Appreciate you, Gary. An honest voice among zealots who have so much invested financially, mentally, and emotionally that they can't think straight.
Even if we agree that AGI is relatively imminent, the question is whether that AGI would merely be an engineering achievement or would also shed some scientific light on how human (or biological) intelligence works. If it were nothing more than an engineering feat, then the pursuit of AGI is itself an off-ramp from the goals the founders of Artificial Intelligence set for themselves.