Why the collapse of the Generative AI bubble may be imminent
An update from the person who first called the bubble
I just wrote a hard-hitting essay for WIRED predicting that the AI bubble will collapse in 2025 — and now I wish I hadn’t.
Clearly, I got the year wrong. It’s going to be days or weeks from now, not months.
§
No joke. Here are the first two paragraphs of what I sent to my editor on Monday:
The Generative AI Bubble Will Collapse in 2025
Generative AI took the world by storm in November 2022, with the release of ChatGPT. 100 million people started using it, practically overnight. Sam Altman, the CEO of OpenAI, the company that created ChatGPT, became a household name. And at least half a dozen companies raced OpenAI in an effort to build a better system. OpenAI itself raced to outdo “GPT-4”, their flagship model introduced in March of 2023, with a successor, presumably to be called GPT-5. Virtually every company raced to find ways of adopting ChatGPT (or similar technology, made by other companies) into their business.
There is just one thing: Generative AI, at least as we know it now, doesn’t actually work that well, and maybe never will.
§
I’ve always thought GenAI was overrated. In a moment, though, I will tell you why the collapse of the generative AI bubble – in a financial sense – appears imminent, likely before the end of the calendar year.
To be sure, Generative AI itself won’t disappear. But investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts. Companies that are currently valued at billions of dollars may fold, or be stripped for parts. Few of last year’s darlings will ever meet recent expectations, in which estimated values have often run a couple hundred times current earnings. Things may look radically different by the end of 2024 from how they looked just a few months ago.
First, though: why should you take my prediction seriously?
Here are four bona fides, aside from my training and work experience (e.g., MIT Ph.D., tenure at NYU, built and sold a machine learning company to Uber, etc.):
In 2012, in The New Yorker, I pointed out a series of problems with deep learning, including troubles with reasoning and abstraction that were often ignored (or denied) for years, but that continue to plague deep learning to this day – and that now, at last, have come to be very widely recognized.
In December 2022, at the height of ChatGPT’s popularity, I made a series of seven predictions about GPT-4 and its limits, such as hallucinations and making stupid errors, in an essay called What to Expect When You Are Expecting GPT-4. Essentially all have proven correct, and have held true for every other LLM that has come since.
Almost exactly a year ago, in August 2023, I was (AFAIK) the first person to warn that Generative AI could be a dud.
In March of this year, I made a series of seven predictions about how this year would go. Every one of them has held firm, for every model produced by every developer ever since, throughout what is likely the most heavily capitalized race in history.
With bona fides established, let’s turn to why I have moved from the dark “this could well happen” (a year ago) to the much darker “I am really feeling like it’s going to happen very soon”.