What if GPT-5 didn’t meet expectations?
It is quite possible that no prereleased product in the history of technology (with the possible exception of the iPhone) has ever had more expectations put upon it than GPT-5.
It’s not just that consumers are hyped to get it, or that a whole bunch of businesses are planning to build companies around it; foreign policy is being built around it. Senators and military strategists have worried about what would happen if China got GPT-5 before we did; the chip war, which just escalated further, might be built on such concerns. Still others sought a moratorium specifically for models of GPT-5’s anticipated size. The upcoming Bletchley Park summit seems to be built on a similar premise. Others, meanwhile, have imagined that GPT-5 might eliminate, or at least greatly reduce, the many concerns people rightly have with current models, such as their unreliability, their tendency towards bias, and their propensity to confabulate authoritative nonsense.
But it’s never been clear to me that simply building a bigger model would actually solve any of those problems.
Today, The Information broke the news that another OpenAI project, Arrakis, designed to make smaller, more efficient models, went bust, cancelled by the top brass after it didn’t meet expectations.
For months, something that Sam Altman said while sitting next to me in May at the US Senate Judiciary Subcommittee on AI Oversight has been reverberating back and forth across my brain: “We are not currently training what will be GPT-5. We don’t have plans to do it in the next six months.”
Why not?
Since pretty much all of us assumed that GPT-4 would be followed, as quickly as possible, by GPT-5, often imagined to be significantly more powerful, that pair of Sam’s sentences came as a surprise. There are various theories one could have. For example, OpenAI might not have had enough cash on hand to train these models (they are notoriously expensive to train). But OpenAI is about as flush with cash as almost any startup has ever been. For a company that had just raised $10 billion, even a $500 million training run wouldn’t be out of the question.
Another theory is that OpenAI realized how expensive it would be either to train the model or to run it, and wasn’t sure they could make a profit given those costs. That still seems viable.
Still a third theory, my own, is that by the time of Altman’s remarks in May, OpenAI had already run some tests as proof of concept, and didn’t like what they saw. They might have decided that GPT-5, if it was simply a bigger version of GPT-4, would not meet expectations, and that it wasn’t worth spending the hundreds of millions of dollars required, if the outcome would only turn out to be disappointing, or even embarrassing.
The Information’s scoop is a reminder that all men (and women), and all companies, are mortal. Few have risen as quickly, but there’s no guarantee that OpenAI can maintain their stratospheric trajectory. If Yann LeCun and I are correct, AI will need genuinely new paradigms to get to the next level, and OpenAI may be no closer to those than anyone else.
§
If GPT-5 did fail to please, or even if it were just deferred indefinitely (like so many of the promises that have been made about driverless cars), it could have a rapid, deflationary effect on sky-high valuations.
People might even start to realize that building truly reliable, trustworthy AI is actually really, really hard.
Gary Marcus has seen neural networks rise and fall, and rise, and wonders what’s next.
There is an infantile assumption in anything to do with AI that exponential growth is expected as normal, and also sustainable. I wonder why? Is it just a lack of numeracy skills, or basic scientific ignorance? Exponential growth is very rare in the natural world. Things saturate or break very quickly. I wouldn't be surprised if we only get minor incremental improvements in GPTs from now on, improvements that would be hard to justify financially.
The main obstacle is the fundamental impossibility of eliminating shortcomings by increasing the size.