What exactly are the economics of AI?
Some AI makes a ton of money; a lot of it is still speculative
Is the fever for Generative AI (the form of AI that is currently most popular) on track to be the tulip mania of the 2020s? As I argued yesterday, the whole thing could turn out to be a fad, but it’s still way too early to tell.
Sometimes AI has hit it big. Google Search, which has made more money than almost any product in history, has been powered by AI since the outset (when it included zero percent Generative AI) and continues to make a ton of money now, as bits of generative AI are presumably blended in. (Generative AI presumably helps the quality of results; I don’t know whether that has had a material impact on the bottom line.)
Meta, too, has made an immense amount of money selling ads, and AI (though again not necessarily cutting-edge AI) has always been part of what allowed them to place those ads with precision. (As in the case of Alphabet and Google Search, it’s hard to see from the outside whether Generative AI has had a material effect on Meta’s profits.)
A few years ago there was a joke in Silicon Valley, anchored to some degree in reality, that if you had .ai in your startup’s domain name you could add a zero to your valuation ($100M instead of $10M, etc.).
Nowadays it feels like it could be two zeros, especially if you claim to be using Generative AI. But of course just because something is powered by AI doesn’t mean it will make truckloads of money.
And so far Generative AI hasn’t. Maybe hundreds of millions; probably not billions; certainly not hundreds of billions. A rumor that has been circulating on the internet in the last few days went so far as to suggest that OpenAI could even face bankruptcy, possibly as soon as 2024, per a tweet from Rowan Cheung that was deleted almost as quickly as it was posted.
§
Should we believe the rumor?
Driverless cars are powered by AI, but so far the money that has gone into them, on the order of $100 billion, vastly exceeds the actual revenue that has come out of them (perhaps several billion dollars, from driver-assist software sold to car manufacturers, Autopilot upcharges on Tesla, a little from paid rides in experimental programs, etc.).
Cheung’s Tweet (deleted without explanation) ultimately derives from an analysis by Mohit Pandey at Analytics India Magazine.
Pandey’s basic premise was that OpenAI is spending about $700,000 a day and not making all that much revenue. A December estimate that the company reportedly provided was $200 million for 2023, with a rosy prediction of $2 billion for 2024. As Pandey reported, however, the month-by-month website visit data no longer seem to fit with exponential growth.
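For context, the cited figures pencil out roughly as follows; this is only a back-of-the-envelope sketch using the reported estimates above, none of which are confirmed numbers:

```python
# Back-of-the-envelope comparison of reported OpenAI spend vs. revenue.
# All inputs are press-reported estimates, not audited figures.

daily_cost = 700_000              # reported spend per day, USD
annual_cost = daily_cost * 365    # annualized burn: ~$255.5M

revenue_2023 = 200_000_000        # reportedly provided 2023 revenue estimate

shortfall = annual_cost - revenue_2023
print(f"Annualized burn:  ${annual_cost / 1e6:.1f}M")
print(f"Gap vs. revenue:  ${shortfall / 1e6:.1f}M")
```

On those numbers the gap is tens of millions, not billions, which is consistent with the view below that revenue is roughly in the ballpark of operating costs rather than catastrophically short of them.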
That said, I don’t think OpenAI is actually in any kind of imminent danger; but neither do I think they are out of the woods.
Here are a bunch of considerations in either direction, partly drawn from a fun conversation I had earlier today with the VC and tech writer Om Malik:
Reasons why OpenAI, and generative AI, might still do well financially:
Coding is a solid use case that will continue to keep generative AI in the limelight for a long time. Programmers will never go back.
As Malik points out, OpenAI is probably already making nearly enough money to cover costs, and they have plenty in the bank; imminent bankruptcy is a real stretch.
Most of the costs are GPU server time, which Microsoft could easily provide to OpenAI in exchange for greater control. (I would not be at all surprised to see this happen.) I am doubtful that Microsoft would let OpenAI go under anytime soon, given the optics and the impact on their own stock price.
LLMs will probably get cheaper to operate over time, both as the hardware required inevitably gets cheaper, and as people figure out how to make them more efficient.
Some use case that provides significant profits could still be discovered or perfected. (For example, ChatGPT-style search seems very shaky now but could improve if new discoveries are made.)
Reasons why OpenAI, and generative AI, might struggle financially:
Current revenue appears modest, perhaps roughly in the ballpark of operating costs, but not far beyond them. With the exception of chat-style search, which doesn’t (yet?) work well, no killer app has emerged. Investors will not be infinitely patient.
For some use cases (e.g. writing efficient but bland copy), the novelty might already be wearing off. Users may or may not maintain long-term subscriptions.
Most of the basic technology is well understood, and to a large degree readily copyable. It’s not clear that any commercial company (e.g., OpenAI) has a durable moat to keep competitors out.
The underlying technology—often referred to as Foundation Models, because systems are fine-tuned on top of large pretrained (foundation) models—is deeply unstable and thus difficult to engineer into reliable products. I plan to write more about this soon, but the basic gist is that you can never really predict, for any given question, whether a large language model will give you a correct answer. We now know that answers can even change from one month to the next—which makes third-party engineering of large language models into complex systems a huge uphill struggle. In some mission-critical cases that engineering challenge may be insuperable.
Much of the current enthusiasm is fueled by enthusiasm; bubbles beget bubbles … until they don’t. If some people get off the bus and valuations begin to drop a little, that could launch a potent negative feedback spiral, sparking a sudden deceleration in values that had heretofore been rapidly appreciating. A lot of talent and investors might move on to a new set of shiny things, fleeing generative AI as quickly as some of the same people not long ago fled crypto.
Conclusions
Many of my essays end with a high degree of certainty; this one doesn’t. We just don’t really know how much gold lies at the end of this particular rainbow.
AI is not a magical economic engine; it works brilliantly in some use cases (such as selling ads, helping coders write code faster, playing classic board games) but in many others it simply isn’t reliable enough (e.g., truly autonomous driverless cars, medical diagnosis, ChatGPT-style search).
The astronomical valuations for Generative AI companies might be justified, but might well not be. Thus far, the valuations seem to be predicated on hopes and dreams, without really factoring in the serious engineering risks.
It’s not a house of cards, but the financial foundation of so-called Foundation Models is hardly as robust as it seems.
Caveat emptor.
Gary Marcus, well known for his recent testimony on AI oversight at the US Senate, recently co-founded the Center for the Advancement of Trustworthy AI, and in 2019 wrote a book with Ernest Davis about getting to a thriving world with AI we can trust. He still desperately hopes we can get there.