Top 5 reasons OpenAI is probably not worth 90 billion dollars
Are OpenAI employees unloading at the moment of peak AI hype?
According to the Wall Street Journal, OpenAI employees are trying to sell some of their shares at a valuation of 90 billion dollars. Wise idea for them, not so much for the buyers?
Five reasons why I wouldn’t make that investment:
The profit isn’t there. Supposedly OpenAI is on track to make a billion dollars in revenue this year, but that projection probably comes from their two best quarters, and some data suggest their spectacular growth has already begun to slow after the amazing initial takeoff.
A billion dollars in revenue is not a billion dollars in profit; it’s extremely expensive to train these models, and fairly expensive just to operate them. (And as I understand it, OpenAI has to share their profits with Microsoft, as part of the recent Microsoft investment.)
Current models still have a lot of problems, necessitating future models. Future models will likely be bigger, even more expensive to train and even more expensive to operate, further reducing profits.
There is a ton of pending litigation, with multiple lawsuits from artists and writers, and rumors of a major NYT lawsuit coming. (“OpenAI could be fined up to $150,000 for each piece of infringing content”, according to the report, and there could be millions of such pieces; see the back-of-envelope sketch below.) If OpenAI were obliged to retrain its models only on materials for which it had consent, that would significantly weaken the results, and each round of retraining would cost a great deal of money. As things stand, even small changes to the underlying training data might require complete retraining. Imagine if every copyright strike demanded millions of dollars in retraining.
There’s not a huge moat here to protect OpenAI from competitors. The central technology at OpenAI (as far as anyone knows) is large language models, and many other companies know how to build them. Some are even open-sourcing competing models, and those models are improving quickly. The competitors may not (yet) be as good, but they are free, which will suit some customers fine. How much is the OpenAI brand name worth? (Example: a lot of the initial use may have been driven by undergrads and high school students writing term papers. Students don’t, as a rule, have a lot of money; they may quickly turn to free alternatives.)
Big customers are still skittish; lots of people are trying LLMs, but JP Morgan, Apple, and others have bans on their use, motivated at least partly by concerns about data leaks, though reliability is a problem too. Hallucinations are a huge, unsolved issue.
LLMs will likely never deliver on the promise that they are somehow close to AGI. Even now, GPT can’t be trusted to follow the rules of chess, for example, even though those rules are stated directly (probably multiple times) in the training set. Explicit verbal instruction is key in training human employees, and LLMs aren’t very good at following such instructions. They might be OK in domains where 70% correct is good enough, but that is unlikely to be acceptable in mission-critical applications.
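On the litigation point, a quick back-of-envelope sketch of the potential statutory exposure. The $150,000-per-work figure comes from the report quoted above; the one-million-works count is a hypothetical stand-in for “millions of such pieces”:

```python
# Back-of-envelope estimate of potential copyright exposure.
# fine_per_work comes from the report quoted above; works_infringed is a
# hypothetical stand-in for "millions of such pieces".
fine_per_work = 150_000        # USD, maximum statutory fine per infringing work
works_infringed = 1_000_000    # hypothetical count, deliberately conservative

exposure = fine_per_work * works_infringed
print(f"Potential exposure: ${exposure:,}")  # Potential exposure: $150,000,000,000
```

Even at that deliberately conservative count, the theoretical maximum ($150 billion) would exceed the rumored $90 billion valuation itself, before counting the cost of any court-ordered retraining.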
I have no doubt, of course, that some AI companies will eventually merit these stratospheric valuations, but I am not convinced that a pure LLM play ever will.
Gary Marcus sure hopes that AI can live up to the hype without undermining society.
So let me get this straight. In April, OpenAI raised $300M at a $29B valuation. And now, 5 months later, employees are selling shares at a $90B valuation.
If the company really tripled in valuation in 5 months, you would never sell, right?
Hallucinations are not a "huge unsolved issue" but a "huge unsolvable issue", given how autoregressive LLMs (including transformers) work. I read somewhere (I don't remember where) that Sam Altman has already started to rebrand hallucination as a feature ('creativity', which I think is defensible) rather than as a problem that needs to be solved.