I don’t usually write about business deals, much less about rumors about business deals, but this one has me scratching my head, and is actually super relevant to how people on the inside - both at Microsoft and at OpenAI - are viewing the future of AI.
Thank you, Gary, for your efforts to insert some form of sense and reasoning into a highly charged, emotional subject. While LLMs, ML, DL, ANNs, etc. are all valuable, none of these, individually or integrated, provides any basis for real reasoning. Reasoning is foundational to any true AGI (if that is even possible), and while LLMs sometimes do amazing things with language, LLMs alone have no understanding of the meaning in what they write, which would require reasoning.
Wow. $10 billion for an essentially open source technology that pretends to be intelligent but is really a super expensive statistical engine on steroids? Wonders never cease. This is worse than folly. It has fraud written all over it in my opinion. Sorry.
Thank you for another insightful analysis.
I think they can make back the $10B on Codex etc., but I certainly don't think it is AGI
Maybe there is some money to be made but my gut feeling is that it's a scam that uses AGI as a marketing gimmick. OpenAI has zero chance of cracking AGI. Their approach to AGI is not even wrong. Just my opinion.
LLMs are no closer to AGI than 1980s expert systems were. LLMs can do wonders with language, but that's it. I would classify ChatGPT as being close to AGK, Artificial General Knowledge, a term I made up (I think), but being able to manipulate information is a whole lot different from any kind of "intelligence", which is a much more complex subject.
Oh boy. The culture at Microsoft must harbour a deep-felt conviction about the ultimate and inescapable success of digital AI. I am reminded that Microsoft wasted billions (and they weren't the only one — but Gates was a believer) on the previous round of AI bullishness, which yielded such notable results as Microsoft Bob and Clippy. And more recently: anybody recall Tay and Zo?
GPT and friends (the 'transformers') can produce 'well-structured' and 'fitting' results. They are so well-structured and fit the subject of the prompt so well that most of us are fooled into thinking they have something to do with AGI. But well-structured and fitting doesn't equate to actual intelligence.
Microsoft has a long history of paying too much for companies: TellMe (https://en.wikipedia.org/wiki/Tellme_Networks), Avenue-A (https://en.wikipedia.org/wiki/AQuantive). That's what they do.
> On the hand
Typo, should be "on the one hand".
Yeah. It seems that, with dimming or, at best, extremely expensive prospects for AGI (or at minimum, for non-trivial, reliable and useful vertically-integrated AI), Altman et al. are looking to cash in while the cashing in is hot.
The question that I ask, and that may well be vexing OpenAI's principals and investors, is: is there a practical upper bound to network size? Much of the hype surrounding LLMs is centered on prospects for the technology at greater and greater scale. Is there a theoretical network size after which system noise (hallucination, factual inaccuracy, etc.) can *only* grow? Has that point already been reached with GPT-3? Will GPT-4 and its successors necessarily mark improvements in fidelity?
Great analysis, thanks. Really feels like Altman is getting a great deal here.
There is also the legal angle; OpenAI put themselves in the spotlight and made themselves an easy target for lawsuits. As far as I know, there aren't any currently underway, but the risk is very obviously out there, however it may turn out. So getting out and making it Microsoft's problem (and Microsoft doesn't seem to care much; see Activision/Blizzard) also seems like a good choice from Altman's perspective.
There is also the Elon Musk thing, as one of the original backers. The most optimistic and gracious description is that his companies value growth at all costs and are willing to cut very large corners to get there. Judging by his persona, Altman seems pretty aligned with that. So it might also fit Altman's view of how business works: grow insanely fast, don't worry about breaking laws, and get out at the top.
ChatGPT is actually trained on people's data, and they are going to make money on that data without getting consent from the data owners :) Lawsuit? I don't know :)
They're basically paying $93 billion in interest on a $13 billion loan, and all the assets are ceded as security. Only the repayment terms are flexible. Hardly a good deal for OpenAI.
Interesting way to think about it (though they have profit sharing before the 93)
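For what it's worth, that framing is easy to sanity-check. Here is a back-of-the-envelope sketch in Python using only the figures in this thread; the payback horizons are purely hypothetical, since the rumored terms specify no repayment schedule:

```python
# Toy calculation of the "loan" framing of the rumored deal.
# Figures are from the comment above; the horizons are hypothetical.
principal = 13e9   # Microsoft's reported investment ("loan principal")
interest  = 93e9   # profits ceded to Microsoft before OpenAI keeps the upside

multiple = (principal + interest) / principal   # ~8.2x money multiple
print(f"Implied money multiple: {multiple:.1f}x")

for years in (5, 10, 20):
    annualized = multiple ** (1 / years) - 1    # implied compound annual rate
    print(f"{years:>2}-year payback -> {annualized:.1%} implied annual interest")
```

On a hypothetical 10-year payback that works out to roughly a 23% compound annual rate, which is why the "expensive loan" reading isn't crazy, even if the profit-sharing structure makes the actual cost contingent on OpenAI's success.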
Bit of a nitpick, but the $29B figure mentioned so far is described as the post-money valuation. So it would be valuing the current company at $19B.
In general this is a really large amount of cash. What other tech companies have raised $10B in any transaction whatsoever? The Facebook and Alibaba IPOs were larger, raising $16B and $22B. The Uber IPO was "only" $8B. Those companies all had far larger valuations than OpenAI does, though.
There isn't really any other tech company that seems comparable to me. OpenAI is unique in just how much cash they are piling up, for their size. It is not surprising that there are some unusual terms, given that it's an unusually large transaction.
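A quick sanity check on the figures cited above (all of them reported or rumored, none confirmed):

```python
# Implied pre-money valuation from the reported post-money figure.
post_money = 29e9   # reported post-money valuation of OpenAI
raised     = 10e9   # reported size of the Microsoft investment
print(f"Implied pre-money valuation: ${(post_money - raised) / 1e9:.0f}B")  # $19B

# Raise sizes cited above, for scale (IPO proceeds, not valuations).
raises = {"Alibaba IPO": 22e9, "Facebook IPO": 16e9,
          "OpenAI (rumored)": 10e9, "Uber IPO": 8e9}
for name, amount in raises.items():
    print(f"{name:<17} ${amount / 1e9:.0f}B")
```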
I am not a native English speaker, but I noticed a lot of errors. Thank you for the content.
Isn't AGI valuation kind of tricky, since a simple copy-paste can put your product out in the wild? The risk seems huge. How do you prove that a competitor is using AGI, or your AGI?
My take is that somebody who wants to build and monetize AGI will have to do it from a bunker of some sort, with employees not allowed to go out or to communicate with the outside world.
Is ChatGPT about to Exit-scam?
https://devaraj2.substack.com/p/is-chatgpt-about-to-exit-scam
Great analysis, much appreciated
Dear Gary Marcus, thank you for your sharp insights. What do you think of the recent ideas for adding trustworthiness to LLMs by training them to translate the question asked by the user into a computational language, like Wolfram Alpha, so that an accurate database can be queried, and then translating the answer back into natural language? Thanks.
You can't "train them to translate the question into computational language". With which sample data? Where would the quality assurance come from?
This is much harder than, say, an autonomous car, where you can easily (for some value of "easy", anyway) generate video and assorted metadata based on a million different and difficult-to-handle real-world situations, and train the new version of your AI driver on the "doesn't crash the car" outcome.
I think it is pretty hard to do well, because LLMs are black boxes with only strings as output. But it is an open area of research.
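To make the proposal concrete, it amounts to a three-step pipeline like the toy sketch below. Everything here is hypothetical: a real system would use an LLM for the translation steps (the hard, unsolved part the replies point at), and a curated engine such as Wolfram Alpha in place of the FACTS table.

```python
# Toy sketch of the "translate to computational language" idea above.
# The regex is a stand-in for an LLM; FACTS stands in for a trusted engine.
import re

FACTS = {("population", "france"): "67.8 million (2022)"}

def translate_to_query(question: str):
    """Step 1 (the hard part): natural language -> structured query."""
    m = re.match(r"what is the (\w+) of (\w+)\??", question.lower())
    return (m.group(1), m.group(2)) if m else None

def lookup(query):
    """Step 2: run the structured query against a trusted source."""
    return FACTS.get(query)

def answer(question: str) -> str:
    """Step 3: translate the result back into natural language."""
    query = translate_to_query(question)
    result = lookup(query) if query else None
    return (f"The {query[0]} of {query[1].title()} is {result}."
            if result else "I can't answer that reliably.")

print(answer("What is the population of France?"))
```

The appeal of the design is that factual accuracy lives in the curated source rather than in the LLM's weights; the open question is whether step 1 can be made reliable when the translator is itself a black box.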
Poignant analysis, Gary, thank you! Something odd about this deal: OpenAI Inc. (the non-profit) is supposed to be the parent company of OpenAI LP (the for-profit corporation), and if we squint, the deal terms for stakes look oddly like a spinoff to Microsoft before OpenAI makes bank.
Looks like OpenAI/Altman found a fairy godmother in Microsoft, who will supply the computing infra and absorb other costs, while trying to make money off of it. When enough money is made, all the dough will go back to OpenAI/Altman.