When you're doing 100 mph, your chances of surviving an impact plummet. The one thing I have not seen many reporters focus on in this story is the ludicrous speed with which this is developing. To Gary's implied point at the very end, it's the weekend before Thanksgiving. The fact that the Board could not wait to let Sam go speaks volumes. We're still waiting to hear what those volumes are actually saying.
Also, the fact that they all (Sam, Greg, the Board, the investors, etc.) spent this entire weekend trying to resolve the situation by Monday (with two 5 pm deadlines, no less) speaks even bigger volumes.
This isn't FTX, but it strikes me that the more sudden and dramatic the implosion of a high-profile company, the deeper and darker the core of the story.
Thanks Gary for keeping us updated
Erik Larson (author of the book "The Myth of Artificial Intelligence") invited me to write a guest post on his Substack: https://erikjlarson.substack.com/p/gerben-wierda-on-chatgpt-altman-and which was posted yesterday.
My guess there "[...] there is a conflict between the lofty goals — create safe and beneficial human-level AI for the world — of the non-profit OpenAI Inc. and the goals of the commercial sub-venture OpenAI LLC — which is there to make money, and which allows for commercial investors such as Microsoft to participate. It looks like Sam is looking for ways to escape the remaining constraints of the lofty non-profit. A new restructuring that turns the tables — the commercial sub venture becoming the lead and the not-for-profit lofty one simply becoming subsidised by the commercial arm — seems a possible outcome. Frankly, everybody is guessing now and so am I."
This turning of the tables *inside* the OpenAI structure now seems to be off the table, so a new venture will be looked at. Now we're going to see if it is possible to fund a second GPT-sized model. (There is some uncertainty about whether an efficiency innovation is waiting in the wings that makes things cheaper, like retentive networks for inference or something else for training; otherwise, I wonder how you create a business case for training a second $500 million to $1 billion 'GPT-like' model and running it.)
Other startups have already made the business case for training huge 'GPT-like' models: AI21, Anthropic (started by ex-OpenAI executives, and it just got zillions more from Amazon and Google), Inflection AI (got zillions from Google and Microsoft), and probably others I can't recall. The money is definitely there for anyone with any connection to OpenAI; in fact, the more money founders say they will spend on tens of thousands of Nvidia H100 GPUs for training, the more money VCs and big tech companies give them.
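For a rough sense of the sums being thrown around, here is a back-of-envelope sketch of what a cluster for a second 'GPT-like' model might cost. Every number in it (GPU price, cluster size, training time, electricity rate) is an illustrative assumption, not a reported figure:

```python
# Back-of-envelope cost sketch. All inputs are assumptions for illustration.

h100_price_usd = 30_000          # assumed per-unit price of an Nvidia H100
gpu_count = 25_000               # assumed cluster size ("tens of thousands")
hardware_cost = h100_price_usd * gpu_count

training_days = 100              # assumed wall-clock training time
power_per_gpu_kw = 0.7           # roughly an H100's board power, in kW
electricity_usd_per_kwh = 0.10   # assumed industrial electricity rate
energy_cost = (gpu_count * power_per_gpu_kw * 24 * training_days
               * electricity_usd_per_kwh)

total = hardware_cost + energy_cost
print(f"hardware: ${hardware_cost/1e6:.0f}M, "
      f"energy: ${energy_cost/1e6:.1f}M, total: ~${total/1e6:.0f}M")
# prints: hardware: $750M, energy: $4.2M, total: ~$754M
```

Even under these generous assumptions the total lands squarely in the $500 million to $1 billion range, and hardware dominates the bill, which is consistent with the observation that the H100 order book is effectively what the investors are funding.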
It will probably require a big billionaire to step in if Sam wants to replicate the for-profit OpenAI outside of OpenAI. Interesting times, though not necessarily beneficial for realistic assessments of the technology. At this point this power-grab soap opera sustains the GPT fever more than it breaks it. But if no big investor steps in, people might start to wonder why. And boy, Microsoft will be so angry (unless they are partly behind this after all). Conspiracy-theory time!
I love the update, but you really can take some time off. OpenAI could dissolve tomorrow and the only thing affected would be CO2 emissions, in a good way.
You started and ended a sentence with "instead". God bless your in-that-moment exuberance. =) Can't blame ya!
And just like that...Sam & Greg, with colleagues, join Microsoft.
I also wonder if Mr. Musk has anything to do with the OpenAI drama... even if tangentially. I always wonder about the conversations we don't hear, the texts we don't see, the emails we don't read. After all, Musk recruited Sutskever away from Google and then left OpenAI because of a fundamental disagreement with Altman. Maybe that's the screenwriter in me talking. Or maybe...
Yes, I suspect Elon is part of this too. I also suspect this had to do with the baby GPT-5 that Altman said was recently developed -- and the accelerationist urges that Altman has typically been channeling in his talks.
Ah the depths of the human psyche... and Silicon Valley power players :)
A little drama, yes, but without many long-term implications. Technology waits for no man, and the field is remarkably fluid and advancing furiously.
News of Microsoft hiring Altman and others.
1. None of these people have a plan. They are all making it up as they go along.
2. Why did they even negotiate to bring Altman back?
3. How much of OpenAI is know-how versus IP? If it's all the former, does competitive advantage even exist?
4. Does OpenAI just turn into a licensor while all the product development happens at Microsoft? How do these people work together?
5. Is the board going to tell us what triggered this? My outsider guess is a slow burn that started at least as far back as when Altman released ChatGPT over internal objections last year.
Thanks for your perspective! Re: competitive advantage, that matters to the entire industry. If they can't protect something that everyone wants, it may ultimately turn into a commodity. And that's the best-case scenario, assuming all the limitations don't start really crashing the hype.
For point 2, I agree that the board may never have been serious about bringing Altman back.
In any case, odds are at least 75% that this all ends in a lawsuit.
OpenAI has a lot more leverage than I think the public has been giving it credit for, even with a ton of talent leaving along with Sam. Microsoft is in a really difficult position, but they are surely forced to try to figure out how to salvage whatever they can. I feel like they have to at least publicly support OpenAI, and they're probably forced to continue its financial support as well if they don't want to write the investment off to zero.
Super complicated situation; so much to consider. What follows next is probably a purge/exodus of the non-AGI believers. Profit-seeking VC options are probably gone... but the tech as of now still stands ahead, so maybe some opportunists looking for an all-access deal like the one Microsoft has will be considering their options. This is also probably going to pressure Microsoft to reassert itself as the main supporter of OpenAI, though maybe they try to play hardball... I wouldn't be surprised if we hear Elon try to throw his hat in the ring this week; he's expressed a lot of public support for Ilya in the past.
This just got way more chaotic (which I, personally, am all here for). Good time to be an LLM researcher.
Doesn't seem like this is over "believing" in AGI or not, but about how safely such development can and should occur. My view is that it's not possible to develop safe AGI so we simply shouldn't be doing it -- kind of like how it's simply not safe to crowdsource nuclear bombs. https://nautil.us/building-superintelligence-is-riskier-than-russian-roulette-358022/
I really, really hope that this (extremely common-sense and self evident!!) opinion becomes more widespread before humanity really jumps the shark.
I want TikToks from former OpenAI folks who defect to Microsoft on dealing with MS politics & Azure technology.
Thanks to all participants for a great conversation. I just read the missive from the NYT this morning, and with Sam going to Microsoft (likely to be joined there by some of OpenAI's talent), it appears to me that OpenAI's board just got a lesson in what the real game is. It is about Microsoft beating Google, Amazon, and other large players for dominance in the emerging AI commercial-industrial-consumer complex. This game is far larger than the narrow point of view from which OpenAI's board was seeing it (a dualistic choice between being safe and "good" versus unleashing something dangerous). This might be better understood as something like the discovery of flight or the early oil and electricity industries. No one knows all the commercial-industrial-consumer manifestations of this; whole new, as-yet-unclear markets will emerge. It is time for a series of long-term strategic scenarios to be created to guide thinking about what is emerging. Tools like scenario analysis and chasm analysis will help. Reading the history of key industries and how they emerged will help. The small OpenAI board is in over its head. Billions will be lost and gained, and now it is about who can play on that field over the next decade.
Apparently there are no non-compete agreements at any level at OpenAI. Who allows a vendor or even an investor to poach talent? Who allows founders to leave and, two days later, go to work for an investor or a vendor? It's as mind-boggling as ChatGPT's capabilities.
NDAs address some of that. That's how CA works: no non-competes, but lots of NDAs.
...and now Satya & MS are swooping in, picking up the pieces: sama, gdb, and other "colleagues". Wow. Just wow.
Wait for Elon to come in and scoop up the best people leaving OpenAI (sans Altman of course)
Emmett who...?
as they say...the house always wins :)