One theory of the case, popular on the social media platform X, is that the board was a bunch of clowns; they fired Sam for no really good reason beyond some ongoing tensions around company direction.
My operating belief has been that Sam recently became somewhat disillusioned with the AGI narrative, and because he was less afraid of any near-term consequences of the technology, he started pushing for rapid commercialization. He seemed to be speed-running every monetization playbook that's been popular over the last two decades in the space of about three months: make the iPhone of AI, the app store of AI, run a big AI consultancy, and so on.
So I think this builds on what you're saying, just put in more bird's-eye-view terms: Sam recently changed his own mission from "prevent AGI from destroying the world" (or whatever) to "monetize the tools we have made now". Ilya and others, I think, are still on the original mission and saw it as their moral imperative to continue it (with maybe the exception of Adam, who has what I would consider a conflict of interest via Poe).
This has drastically changed how I view OpenAI and the people who work there. I used to think more cynically about the AGI narrative and believed it was their marketing strategy, but now I think they are true believers, and that Sam once was too.
A question I have: does that interpretation gel with what you know about the board members? Is it your sense that their non-profit mission is so important to them that they would do what they did?
I think you're right.
The problem is that the entire notion of "AI safety" is a scam, and the "AGI" and singularity stuff is a doomsday cult.
To put it bluntly, none of these tools are even remotely intelligent, and never will be. It might be possible to somehow create a synthetic intelligence, but none of the present-day approaches are even remotely capable of doing that, because they aren't actually even trying to generate intelligence. And the entire idea of an "intelligence explosion" is just wrong, and misunderstands how technology works and how experiments and science work.
That doesn't mean that this technology isn't potentially useful, but it isn't at all what it has been hyped to be from various quarters.
I think the sad reality is that Yudkowsky scammed people out of millions of dollars and created a weird doomsday cult, and a lot of people don't want to admit that they were suckered because they bought into it. As with many such movements, people are being irrational, and as with many cults, if someone starts pointing out that the emperor has no clothes, the members are going to freak out. People who leave cults get treated terribly by the members.
The fact that there's a valuable (or potentially valuable) product involved is a huge point of tension, because people living in reality realize that the product is not even remotely intelligent but does potentially have major commercial applications. When your non-profit's purpose is to do something that is nonsensical and delusional, and you end up producing a useful product, a huge amount of tension is absolutely understandable.
This is a way better theory than my conspiracy theory that he was advocating for the destruction of Gaza and the board wanted to distance from him.
I guess what I had in mind for the "less than candid" thing was that Sam was pitching returns to investors based on the array of monetization channels he was developing, while simultaneously representing a commitment to the non-profit mission of OpenAI. The two perspectives are more or less diametrically opposed, so you inherently have to be telling the two parties different things.
In the past, when he was more of a true believer in the non-profit mission, the pitch was for organizations to "donate". More recently, especially with the for-profit arm in place, the pitch has probably made a hard turn toward asking "investors" to "invest". This builds all kinds of pressure within the organization, and letting it fester would likely mean the purging of the believers and the eventual death of the original mission in its entirety.
I do think Gary is right that there is probably a single recorded event being used as the casus belli to justify the firing, but what I was trying to get at is that it could have been any number of things; this switch in mentality made the clash inevitable. It's possible that Ilya and others had been looking for a valid excuse for a while. I don't believe the board is lying about anything; I think Sam had to be doing a certain amount of lying to remain in the position he was in.
This being the central piece of the tension, I think, also explains why it happened the way it did. If you're in the position those board members were in, trying to keep OpenAI on track with its non-profit mission, do you notify Microsoft or any of the other investors? Do you have the time and space to organize a proper PR effort and coordinate communication?
Some of the most powerful organizations and VCs on earth want to guarantee returns on their investments, and probably would have wanted Sam to stay on and keep moving things in the direction of commercialization. You probably have to keep a really tight lid on what's going on, because letting it leak might have prompted all kinds of actors, strongly incentivized to prevent such a move, to do pretty extreme things.
If Microsoft catches wind of this effort, what moves do they start making? Maybe they tell Sam, and Sam starts organizing an internal coup of his own. Maybe they lobby together with all the other existing and interested investors and force the board to the negotiating table. I'm sure you can imagine many other scenarios.
In a room of four very smart, conscientious people with experience in the hyper-capitalist world of Silicon Valley, these are the things you probably worry about, some rational and some not. I don't think this says anything negative about the group; it's just a really difficult situation to navigate.
The only other option, as far as I can tell, would have been some kind of scandal on Sam's part, but the memo from COO Brad Lightcap included a statement that pretty much ruled that out.
The legal point you bring up is interesting and not something I had considered. How important it is depends on the exact legal structure of, and relationships among, the different OpenAI entities. Maybe they believed they were required to act or would otherwise have been exposed to some kind of legal risk? But my question there would be: why not just step down? Also, if it's just fear of legal liability, going to Microsoft and asking for help on the legal side seems like it would make more sense as an option. But I've never been inside a non-profit, so I don't have a great perspective on that.
Quoting from Wikipedia:
https://en.wikipedia.org/wiki/OpenAI
"The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global LLC.[32] In addition, minority members with a stake in OpenAI Global LLC are barred from certain votes due to conflict of interest.[33] Some researchers have argued that OpenAI Global LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI.[42]"
Maybe it's just plain, old-fashioned, much-maligned... greed.
After all -
It's a learning curve, until it's an invoice, then it's a mistake.
Hi Gary,
He flat-out said publicly that the flagship technology, ChatGPT, was simply "cloning human speech," and went on to ask hard questions about AI that can "go discover new physics theories." You posted this yourself, days ago. How can that stand? It's philosophically at odds with the ethos of the company. He's the exec at GM saying: yes, I know it's a nice-looking car, but it has crappy brakes, and the AC quits and costs you money at the dealership just after the warranty expires. I have a piece coming out about this on my Substack, Colligo, with a guest post from someone who has further insight into it. Stay tuned. Thanks for getting on this story.
Erik J. Larson
Makes sense, but I think it’s difficult for people to understand because of the unique nonprofit mission of the top entity that the board controls. Their obligations are different from those of most boards.
Sounds possible. Do you think Greg Brockman was part of that planned venture, and was referring to it when he ended his tweet with “We will be fine. Greater things coming soon.”?
Nah, they would have at least sounded out MSFT if that was the case.
The only reason you move this aggressively and this quickly is if you are in legal jeopardy as a board member.
It looks like a “what did you know, and when did you know it” situation. Or they were concerned he would sign off some unwanted deal imminently.
Sam is not the product-development guy, so the theories about hiding tech make no sense.
But Sam is the capital raising and the recruiting guy, what skeletons might exist in those closets?
The fact that Microsoft was out of the loop on this is the weirdest part.
Very much feels like an important jigsaw piece is still missing, or this is just an extremely clumsy manoeuvre by an inexperienced board.
In 2023, Sam is the face of AI, the epicentre of funding, recruitment and regulation.
From here on OpenAI will have to compete with that and they’ve just blindsided all their investors with major boardroom shenanigans?
Suspect adults will now be parachuted in and the reporting burden within the org will mushroom.
The collapse of OpenAI is a consequence of the unattainability of the goal for which the company was created and publicly declared. A scandalous board meeting is just the pretext the participants used to dissociate themselves from that failure.
Some version of your theory makes sense. I have been on many public boards and have been involved in the exiting of two CEOs; you don’t do it lightly, and you have cause. CEOs have a duty to their company. His new ventures could have been a conflict of interest and/or a violation of duty that he did not fully disclose? It will come out. Most young messiahs turn out to be jerks or criminals.
I think it’s much simpler. Altman and Brockman have not been communicating recognized revenue, pipeline, and cash burn to the board. The board got better info from an unnamed insider and quickly did damage control.
You can’t do an IPO on fantasy numbers these days.
Didn’t Altman release ChatGPT one year ago against the wishes of most of OpenAI? Perhaps something recent was the final straw.
I hope the tension was caused by the safety failures inherent in the ChatGPT model, and that safety and humanity won out over rapid commercialization and valuation at seemingly any cost.
So Ilya finally told the board that AGI wasn't what they thought it was, that they'd been hyping it this whole time, and that Sam had lied about AGI altogether? I've used all of OpenAI's products, and ChatGPT really is a magic trick: it seems super impressive and is super useful for a lot of low-level tasks. But I've been doing this since 2007, and there's something you should know if you don't already... it's just a very impressive trick. The superintelligence that people think will come from scaling transformer models is not what you think: the performance gains from scaling diminish and peter out rather than simply producing ever more complex responses. So we will have to wait a few years for the thing everyone is hyping. And when Ilya (the most honest person in that group, by orders of magnitude) finally just admitted it, the OpenAI board flipped out. But Sam knows this, and so does Satya; that's why they're saying the things they say. It's cool. But Ilya should just come out and tell everyone, so it's in the open and people can stop wasting so much time on the hype, or this will all end up like our fabulous crypto brothers and sisters. So those of you who know the truth: just encourage your friends to admit it, and it will all be OK, I promise ;)
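To make that diminishing-returns point concrete: the published scaling laws already have this shape. Below is a minimal sketch in Python, assuming the parameters-only term of the Chinchilla scaling law from Hoffmann et al. (2022), loss(N) = E + A / N^alpha. The constants are that paper's fitted values, not anything from OpenAI, and treating loss as a function of parameter count alone is a deliberate simplification.

    # Illustrative sketch only: a Chinchilla-style scaling law,
    # loss(N) = E + A / N**alpha, with fitted constants from
    # Hoffmann et al. (2022). These are that paper's numbers, not
    # OpenAI's; the point is the shape of the curve, not its values.
    E, A, ALPHA = 1.69, 406.4, 0.34

    prev = None
    for n_params in [1e8, 1e9, 1e10, 1e11, 1e12]:
        loss = E + A / n_params ** ALPHA
        note = "" if prev is None else f"  (improvement: {prev - loss:.3f})"
        print(f"{n_params:.0e} params -> loss {loss:.3f}{note}")
        prev = loss
    # Each 10x in parameters buys a smaller absolute loss reduction:
    # roughly 0.42, then 0.19, then 0.09, then 0.04.

Whether a flattening loss curve also means flattening capabilities is exactly what this thread is arguing about; the sketch only shows that each order of magnitude of scale buys a smaller improvement in the training objective.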
From The Guardian: Sam Altman ‘was working on new venture’ before sacking from OpenAI
"The former OpenAI president, Greg Brockman, is also expected to join Altman after he quit the artificial intelligence firm along with other key senior executives following Altman’s abrupt departure."
https://www.theguardian.com/technology/2023/nov/18/earthquake-at-chatgpt-developer-as-senior-staff-quit-after-sacking-of-boss-sam-altman
Amateur Hour. OpenAI in talks with Altman to return as CEO.
Seems like MSFT are returning the humiliation!
https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo
The best way this thing turns out is Altman wakes up one morning in a remote village with all the other OpenAI employees. Number Two, played by Nadella, refers to him as Number Six and they all spend their time labeling images and moderating CSAM for $2 a day.
This story is continuing to develop with amazing rapidity--especially amazing because it's happening over a weekend that still has another day to go.