14 Comments

Are folks talking about this scenario:

1) in the latest funding round Altman proposes majority control of OpenAI shifts from nonprofit to investors

2) board asks Altman who denies it

3) board investigates and gets proof - they have to act quickly before Altman signs away their ability to govern.

Maybe this is not what went down, but it would explain why it was sudden and why the board didn’t reach out to investors.


Quite possible. I also suspect the development of GPT-5, and the pace at which it was headed toward public release, was part of this.


Just saw in the Times that Altman is describing this latest funding round as the way employees would get paid out for their equity. Nice move, making the board out to be the bad guys who don’t want employees to get paid.


OpenAI needs 2 crucial things.

1) Microsoft’s compute

2) The loyalty of the engineering team

The board has neither, and so it has no real power regardless of what any legal contracts might state.

Sam Altman recruited the team, and Ilya is finding out how real power works in the real world. The kind of dynamics that Putin understands so well.

Real power is not legal power, it’s command of resources.


The entire idea of "AGI" that OpenAI was founded on is more of a religious belief than anything with any relationship to reality.

I think that's the cold reality of the situation. They created a useful product as part of a group that has a central ethos that is nonsensical. It'd be like a group of people who were trying to eat the sun invented a useful new form of optics along the way. The people who want to eat the sun and believe that's possible are angry because the people who use the new telescopes realized that you can't actually eat the sun but that these new lenses let you see things at a much higher degree of magnification and that can make you a bunch of money.

I am not at all concerned about "safety" or "humanity". IRL, all of the "problems" arising from these things are just the same problems we had before.


Same problems, at scale, are worse, not the same. Good metaphor in any case; for a historical example, see Oneida silverware. Altman and the board created this ridiculous nonprofit-saving-humanity, for-profit, move-fast-and-break-things structure, and none of them give me any confidence.


So, Gary had it right. Altman probably did threaten to start a new company, probably in response to the board wanting a cut of the action. He held to his theology, and they fired him.

Apparently the board now thinks they can retain enough control of the technology to get along without Altman or the staff. And that while Altman is in the throes of a new start up they can stay ahead of him.

Altman has a few problems. First, the customer base are apostates. Second, the competition are pragmatic atheists. Third, who’s going to provide seed money for a venture that will refuse to let them reap the rewards?

Reminds me of the PC clone wars: IBM published the IBM PC BIOS and it was all over but the crying.


It's entirely possible it's the other way around - that Altman is the apostate now, and that the board are the true believers.

Altman making another company would be a way to escape their control.

We don't know who believes what, but it's worth remembering that Altman recently made those statements about how this technology won't lead to intelligence.


You are correct. I wrote a clever reply, couched in religious terms, but suddenly realized that it was a rabbit hole I don't want to go down.

Let's just say that when billions of dollars are at stake, follow the money.

Comment deleted (Nov 19, 2023)

Agree, but how is the board displaying cowardice?

Comment deleted (Nov 19, 2023)

I agree! But I don’t trust the reporting that is making it look like investors and staff leaving *isn’t* the board’s plan. Looks to me like the work of an excellent crisis comms team (hired by Altman).


A server I'm in posted this screenshot from Blind: https://www.teamblind.com/post/I-know-why-Sam-Altman-was-fired-Fr5cQ6Ne

I find this pretty fantastical, but given the world we live in, I could also say truth is stranger than fiction!


Nope, Satya is a firm supporter of Sam’s, so this doesn’t check out.


Here's another question/issue. As we all know, OpenAI now has a complicated corporate structure involving a "capped profit" company ultimately governed by the board of a not-for-profit company (https://openai.com/our-structure). One of the provisions of this structure reads as follows:

“• Fifth, the board determines when we’ve attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

I know what those words mean, I understand their intension, to use a term from logic. But does anyone understand their extension? Determining that, presumably, is the responsibility of the board. As a practical matter that likely means that the determination is subject to negotiation.

If the purpose of the company was to find the Philosopher's Stone, no one would invest in it. Though there was a time when many educated and intelligent men spent their lives looking for the Philosopher's Stone, that time is long ago and far away. OTOH, there are a number of companies seeking to produce practical fusion power. Although there have been encouraging signs recently, we have yet to see a demonstration that achieves commercial breakeven (https://en.wikipedia.org/wiki/Fusion_energy_gain_factor#Commercial_breakeven). Until that happens we won't really know whether or not practical fusion power is possible. However, the idea is not nonsense on the face of it, like the idea of the Philosopher's Stone.

I figure that the concept of AGI sits somewhere between practical fusion power and the Philosopher's Stone. That leaves a lot of room for litigation over just when OpenAI has achieved AGI. It also puts a peculiar gloss on Altman's recent joke that they'd achieved AGI "internally."
