14 Comments

Are folks talking about this scenario:

1) in the latest funding round Altman proposes majority control of OpenAI shifts from nonprofit to investors

2) board asks Altman, who denies it

3) board investigates and gets proof; they have to act quickly before Altman signs away their ability to govern.

Maybe this is not what went down, but it would explain why it was sudden and why the board didn’t reach out to investors.


OpenAI needs 2 crucial things.

1) Microsoft's compute

2) The loyalty of the engineering team

The board has neither, and so it has no real power, regardless of what any legal contracts might state.

Sam Altman recruited the team, and Ilya is finding out how real power works in the real world. The kind of dynamics that Putin understands so well.

Real power is not legal power; it's command of resources.


The entire idea of "AGI" that OpenAI was founded on is more of a religious belief than anything with any relationship to reality.

I think that's the cold reality of the situation. They created a useful product as part of a group whose central ethos is nonsensical. It'd be as if a group of people trying to eat the sun invented a useful new form of optics along the way. The people who want to eat the sun, and believe that's possible, are angry because the people using the new telescopes realized that you can't actually eat the sun, but that these new lenses let you see things at a much higher degree of magnification, and that can make you a bunch of money.

I am not at all concerned about "safety" or "humanity". IRL, all of the "problems" arising from these things are just the same problems we had before.


A server I'm in posted this screenshot from Blind: https://www.teamblind.com/post/I-know-why-Sam-Altman-was-fired-Fr5cQ6Ne

I find this pretty fantastical, but given the world we live in, I could also say truth is stranger than fiction!


Here's another question/issue. As we all know, OpenAI now has a complicated corporate structure involving a "capped profit" company ultimately governed by the board of a not-for-profit company (https://openai.com/our-structure). One of the provisions of this structure reads as follows:

“• Fifth, the board determines when we’ve attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

I know what those words mean; I understand their intension, to use a term from logic. But does anyone understand their extension? Determining that, presumably, is the responsibility of the board. As a practical matter, that likely means the determination is subject to negotiation.

If the purpose of the company was to find the Philosopher's Stone, no one would invest in it. Though there was a time when many educated and intelligent men spent their lives looking for the Philosopher's Stone, that time is long ago and far away. OTOH, there are a number of companies seeking to produce practical fusion power. Although there have been encouraging signs recently, we have yet to see a demonstration that achieves commercial breakeven (https://en.wikipedia.org/wiki/Fusion_energy_gain_factor#Commercial_breakeven). Until that happens we won't really know whether or not practical fusion power is possible. However, the idea is not nonsense on the face of it, like the idea of the Philosopher's Stone.

I figure that the concept of AGI is somewhere between practical fusion power and the Philosopher's Stone. That leaves a lot of room for litigation over just when OpenAI has achieved AGI. It also puts a peculiar gloss on Altman's recent joke that they'd achieved AGI "internally."
