8 Comments

Not only do Sam and Greg leave the board of OpenAI Inc. (the parent; it was odd that they held those seats, as that made them their own employer, in a way), but so do all the others except one. It looks like a compromise: Sam and Greg return to OpenAI LLC, the OpenAI Inc. board is made more independent (and largely replaced), and everyone gets a fresh start. And there is that rumour of an investigation. In the end, what happens there is not very relevant, as all involved seem to think they are working on safe AGI, where both 'safe' and (certainly) 'AGI' are mirages.

The last line: it is rather strange watching all of this if you believe that LLMs are not the way to AGI. AGI would require machines to reason and to have a genuine model of the world, one equivalent or even superior to the one in the human mind. I believe that would require more reliable data. These models treat data as important only in terms of quantity, not quality: take as much as possible without asking, and hire gig workers who aren't allowed to see the elephant. There is no way to make this reliable or truly innovative. So they may as well fight over the money, since their original mission is just a mirage (at least for now, perhaps forever).

The problem is not so much unreliable data (though that matters too). The mechanism of Generative AI is inherently unable to contain logic and reasoning, so it effectively 'guesses' at them by producing linguistically valid text. See https://erikjlarson.substack.com/p/gerben-wierda-on-chatgpt-altman-and or the fuller story https://youtu.be/9Q3R8G_W0Wc Better data is not going to solve the fundamental problem, and neither will scale.

These models use data, but not the meaning of it. Compare it to the bibliography of a publication. For a reader, the references are somewhat meaningful. For someone doing research on citations, they are data points, and it doesn't matter what they are about. LLMs are like the latter: their perspective on the training data has no relation to our own. But the quantity has its *own* kind of quality, and we humans tend to be bewitched by the language of, and about, the LLMs.
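
To make the 'data points without meaning' analogy concrete, here is a minimal sketch (an illustration, assuming OpenAI's tiktoken tokenizer is installed; any tokenizer behaves the same way) of how a language model only ever receives a sequence of integer token IDs, regardless of what the text is about:

```python
# Minimal sketch: what a language model actually consumes is a sequence
# of integer token IDs, with no access to what the text is about.
# Assumes tiktoken is installed: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models

citation = "Darwin, C. (1859). On the Origin of Species. John Murray."
token_ids = enc.encode(citation)

print(token_ids)                          # opaque integers, nothing 'about' Darwin
print(enc.decode(token_ids) == citation)  # True: the round trip is lossless,
                                          # yet the model sees only the numbers
```

From the model's side, a citation to Darwin and a random string of comparable length are the same kind of object: a list of integers with certain statistical regularities.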

We humans generally have such a mistaken picture of our own intelligence that it may take a while (decades) to come to grips with what is really the case (which we are learning now). It is a bit like Darwin's evolution in the 19th century, or Kepler centuries before: it will take a while to sink in.

I knew your last post wouldn't be the end of the story :)

Do these people ever sleep?

The problem with a system where money always wins over principles is that we take the route of least resistance. That is not always optimal for humanity to flourish.

The founders of OpenAI tried to put checks on this short-term thinking through their innovative organization and governance structure.

Our greatest drivers are fear, uncertainty and short-term survival. These drivers impede our capacity for compassion and long-term thinking. When money wins, we are more likely to follow this self-destructive path.

AI will probably increase inequality, because those who already have power will be the ones in charge of its development, and they will not give up their hegemony without a fight.

Maybe what we have seen at OpenAI is a symptom of such a struggle, where humans are not the masters of money but rather its slaves?

What do you make of this, Gary? I'm no AI expert. When I read it, I couldn't stop laughing; it seems so cartoonishly absurd. How did this ever make it into Reuters? https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

Just answered, in the latest essay here :)

Damn. I was looking forward to reading about Microsoft employee Sam Altman's interactions with corporate partner OpenAI.
