Discussion about this post

Rebel Science

I realize that everyone is focused on corporate politics at this time but I have a few issues with this:

"OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI), the kind of AI that is as smart or smarter than people at most cognitive tasks, while at the same time preventing capitalist forces, and in particular a single big tech giant, from controlling AGI"

I don't understand the logic of raising "tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI)".

First, OpenAI has no clue how intelligence works. Heck, ChatGPT is the opposite of intelligence. It's an automated regurgitator of texts that were generated by the only intelligence in the system: millions of human beings who went through the process of existing and learning in the real world. They also had to learn how to speak, read, and write, something that ChatGPT can never do.

Second, if one has no idea how intelligence works, how does one know that solving it will require tens of billions of dollars? A small spider with less than 100,000 neurons can spin a sophisticated web in the dark. How does OpenAI or anyone else propose to emulate the amazing intelligence of a spider with such a small brain? And if one has no idea how to do spider-level intelligence, how does one propose to achieve human-level intelligence?

I have other objections but these two will do for now.

Daniel Hill

Only really smart people could be dumb enough to think they could buck the golden rule: he who has the gold makes the rules. If you're completely dependent on your commercial partner for survival, it's the commercial partner who is in the driver's seat no matter how "clever" a governance structure you set up.

