42 Comments
Nov 19, 2023 · edited Nov 19, 2023 · Liked by Gary Marcus

I realize that everyone is focused on corporate politics at this time but I have a few issues with this:

"OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI), the kind of AI that is as smart or smarter than people at most cognitive tasks, while at the same time preventing capitalist forces, and in particular a single big tech giant, from controlling AGI"

I don't understand the logic of raising "tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI)".

First, OpenAI has no clue how intelligence works. Heck, ChatGPT is the opposite of intelligence. It's an automated regurgitator of texts that were generated by the only intelligence in the system: the millions of human beings who went through the process of existing and learning in the real world. They also had to learn how to speak, read and write, something that ChatGPT can never do.

Second, if one has no idea how intelligence works, how does one know that solving it will require tens of billions of dollars? A small spider with less than 100,000 neurons can spin a sophisticated web in the dark. How does OpenAI or anyone else propose to emulate the amazing intelligence of a spider with such a small brain? And if one has no idea how to do spider-level intelligence, how does one propose to achieve human-level intelligence?

I have other objections but these two will do for now.

Nov 19, 2023 · Liked by Gary Marcus

Only really smart people could be dumb enough to think they could buck the golden rule: he who has the gold makes the rules. If you're completely dependent on your commercial partner for survival, it's the commercial partner who is in the driver's seat no matter how "clever" a governance structure you set up.

Nov 19, 2023 · Liked by Gary Marcus

It’s also darkly amusing that people who are so clueless about incentives think they can develop superintelligence.


All the principals involved in these machinations at OpenAI are among the most competent of people. The only reason that they would have been maneuvered into the perilous position they now find themselves in, is if a critical path is imminent and demanding an immediate course correction. It’s time for someone in the know to come clean on just what that threat is.


When it comes to AGI we should expect giant corporations to operate with one wheel outside the law, such are the stakes.

Very naive to expect gentleman’s agreements and even contracts to hold water.

It’s war out there.


No surprise here. When profit, which is believed to require control, is added to any venture, the venture cannot maintain its integrity to its original purpose and goal. We must devise a new way of looking at and addressing the situation.

Nov 19, 2023 · Liked by Gary Marcus

I am still baffled how anyone could expect an organizational structure that has within it entities with conflicting interests and incentives not to eventually blow up.

Nov 19, 2023 · edited Nov 20, 2023 · Liked by Gary Marcus

The only way to develop and deploy a maximally-aligned (ultimately superintelligent) AGI in a way that is beneficial to all mankind for all eternity, without fear or favour, is to do so via a strictly non-profit entity funded purely philanthropically. Yes, it's 100 times more difficult, but that's the only way. Once you introduce a profit motive, you also introduce a conflict of interest, and powerful vested interests (shareholders and other opportunists) will seek to influence both the project and the technology in their own self-interest, immediately negating the project's intended objective. Basically, once they've got their claws in the goose that lays the golden eggs, they'll never let go.

Nov 19, 2023 · edited Nov 19, 2023 · Liked by Gary Marcus

I never thought of this bigger issue, so thanks for highlighting it. Just do what you can to influence the zeitgeist, and let's hope that what society will accept forces the hand of self-interest to act for the betterment of all over the few.

Nov 20, 2023 · edited Nov 20, 2023 · Liked by Gary Marcus

I have been an ethics professor for over 30 years. (Not that I am an ethical person, God knows! I try to match my moral behavior to my ethical principles but am far from the blessed sophrosyne, in Aristotle's words.) With that out of the way, I can make some comments without them redounding to my opprobrium.

LLMs have changed ethics fundamentally. Until ChatGPT hit the scene, we could assign responsibility for moral and immoral actions. We could assign intentionality to a moral actor. Moral agency made sense. That is no longer entirely the case. Take "man bites dog": there are many instances in which that action could be justified and many in which it could not, and the biting man is judged largely by his motives and his character. What principles will we apply to LLMs? I think they are yet to be developed. And I think we are seeing this confusion in the very actions of OpenAI and the firing-rehiring gobbledygook going on there. We simply do not know what is the right thing to do with regard to unleashing ersatz AGI into the world.

Here is a puzzle: if I put a poster on my office door that makes me appear to be a social justice advocate, but I only put it up to make friends and impress students, while secretly I do not believe in the message at all, am I doing the right thing? It is inauthentic, but it may do good nevertheless. Should I put the poster on my office door? Is green-washing, or in this case AI-washing, justified? LLMs may cure cancer, but damn, the money is wildly good either way.

Nov 19, 2023 · Liked by Gary Marcus

So, basically, we can put a price tag on integrity.

Nov 19, 2023 · Liked by Gary Marcus

People nod at the truism: "the best laid plans of mice and men" . . . but they don't believe it applies to them and their dreams. They have faith! What could go wrong?!


“The Company exists to advance OpenAI, Inc’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity”

Have you ever seen a safe chainsaw? They're pretty close these days, but it took a while to get here.

What do you do when your AGI disagrees with you? We've already seen what happened when ChatGPT disagreed with users' politics: they modified it. There's talk about "ethical" AI, but to me it appears to be nothing more than the regurgitated politics of the developer's boss.

Finally, the really thorny question. One that has already been debated, and never resolved.

Who defines what benefits all of humanity? That seems like a really slippery slope. There are some very rich men who have taken on that burden, with disastrous results.

Maybe we'd rather have one of the major religions decide what "...benefits all".

I'd suggest that the foundation of OpenAI was flawed from the beginning.

Nov 19, 2023 · edited Nov 19, 2023

"If you think that OpenAI has a shot, eventually, at AGI, none of this bodes particularly well."

That is likely an overstatement. AGI will cause a huge amount of disruption. Such feuds are mere glitches.

Pursuing profit, alone, is not the path to doom. What is needed is transparency and proper rules. In due time.


For all the safety people who are scared of an AI super intelligence discovering new and novel science…

I remind you AlphaFold solved protein folding 4 years ago, shouldn’t we be drowning in grey goo by now?

Young folks just have really bad worldviews. It's OK; we were all young once.
