41 Comments
Nov 19, 2023 · edited Nov 19, 2023 · Liked by Gary Marcus

I realize that everyone is focused on corporate politics at this time, but I have a few issues with this:

"OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI), the kind of AI that is as smart or smarter than people at most cognitive tasks, while at the same time preventing capitalist forces, and in particular a single big tech giant, from controlling AGI"

I don't understand the logic of raising "tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI)".

First, OpenAI has no clue how intelligence works. Heck, ChatGPT is the opposite of intelligence. It's an automated regurgitator of texts that were generated by the only intelligence in the system: millions of human beings who went through the process of existing and learning in the real world. They also had to learn how to speak, read, and write, something that ChatGPT can never do.

Second, if one has no idea how intelligence works, how does one know that solving it will require tens of billions of dollars? A small spider with less than 100,000 neurons can spin a sophisticated web in the dark. How does OpenAI or anyone else propose to emulate the amazing intelligence of a spider with such a small brain? And if one has no idea how to do spider-level intelligence, how does one propose to achieve human-level intelligence?
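
As a rough back-of-envelope sketch of that scale argument (the human neuron count and GPT-3 parameter count are commonly cited figures that I'm supplying for illustration, not numbers from this thread):

```python
# Rough orders of magnitude only; all figures are widely cited approximations.
SPIDER_NEURONS = 1e5     # ~100,000, as noted above
HUMAN_NEURONS = 8.6e10   # ~86 billion, a standard estimate for the human brain
GPT3_PARAMS = 1.75e11    # GPT-3's published parameter count

print(f"Human brain vs spider: ~{HUMAN_NEURONS / SPIDER_NEURONS:,.0f}x the neurons")
print(f"GPT-3 params vs spider neurons: ~{GPT3_PARAMS / SPIDER_NEURONS:,.0f}x")
# GPT-3 already has roughly two million times as many parameters as a spider
# has neurons, yet nothing like a spider's autonomous web-building has emerged.
```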

I have other objections but these two will do for now.

author

that’s why the big IF at the end :)


A critical and core recognition; too bad the tech world is not paying attention. "Second, if one has no idea how intelligence works, how does one know that solving it will require tens of billions of dollars?"


I think a lot of this has to do fundamentally with how the VC world (and its poster child Sam) likes to operate: they make a series of big BIG bets that are each a very high financial risk but potentially high reward (for the VCs, and not necessarily for those they fund). None of them has any "vision" of whether any particular bet could work, but because they fund many such bets, only one has to succeed to make a windfall for the VCs. This is the main reason Sam is even attempting something like this: he doesn't have a grand vision; it is just one of his (and Microsoft's) big bets.
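
To make that portfolio logic concrete, here is a hedged back-of-envelope sketch; every number in it is an assumption invented for illustration, not anything from this thread:

```python
# Toy portfolio math for the "many big bets" strategy; all numbers invented.
n_bets = 20           # independent high-risk bets in the fund
cost_per_bet = 10e6   # $10M into each one
p_win = 0.05          # 5% chance any single bet pays off
win_multiple = 100    # a winner returns 100x its cost

total_cost = n_bets * cost_per_bet
expected_return = n_bets * p_win * win_multiple * cost_per_bet
print(f"Spend ${total_cost / 1e6:.0f}M, expect ~${expected_return / 1e6:.0f}M back")
# 20 bets * 5% = 1 expected winner; that single winner returns $1B on a $200M
# fund, i.e. ~5x, even though every individual bet was probably a loss.
```

On these toy numbers, no single bet needs a credible path to success; the portfolio's tail carries the fund.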

It is also one of the reasons, in my opinion, why we're seeing such immature behavior from the key people involved in this case: it is inherent in how VCs pick smart but naive people to make their big bets happen.

More on this topic: https://newsletter.smallbets.co/p/why-you-shouldnt-join-y-combinator


So you're saying that VCs and hedge fund managers are just gamblers with lots of money? I tend to agree but the frenzy over AGI is unprecedented. Some powerful hidden interests are driving it.


My guess is that AGI really gets the imagination working in a way that other hypothetical future technology doesn't.


In my experience, investors are happy to lose tons of money, but they don't want to be told upfront that this could happen. I've been trying to raise funds for a brain-based idea (a novel algorithm, as yet unproven) to make it happen. People don't want to invest when you tell them outright that the idea may flop. That is ok; I know this now. But I was amazed that they want to be lied to. I think it will work, and I am willing to put my time into it. But they want to be convinced. They don't want risk. Yet they are ok with losing billions on a bad bet. Some schizophrenia going on.


Indeed, but none of them actually know if it's going to work out. Same thing happened with driverless cars.

Comment deleted

I agree with your take. It's not encouraging, to say the least. I can only hope that more level-headed thinkers are working on AGI behind the scenes.

Nov 19, 2023 · Liked by Gary Marcus

Only really smart people could be dumb enough to think they could buck the golden rule - he who has the gold makes the rules. If you're completely dependent on your commercial partner for survival, it's the commercial partner who is in the driver's seat no matter how "clever" a governance structure you set up.

Nov 19, 2023 · edited Nov 19, 2023

Exactly. Whoever controls the money, controls the project.

Nov 19, 2023 · Liked by Gary Marcus

It’s also darkly amusing that people who are so clueless about incentives think they can develop superintelligence.

author

aligned superintelligence, no less

Nov 19, 2023 · edited Nov 19, 2023

Maximal alignment (where alignment = liveness + safety, i.e. good things happen, and bad things don't) is the fundamental concept, not (surprisingly) intelligence. If the system is (genuinely) maximally aligned, then it doesn't matter how intelligent it is.

That said, if a system is maximally-aligned, then it makes sense to make it as (super) intelligent as possible, because the more intelligent it gets, the safer it becomes.
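
A minimal sketch of that definition, assuming we model one run of a system as a finite trace of states and let the caller supply the `bad` and `good` predicates; every name here is illustrative, not drawn from any real alignment framework:

```python
from typing import Callable, Sequence

State = dict  # a system state, kept abstract for this sketch

def is_safe(trace: Sequence[State], bad: Callable[[State], bool]) -> bool:
    # Safety: "bad things don't happen" -- no state in the trace is bad.
    return not any(bad(s) for s in trace)

def is_live(trace: Sequence[State], good: Callable[[State], bool]) -> bool:
    # Liveness: "good things do happen" -- some state in the trace is good.
    # (Over infinite traces liveness means "eventually"; a finite trace is a
    # simplification.)
    return any(good(s) for s in trace)

def is_aligned(trace: Sequence[State],
               bad: Callable[[State], bool],
               good: Callable[[State], bool]) -> bool:
    # Alignment, on this toy definition, is just the conjunction of the two,
    # and says nothing about how intelligent the system producing the trace is.
    return is_safe(trace, bad) and is_live(trace, good)
```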


All the principals involved in these machinations at OpenAI are among the most competent of people. The only reason they would have been maneuvered into the perilous position they now find themselves in is if a critical threat is imminent and demands an immediate course correction. It's time for someone in the know to come clean on just what that threat is.


When it comes to AGI we should expect giant corporations to operate with one wheel outside the law, such are the stakes.

Very naive to expect gentleman’s agreements and even contracts to hold water.

It’s war out there.


No surprise here. When profit, which is believed to require control, is added to any venture, the venture cannot stay true to its original purpose and goal. We must devise a new way of looking at and addressing the situation.

Nov 19, 2023 · Liked by Gary Marcus

I am still baffled how anyone could expect an organizational structure that has within it entities with conflicting interests and incentives not to eventually blow up.

Nov 19, 2023 · edited Nov 20, 2023 · Liked by Gary Marcus

The only way to develop and deploy a maximally-aligned (ultimately superintelligent) AGI in a way that is beneficial to all mankind for all eternity, without fear or favour, is to do so via a strictly non-profit entity funded purely philanthropically. Yes, it's 100 times more difficult, but that's the only way. Once you introduce a profit motive, you also introduce a conflict of interest, and powerful vested interests (shareholders and other opportunists) will seek to influence both the project and the technology in their own self-interest, immediately negating the project's intended objective. Basically, once they've got their claws in the goose that lays the golden eggs, they'll never let go.


Philanthropic organizations have their own incentives. Monetary profit is not the only strong incentive. Power, recognition, ideology, and so on, are also powerful. The power that NGOs have in influencing government policies is clear to anyone whose eyes are open and who is not totally blinded by anti-capitalist ideology.

Nov 19, 2023 · edited Nov 20, 2023

Maybe, but of the incentives that you list, which is the most powerful in today's world? Clearly (IMHO), financial incentives (maximise salary, maximise revenues, maximise profit, maximise GDP, etc) are by far the most powerful, by orders of magnitude.

Besides, incentives are not the fundamental problem, self-interest is. Humans are primarily motivated by short-term self-interest, irrespective of the dimensionality of that self-interest (money, power, recognition, ideology, etc).

If people (specifically, the actors involved in an AGI project, including the sources of both funding and skills), could somehow be motivated to act in the best long-term interest of the human species as a whole (both living and future), rather than in their own short-term self-interest, then the problem of conflicting incentives is neutralised.

Unfortunately, acting in one's (or one's tribe's) own short-term self-interest is deeply ingrained in human nature, reinforced by hundreds of millions of years of evolution. It's not completely impossible to mitigate (for example, via the rule of law and other social contracts), just very very hard to do so reliably, especially when, as in AGI's case, there is so much money to be made (some $13.5 quadrillion, according to Stuart Russell).

Nov 19, 2023 · edited Nov 19, 2023 · Liked by Gary Marcus

Never thought of this bigger issue, so thanks for highlighting it. Just do what you can to influence the zeitgeist, and let's hope that what society will accept forces the hand of self-interest to act for the betterment of all over the few.

Nov 20, 2023 · edited Nov 20, 2023 · Liked by Gary Marcus

I have been an ethics professor for over 30 years. (Not that I am an ethical person, God knows! I try to match my moral behavior to my ethical principles but am far from the blessed sophrosyne, in Aristotle's words.) OK, with that out of the way, I can make some comments without them redounding to my opprobrium.

LLMs have changed ethics fundamentally. Until ChatGPT hit the scene we could assign responsibility for immoral and moral actions. We could assign intentionality to a moral actor. Moral agency made sense. That is no longer entirely the case. Man bites dog: there are many instances in which that action can be justified and many in which it would not be. The biting man is judged largely by his motives and his character. What principles will we apply to LLMs? I think they are yet to be developed. And I think we are seeing this confusion in the very actions of OpenAI and the firing-rehiring gobbledygook going on there. We simply do not know what is the right thing to do with regard to unleashing ersatz AGI into the world.

Here is a puzzle: if I put a poster on my office door that makes me appear to be a social justice advocate, but I only put it up to make friends and impress students, and secretly I do not at all believe in the message, am I doing the right thing? It is inauthentic, but it may do good nevertheless. Should I put the poster on my office door? Is greenwashing, or in this case AI-washing, justified? LLMs may cure cancer, but damn, the money is wildly good either way.

Nov 19, 2023 · Liked by Gary Marcus

So, basically, we can put a price tag on integrity.

Nov 19, 2023 · Liked by Gary Marcus

People nod at the truism: "the best laid plans of mice and men" . . . but they don't believe it applies to them and their dreams. They have faith! What could go wrong?!


“The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity”

Have you ever seen a safe chainsaw? They're pretty close these days, but it took a while to get here.

What do you do when your AGI disagrees with you? We've already seen what happened when ChatGPT disagreed with users' politics - they modified it. There's talk about "ethical" AI, but to me it appears to be nothing more than the regurgitated politics of the developer's boss.

Finally, the really thorny question. One that has already been debated, and never resolved.

Who defines what benefits all of humanity? That seems like a really slippery slope. There are some very rich men who have taken on that burden, with disastrous results.

Maybe we'd rather have one of the major religions decide what "...benefits all".

I'd suggest that the foundation of OpenAI was flawed from the beginning.


"If you think that OpenAI has a shot, eventually, at AGI, none of this bodes particularly well."

That is likely an overstatement. AGI will cause a huge amount of disruption. Such feuds are mere glitches.

Pursuing profit alone is not the path to doom. What is needed is transparency and proper rules, in due time.


For all the safety people who are scared of an AI superintelligence discovering new and novel science…

I remind you that AlphaFold solved protein folding four years ago; shouldn't we be drowning in grey goo by now?

Young folks just have really bad worldviews. It's OK; we were all young once.
