37 Comments

Amazing how simple your business model becomes when you eliminate duty of care.

well-said

His undisclosed logic, shared with many of his fellow tech bros, is to make a quick $100-200 billion and execute his escape plan (see Survival of the Richest by Douglas Rushkoff).

#1 priority: investor's wellbeing

#1,000,000,000 priority: humanity's wellbeing

#1 priority: Sam's own fortune

#1,000 priority: investor's return

#1,000,000 priority: others' safety

#1,000,000,000 priority: humanity's wellbeing

Investors must be betting on Hell freezing over.

1000% agree with you on this and I wish you’d focus as much on alignment and OpenAI’s criminal lack of concern about safety, as you do on the limitations of LLMs. Altman also stated recently that he “has faith” that “smart people” will figure out the alignment issues at some point. This is insanity.

To be fair "smart people will figure it out in the future" is both his alignment pitch and his product pitch. It's the AGI pitch, the "we're gonna solve physics" pitch, the Worldcoin for UBI pitch, the "enhancing humanity" (or however they put it) pitch... all of it. All the wild stuff he says is a request to have faith in how clever his team and the AI community are. And we should buy into it because after all look at this here amazing chatbot it sounds just like a real person.

sure, but how do "smart people" figure out how to outsmart, in perpetuity, entities that are a million, a billion, a trillion times smarter than they are?

That’s the beauty of extinction.

Smart folks won’t have to concern themselves with such things.

It is the consumer software mindset of "test your beta as a release". Tesla is another example of that recklessness.

You wouldn't see this in plane autopilots or Voyager spacecraft ;P

Didn’t we see it in early aeronautics? Keep iterating on bad designs until we get it right. Same for rocket launches.

Although in this analogy — which is generative — it should be pointed out that with aviation and such, it was not the case that a fair plurality of the planet was tooling around flying airplanes every which way and launching rockets up in the sky.

That's a fair point, though to make the AI analogy stick, let's imagine that "flight" is a vague term that means totally different things to different people, and there's no way to know if or when it's actually been achieved. Now, try to achieve it iteratively.

Just another Elizabeth Holmes!

This stuff isn't completely nonfunctional.

As someone who follows climate change and nuclear weapons issues, I have become increasingly convinced that we as a species are bent on self destruction. And that leaders are often the most self destructive of all.

This is typical of someone raised in the computer industry, where it is standard practice to sign multimillion-dollar contracts, deliver a bug-laden product, then charge the customer consulting fees to correct defects that should have been fixed BEFORE delivering the final product.

“Move fast, pocket the money, and break everything.” One of many reasons I stopped working for Silly Valley companies decades ago.

Well, "ship product and learn" might be a way to learn about product liability. The wrong way, though.

Just another example of how unserious all of this is. Altman can say these things because they're goofy hypotheticals that belong in sci-fi roleplaying game sessions, preferably after the bong's been passed around. We're not going to actually face this challenge because the technology he's encouraging you to imagine is fictional. Maybe it'll actually be created one day, who knows. Maybe we'll get warp drives and transporters from Star Trek, too. No one can disprove the future existence of future technology.

I agree with everyone else here that his moral calculus would be shockingly irresponsible if any of this stuff was serious. But it isn't, so whatever. To criticize it at face value is to give it credit it doesn't deserve. I'll save my anger for whatever lies he tells in the next OpenAI "system card" or pretend research paper.

The "AGI risk" hype is all in service to the "AGI will be awesome give me money" hype.

I don't get how people can one minute say that AI will do all this world-changing stuff beyond what software has ever done in the past, and the next minute demand it be regulated exactly like software. The reason software isn't heavily regulated is that it has a limited potential to do harm. If AI is different from software, it should be regulated differently.

I think it is fine to stop thinking Sam Altman is a visionary genius.

I think it is fine to stop thinking geniuses exist outside of movies and cartoons.

1. Sam the prophet probably believes his own prophecies on AGI-like performance and *thus* AGI-like risk.

2. Sam the business person has the ethics and wisdom of a [fill in your own horrible analogy]

3. These tools will be misused.

I'm not agreeing with Altman, as his motivation seems to be heavily influenced by pride and greed, but just to play devil's advocate:

I think interpreted charitably he means long term dangers are best handled by getting to know them while AI is still manageable.

Double-or-nothing till we all die? I am not a doomer, but that runs some pretty heavy risk, whereas a moratorium on systems of certain capabilities might conceivably not.
