His undisclosed logic, along with that of many of his tech bros, is to make a quick $100-200 billion and execute on his escape plan (*Survival of the Richest* by Douglas Rushkoff).
Oh wow, interesting-looking book!
#1 priority: investors' wellbeing
#1,000,000,000 priority: humanity's wellbeing
#1 priority: Sam's own fortune
#1,000 priority: investors' return
#1,000,000 priority: others' safety
#1,000,000,000 priority: humanity's wellbeing
Investors must be betting on Hell freezing over.
1000% agree with you on this, and I wish you’d focus as much on alignment and OpenAI’s criminal lack of concern about safety as you do on the limitations of LLMs. Altman also stated recently that he “has faith” that “smart people” will figure out the alignment issues at some point. This is insanity.
To be fair, "smart people will figure it out in the future" is both his alignment pitch and his product pitch. It's the AGI pitch, the "we're gonna solve physics" pitch, the Worldcoin for UBI pitch, the "enhancing humanity" (or however they put it) pitch... all of it. All the wild stuff he says is a request to have faith in how clever his team and the AI community are. And we should buy into it because, after all, look at this here amazing chatbot; it sounds just like a real person.
Sure, but how do "smart people" figure out how to outsmart, in perpetuity, entities that are a million, a billion, a trillion times smarter than they are?
That’s the beauty of extinction.
Smart folks won’t have to concern themselves with such things.
No idea. Little of this makes sense to me. But I don't believe that machines can have "intelligence" in the way the term is normally used outside of AI.
Just another Elizabeth Holmes!
This stuff isn't completely nonfunctional.
It is the consumer software mindset of treating the release as the beta test. Tesla is another example of that recklessness.
You wouldn't see this in plane autopilots or Voyager spacecraft ;P
"You wouldn't see this in plane autopilots"
Give Boeing time.
Haha true....
Didn’t we see it in early aeronautics? Keep iterating on bad designs until we get it right. Same for rocket launches.
Although in this analogy (which is a generative one) it should be pointed out that with aviation and such, it was not the case that a fair plurality of the planet was tooling around flying airplanes every which way and launching rockets up into the sky.
Yes, but not with *manned* flights; we tried to get the bugs out beforehand, to the extent possible. The ethics ran the other way around.
With AI, all we need to do is keep iterating until it doesn’t kill everyone.
Simple.
That's a fair point, though to make the AI analogy stick, let's imagine that "flight" is a vague term that means totally different things to different people, and there's no way to know if or when it's actually been achieved. Now, try to achieve it iteratively.
1. Sam the prophet probably believes his own prophecies on AGI-like performance and *thus* AGI-like risk.
2. Sam the business person has the ethics and wisdom of a [fill in your own horrible analogy]
3. These tools will be misused.
This is typical of someone raised in the computer industry, where it is standard practice to sign multimillion-dollar contracts, deliver a bug-laden product, then charge the customer consulting fees to correct defects that should have been corrected BEFORE delivering the final product.
Just another example of how unserious all of this is. Altman can say these things because they're goofy hypotheticals that belong in sci-fi roleplaying game sessions, preferably after the bong's been passed around. We're not going to actually face this challenge because the technology he's encouraging you to imagine is fictional. Maybe it'll actually be created one day, who knows. Maybe we'll get warp drives and transporters from Star Trek, too. No one can disprove the future existence of future technology.
I agree with everyone else here that his moral calculus would be shockingly irresponsible if any of this stuff was serious. But it isn't, so whatever. To criticize it at face value is to give it credit it doesn't deserve. I'll save my anger for whatever lies he tells in the next OpenAI "system card" or pretend research paper.
“Move fast, pocket the money, and break everything.” One of many reasons I stopped working for Silly Valley companies decades ago.
The "AGI risk" hype is all in service to the "AGI will be awesome give me money" hype.
As someone who follows climate change and nuclear weapons issues, I have become increasingly convinced that we as a species are bent on self-destruction. And that leaders are often the most self-destructive of all.
Well, most of our species don't want self-destruction.
I don't get how people can one minute say that AI will do all this world-changing stuff beyond what software has ever done in the past, and the next minute demand it be regulated exactly like software. The reason software isn't heavily regulated is that it has a limited potential to do harm. If AI is different from software, it should be regulated differently.
It should really be regulated like WMD because it can be used as a weapon of mass (social) destruction/disruption and/or to produce traditional WMD (and is also a weapon of MATH destruction — in more ways than one, since it gives legitimate math a bad name.)
Unfortunately, the comparison of some of the current AI “leaders” with Robert Oppenheimer could not be more wrongheaded.
Oppenheimer was technically brilliant and concerned about the ethical ramifications of what he had produced.
I think it is fine to stop thinking Sam Altman is a visionary genius.
I think it is fine to stop thinking geniuses exist outside of movies and cartoons.
I don't think AI will cause human extinction. It will 'just' create a dystopia, where everything of value is snowed under by AI-generated trash. I blame the likes of Altman, Musk and Zuckerberg for dragging humanity down into their dystopia.
Extinction of humanity comes in more than one guise.
"But I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship the product and learn."
Yes, learn that they should not have shipped the product.
Amazing how simple your business model becomes when you eliminate duty of care.
well-said