32 Comments
Aaron Turner:

So many questions!

(A) Should the public be worried? -- The public should be worried (in particular) about any AI system that is deployed at scale, so whatever the major AI labs such as OpenAI are doing should be top of this list. All contemporary AI (including the major LLMs such as ChatGPT) is at best only minimally aligned with human values. BY DEFINITION, a minimally-aligned AI system will inexorably inflict societal harm (in parallel with any benefits) when deployed. Therefore a minimally-aligned AI system that is deployed at global scale (such as ChatGPT) will necessarily inflict societal harm at global scale (although it's difficult to predict in advance exactly what form that harm will take, we will doubtless gradually find out over the next 10-20 years). The only good news in this regard is that contemporary LLMs are simply too dumb by themselves to represent a significant x-risk.

(B) Will the (new) board at OpenAI take note? -- Internally, for sure; these are not stupid people.

(C) Will they do anything to address the situation? -- Only if they perceive it to be a problem relative to their (highly rationalised) objectives. It may take external pressure, such as robust legislation, to significantly temper their (and thus the company's) behaviour (ditto re the other AI labs, of course).

(D) Will OpenAI's status as a nonprofit remain in good standing? -- In my assessment, OpenAI is already firmly under the control of profit-motivated interests. It's entirely possible, however, that key people within the company, even board members, have not quite worked this out yet.

(E) Will [this] help Elon Musk's case? -- Quite possibly.

(F) Does Sam care? -- I believe he cares deeply. I also believe HE believes he's doing the right thing, which (when Kool-Aid is involved) is not necessarily the same thing as actually doing the right thing.

(G) Is this what he wanted? -- I suspect not, but HE needs to work this out (ditto re the rest of the board).

(H) Is OpenAI's push to commercialization coming at the expense of AI safety? -- 10,000%.

Joy in HK fiFP:

One has to look long and hard with a magnifying glass to find examples of a profitable business willing to sacrifice revenue over safety concerns. Ford Pinto, anyone?

Claude Coulombe:

Although here, profitability, at least in the short term, has not been demonstrated, except for NVIDIA, which has hit the jackpot. In my opinion, it's more the revenge of M$ over Google.

Joy in HK fiFP:

I see what you are saying, and I think it is an important correction: even the dream of profits will override any concerns about safety.

Stephen Schiff:

It would be the height of ignorance to expect OpenAI to effectively self-regulate. They are in that respect no different from any other industry or organization. Already we are witnessing the uselessness of the voluntary nature of Biden's Executive Order of 30 October 2023.

Richard Self:

No evidence that Sam cares at all. Just two more roadblocks removed.

Jimmy:

Hard to see how any AI safety person in the 'pause' or 'slow down' or 'do not start a race to AGI' camp(s) could be happy with how OpenAI was reportedly planning to demo a new AI-powered search the day before Google I/O. (Though this was reportedly delayed and replaced with the 4o demo, the timing remained something of a taunt.)

Simon Au-Yong:

The honeymoon is over. AI safety has left the building, and it might be because of the fatigue from all that hype.

Cool selfies from Bletchley Park, though!

https://www.reuters.com/technology/second-global-ai-safety-summit-faces-tough-questions-lower-turnout-2024-04-29/

Ash:

The simplest answer is usually correct. Ideologically, "safety" and responsibility are antithetical to the values of wealth/power/elitism. If there is indeed, as Adam Smith once wrote, "an invisible hand" driving the world's economic systems, whatever shape or form that may take, then AI and robotics present a potentially major ideological shift away from our capitalist structures and into something entirely new. If, however, AI can be turned now into a tool for protectionism, propaganda, and so on, then the likelihood of us seeing the complete collapse of the working classes and some sort of godawful dystopia in the future becomes greater and greater. What you are going to witness is the usual competitive/race dynamics prevalent in all previous technological revolutions in humanity's history. Only this time the potential power that can be derived from AI is... well, I have no words for it. I genuinely despair for the future of our species at this point.

Devaraj Sandberg:

Scary new algorithm, my ass! Gen AI is a one-trick pony; you can scale it to the moon and it won't get close to AGI. All it can do is try to simulate AGI, in very limited situations, on a bad day! OpenAI has been reduced to pure hype, like so much else these days.

Glen:

Agreed. The computational power required to emulate what even a single animal brain can do is beyond anything feasible. And the Human Brain Project spent a billion euros over the past decade, and researchers still don't understand how human cognition works. AGI is magical thinking, and the only reason it's taken seriously is credulous tech reporters who should be doing their jobs better.

As to the LLMs and generative models themselves, they cost far, FAR more than any customer would be willing to pay for them. They're more expensive and less reliable than tools we already have.

The only threats they present are that they superpower bot farms and compel dimwitted management to fire productive employees in advance of an actually useful product.

Devaraj Sandberg:

The current developmental framework is akin to... managing to computationally reproduce how a cow detects smell, and then assuming you can scale that to eventually get AGI!

MVP hype MVP hype

Herbert Roitblat:

The most ridiculous thing is the belief that they are on the road to artificial general intelligence (or, worse, superintelligence). That is just nonsense. They have built a word guesser. One could argue that they have built a remarkably powerful word guesser, but it is still just a word guesser. Any extrapolations that these models have gained other cognitive skills are wishful thinking and examples of confirmation bias. It's hype. The narrow intelligence of word guessing can be useful in specific situations, but it is very far from general intelligence.

We may need policy to regulate artificial intelligence, but that policy should be based on reality, not blind acceptance of the hype. When stripped of the hype, there is actually very little in the AI part of OpenAI to regulate (I do not speak to their business practices).
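To make "word guesser" concrete: the core objective these models are trained on is next-word prediction. Here is a minimal toy sketch of that idea, using a hand-counted bigram table over a made-up corpus; nothing like a real LLM's neural network, but the same guess-the-next-word generation loop.

```python
import random
from collections import Counter, defaultdict

# Toy "word guesser": a bigram frequency table over a tiny made-up corpus.
# Purely illustrative -- real LLMs are neural networks trained on vast token
# corpora -- but the objective is the same: given the words so far, guess a
# plausible next word.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word is followed by each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = followers[word]
    if not counts:  # dead end, e.g. the corpus's final word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# "Generate" text by guessing one word at a time and appending it.
words = ["the"]
for _ in range(8):
    words.append(guess_next(words[-1]))
print(" ".join(words))
```

A real LLM replaces the frequency table with a network trained on trillions of tokens, but the generation loop is the same: guess a next word, append it, repeat.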

Ondřej Frei:

I think what's tricking everyone is the "intelligence" in the name itself. I'm not a computer scientist, but at the end of the day I don't see why LLMs should be seen as something different from other computer programs. Probably because then the valuations wouldn't be high enough...?

Tom Gottsche:

The war on humans is on. On their way to "AGI" they don't need "idiot savants" (good at text, bad at visuals, etc.). They (OpenAI and others) also have to reduce the human mind to a computer-like machine. This way they don't have to explain things they cannot, such as consciousness, an embodied mind, or reasoning. They further have to adhere to a metaphysics of materialism. Emotion is nothing but facial expressions and certain voice patterns. They will continue with their crude operationalism. Porn is love, after all.

In the end they'll try to convince everybody to lower their standards, because they cannot raise theirs. And there will be enough sycophants and politicians following them. It's about power, fame, and money. Maybe there will be a ChatGPT Five-O after all? Or a Mrs. Davis?

Steven Marlow:

Technically, they are not whistleblowers, but they certainly look like a collection of dead canaries.

Robert Keith:

In a recent demo video, a photographer asked Google's Gemini AI what to do when the film advance lever stopped working on his 35mm camera. This is what Gemini advised him to do:

"Open the back door and gently remove the film if the camera is jammed."

Imagine if this had been a question where serious safety concerns were potentially at play.

Victor Smirnov:

> Should the public be worried?

No, unless someone stupid enough decides to use LLMs in mission-critical decision-making. They aren't yet ready for that. "Superalignment" is just not needed now (especially at the declared 20% of the total training budget). Instructing models to be aligned with the current political agenda is more than enough.

Ferit To:

They are clearly not a research organisation anymore.

They are a product SaaS company trying to make money via nice demos.

I'd expect that because B2B SaaS is where the money comes in, the B2C stuff is now completely free. In the end, improving the system for companies is what's important to them. So they try to convince us how amazing a voice assistant that reads text messages is. In reality, I like this voice assistant, but in the end I realized it's just a piece of software reading out loud the text GPT-4o generates under the hood. Nothing else.
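In other words, the architecture this comment describes is just two steps chained together: text generation, then text-to-speech. A minimal sketch under that assumption; generate_text and speak are hypothetical placeholders, not any vendor's actual API.

```python
# Sketch of the assistant pipeline described above: a model produces text,
# and a text-to-speech engine reads it out. Both functions are hypothetical
# stand-ins, not real API calls.

def generate_text(prompt: str) -> str:
    """Stand-in for a call to a model such as GPT-4o."""
    return f"(model reply to: {prompt})"

def speak(text: str) -> None:
    """Stand-in for a text-to-speech engine."""
    print(f"[spoken] {text}")

def voice_assistant(prompt: str) -> None:
    # The "assistant" is just these two steps chained together.
    speak(generate_text(prompt))

voice_assistant("Read my new text messages out loud.")
```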

Amy A:

And which of these people will be the main character when Walter Isaacson or Michael Lewis writes the book on this hot mess?

Franklin Seal:

I agree; they probably left due to a shift in corporate attitude rather than a specific safety concern. They probably realized that in the long battle between the forces of e/acc and those urging at least a bit more caution, they had lost. And they probably figured that since they couldn't stop bad things from happening on the inside, it wouldn't make a difference anymore whether they left or stayed. So then, why stay?
