35 Comments
May 15 · edited May 17 · Liked by Gary Marcus

So many questions!

(A) Should the public be worried? -- The public should be worried (in particular) about any AI system that is deployed at scale, so whatever the major AI labs such as OpenAI are doing should be top of this list. All contemporary AI (including the major LLMs such as ChatGPT) is at best only minimally aligned with human values. BY DEFINITION, a minimally-aligned AI system will inexorably inflict societal harm (in parallel with any benefits) when deployed. Therefore a minimally-aligned AI system that is deployed at global scale (such as ChatGPT) will necessarily inflict societal harm at global scale (although it's difficult to predict in advance exactly what form that harm will take, we will doubtless gradually find out over the next 10-20 years). The only good news in this regard is that contemporary LLMs are simply too dumb by themselves to represent a significant x-risk.

(B) Will the (new) board at OpenAI take note? -- Internally, for sure; these are not stupid people.

(C) Will they do anything to address the situation? -- Only if they perceive it to be a problem relative to their (highly rationalised) objectives. It may take external pressure, such as robust legislation, to significantly temper their (and thus the company's) behaviour (ditto re the other AI labs of course).

(D) Will OpenAI’s status as a nonprofit remain in good standing? -- In my assessment, OpenAI is already firmly under the control of profit-motivated interests. It's entirely possible however that key people within the company, even board members, have not quite worked this out yet.

(E) Will [this] help Elon Musk’s case? -- Quite possibly.

(F) Does Sam care? -- I believe he cares deeply. I also believe HE believes he's doing the right thing, which (when Kool-Aid is involved) is not necessarily the same thing as actually doing the right thing.

(G) Is this what he wanted? -- I suspect not, but HE needs to work this out (ditto re the rest of the board).

(H) Is OpenAI’s push to commercialization coming at the expense of AI safety? -- 10,000%.

May 15 · edited May 15 · Liked by Gary Marcus

One has to look long and hard with a magnifying glass to find examples of a profitable business willing to sacrifice revenue over safety concerns. Ford Pinto, anyone?

Although here, the profitability, at least in the short term, has not been demonstrated, except for NVIDIA, which has hit the jackpot. In my opinion, it's more about the revenge of M$ over Google.

I see what you are saying, and I think it is an important correction: even the dream of profits will override any concerns about safety.

It would be the height of ignorance to expect OpenAI to effectively self-regulate. In that respect they are no different from any other industry or organization. Already we are witnessing the uselessness of the voluntary nature of Biden's Executive Order of 30 October 2023.

No evidence that Sam cares at all. Just two more roadblocks removed.

May 15 · Liked by Gary Marcus

Hard to see how any AI safety person in the 'pause' or 'slow down' or 'do not start a race to AGI' camp(s) could be happy with how OpenAI was reportedly planning to demo a new AI-powered search the day before Google I/O. (Though this was reportedly delayed and replaced with the 4o demo, the timing remained something of a taunt.)

May 15 · edited May 15 · Liked by Gary Marcus

The honeymoon is over. AI safety has left the building, and it might be because of the fatigue from all that hype.

Cool selfies from Bletchley Park, though!

https://www.reuters.com/technology/second-global-ai-safety-summit-faces-tough-questions-lower-turnout-2024-04-29/

The simplest answer is usually correct. Ideologically, "safety" and responsibility are antithetical to the values of wealth/power/elitism. If there is indeed, as Adam Smith once wrote, "an invisible hand" driving the world's economic systems, whatever shape or form that may take, then AI and robotics present a potentially major ideological shift away from our capitalist structures and into something entirely new. If, however, AI can now be turned into a tool for protectionism, propaganda and so on, then the likelihood of the complete collapse of the working classes and some sort of godawful dystopia in the future becomes greater and greater. What you are going to witness is the usual competitive/race dynamics prevalent in all previous technological revolutions in humanity. Only this time the potential power that can be derived from AI is... well, I have no words for it. I genuinely despair for the future of our species at this point.

Scary new algorithm, my ass! Gen AI is a one-trick pony; you can scale it to the moon and it won't get close to AGI. All it can do is try to simulate AGI in very limited situations, on a bad day! OpenAI has been reduced to pure hype, like so much else these days.

Agreed. The computational power required to emulate what even a single animal brain can do is simply out of reach. And the Human Brain Project spent a billion euros over the past decade, and they still don't understand how human cognition works. AGI is magical thinking, and the only reason it's taken seriously is credulous tech reporters who should be doing their jobs better.

As for the LLMs and GMs themselves, they cost far, FAR more than any customer would be willing to pay for them. They're more expensive and less reliable than tools we already have.

The only threats they present are that they superpower bot farms and compel dimwitted management to fire productive employees in advance of an actual useful product.

The current developmental framework is akin to... managing to computationally reproduce how a cow detects smell, and then assuming you can scale that to eventually get AGI!

MVP hype MVP hype

The most ridiculous thing is the belief that they are on the road to artificial general intelligence (or worse, super-intelligence). That is just nonsense. They have built a word guesser. One could argue that they have built a remarkably powerful word guesser, but it is still just a word guesser. Any extrapolations that these models have gained other cognitive skills are wishful thinking and examples of confirmation bias. It's hype. The narrow intelligence of word guessing can be useful in specific situations, but it is very far from general intelligence. We may need policy to regulate artificial intelligence, but that policy should be based on reality, not blind acceptance of the hype. When stripped of the hype, there is actually very little in the AI part of OpenAI to regulate (I do not speak to their business practices).
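
For what it's worth, the "word guesser" framing is easy to make concrete: given a context, the model emits a probability distribution over possible next tokens, and that is the whole trick. A minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint; the prompt and the top-5 printout are illustrative only.

```python
# Minimal sketch of "word guessing": the model maps a context to a
# probability distribution over possible next tokens, nothing more.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The board of OpenAI decided to"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the single next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
```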

I think what's tricking everyone is the very word "intelligence" in the name. I'm not a computer scientist, but at the end of the day I don't see why LLMs should be seen as something different from other computer programs. Probably because then the valuations wouldn't be high enough...?

The war on humans is on. On their way to "AGI" they don't need "idiot savants" (good at text, bad at visuals, etc.). They (OpenAI and others) also have to reduce the human mind to a computer-like machine. This way they don't have to explain the things they cannot, such as consciousness, an embodied mind, or reasoning. They further have to adhere to a metaphysic of materialism. Emotion is nothing but facial expressions and certain voice patterns. They will continue with their crude operationalism. Porn is love, after all.

In the end they'll try to convince everybody to lower their standards, because they cannot raise theirs. And there will be enough sycophants and politicians following them. It's about power, fame, and money. Maybe there will be ChatGPT Five-O after all? Or Mrs. Davis?

Technically, they are not whistleblowers, but they certainly look like a collection of dead canaries.

In a recent demo video, a photographer asked Google's Gemini AI what to do when the film advance lever stopped working on his 35mm camera. This is what Gemini advised him to do >

"Open the back door and gently remove the film if the camera is jammed."

Following that advice with film still in the camera would expose and ruin the roll. Imagine if this had been a question where serious safety concerns were potentially at play.

> Should the public be worried?

No, unless someone is stupid enough to use LLMs in mission-critical decision-making. They aren't yet ready for that. "Superalignment" is just not needed right now (especially at the declared 20% of the total training budget). Instructing models to be aligned with the current political agenda is more than enough.

They are clearly not a research organisation anymore.

They are a SaaS product company trying to make money via nice demos.

I'd expect that because B2B SaaS is where the money comes in, the B2C stuff is now completely free. In the end, improving the system for companies is what matters to them. So they try to convince us how amazing a voice assistant that reads text messages is. In reality, I like this voice assistant, but in the end I realized it's just a piece of software reading out loud the text GPT-4o generates under the hood. Nothing else.
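
A minimal sketch of the pipeline described above, i.e. a piece of software reading out loud whatever text the model returns. This assumes the openai and pyttsx3 packages, an OPENAI_API_KEY in the environment, and an illustrative prompt; it is not OpenAI's actual voice-mode implementation, just the "TTS over an LLM" pattern the comment is pointing at.

```python
# Sketch: generate text with a chat model, then have a local
# text-to-speech engine read it out loud.
import pyttsx3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Read my latest text message back to me."}],
)
reply_text = response.choices[0].message.content

# The "voice assistant": plain text-to-speech over whatever the model produced.
engine = pyttsx3.init()
engine.say(reply_text)
engine.runAndWait()
```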

And, which of these people will be the main character when Walter Isaacson or Michael Lewis writes the book on this hot mess?

I'm puzzled why you keep going on and on about safety, hallucinations, governance etc. Yes, there are real safety issues, and hallucinations are real, agreed. But how is any of this to be avoided???

A reality check:

1) The U.S. Congress has jurisdiction over only about 5% of the world's population.

2) About the same for the EU.

3) So even if these governing bodies were to enact perfect legislation, it would only affect around 10% of the global population.

4) Russia, China, North Korea, Iran, and many other bad actors around the world don't give a shit about any of your concerns. And nobody has the power to force the biggest players to do anything, because the biggest bad actors have nuclear weapons.

5) We live in a globalized world which is interconnected via the Internet. So if Silicon Valley were to shut down completely, other global AI actors would take over in no time at all.

And then there's this...

Experts don't know what they're talking about! Here's why. By the very nature of the expert business, experts are almost always focused on some small slice of the threat pie. Yes, they have tons of information about details, but out of their own self-interest they fail to recognize or admit that their details don't really matter.

To illustrate, while we're wringing our hands about what to do about AI, an accelerating knowledge explosion is generating new powers that will also have to be made safe. The new powers are rolling off the end of the knowledge explosion assembly line faster than we can figure out how to make them safe. Thus.....

To focus on this or that threatening technology is a LOSER's GAME that is doomed to fail. The experts refuse to see or admit this, because this principle wrecks their career business model. Well, so what, who gives a shit???

The only way we're ever going to be safe is to take control of the knowledge explosion assembly line so that we can keep up. It's either that, or get rid of men, the gender responsible for the vast majority of trouble in the world.

More reality check:

Nature doesn't give a shit whether we think doing one or both of the above is too hard. Nature has a simple rule it applies fairly to all species without exception. Adapt or die.

So long as we're listening to "experts" whose business model bias renders them incapable of holistic thinking, we are not adapting to the revolutionary new environment we have created.

Because suicide is not a plan for the future.
