
I think (part of) Tyler Cowen's critique of the letter's relevance is spot on - it didn't include signers from other disciplines and parts of society (and outreach seemingly wasn't attempted). Where are the clergy? The unions? The many grassroots and grasstops organizations that care about widespread AI-caused unemployment and suppressed wages?

When regulation succeeds, it's going to be because a diverse coalition has forced it to happen.

Apr 3, 2023 · Liked by Gary Marcus

What would be great is if the people objecting to the first petition, but who also want a pause for their own reasons, would write their own petitions. If you have enough agreement on the target - a six-month suspension of development - then you could leave people to make their own disparate arguments for why, yet the signal that would rise above the noise would be simple and coherent.

Apr 2, 2023 · Liked by Gary Marcus

Well written, and I fully agree with you. Thank you.


I suspect we're going to have to experience at least some of the bad stuff before we'll have any chance of regulating the AI industry. You're never going to convince the politicians and the general public of the danger until people are harmed or money is lost.

First, hardly anyone acts on such risks before they materialize, and so far the few incidents that have occurred are either not well known or not clearly attributable to AI. Second, we have no real idea what needs to be done; the fact that The Letter calls for a moratorium, instead of specific measures tied to specific, experienced dangers, signals that. Finally, the analogy with aerospace, nuclear energy, accounting regulation, and fire safety is laughable. Each of those is easily tied to well-known risks and/or actual accidents. Planes crashed, so we needed to regulate them; everyone knew exactly what to fear. We knew about nuclear explosions and meltdowns, so we knew the danger there. People have stolen money and companies have gone bankrupt for centuries, so accounting regulation is obvious. Fire danger has a history beyond recording.

I share the fear of AI, but The Letter is mostly a joke because it seems to ignore this context. Still, as you point out, it has started some people talking, so it's probably a net good thing regardless.


Sigh, Gary. It's gotten so bad I'm now having dreams about ChatGPT. Probably because I write about what generative AI is doing to the world of writing, but to your point, it's not actually what gen AI is doing. It's what *people* are doing. I'm seeing the same behavior that I did when desktop publishing exploded and everyone and their grandmother thought they could now be "published authors" and raced to Amazon and other platforms to publish all of their creations. We now live in a world so glutted by "content" that it's like being able to walk across the Atlantic Ocean on the sargassum.

If the adults in the AI room can't behave, what hope is there for the rest of the population?

p.s. I signed the letter as well. I've only got my right pinkie in the AI pie (worked on the Google Assistant) but it's all hands on deck as far as I'm concerned!

Apr 7, 2023 · Liked by Gary Marcus

I'm late to this show, but: over on Ars Technica (https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/), I read "'A major factor in Chat's success is that it manages to suppress confabulation enough to make it unnoticeable for many common questions,' said Riley Goodside..."

I have no idea whether it's true that Chat does better on "many common questions" than it does on less common questions. But if that *is* the case, then Chat is actually worse than I (and maybe we) thought: One could ask it some common questions--questions for which the answer is reasonably well known--and decide that it's reasonably reliable, and thus be deluded into using it for more difficult questions--questions where the answer is perhaps not known in advance, or is controversial.
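
To make that concrete, here's a minimal sketch of the kind of stratified check that would expose this failure mode: measure accuracy separately for common and rare questions rather than reporting one blended number. All of the data below is hypothetical, purely to illustrate the delusion described above.

```python
# Hypothetical evaluation records: (frequency_bucket, answer_was_correct).
# The numbers are made up to illustrate the point, not real measurements.
from collections import defaultdict

results = [
    ("common", True), ("common", True), ("common", True), ("common", False),
    ("rare", True), ("rare", False), ("rare", False), ("rare", False),
]

by_bucket = defaultdict(list)
for bucket, correct in results:
    by_bucket[bucket].append(correct)

for bucket, outcomes in sorted(by_bucket.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{bucket}: {accuracy:.0%} accurate over {len(outcomes)} questions")

# A user who only ever samples the "common" bucket (75% here) may
# wrongly extrapolate that reliability to "rare" questions (25% here).
```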

Apr 3, 2023 · edited Apr 3, 2023 · Liked by Gary Marcus

Thanks for articulating this issue so incredibly well, Gary! We need and-and-and solutions, not either/or. This whole debate reminds me of the incomprehensible storm I entered when I tried out veganism: all these people wanting the same thing, yet a loud portion fighting each other's steps in the right direction, because each one favours their own unique approach. There is no single solution. In the case of AI, we need regulation to protect data labelers and to avoid misinformation, security harms, unfair social biases, and more. We also need big tech to deploy responsibly, we need risk management, ethics boards, and mass education of the general public so more people can enter this debate. And a petition is a sensible way to raise the alarm for all of that. Thanks for putting differences aside. With you 100%.

PS: Victoria Krakovna co-founded FLI - you might want to rephrase the suggestion that she is not associated with their movement :)

Apr 3, 2023 · Liked by Gary Marcus

Gary, I see this issue as more of a forest for the trees problem.

The problem isn't even about interpretable AI; it's much more fundamental.

Currently, AI is designed by people who are seeking to replace human labor in production. The issues you describe are red herrings that distract from a much more fundamental issue that everyone seems to be conveniently ignoring.

Given that today's societal mechanics and food supply are tied fundamentally to our ability to work in factor markets, what do you think will realistically happen when production is no longer constrained by factor markets, and factor markets in fact disappear as they are replaced by AI?

There are significant issues with managing any kind of centrally planned economy, which would be a necessity for food and other goods in the absence of factor markets; you run into various forms of the economic calculation problem, which historically results in failures and usually a lot of death.

Overpopulation and allocation of resources become a much more difficult problem as well. No one has really thought through how AI's disruption of even a small chunk of the distribution curve (without corresponding replacement) will actually impact people's ability to survive. Unrest historically is what happens when basic needs can't be met.

People with low IQs have a terribly difficult time finding competitive work; AI basically expands this problem to include low- and mid-IQ jobs.

We also grow from our professional experiences; if there is no opportunity, what impact does that have on our intelligence as a species in general?

Historically, when no one can come up with a plan, someone decides for everyone at some point, and that decision may be horrific. If you haven't seen the movie Conspiracy (2001), it offers a horrifying but very accurate portrayal of how unthinkable things can come to pass.

When you look at AI realistically, what real benefit does it provide? Am I wrong in thinking it doesn't provide any net positive benefit at all?

Apr 2, 2023 · Liked by Gary Marcus

Thank you for articulating that gray zone issue: we need to raise the alarm on this for all to hear. We have years--maybe only months--to address the lack of discussion.


Well said, agreed. A frightening trend is the focus on profit to the exclusion of everything else.


As I’m reading through Don Norman’s latest book, I couldn’t help but feel this text has relevance to the problem you are outlining, Gary.

“The most difficult part of dealing with large, complex problems concerns the complexities of implementation. Most people think that the technical issues and the accompanying technology are the most difficult, but they are actually the simplest. The implementation of recommendations involves human beliefs and behavior, so most difficulties arise from four components:

1. System design that does not take into account human psychology

2. The human tendency to want simple answers, decomposable systems, and straightforward linear causality

3. Collaborative interaction with multiple disciplines and perspectives

4. Mutually incompatible requirements”


Thank you for writing this. I was very disappointed to see the cynicism with which the Open Letter was received, even though I agree it was not perfectly worded. Given the strong language of some of the responses, I got the sense that the AI & Ethics community felt the letter was encroaching upon their territory. And so rather than saying that the letter was generally going in a positive direction, they felt they had to tear down the whole effort, while pointing out the naivete of the letter's language...


“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

― Frank Herbert (1920-1986)


"AI research and development should be refocused on making ...systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal./

Who on earth could actually object to that?"

Me. Loyal to whom? Aligned with whom? Research does not need commissars.


The problem is that AI is eroding trust - something that was already in short supply.

It's easy to state that you "...can't trust ChatGPT." But what does that really mean?

You can't tell who or WHAT you are communicating with. You don't know if what you are reading is regurgitated pablum or the insights of a real human being. Worse, you don't know the motives of that entity.

It can generate volumes of subtle lies and partial truths, and present them as fact behind the facade of a human being with feelings, commitments, relationships, and some clue about ethics. But it is none of those things.

Time becomes the issue. If you and I were computers, additional cycles would be cheap. But as humans we only get 24 hours in a day; we can't expand or contract that. The time we waste communicating with a computer can never be recovered.

I'm having a Jurassic Park moment. "But your scientists were so preoccupied with whether they could, they didn't stop to think if they should."

Somehow we have to tag AI vs. human content, and when people use AI they should tag it themselves.
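
Here's a minimal sketch of what such tagging could look like in practice: a provenance label attached to content before it's published. The schema and all field names are hypothetical, just to illustrate the shape of the idea; real provenance standards such as C2PA are far richer.

```python
# Hypothetical provenance tag for published content (Python 3.10+).
# The schema and field names are illustrative, not any real standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceTag:
    author: str        # who is publishing the content
    origin: str        # "human", "ai", or "mixed"
    tool: str | None   # generator used, if any
    created_at: str    # ISO 8601 timestamp

def tag_content(text: str, tag: ProvenanceTag) -> str:
    """Bundle content with its provenance tag in a JSON envelope."""
    return json.dumps({"content": text, "provenance": asdict(tag)})

# Example: a human-written post that used an AI assistant along the way.
tagged = tag_content(
    "Here is my summary of the article...",
    ProvenanceTag(
        author="jane@example.com",    # hypothetical publisher
        origin="mixed",
        tool="some-llm-assistant",    # hypothetical tool name
        created_at=datetime.now(timezone.utc).isoformat(),
    ),
)
print(tagged)
```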


From Gary Marcus’s 4-2023 jeremiad: “Some are frightened about some eventual superintelligence that might take over the world; my current fears have less to do with recent advances in technology per se, and more to do with recent observations about people.”

Well, first off, I said exactly that in my June 2022 Newsweek op-ed dealing with 'empathy bots' that feign sapience, describing how this is more about human nature than any particular trait of simulated beings.

https://www.newsweek.com/soon-humanity-wont-alone-universe-opinion-1717446

Marcus goes on to explain his participation in the recent “moratorium petition.”

“The real reason I signed The Letter was not because I thought the moratorium had any realistic chance of being implemented but because I thought it was time for people to speak up in a coordinated way.”

And how’s that working out any better than the last 25 years of turgid, preening conferences on ‘ethical AI’? Conferences that I long ago stopped attending, due to their relentless repetition of “should” and “oughta” clichés – like the following core catechism offered by Dr. Marcus:

“AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Um, as “Mr. Transparency” I am all aboard with those fine goals! (See my 1997 book The Transparent Society.) But the proposed moratorium-and-focused-research process is vague, slow, unenforceable, and preachy; it lacks any effective incentives and (above all) never cites even a single precedent. Not one example of such a moratorium ever (and I mean ever) working for any significant time, to any significant degree. And yes, it is this refusal to even glance at human history that I deem utterly culpable.

Marcus proposes: “If we want to get to AI we can trust, probably the first thing we are going to need is a coalition.” He then complains that one isn’t forming.

Um, duh? Across 6000 years - excluding war - coalitions and consensus have not been good methods for dealing with dire inhomogeneities of power.

After 25 years of endlessly similar ravings, I have come to realize something. These folks will absolutely never look at two sources of actual insight into what might work.

(1) The extensive libraries of science fiction thought experiments about this very issue, and

(2) Actual, actual… palpably actual… human history. Especially the last 200 years of an increasingly sophisticated and agile Enlightenment Experiment that discovered and has kept improving ONE method for preventing harm by capriciously powerful beings.

Powerful beings like kings, lords, priests, demagogues… and lawyers.

There IS one method with a track record of doing exactly what Dr. Marcus asks. It does not require that everyone agree on a kumbaya consensus. It is robust even if the Marcus Tenets are violated in secret labs. It is the only way humans have ever found to deal with potentially dangerous, if brilliant, predators. It is the method we have already used – if imperfectly – to create the first civilization that functions (with many flaws) in ways that are “… accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

And no one… not a single one of the thousands who have signed that turgid, useless ‘petition’… has even remotely considered or mentioned it.

-David Brin, author of The Postman and The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom?
