I think (part of) Tyler Cowen's critique of the letter's relevance is spot on - it didn't include signers from other disciplines & parts of society (and outreach seemingly wasn't attempted). Where are the clergy? The unions? The many grassroots and grasstops organizations that care about widespread AI-caused unemployment and suppressed wages?

When regulation succeeds, it's going to be because a diverse coalition has forced it to happen.

This is such an important point. Sure, the dis/misinformation potential is alarming, but the massive labour market upheaval (already starting) has broad societal implications that are downright terrifying. Whole categories of jobs and careers are going to be made redundant overnight. Kids like my grandson, just learning to code, will be replaced by AI because, for corporate America, the bottom line is always the almighty dollar.

Pam, I feel your pain in the marrow of my bones. My daughter is a professional artist (at just 12 years old). Need I say more... the generative AI art craze is disheartening to see, at best. Same for us writers. And yet, and yet, I do see people calling for heightened awareness, recognition, and support for human creators, so that might be a bright side to this dark cloud rolling in. We need to keep speaking up, showing up, and refusing to be overtaken.

Great point Pam. And many of those made redundant will then become a threat to the larger social order. If they can't transition to other careers they may find themselves sitting in a dumpy trailer on the wrong side of the tracks nursing their rejection by society, and watching people on Fox News promising to "make America great again". There will always be some schemer standing by, ready to harvest their disappointment and despair.

It might still be early for the broader coalition, but I do believe it will happen. No doubt it's already bubbling up. Similar to the coalitions that have formed to address climate change.

Yes, where are the clergy and also people of faith who work in AI and STEM?

Apr 3, 2023 · Liked by Gary Marcus

What would be great is if the people objecting to the first petition, but who also want a pause for their own reasons, would write their own petitions. If there is enough agreement on the target - a six-month suspension of development - then you could leave people to make their own disparate arguments for why, yet the signal rising above the noise would be simple and coherent.

Apr 2, 2023 · Liked by Gary Marcus

Well written, and I fully agree with you. Thank you.

I suspect we're going to have to experience at least some of the bad stuff before we'll have any chance of regulating the AI industry. You're never going to convince the politicians and the general public of the danger before people are harmed or money is lost.

First, hardly anyone acts on such dangers before they materialize. So far, the few incidents that have occurred are either not well known or not clearly attributable to AI.

Second, we have no real idea what needs to be done. The fact that The Letter calls for a moratorium, instead of specific measures tied to specific, experienced dangers, signals that.

Finally, the analogy with aerospace, nuclear energy, accounting regulation, and fire safety is laughable. Each of these is easily tied to well-known risks and/or actual accidents. Planes crashed, so we needed to regulate them; everyone knew exactly what to fear. We knew about nuclear explosions and meltdowns, so we knew the danger there. People have stolen money and companies have gone bankrupt for centuries, so accounting regulation is obvious. Fire danger has a history beyond recording.

I share the fear of AI, but The Letter is mostly a joke because it seems to ignore this context. Still, as you point out, it has started some people talking, so it's probably a net good regardless.

Apr 2, 2023 · Liked by Gary Marcus

We HAVE experienced bad stuff... from the proliferation of online echo chambers driven by recommendation algorithms to chatbots convincing people to commit suicide. How far do we need to let it go?

Much farther than that, for sure. No one is going to create a new federal department, say, because of a couple of suicides by people reading bad stuff online that happens to be AI-generated. We do have a real problem, but it lies mostly in the future. All I'm calling for is some perspective. The fight against this stuff could easily be set back years if the public perception is that it's just a few people crying wolf.

Mostly agreed, especially on the point that AI poses no plausible life-threatening risks now or in the immediate future. However, I'd be a bit skeptical about airline regulation. If dangers are clear -- as they are with plane crashes -- people will demand safer planes and favor companies that provide them. The market works. There are non-obvious factors too: excessive regulation of airline safety can raise costs, pushing people toward far more dangerous forms of transport. Regulation typically ignores these kinds of effects.

Apr 3, 2023 · edited Apr 3, 2023 · Liked by Gary Marcus

That's why regulation is only effective if partnered with licensing and liability.

If people were in a position to sue a company for damages caused by an unlicensed AI model (as they can with drug companies, airlines, banks, car manufacturers, etc.), then a well-regulated market could indeed flex to accommodate consumer concerns.

At the moment, though, there are basically no repercussions for rampant and reckless profit-seeking - so we are seeing a repeat of the NFT, crypto, banking, and social media fiascos of recent years. No/low regulation doesn't work and never will (assuming one cares about consumer rights and safety).

Regulation generally creates more problems than it solves. Same with licensing. All you need is liability. "No regulation" is misleading, because market participants regulate things themselves when companies are liable for false claims, outcomes not as promised, etc.

You mention banking fiascos but seem to be unaware that those (in 2008 and now) were caused by regulations and by the Federal Reserve holding rates artificially low for years.

PS: The Letter had to lean on the sci-fi trope of super-intelligent AI in order to gin up sufficient fear in its readership. Yes, the FLI also focuses more on such long-term fears, but taking advantage of them is really dishonest and diverts attention from the much more real and near-term dangers of AI-driven misinformation and crime.

Comment deleted

I very much doubt that any problems the current generation of AI might cause would make people reject AI completely. Even if AI-generated misinformation changed the result of a US election, the incident would be sufficiently vague as to not generate much reaction other than politicians speaking loudly on TV. I am not downplaying AI risk here, just public and government reaction.

Apr 3, 2023 · edited Apr 3, 2023

No, but, say, chronic, structural 25 percent unemployment might.

Agreed, today's chatbots won't radically change the perception of AI. But if we keep on going, something else likely will.

And developments in other fields may change the public's relationship with AI too. As a quick example, say someone creates a four-headed horse with genetic engineering, or some other easily observed manipulation that freaks people out. In such a case, or series of cases, the public may not just reject genetic engineering; they may leap to a larger rejection of science and all its many products.

The "more is better" relationship with knowledge that modern science is built upon is simplistic, outdated, and increasingly dangerous 19th century thinking. "Less is better" is equally stupid, but nonetheless the broad public may leap wildly from one to the other if these threats become tangible enough to them.

Rejection of science as a society is mostly science fiction stuff. We already have people who reject science, but they simply have a harder time finding jobs. Modern business depends so much on science that there's really no going back. Even genetic engineering won't get rejected wholesale if it can produce wins, which is likely.

As far as AI risk is concerned, now is the time for discussion and proposed solutions. Even before that, I think we have to enumerate the risks we are exposed to right now and try to find solutions for those. Since there are real problems right now, we shouldn't get sidetracked trying to head off things like superintelligence that aren't close to being real.

The degree of science rejection that may occur is debatable, of course; agreed. I don't claim to know any details about the future; I'm just trying to think through overall trend lines.

If science continues to accelerate, and is further accelerated by AI, that will also accelerate social change. The more social change, and the faster it happens, the more people will be left behind.

Consider Trump and his "make America great again" slogan. A big percentage of his base are blue-collar, high-school-educated people who are being left behind by globalization and automation, science-driven phenomena. Consider how many rejected scientific advice during Covid. Consider how many seem to be rejecting our form of government. Consider how many are turning toward what would previously have been considered ridiculous solutions.

Now just try to follow such trend lines forward into the future. Those left behind aren't going to just lie down and die; they're going to push back, and they're going to attack the system that has rejected them.

The logic flaw I see in our thinking is that we seem to think we can introduce ever more revolutionary technologies into society at an ever faster pace, and somehow our social fabric, and our thinking, will stay more or less the same.

Apr 17, 2023 · edited Apr 17, 2023

I'm more worried about their USES of the science.

The pool of disgruntled and disenfranchised will get larger and larger, and they'll have plenty of new fancy toys with which to cause destruction.

Deepfakes destroying the concept of truth? GPT-4 enabling people with no technical experience whatsoever to code malware?

Or for something slightly more far-fetched ... how long before self-replicating nanobots are created and some idiot creates grey goo in their garage?

I probably sound like an alarmist panicking about the apocalypse, but to be perfectly blunt - 10 years ago, if you had predicted either the presidency of The Orange One or people denying the existence of the worst disease outbreak in a century while sick with the disease in question, let alone BOTH OF THEM AT THE SAME TIME, it would have been too stupid even for the movies.

Sigh, Gary. It's gotten so bad I'm now having dreams about ChatGPT. Probably because I write about what generative AI is doing to the world of writing, but to your point, it's not actually what gen AI is doing. It's what *people* are doing. I'm seeing the same behavior that I did when desktop publishing exploded and everyone and their grandmother thought they could now be "published authors" and raced to Amazon and other platforms to publish all of their creations. We now live in a world so glutted with "content" that it's like an Atlantic Ocean choked with enough sargassum to walk across.

If the adults in the AI room can't behave, what hope is there for the rest of the population?

p.s. I signed the letter as well. I've only got my right pinkie in the AI pie (worked on the Google Assistant) but it's all hands on deck as far as I'm concerned!

Apr 7, 2023 · Liked by Gary Marcus

I'm late to this show, but: over on Ars Technica (https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/), I read "'A major factor in Chat's success is that it manages to suppress confabulation enough to make it unnoticeable for many common questions,' said Riley Goodside..."

I have no idea whether it's true that Chat does better on "many common questions" than it does on less common questions. But if that *is* the case, then Chat is actually worse than I (and maybe we) thought: One could ask it some common questions--questions for which the answer is reasonably well known--and decide that it's reasonably reliable, and thus be deluded into using it for more difficult questions--questions where the answer is perhaps not known in advance, or is controversial.

Apr 3, 2023 · edited Apr 3, 2023 · Liked by Gary Marcus

Thanks for articulating this issue so incredibly well, Gary! We need and-and-and solutions, not either-or. This whole debate reminds me of the incomprehensible storm I entered when I tried out veganism: all these people wanting the same thing, yet a loud portion fighting each other's steps in the right direction, because each one favours their own unique approach. There is no single solution. In the case of AI, we need regulation to protect data labelers and to avoid misinformation, security harms, unfair social biases, and more. We also need big tech to deploy responsibly, plus risk management, ethics boards, and mass education for the general public so more people can enter this debate. And a petition is a sensible way to raise the alarm for all of that. Thanks for putting differences aside. With you 100%.

PS: Victoria Krakovna co-founded FLI - you might want to rephrase the statement that she is not associated with their movement :)

Apr 3, 2023 · Liked by Gary Marcus

Gary, I see this issue as more of a forest-for-the-trees problem.

The problem isn't even about interpretable AI; it's much more fundamental.

Currently, AI is designed by people who are seeking to replace human labor in production. The issues you describe are red herrings next to a much more fundamental issue that everyone seems to be conveniently ignoring.

Given that today's societal mechanics and food supply are fundamentally tied to our ability to work in factor markets, what do you think will realistically happen when production is no longer constrained by factor markets, and factor markets in fact disappear as they are replaced by AI?

There are significant issues with managing any kind of centrally planned economy, which would become a necessity for food and other goods in the absence of factor markets, and you run into various forms of the economic calculation problem, which historically result in failures and usually a lot of death.

Overpopulation and allocation of resources become a much more difficult problem as well. No one has really thought through how AI's disruption of even a small chunk of the distribution curve (without corresponding replacement) will actually impact people's ability to survive. Unrest historically is what happens when basic needs can't be met.

People with low IQs already have a terribly difficult time finding competitive work; AI basically expands this problem to cover low- and mid-IQ jobs as well.

We also grow through our professional experiences, and if there is no opportunity, what impact does that have on our intelligence as a species in general?

Historically, when no one can come up with a plan, someone eventually decides for everyone, and that decision may be horrific. If you haven't seen the movie Conspiracy (2001), it is a horrifying but very accurate portrayal of how unthinkable things can come to pass.

When you look at AI realistically, what real benefit does it provide? Am I wrong in thinking it doesn't provide any net positive benefit at all?

Apr 2, 2023 · Liked by Gary Marcus

Thank you for articulating that gray-zone issue: we need to raise the alarm on this for all to hear. We have years, maybe only months, to address the lack of discussion.

Well said, agreed. A frightening trend is the focus on profit to the exclusion of everything else.

As I’m reading through Don Norman’s latest book, I couldn’t help but feel this text has relevance to the problem you are outlining, Gary.

“The most difficult part of dealing with large, complex problems concerns the complexities of implementation. Most people think that the technical issues and the accompanying technology are the most difficult, but they are actually the simplest. The implementation of recommendations involves human beliefs and behavior, so most difficulties arise from four components:

1. System design that does not take into account human psychology

2. The human tendency to want simple answers, decomposable systems, and straightforward linear causality

3. Collaborative interaction with multiple disciplines and perspectives

4. Mutually incompatible requirements”

author

100%! What’s the title of the book?

“Design for a Better World”, chapter 25, p. 206

Thank you for writing this. I was very disappointed to see the cynicism with which the Open Letter was received, even though I agree it was not perfectly worded. Given the strong language of some of the responses, I got the sense that the AI & Ethics community felt the letter was encroaching upon their territory. And so rather than saying that the letter was generally going in a positive direction, they felt they had to tear down the whole effort, while pointing out the naivete of the letter's language...

author

Exactly

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

― Frank Herbert (1920-1986)

author

Chilling

Reminds me a lot of Dan Fagella's "Substrate Monopoly" concept.

"AI research and development should be refocused on making ...systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal./

Who on earth could actually object to that?"

Me. Loyal to whom? Aligned with whom? Research does not need commissars.

The problem is that AI is eroding trust - something that was already a problem.

It's easy to state that you "...can't trust ChatGPT." But what does that really mean?

You can't tell who or WHAT you are communicating with. You don't know if what you are reading is regurgitated pablum or the insights of a real human being. Worse, you don't know the motives of that entity.

It can generate volumes of subtle lies and partial truths, and present them as fact behind the facade of a human being with feelings, commitments, relationships, and some clue about ethics. But it is none of those things.

Time becomes the issue. If you and I were computers, additional cycles would be cheap. But as humans we only get 24 hours in a day - we can't expand or contract it. The time we waste communicating with a computer can never be recovered.

I'm having a Jurassic Park moment. "But your scientists were so preoccupied with whether they could, they didn't stop to think if they should."

Somehow we have to tag AI vs human. And when people use AI they should tag it themselves.

author

see my Substack post on the Jurassic Park moment!

Duh... I even commented on it - about validation. Me and my Swiss cheese memory.

Your article is really starting to hit home. You should restate what needs to be done and start harping on it. Actually, anyone in the industry should.

FWIW - Jurassic Park was running in the background when I was writing the last comment (had it on for my daughter).

BTW - Ian Malcolm lives on in the JP movies - but in the book he didn't make it out of the park alive.

From Gary Marcus’s 4-2023 jeremiad: “Some are frightened about some eventual superintelligence that might take over the world; my current fears have less to do with recent advances in technology per se, and more to do with recent observations about people.”

Well, first off, I said exactly that in my June 2022 Newsweek op-ed dealing with 'empathy bots' that feign sapience, describing how this is more about human nature than about any particular trait of simulated beings.

https://www.newsweek.com/soon-humanity-wont-alone-universe-opinion-1717446

Marcus goes on to explain his participation in the recent “moratorium petition.”

“The real reason I signed The Letter was not because I thought the moratorium had any realistic chance of being implemented but because I thought it was time for people to speak up in a coordinated way.”

And how’s that working out, any better than the last 25 years of turgid, preening conferences on ‘ethical AI’? Conferences that I long ago stopped attending, due to their relentless repetition of “should” and “oughta” clichés – like the following core catechism offered by Dr. Marcus:

“AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Um, as “Mr. Transparency” I am all aboard with those fine goals! (See my 1997 book The Transparent Society.) But the proposed moratorium-and-focused-research process is vague, slow, unenforceable, and preachy; it lacks any effective incentives and (above all) never cites even a single precedent. Not one example of such a moratorium ever (and I mean ever) working for any significant time, to any significant degree. And yes, it is this refusal to even glance at human history that I deem utterly culpable.

Marcus proposes, “If we want to get to AI we can trust, probably the first thing we are going to need is a coalition.” He then complains that one isn’t forming.

Um, duh? Across 6000 years - excluding war - coalitions and consensus have not been good methods for dealing with dire inhomogeneities of power.

After 25 years of endlessly similar ravings, I have come to realize something. These folks will absolutely never look at two sources of actual insight into what might work.

(1) The extensive libraries of science fiction thought experiments about this very issue, and

(2) Actual, actual… palpably actual… human history. Especially the last 200 years of an increasingly sophisticated and agile Enlightenment Experiment that discovered and has kept improving ONE method for preventing harm by capriciously powerful beings.

Powerful beings like kings, lords, priests, demagogues… and lawyers.

There IS one method with a track record of doing exactly what Dr. Marcus asks. It does not require that everyone agree on a kumbaya consensus. It is robust even if the Marcus Tenets are violated in secret labs. It is the only way humans have ever found to deal with potentially dangerous, if brilliant, predators. It is the method we have already used – if imperfectly – to create the first civilization that functions (with many flaws) in ways that are “… accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

And no one… not a single one of the thousands who have signed that turgid, useless ‘petition’… has even remotely considered or mentioned it.

-David Brin, author of The Postman and The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom?

If I understand your linked article & website, you argue that transparency is the way we can manage this, correct?

Transparency is a minimal requirement, but it cannot suffice in a world where secret endeavors - in the Himalayas, in despotic regimes, etc. - can easily evade it, rendering all notions of a 'moratorium' not just moot but inherently self-defeating and absurd. There is an alternative that uses transparency but can also cope with shadows. It happens to be the method that the 200-year Enlightenment experiment has used to get positive-sum outcomes in five great arenas... markets, democracy, science, courts, and sports. It is a simple concept underlying everything we now have...

... and I am sick of repeating the obvious over and over and over again to folks who yowl about present-day problems without ever looking at how our ancestors solved very similar dilemmas in the past. All that ever happens is that fellows like Marcus assiduously ignore the ideas... then steal them and pretend I was never here.

Notice that you were the only person in that whole community who even lifted an eyebrow of curiosity, for which I commend you! But...

...at this rate, I am wondering if we'll be better off with the robo apocalypse.

It's all there in The Transparent Society. Alas.

All right. I spelled it out here. https://davidbrin.blogspot.com/2023/03/the-only-way-out-of-ai-dilemma.html

For all the good it will do.
