
If we were the parent of a teenager who wanted their first car, we'd ask them to prove that they were ready for such power. So, for example, if the teenager was always crashing their moped, we'd tell them to fix that first, and then maybe we'd talk about a car. Simple common sense, right?

Here's a reason-based path forward for AI. Get rid of nuclear weapons, and prove beyond question that we are going to conquer climate change.

Having thus proven that we are capable of fixing our mistakes when they occur, we would have a rational basis upon which to develop new powers of substantial scale.


Gary, you asked, so … The short-term risks to democracy and the 2024 elections cannot be overstated. But if we survive that, the long-term risks are literally beyond our ability to even begin to conceptualize. In our “post-truth” world, it has been extremely difficult to decipher what is more true, or perhaps more accurate, from what is not true or even a lie. Up till now, with enough time and effort, those who cared could find the ‘more true’ instead of the ‘not true,’ but that was before search purposely repositioned itself to become the ultimate delivery vehicle for the chaos.

That said, in the slightly longer term, the false bravado and fake intelligence manifested by current iterations of pretend AI will create social turmoil and upheavals, as well as mental and wellbeing injury, harming individuals, families, communities, and countries in ways that go far beyond what is being discussed today. And there is no government, or coalition of governments, other than an authoritarian one, that can develop and enforce regulations quickly enough to even attempt to stop this.

And never in human history has there been any universal agreement on universal values, or any form of consensus on human values (and the human values we may imagine or desire cannot be found in biased data, and all data is biased). The bigger challenge, how to embed these ‘values’ into non-reasoning technologies and enforce adherence to them in the extraordinarily short time window required, cannot be met, except again by an authoritarian regime. Values in an authoritarian regime do not come from the consensus of the people; they are dictated values designed solely to benefit the authoritarians, which in the end may not be a government at all.


Physics has Newton's 3rd law. Do social scientists have a law of unintended consequences?

Have you noticed that in the early days of the Internet, when there was no spam or clickbait, you could search for something and get a real, helpful result? But not anymore?

Is it possible that mass generation of "misinformation" (yech, that word should be banned) will simply cause users to look elsewhere to find information they can trust?

Consider the rise of The Free Press (Bari Weiss). Or Substack?

Isn't this a reaction to the failure of mass media to do their jobs?

I guess you are correct to worry about the consequences of AI, but what about that 3rd law?

Thanks for reading my mental wanderings.


We are also mistaken if we think that "we" have the ability to stop the work. It's happening everywhere, and the pace is so explosive that if Silicon Valley washed away in a flood, progress would still be incredibly rapid.


Speaking solely as an outsider to the industry (although I try to keep up to date on what I can), it’s increasingly difficult to worry about either. This is coming from a place of observing an industry that has made it seemingly clear it has no desire to self-regulate or proceed with caution, and for the general public there is nothing we can do to stop it. I believe young people especially might feel crippled, as they have seen a similar example, climate change, unfold in front of them while the world carries on as if it’s business as usual.


Misinformation has already undermined at least US democracy. People would literally rather die than believe something tagged "liberal."

OK, well, tribalism fed by misinformation (and racism).


SHORT TERM: If AI development were to stop now, there would be problems such as the multiplication of misinformation, but then the Internet presents these same challenges, so it's sort of just more of the same. A problem, but not a crisis.

LONG TERM: Unless the threat presented by nuclear weapons is met and conquered, there's unlikely to be a long-term future for AI. Well, maybe VERY long term, like centuries from now, but nothing within range of our vision.

Any discussion of long-term technological trends in any field that doesn't include reference to nuclear weapons should probably be dismissed as lacking adequate insight.


This might be a strange question, but is it not better to have machines that lack values and morals? Teaching a machine what is good will make it capable of doing bad, intentionally.

Current systems simply fulfill the intent that we give them — which comes with obvious flaws, but the blame can at least be put on ourselves when they are misused.


Text generated with AI should be labelled as such.
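To make that concrete, here is one hedged sketch of what machine-readable labelling could look like: a small JSON envelope that discloses a text's origin. The field names ("generator", "model", "generated_at") are purely illustrative assumptions, not any existing labelling standard.

```python
# Illustrative only: wrap generated text in a provenance envelope.
# The schema below is a made-up example, not an established labelling standard.
import json
from datetime import datetime, timezone

def label_ai_text(text: str, model_name: str) -> str:
    """Return the text wrapped in a JSON envelope that discloses its AI origin."""
    envelope = {
        "content": text,
        "provenance": {
            "generator": "ai",
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope, indent=2)

print(label_ai_text("Example paragraph produced by a language model.", "some-llm"))
```

Of course, any such label only helps if publishers actually attach it and platforms check for it.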


Your article in _The Atlantic_ is great, and I'm very pleased that the "information spam" problem is starting to receive more attention in the mainstream. It's just too bad that, while we started looking at potential problems with AGI long before it was likely ever to be a problem (even now, it does not appear likely to be a problem soon), we're only really starting to look seriously at the "informational grey goo" now, when it appears there's a good chance we're already on the verge of it exploding.


Yep, AI is very scary and we're only at the stochastic parrot phase, the kind of AI technology that is known to everyone. What if AGI is cracked in a garage by a small anti-establishment group or some lone-wolf, Isaac Newton type with a bright idea and an axe to grind? Intelligent machines will be fearless and highly motivated to do what they are trained to do. I'd hate to be on the receiving end of their wrath.

Brave new world.


From the Atlantic article: "More recently, the Wharton professor Ethan Mollick was able to get the new Bing to write five detailed and utterly untrue paragraphs on dinosaurs"

I'd suggest you aren't going to be taken seriously if you exhibit seemingly no ability to grasp the desire to use LLMs to generate creative text like this. I suspect most people will consider that either a robotic lack of a sense of humor :-), or such seriously over-the-top paranoia that you aren't thinking clearly.

I'd also suggest considering how you are coming across since many will see irony in this statement: "The goal of the Russian “Firehose of Falsehood” model is to create an atmosphere of mistrust, allowing authoritarians to step in" when you come across to many as an authoritarian who wishes to step in.

I'd also suggest that you appear not to be well informed about the tactics people are exploring for things like spotting botnets on social media or the web, verifying that someone is human (Sam Altman has a separate company working on that, even if many question his approach), etc. Many of the things you are concerned about regarding misinformation at scale were issues even before the current generation of LLMs made the quality better, and people (sometimes behind closed doors in a lab doing proprietary research) are working on these issues. There are already troll farms of cheap human labor in poor countries generating content.
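To give a rough sense of the kind of tactic meant here (a toy heuristic of my own, not any platform's actual detection method): accounts that post near-identical text at high volume can be flagged with a simple similarity check.

```python
# Toy botnet-spotting heuristic: flag accounts whose posts are near-duplicates
# of one another. Real detection uses far richer signals; this is illustrative only.
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicate_ratio(posts: list[str]) -> float:
    """Fraction of post pairs whose text similarity exceeds 0.9."""
    pairs = list(combinations(posts, 2))
    if not pairs:
        return 0.0
    dupes = sum(1 for a, b in pairs if SequenceMatcher(None, a, b).ratio() > 0.9)
    return dupes / len(pairs)

accounts = {
    "acct_1": ["Huge giveaway, click the link now", "Huge giveaway, click the link now!"],
    "acct_2": ["Went hiking this weekend", "Trying a new soup recipe tonight"],
}
for name, posts in accounts.items():
    flagged = near_duplicate_ratio(posts) > 0.5
    print(name, "flagged" if flagged else "ok")
```

Nothing sophisticated, but it shows why scaled-up generation and scaled-up detection tend to evolve together.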

Perhaps you might consider using LLMs to get perspective on how your opponents might view what you write, if your goal is to persuade them to consider your views. Many find your arguments superficial and poorly informed about the downsides of government intervention, given you don't take them seriously enough to even address. "Naive realism" or the "trap of certainty" might be phrases for you to look up. Then again, I guess the goal might not be productive dialog among informed people, but merely trying to scare the poorly informed into handing authoritarian politicians the ability to control the development of AI.


The question is moot: the short term is that people accept AI and start buying it wholesale, which leads to the long-term problems it will spawn. Whenever it starts, it's short-term, but the consequences won't stop. Ever.

The real problem, though, isn't AI, it's humans. Most humans aren't particularly bright. But as Dunning and Kruger have pointed out, that doesn't stop them from stating things authoritatively that they flat out don't understand.

Doubt that? Just look around.


You write, "Geoffrey Miller’s lately been campaigning for an outright pause on AI, both research and deployment. I have called for something less: stricter regulations governing deployment."

Who will regulate those most likely to dominate the field of AI going forward, the Chinese Communist Party?

Regulations are like the lock on your front door. The lock keeps your nosy neighbors out, but it's worthless against anyone willing to break a window.

Casting my vote with Miller.


I'm not sold that there is any particularly great risk of misinfo from AI in the near term, much less that AI or computer science is the right discipline to make that sort of call. It seems to me that stuff like spear-phishing fraud is a much larger concern.


As the most powerful Allied power, the US was responsible for fighting misinformation in Germany after WWII. It did this in many ways, but the most important one was to create public broadcasting (ARD, ZDF) that was run independently of the government, with representatives from all stakeholders on the board, and mostly free of advertising. Wouldn't that be a cheap and also tried-and-tested solution to start addressing the problem of how to defend humanity against AI-generated content?
