114 Comments
Dakara's avatar

Your post covers a lot of current legitimate concerns. I think we should set aside p(doom) and start talking about p(dystopia) which is far more aligned to the reality of current risks.

One of the problems with AI is that productive uses don't scale: you need human verifiers to filter hallucinations and the like out of the output. Nefarious uses, however, scale to the limits of compute. Something I talked about recently here, FYI.

https://www.mindprison.cc/i/164514378/hallucinations-amplify-ai-nefarious-use-effects

Gary Marcus's avatar

agree p(dystopia) is very high. can i borrow that term?

Dakara's avatar

Yes, sure.

Jonathan Kallay's avatar

p(dystopia) is better than p(doom) because it accommodates more bad possibilities, but it still seems trapped in the longtermist framing where the end-state is the only thing that matters. There is a conflation of horribly bad things occurring and things staying horribly bad. Rhetorically this matters because of the Industrial Revolution analogy. Here's the logic: We presently enjoy the effects of the Industrial Revolution; AI will bring about "the next Industrial Revolution"; therefore, we will enjoy the effects of AI. The analogy fast-forwards to the end, hand-waving the effects of the Industrial Revolution on the people living through it (which were often horrific).

May I propose p(horror)?

David Piepgrass's avatar

We should consider p(dystopia) to mean "long-term dystopia or worse". I do expect an AI-induced dystopia not to improve with time: once a police state is established solidly enough, and powered by AI that is much faster and smarter than all its subjects, why should it ever fall? I mean, human dictatorships powered entirely by slow and dumb humans are quite durable already. (And even if it falls, why should we expect the next person to gain control of the AIs to improve the situation?)

Vdhbf Gvxrj's avatar

By very high, do you mean 10-50% or more than 50%?

Doug S.'s avatar

Is dystopia a kind of doom? I'd include any "human civilization collapses and AIs rule the world but humans aren't literally extinct" scenario as "doom"...

Larry Jewett's avatar

I think what you are referring to is doomstopia

Larry Jewett's avatar

It should be pointed out that when used in the insurance sense, risk depends on more than probability alone.

It also depends on “impact”

In fact, it’s obtained by multiplying the two

Risk = (probability of an outcome) × (impact of the outcome)

So low probability by itself does not imply low risk
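
A minimal sketch of that multiplication in Python; the event names, probabilities, and dollar figures below are invented purely for illustration, not drawn from the thread:

```python
# Minimal sketch of the insurance-style calculation described above.
# The event names and numbers are invented for illustration only.

def risk(probability: float, impact: float) -> float:
    """Risk = (probability of an outcome) x (impact of the outcome)."""
    return probability * impact

# A 10% chance of a $1,000 loss ...
routine_mishap = risk(probability=0.10, impact=1_000)         # 100
# ... carries less risk than a 0.1% chance of a $1,000,000 loss.
rare_catastrophe = risk(probability=0.001, impact=1_000_000)  # 1,000

print(routine_mishap, rare_catastrophe)  # 100.0 1000.0
```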

Larry Jewett's avatar

There can be a very large risk associated with a low probability event

Larry Jewett's avatar

Unfortunately, p(Grokstopia) seems to grow larger by the day

Grokstopia (a specific case of “chatstopia” or “botstopia”) is defined as “dystopia on AI-roids”

Larry Jewett's avatar

The generic term is GPTopia

Or HOpenTopia

Larry Jewett's avatar

Sometimes called “Samstopia”

Bill Benzon's avatar

When you have kids committing suicide over a relationship with an AI that's gone sour, you're entering p(dystopia). See highlighted text here: https://new-savanna.blogspot.com/2025/07/finding-solace-with-through-chatgpt.html

Larry Jewett's avatar

What do you call an AI relationship that has become poisonous?

Bot-ulism

Larry Jewett's avatar

What do you call the AI luv bug characterized by wild swings in mood and erratic behavior?

LL(AI)M disease

Cured only with powerful antibototics.

Bill Benzon's avatar

My major worry is that the industry will get stuck in a sunk resources trap. So much money and time and effort is going into scaling things up that it will be almost impossible to break free of that commitment. So very few resources will go to developing other architectures.

It's become apparent that it's possible to tinker with these things endlessly and come up with changes/improvements here and there. And, as you've written about around the corner, these reasoning models have backed into some bits of symbolic architecture. No doubt they can tweak that endlessly.

So, things are just going to zig-zag around in the same space of architectures, with each zig being pronounced to be a breakthrough. The industry is just going to meander around in that space, always seeing AGI over the horizon, but never getting there.

* * * * *

Ah, just caught the term "p(dystopia)." Love it! The road to p(dystopia) is paved with sunk costs.

Larry Jewett's avatar

Follow the LL brick road!

Larry Jewett's avatar

There’s no place like Botstopia, Auntie LLM

Tom Wilkinson's avatar

The question is whether Musk’s financial house collapses under the weight of unmet promises. Tesla’s robotaxis are not likely to succeed to the degree he has promised, and it’s hard to see personal robots as a business that can generate positive cash flow. Without his vast wealth and the aura it confers, how much power would Musk actually have?

Emi Ruff's avatar

I also don't think Musk's investors have fully grokked (sorry!) how badly he's damaged his own reputation over the last year and a half. Silver Bulletin has his unfavorability going from 38% in January 2024 to 56% in July 2025, with the metrics moving against him even after he left DOGE. Given that Grok is inextricably associated with Musk, Grok makes an easy target for his critics to keep hitting, and whatever reasonable people work at xAI must realize they're literally begging for the government to step in and make an example of Grok.

Since Grok is aligned with the most racist elements of the MAGA coalition, I don't have tremendous faith that the federal government would step in, but I do wonder if this becomes an easy win for Congress to "take action" against the tech industry. Because, really, what's the downside for Congress to go after Musk? Trump hates him, so he's fair game for the MAGA-heads, and Dems know he's the only public figure besides Trump their base hates so universally. Plus, everyone is wanting to hide out from the impact of OBBBA, and going back to the anti-tech crusading might do the trick ...

Thanks for laying this all out, Gary! Yet another helpful essay to help make sense of this wild world :)

bluballoon's avatar

This comment was super insightful to me, thank you!

Emi Ruff's avatar

You’re so welcome 😊 glad it helped!

Jack Shanahan's avatar

Even though I believe the DoD contract is an IDIQ (meaning a ceiling, based on use, rather than a fixed amount), given Grok’s recent derailing it’s unconscionable that this contract was even awarded in the first place.

Nobody in the DoD or any other Department or Agency should be allowed to use Grok operationally until government personnel complete rigorous internal T&E and extensive red teaming.

This is crazy.

Bernard McCarty's avatar

Don't worry, Musk spent "several (presumably low integer number of) hours trying to solve this", we'll be fine... what's the worst that could happen? Oh...

Ondřej Frei's avatar

This makes me wonder whether he even understands the principles behind the workings of an LLM. If he did, maybe he would understand that tweaking the system prompt here and there won’t really steer the training data…

BT Hathaway's avatar

We live in an age where critical thinking has collapsed in society and where a vast majority of people seem prepared to roll over like puppies looking for a belly rub from every billionaire who spouts off about AI competence and coming dominance. Rather we should be pushing back in anger and horror at the lies, false promises, productivity *degradations* (rather than improvements), environmental impacts, etc. of these over hyped, hallucinating Mad Lib time wasters, and their overcompensated humanity-hating billionaire mouthpieces.

Stock market favorability should no longer be used as the measure of human progress. But until we turn the corner on billionaire worship and move towards wealth measured in terms of human dignity writ large, rather than writ Musk, we will keep barreling down this slope of catastrophe concocted by the careless wealthy who think themselves invincible and inevitable.

Bernard McCarty's avatar

Fantastic (and very human) comment - thank you.

Joy in HK fiFP's avatar

When you began with that description, I thought you were talking Palantir. There is a queue for that top spot, IYAM.

Gary Marcus's avatar

did you see who i quoted at the end?

Joy in HK fiFP's avatar

The link to Sam Altman? Yes.

Bruce Olsen's avatar

But that's so 2 years ago.

Aaron Turner's avatar

The silver lining is that LLMs (including Grok) are so fundamentally flawed as a platform for AGI that it is extremely unlikely (~1% subjective probability) that any system that relies on LLMs for a significant part of its cognition will ever achieve reliable human-level AGI, irrespective of how many billions of dollars the idiots with billions to spend waste on scaling etc. Accordingly, although LLMs are appallingly aligned with aggregate human preferences (it's just lip service, really) and will doubtless inflict harm at global scale, that harm is unlikely to be catastrophic, let alone existential.

Comment removed
Jul 15, 2025
Aaron Turner's avatar

Google and OpenAI will (almost certainly) fail to achieve reliable LLM-based human-level AGI for exactly the same reasons that Meta and xAI will: LLMs are fundamentally flawed as a foundation for AGI, and these flaws are not "fixable", either by scaling or by bolt-on fudges such as RAG or COT "reasoning". They are all trying to build ladders to the Moon.

Lynn's avatar

The problem is bigger than Musk. It’s a group of Silicon Valley tech billionaires who operate like a monopoly demanding not only that the US government not enact consumer tech safety and privacy regulations, but also that the US government enact tariffs that prevent other countries from enacting these regulations or pursuing innovations that might actually be better/safer.

Rather than thinking of p(doom) as future robots that turn against humans, I’m thinking of Silicon Valley lawlessness causing multiple tragedies, wars, extinction and depopulation events.

Example: Silicon Valley billionaires and the US president collaborating in a program of eugenics, resulting in: loosening of vaccine requirements, cuts to vaccine research funding, eliminating affordable healthcare, and eliminating funding of affordable vaccines in poor countries.

Meanwhile youtube, Facebook, TikTok, Twitter/X, Insta are full of anti-vax propaganda and conspiracies targeting groups they consider to be “undesirable.”

Example: Silicon Valley billionaires and the US president collaborate on eliminating quality, affordable healthcare for the masses while using crypto and bubble schemes to devalue the US dollar and tank the economy, resulting in elderly Americans losing their life savings and safety nets. Remedy: depopulation. Same with disabled persons.

I doubt that China is abandoning traditional academic learning of children and young adults in favor of an entire generation of kids growing up relying on LLMs and not being smart enough to identify LLM hallucinations.

AI Deepfakes are a national security risk. Trump and an increasing number of lawmakers have been duped by deepfakes. It’s a problem when a President and entire population cannot easily distinguish between deepfakes and reality.

AI deepfakes and manufactured online “alternate realities” mean a segment of our population believes they have been preparing for a violent civil war against Marxists, Communists, and Trump’s political enemies. There is literally nothing in place that would prevent a post from Trump (either real or fake) calling on “Patriots” to rise up and slaughter their neighbors, librarians, teachers, and lawmakers from going viral across all social media platforms.

Silicon Valley tech billionaires have collaborated with Trump and Epstein in an attempt to unravel the world order for the purpose of destroying democracy at home and abroad. The most powerful Silicon Valley billionaires believe in eugenics, value technology over human life, and believe in fascism as a means to control populations while they carry out their dystopian agendas. Musk believes loss of human life is an appropriate sacrifice for reckless, unregulated tech acceleration.

I think the backlash against Silicon Valley tech is justified and needs to accelerate at warp speed. Musk promotes pro-Nazi content on X in the name of “free speech” - but really he is trying to normalize it. Same with content promoting political/extrajudicial violence and election fraud lies (on all social media platforms). None of these companies seem to be aligned with the public good, safety, ethics, consumer privacy.

I’m ready to stop buying Apple/Google products and significantly reduce my online exposure to avoid being surveilled, manipulated, and harmed by both Silicon Valley tech products and the US government. How sad is that.

Peter Jackson's avatar

What is the correct platform for AGI?

Aaron Turner's avatar

Short answer: one possible architecture for an agentic (potentially maximally superintelligent) AGI S is a neurosymbolic cognitive computation maintaining a high-level state comprising a set of beliefs, where each belief is represented as a sequent of first-order NBG set theory.

Given this high-level state, S comprises two high-level processes: a continuous learning mechanism CL and a continuous planning mechanism CP.

CL is equivalent to: "repeat: given S's current percept history (sequence of observations of the physical universe) PH, strive to find x such that x is a Theory-of-the-Universe ToU capable of perfectly predicting future percepts."

CP is equivalent to: "repeat: strive to find x such that, given (i) S's current Theory-of-the-Universe ToU and (ii) S's fixed final goal FG, x is a plan which, when executed, will generate a stream of actions whose causal effect is to achieve or otherwise maintain the invariant on the physical universe defined by FG."

CL and CP are built on top of a number of cognitive primitives, including induction (discovers patterns), deduction (discovers necessary consequences), and abduction (discovers possible explanations).

Furthermore: (a) FG ensures that S is maximally aligned in perpetuity with the maximally-fairly-aggregated idealised preferences of the population of all humans (living and future); (b) S is maximally validated (well-founded by design, with formally verified hardware and software); and (c) ideally, it should not be possible to legally deploy S unless its validation case (the collection of evidence of successful validation) has been formally certified by a competent government certification authority.
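
A very rough sketch, in Python, of the two-loop shape described above. Every name here (AgentState, continuous_learning, continuous_planning, and the placeholder strings) is a hypothetical stand-in invented for illustration; the actual proposal (beliefs as sequents of NBG set theory, formally verified hardware and software, a certified validation case) is not something a few lines of code can capture.

```python
# Hypothetical skeleton of the CL/CP loop structure sketched above.
# All names and data types are illustrative placeholders, not part of
# the proposal itself.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AgentState:
    beliefs: List[str] = field(default_factory=list)          # stand-in for sequents
    percept_history: List[str] = field(default_factory=list)  # PH
    theory_of_universe: Optional[str] = None                   # current ToU
    final_goal: str = "maintain aggregated idealised human preferences"  # FG


def continuous_learning(state: AgentState) -> None:
    """CL: given the percept history PH, strive to find a Theory-of-the-Universe
    capable of predicting future percepts (trivial placeholder here)."""
    if state.percept_history:
        state.theory_of_universe = f"theory fitted to {len(state.percept_history)} percepts"


def continuous_planning(state: AgentState) -> List[str]:
    """CP: given the current ToU and the fixed final goal FG, strive to find a
    plan whose actions achieve or maintain the invariant defined by FG."""
    if state.theory_of_universe is None:
        return []
    return [f"action maintaining: {state.final_goal}"]


def run(state: AgentState, percepts: List[str]) -> None:
    # Both mechanisms repeat indefinitely; induction, deduction, and abduction
    # would sit beneath them as cognitive primitives.
    for p in percepts:
        state.percept_history.append(p)
        continuous_learning(state)
        print(continuous_planning(state))


run(AgentState(), ["observation-1", "observation-2"])
```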

arita's avatar

His name is Peter Thiel (of Palantir fame), of whom Elon Musk is only the front-facing facade. Thiel is the actual backend. Look into him (he’s on course to embed LLMs into the US and UK militaries) and then you might notch your probabilities up several levels.

Jeff Irvin's avatar

My chief concern is that LLMs are a perfect complement to the tools we already use for what Shoshana Zuboff calls "surveillance capitalism," of which "technofeudalism" (a la Yannis Varoufakis) is a natural by-product. That this tool will be utilized by the state (e.g., Palantir) should concern us all.

In a 2013 episode of "Agents of Shield" one of the main characters says, "It used to be that surveillance was one of our biggest jobs. Now people surveil themselves for us." This is said as they are combing through social media feeds to find a villain.

Here's the bottom line: LLMs will succeed regardless of their inability to get us to AGI. As a tool of surveillance, they are worth their weight in gold.

R G's avatar

Your counterargument at the close of the essay left out the unknown number of actually devoted followers Musk enjoys (you cited this as a key criterion). From the early days of the Twitter takeover, when the goal was transparency, it was very clear that many thousands of 'followers' were foreign-national chaos bots. Hopefully that lowers the odds of p(dystopia)! Very thought-provoking article.

Paul Reynolds's avatar

The "facts don't care about your feelings" crowd are mind-controlled cultists who, ironically, are themselves very emotional and also very dishonest.

Anyone who types that mantra into an internet box - immediate red flag. I wonder if they verbalize it as well, with their "friends"?

Stijn Oomen's avatar

It pretty much sums up everything you need to know about Elon's personality. I expect Grok to implode just like his other investment ventures where he gets personally involved, because he simply can't help himself. It does, however, show the risks when capitalism predominantly steers the AI supertanker, which is quite difficult to maneuver due to its sheer size. Too much concentrated power in the hands of a few, and as humankind has informed us over many centuries... power corrupts. Corporatocracy?

Think the other techbros are the 'good guys'? 🤔😀

Personally I'm also optimistic, as humankind is incredibly resourceful and resilient, so AI doesn't stand a chance of dominating us.

Diamantino Almeida's avatar

My concern is the risk posed by powerful AI systems controlled by individuals who lack proper oversight and show little regard for humanity.

His comment, “…I'd at least like to be alive to see it happen,” sounds frightening because it reveals a leader of powerful AI technology who is neither deeply alarmed nor motivated to rigorously control or regulate it, but rather willing to “ride the wave” even if it causes serious problems. This is a dangerous mindset.

Dr. Jason Polak's avatar

Oversight always moves at the speed of bureaucracy, whereas the speed of innovation grows without bound. Do you think oversight has a chance, then?

Diamantino Almeida's avatar

I feel we must ask better questions and build systems of accountability from within tech, not just around it; otherwise we risk normalising recklessness as progress and calling it inevitability.

The internet began as a DoD project. Then academic institutions layered the Web on top and now the whole world runs on it. This is the power of innovation.

With AI, we’re not just talking about information access. We’re talking about systems that, when coupled with robotics, could literally replace human labour across many domains.

I keep wondering

Is this AI arms race about innovation?

Or is it about:

1) not being surpassed by competitors

2) securing cheap labour, free from employment laws, rights, and collective bargaining?

Doug Tarnopol's avatar

We are ten minutes from total destruction from nukes while we pour carbon into the air but Marcus’ p(doom) is low. Tunnel vision, much?

MarkS's avatar

p(doom)=1 if we consider all possible causes over infinite time.

Larry Jewett's avatar

“If your AI plan relies on the Internet not being stupid then your AI plan is terrible.”

Fixed it for Eliezer