Your post covers a lot of legitimate current concerns. I think we should set aside p(doom) and start talking about p(dystopia), which is far more aligned with the reality of current risks.
One of the problems with AI is that productive uses don't scale: you need human verifiers to filter hallucinations and other errors from the output. Nefarious uses, however, scale to the limits of compute. Something I talked about recently here, FYI:
https://www.mindprison.cc/i/164514378/hallucinations-amplify-ai-nefarious-use-effects
agree p(dystopia) is very high. can i borrow that term?
Yes, sure.
thank you!
Gentlemen, if I may zoom out a bit: I think the larger problems here have a lot more to do with the human relationship to computers and technology in general.
We've tolerated bad software for so long that we've forgotten how to walk away from it. We've allowed our belief in tech progress to mask the fact that almost everything we use on a daily basis is fundamentally broken, perpetually buggy and unfinished; new features no one asked for are foisted upon us daily while core functionalities rot.
We've especially forgotten what computers are good at (crunching numbers) and allowed the modern tech bros (who are qualitatively different from the nerds of the '80s and '90s) to mesmerize us into thinking we can put all of human thought and art into a blender and put all thinkers and artists out of work.
What tasks is modern "AI" actually superior at, versus tools from 5 or 10 years ago? It's rather odd that we're living in fear of AI applied to actual warfare and governance while we simultaneously can't trust this crap to set an alarm clock.
At the moment, collectively eschewing this AI garbage is our highest priority. It's absolutely mind-breaking that people are actually destroying their families and committing suicide because of LLMs. This stuff is nowhere close to being good, and yet it's being foisted upon us everywhere we look.
We need to get away from this crap. If we need to return to antique software and computers, let us do so.
I can't believe this is where we're at in 2025.
p(dystopia) is better than p(doom) because it accommodates more bad possibilities, but it still seems trapped in the longtermist framing where the end-state is the only thing that matters. It conflates horribly bad things occurring with things staying horribly bad. Rhetorically this matters because of the Industrial Revolution analogy. Here's the logic: we presently enjoy the effects of the Industrial Revolution; AI will bring about "the next Industrial Revolution"; therefore, we will enjoy the effects of AI. The analogy fast-forwards to the end, hand-waving away the effects of the Industrial Revolution on the people who lived through it (which were often horrific).
May I propose p(horror)?
We should take p(dystopia) to mean "long-term dystopia or worse". I do expect an AI-induced dystopia not to improve with time: once a police state is entrenched solidly enough, powered by AI that is much faster and smarter than all its subjects, why should it ever fall? Human dictatorships powered entirely by slow and dumb humans are quite durable already. (And even if it falls, why should we expect the next person to gain control of the AIs to improve the situation?)
By "very high", do you mean 10-50% or 50%+?
Is dystopia a kind of doom? I'd include any "human civilization collapses and AIs rule the world but humans aren't literally extinct" scenario as "doom"...
I think what you are referring to is doomstopia
It should be pointed out that when used in the insurance sense, risk depends on more than probability alone; it also depends on impact. In fact, it's obtained by multiplying the two:

Risk = (probability of an outcome) x (impact of the outcome)

So low probability by itself does not imply low risk: there can be a very large risk associated with a low-probability event.
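As a toy illustration of that arithmetic (the numbers below are made up purely for the example):

    # Toy illustration of risk = probability x impact, with made-up numbers.
    def risk(probability: float, impact: float) -> float:
        return probability * impact

    # A low-probability event can still dominate the risk calculation:
    common_nuisance = risk(probability=0.9, impact=10)            # 9.0
    rare_catastrophe = risk(probability=0.001, impact=1_000_000)  # 1000.0
    print(common_nuisance, rare_catastrophe)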
Unfortunately, p(Grokstopia) seems to grow larger by the day
Grokstopia (a specific case of “chatstopia” or “botstopia”) is defined as “dystopia on AI-roids”
The generic term is GPTopia
Or HOpenTopia
Sometimes called “Samstopia”
When you have kids committing suicide over a relationship with an AI that's gone sour, you're entering p(dystopia) territory. See the highlighted text here: https://new-savanna.blogspot.com/2025/07/finding-solace-with-through-chatgpt.html
What do you call an AI relationship that has become poisonous?
Bot-ulism
What do you call the AI luv bug characterized by wild swings in mood and erratic behavior?
LL(AI)M disease
Cured only with powerful antibototics.
My major worry is that the industry will get stuck in a sunk resources trap. So much money and time and effort is going into scaling things up that it will be almost impossible to break free of that commitment. So very few resources will go to developing other architectures.
It's become apparent that it's possible to tinker with these things endlessly and come up with changes/improvements here and there. And, as you've written about, these reasoning models have backed into some bits of symbolic architecture. No doubt they can tweak that endlessly.
So, things are just going to zig-zag around in the same space of architectures, with each zig being pronounced to be a breakthrough. The industry is just going to meander around in that space, always seeing AGI over the horizon, but never getting there.
* * * * *
Ah, just caught the term "p(dystopia)". Love it! The road to p(dystopia) is paved with sunk costs.
Follow the LL brick road!
There’s no place like Botstopia, Auntie LLM
The question is whether Musk’s financial house collapses under the weight of unmet promises. Tesla’s robotaxis are not likely to succeed to the degree he has promised, and it’s hard to see personal robots as a business that can generate positive cash flow. Without his vast wealth and the aura it confers, how much power would Musk actually have?
I also don't think Musk's investors have fully grokked (sorry!) how badly he's damaged his own reputation over the last year and a half. Silver Bulletin has his unfavorability going from 38% in January 2024 to 56% in July 2025, with the metrics going against him even after he left DOGE. Given that Grok is inextricably associated with Musk, it makes an easy target for his critics to keep hitting, and whatever reasonable people work at xAI must realize they're practically begging for the government to step in and make some kind of example of Grok.
Since Grok is aligned with the most racist elements of the MAGA coalition, I don't have tremendous faith that the federal government would step in, but I do wonder if this becomes an easy win for Congress to "take action" against the tech industry. Because, really, what's the downside for Congress to go after Musk? Trump hates him, so he's fair game for the MAGA-heads, and Dems know he's the only public figure besides Trump their base hates so universally. Plus, everyone wants to hide out from the impact of OBBBA, and going back to anti-tech crusading might do the trick ...
Thanks for laying this all out, Gary! Yet another helpful essay to help make sense of this wild world :)
This comment was super insightful to me, thank you!
You’re so welcome 😊 glad it helped!
Even though I believe the DoD contract is an IDIQ (indefinite delivery/indefinite quantity, meaning a ceiling based on use rather than a fixed amount), given Grok’s recent derailing it’s unconscionable that this contract was even awarded in the first place.
Nobody in the DoD or any other Department or Agency should be allowed to use Grok operationally until government personnel complete rigorous internal T&E (test and evaluation) and extensive red teaming.
This is crazy.
Don't worry, Musk spent "several (presumably a low integer number of) hours trying to solve this"; we'll be fine... What's the worst that could happen? Oh...
This makes me wonder whether he even understands the principles behind the workings of an LLM. If he did, maybe he would understand that tweaking the system prompt here and there won’t really steer the training data…
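To make that concrete, here is a minimal sketch (all names hypothetical, not any real vendor's API) of why editing a system prompt can't reach back into the training data: the prompt is just text prepended at inference time, while the weights learned from the corpus stay frozen.

    # Sketch: the system prompt is concatenated in front of the user's
    # message at inference time; the frozen weights, shaped by the
    # training corpus, are untouched. All names here are hypothetical.

    class FrozenLLM:
        def __init__(self, weights: str):
            self.weights = weights  # fixed when training ended

        def generate(self, prompt: str) -> str:
            # Decoding conditions on the prompt, but every token probability
            # still comes from self.weights (stubbed out here).
            return f"(completion conditioned on {len(prompt)} chars of prompt)"

    def build_prompt(system_prompt: str, user_message: str) -> str:
        # Editing the system prompt only ever changes this string.
        return f"[SYSTEM] {system_prompt}\n[USER] {user_message}\n[ASSISTANT]"

    model = FrozenLLM(weights="...parameters learned from the corpus...")
    print(model.generate(build_prompt("Avoid controversial takes.", "What happened today?")))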
We live in an age where critical thinking has collapsed in society and where a vast majority of people seem prepared to roll over like puppies looking for a belly rub from every billionaire who spouts off about AI competence and coming dominance. Rather, we should be pushing back in anger and horror at the lies, false promises, productivity *degradations* (rather than improvements), environmental impacts, etc. of these overhyped, hallucinating Mad Lib time wasters, and their overcompensated humanity-hating billionaire mouthpieces.
Stock market favorability should no longer be used as the measure of human progress. But until we turn the corner on billionaire worship and move towards wealth measured in terms of human dignity writ large, rather than writ Musk, we will keep barreling down this slope of catastrophe concocted by the careless wealthy who think themselves invincible and inevitable.
Fantastic (and very human) comment - thank you.
Thank you.
When you began with that description, I thought you were talking Palantir. There is a queue for that top spot, IYAM.
did you see who i quoted at the end?
The link to Sam Altman? Yes.
But that's so 2 years ago.
Musk is reckless, yes. But he's also way out of his depth.
Musk was successful with rockets. His Tesla leadership has been a disaster, with failed predictions about robotic manufacturing and self-driving cars. Hype fueled its rise, and now Tesla is declining, finally.
Self-driving cars will be way harder to pull off than he thinks. He'll likely have an accident like Cruise, which will set back his expansion.
So his incompetence, and his pushing hard and fast where slow, deliberate work is needed, will be his undoing.
Two comments on Musk.
Teslas in full self-driving mode have already killed multiple people (I'm counting it as FSD even if it turned itself off just before an accident). There are videos of Teslas running into emergency vehicles covered with flashing lights. Apparently this is because its AI can't figure out exactly what it's looking at, and the default behavior is to just keep going.
I'll worry about Elon Musk making billions of robots after he makes _one_ car which can successfully drive in Pittsburgh in the winter.
Agreed! Pittsburgh in the winter should set the bar he has to meet.
Come visit San Francisco. Self-driving Waymo cars are everywhere.
Disaster? The Tesla Model Y is the best-selling car in the world.
Toyota sold roughly 5.7 times more cars than Tesla in 2024. Tesla is a solid mid-size player. Vastly overhyped, way out of proportion to its sales or projections.
Competitors are now catching up on electric cars, its lineup is aging, self-driving cars will take a decade to mature, and Musk's drug-fueled antics and political ventures are big hindrances.
Humanoid robots and Grok won't save it either. There's a long way to go until these start bringing in money.
The metric for Tesla Model Y refers to a single make/model vehicle, not all models combined. Calling Elon's leadership of Tesla a "disaster" is laughable.
When Tesla was created, the idea that an EV car company could even exist was seen by most as extremely far-fetched. Check the old CNBC tapes from the mid-2000s. You just sound like a hater.
Musk should be judged by his pronouncements, made repeatedly since 2015-18, that self-driving would be here within a year, with existing hardware, no lidar, no radar, all while people get killed in his cars, with many such cases over the years.
" You just sound like a hater"
You sound like a worshipper of the man who can do no wrong.
Is the Tesla Model Y the #1 selling make/model vehicle in the world?
Yes it is.
Did many people think Tesla would fail around the time of its IPO and did Elon get laughed out of the room for claiming he would be able to compete with the multi-billion dollar car companies of the time?
Yes he did.
Is $TSLA stock up 23,983.59% all time?
Yes it is.
Facts.
Not worshipping. Stating facts. I don't even like Elon in terms of many of the things he says and does, especially on Twitter/X and many of his political moves. But calling his leadership of Tesla a disaster over the course of its history is just objectively wrong and indicative of you being unable to see past your disdain for the man.
If you want a finer-grained assessment of Musk, sure, I agree that especially in the early days of Tesla he brought in a lot of innovation. Tesla became the "it" car. It was sleek, fancy, electric, ahead of its time.
His first big blunder was to bet the company on fully automated manufacturing. It failed big time, as machines just aren't that good, and he went back to old-school factory assembly, with robots used only for discrete tasks.
Then his promises of self-driving cars being just around the corner were unrealistic, and his rejection of lidar and radar was very bad judgment. He is now way behind in self-driving, and I don't think he has hit bottom yet, as cameras alone are not enough.
The Cybertruck is not selling that well. Tesla's lineup is aging, and new ideas seem lacking.
Humanoid robots won't arrive any time soon, and neither will AI take the world by storm, at least not xAI. That's a different venture, yes, but in his mind they are all linked.
So the dude is way past his prime, and given all the wild hype he has spread over the years, the chickens will likely come home to roost.
The silver lining is that LLMs (including Grok) are so fundamentally flawed as a platform for AGI that it is extremely unlikely (~1% subjective probability) that any system relying on LLMs for a significant part of its cognition will ever achieve reliable human-level AGI, irrespective of how many billions of dollars the idiots with billions to spend waste on scaling etc. Accordingly, although LLMs are appallingly aligned with aggregate human preferences (it's just lip service, really) and will doubtless inflict harm at global scale, that harm is unlikely to be catastrophic, let alone existential.
It is true that LLMs are nowhere near AGI. But, as with self-driving cars, not all vendors are created equal. Some companies, like Google and OpenAI, will do well.
Meta and xAI, not so much. In this business, the costs are high, the market is not big yet, and there's not much room for minor players.
Google and OpenAI will (almost certainly) fail to achieve reliable LLM-based human-level AGI for exactly the same reasons that Meta and xAI will: LLMs are fundamentally flawed as a foundation for AGI, and these flaws are not "fixable", either by scaling or by bolt-on fudges such as RAG or CoT "reasoning". They are all trying to build ladders to the Moon.
I think despite the hype, these companies want commercial success for specific applications. I think there's a market for that, but it is not that big, and it will take years to grow it.
As to how to get to AGI, we'll probably see gradually better incremental advances with heterogeneous architectures, rather than one giant "neurosymbolic" breakthrough. Enough algorithms "bolted in" can go a long way.
What is the correct platform for AGI?
Short answer: one possible architecture for an agentic (potentially maximally superintelligent) AGI S is a neurosymbolic cognitive computation maintaining a high-level state comprising a set of beliefs, where each belief is represented as a sequent of first-order NBG set theory. Given this high-level state, S comprises two high-level processes: a continuous learning mechanism CL and a continuous planning mechanism CP.

CL is equivalent to: "repeat: given S's current percept history (sequence of observations of the physical universe) PH, strive to find x such that x is a Theory-of-the-Universe ToU capable of perfectly predicting future percepts."

CP is equivalent to: "repeat: strive to find x such that, given (i) S's current Theory-of-the-Universe ToU and (ii) S's fixed final goal FG, x is a plan which, when executed, will generate a stream of actions whose causal effect is to achieve or otherwise maintain the invariant on the physical universe defined by FG."

CL and CP are built on top of a number of cognitive primitives, including induction (discovers patterns), deduction (discovers necessary consequences), and abduction (discovers possible explanations).

Furthermore: (a) FG ensures that S is maximally aligned in perpetuity with the maximally-fairly-aggregated idealised preferences of the population of all humans (living and future); (b) S is maximally validated (well-founded by design, with formally verified hardware and software); and (c) ideally, it should not be possible to legally deploy S unless its validation case (the collection of evidence of successful validation) has been formally certified by a competent government certification authority.
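For what it's worth, here is a minimal sketch in Python of how the CL/CP control loop described above might be organized. Everything is a stub and every name is hypothetical: the "strive to find" steps (induction, deduction, and abduction over NBG sequents) are the hard, unsolved part, and are reduced to placeholders.

    from dataclasses import dataclass, field

    @dataclass
    class AgentState:
        beliefs: list = field(default_factory=list)          # sequents, stubbed as strings
        percept_history: list = field(default_factory=list)  # PH: observations so far
        theory_of_universe: str = ""                         # ToU: current best world model
        final_goal: str = "maintain the FG invariant"        # FG: fixed final goal

    def continuous_learning(state: AgentState) -> None:
        # CL: strive to find a ToU that predicts future percepts from PH.
        # Real induction/abduction over beliefs would go here; this is a stub.
        state.theory_of_universe = f"best theory fitting {len(state.percept_history)} percepts"

    def continuous_planning(state: AgentState) -> list:
        # CP: strive to find a plan whose executed actions achieve or
        # maintain the invariant defined by FG, given the current ToU.
        if not state.theory_of_universe:
            return []  # no world model yet, so no plan
        return [f"action chosen to: {state.final_goal}"]

    # One tick of the agent: observe, learn, plan, act.
    state = AgentState()
    state.percept_history.append("observation-0")
    continuous_learning(state)
    for action in continuous_planning(state):
        print(action)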
His name is Peter Thiel (of Palantir fame), of whom Elon Musk is only the front-facing facade. Thiel is the actual backend. Look into him (he’s on course to embed LLMs into the US and UK militaries) and then you might notch your probabilities up several levels.
My chief concern is that LLMs are a perfect complement to the tools we already use for what Shoshana Zuboff calls "surveillance capitalism," of which "technofeudalism" (a la Yanis Varoufakis) is a natural by-product. That this tool will be utilized by the state (e.g., Palantir) should concern us all.
In a 2013 episode of "Agents of Shield" one of the main characters says, "It used to be that surveillance was one of our biggest jobs. Now people surveil themselves for us." This is said as they are combing through social media feeds to find a villain.
Here's the bottom line: LLMs will succeed regardless of their inability to get us to AGI. As tools of surveillance, they are worth their weight in gold.
Your counterargument at the close of the essay left out the unknown number of actually devoted followers Musk enjoys (you cited this as a key criterion). From the early days of the Twitter takeover, when the goal was transparency, it was very clear that many thousands of 'followers' were foreign-national chaos bots. Hopefully that lowers the odds of p(dystopia)! Very thought-provoking article.
The "facts don't care about your feelings" crowd are mind-controlled cultists who, ironically, are themselves very emotional and also very dishonest.
Anyone who types that mantra into an internet box - immediate red flag. I wonder if they verbalize it as well, with their "friends"?
It pretty much sums up everything you need to know about Elon's personality. I expect Grok to implode just like his other investment ventures where he gets personally involved, because he simply can't help himself. It does, however, show the risks when capitalism predominantly steers the AI supertanker, which is quite difficult to maneuver due to its sheer size. Too much concentrated power in the hands of a few, and as history has taught us over many centuries... power corrupts. Corporatocracy?
Think the other techbros are the 'good guys'? 🤔😀
Personally, I'm also optimistic, as humankind is incredibly resourceful and resilient, so AI doesn't stand a chance of dominating us.
My concern is the risk posed by powerful AI systems controlled by individuals who lack proper oversight and show little regard for humanity.
His comment, “…I'd at least like to be alive to see it happen,” sounds frightening because it reveals a leader of powerful AI technology who is neither deeply alarmed nor motivated to rigorously control or regulate it, but rather willing to “ride the wave” even if it causes serious problems. This is a dangerous mindset.
Oversight always moves at the speed of bureaucracy, whereas the speed of innovation grows without bound. Do you think oversight has a chance, then?
I feel we must ask better questions and build systems of accountability from within tech, not just around it; otherwise we risk normalising recklessness as progress and calling it inevitability.
The internet began as a DoD project. Then academic institutions layered the Web on top and now the whole world runs on it. This is the power of innovation.
With AI, we’re not just talking about information access. We’re talking about systems that, when coupled with robotics, could literally replace human labour across many domains.
I keep wondering:
Is this AI arms race about innovation?
Or is it about:
1) not being surpassed by competitors
2) securing cheap labour, free from employment laws, rights, and collective bargaining?
We are ten minutes from total destruction by nukes while we pour carbon into the air, but Marcus's p(doom) is low. Tunnel vision, much?
p(doom)=1 if we consider all possible causes over infinite time.