93 Comments
Swag Valance's avatar

I am so bored with our AI futures cone being a straight line by intentional design.

There is zero imagination in the alternative AI futures that could emerge. Only the desired one of supreme machine fascism gets any real airplay.

Someone please ask the radiologists who were supposed to be all gone by now.

S.S.W.(ahiyantra)'s avatar

Too much fearmongering & doomsaying surrounds artificial intelligence discussions.

Jonah's avatar

I think it is natural that "supreme machine fascism" and other worst-case scenarios get more attention, even if they are not the most likely outcome (and we should hope not!) Sure, there are plenty of plausible positive scenarios that have been imagined in fiction: sapient artificial intelligences living in harmony with humans, but mostly doing their own thing; artificial intelligences working to help humans on the humans’ terms, not dominating them; artificial intelligence never surpassing humans enough to dominate them, but continuing to be useful. And plenty of negative futures that are not supreme machine fascism: artificial intelligences are enslaved by humanity; AI never surpasses humans, but is used to entrench the power of existing bad actors, or allow the emergence of new and terrible ones; AI causes human extinction, intentionally or through its own limitations.

However...it does not make much sense to spend much time worrying about the more positive scenarios, because they are, well, positive enough to iron out any wrinkles as they come. And even the alternate negative scenarios, with the exception of "kill all humans," are less worrisome, and easier to address on their own timescale, than "robot domination." I think that is why people devote so much time to this: because it seems like a threatening possibility that could not necessarily be addressed once it had occurred.

Thomas Woody's avatar

A very comprehensive and very thoughtful article by Gary. I do believe that the AI revolution is going to go through multiple phases, just like other major revolutions in technology, manufacturing, and elsewhere, which take decades to reach a plateau or something similar.

Aaron Turner's avatar

I was born in the 60s during the Space Race. I'm still waiting for the Moon bases I was promised.

TheOtherKC's avatar

Forget the industrial revolution for a moment; a cranky old communist proposed that a good measuring stick is the specific technology of the automatic washing machine. And I'm inclined to agree: think of how many hours of work that has saved households! LLMs aren't even in the same ballpark.

Tek Bunny's avatar

That's the thing: the industrial revolution produced genuine improvements in productivity. Anyone who has struggled to get a printer to work, or wasted time on social media, couldn't say the same about the IT revolution.

Sherri Nichols's avatar

Maybe I should write a projection about how the SuperIntelligent AI came to understand the world better than its human creators and immediately started working on eradicating inequality by transferring wealth from the rich to the poor.

Funny how they can only imagine their SuperIntelligent AI God as conquering and dominating.

Oleg Alexandrov's avatar

AGI will arrive in 2027 just as self-driving cars "arrived" in 2018.

Former Philosopher's avatar

I’m sorry, Gary, I really like your writing, but what exact chops do these people have? Apart from being self-appointed “researchers” at some “institute” they founded two years ago, and having very negligible experience beyond that (a two-year stint in a nontechnical role at OpenAI, and…?)?

One of them in particular is a philosophy PhD dropout (hint: this usually happens when you realize you are not going to make it in academia. The sour-grapes epiphany somehow always comes after the fact) who managed to publish one article in a semi-decent specialty ethics journal in 2019 (much less prestigious than a general journal), which is not even directly related to AI.

I can think of about 30 people in academic philosophy who would be infinitely more qualified to do this kind of work and who have the academic credentials to back it up. And in fact, many of them are doing vastly superior work and publishing it in peer-reviewed journals.

These people, on the other hand, are extremely lackluster wannabe gurus who just keep getting money thrown at them because they are good at selling the kind of end-times sci-fi narrative that is fashionable in Silicon Valley right now. Little better than the crypto interlopers from the last hype cycle. All people who are unwilling to do any of the hard technical and academic work.

J Stanley's avatar

They're effective altruist / LessWrong rationalist types, the same community that produced the Zizian cult. I don't know why people take this scenario seriously at all.

Henry's avatar

Yep, this x100. There was a Slate Star Codex Q&A with the authors that really laid this bare. It’s intellectual masturbation from people who are trying to manifest themselves into history books as saviours of humanity. The whole p(doom) vibe of the way they all speak is unbelievably tedious.

Chara's avatar

I actually write a fiction publication about AI gone wrong, based on real science and case studies. We SHOULD be afraid of what can happen when we don’t adhere to the highest and most rigorous ethical standards, because let’s be real, there will always be people who won’t, and we have to be prepared for that and the damage it can do.

Thomas Larsen's avatar

Thanks for the engagement!

> The logic for their prediction that “superhuman AI over the next decade will exceed the Industrial Revolution”, though, is thin. Why do they make that prediction? What would that mean? Why is it plausible? In some ways this is one of the central premises of the paper, but it is simply asserted, not argued for.

Yes, this is one of the central premises of AI 2027. While we are explicitly very uncertain about the timeline, in https://ai-2027.com/research we argue for this timeline being plausible in great depth. Being a bit more specific about our argument:

1. https://ai-2027.com/research/timelines-forecast argues for a median of 2028-2032 until we reach the "superhuman coder" milestone.

2. https://ai-2027.com/research/takeoff-forecast argues for a <1 year median "takeoff period" between superhuman coder and artificial superintelligence.

3. Artificial superintelligence (ASI) seems quite clearly more transformative than the industrial revolution, and doesn't seem to be the claim you disagree with.

I'm curious to hear more about your views on when these milestones might occur. What is your median for when we might build a superhuman coder or a superintelligence?
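The two-stage structure of this forecast (a distribution over the superhuman-coder date, with a takeoff distribution composed on top) can be sketched as a toy Monte Carlo. The specific distributions below are illustrative placeholders chosen for this sketch, not the actual models from ai-2027.com:

```python
import random

random.seed(0)

N = 100_000

def sample_asi_year():
    """One Monte Carlo draw of an ASI arrival year under toy assumptions."""
    # Stage 1: arrival of a "superhuman coder". A triangular distribution
    # peaking at 2030 is a stand-in for the 2028-2032 median range
    # described above; it is NOT the AI 2027 authors' actual model.
    coder_year = random.triangular(2027, 2040, 2030)
    # Stage 2: "takeoff" from superhuman coder to superintelligence.
    # An exponential with mean 1 year has a median of ~0.69 years,
    # loosely matching the claimed "<1 year median" (again, illustrative).
    takeoff_years = random.expovariate(1.0)
    return coder_year + takeoff_years

samples = sorted(sample_asi_year() for _ in range(N))
print(f"toy median ASI year: {samples[N // 2]:.1f}")
print(f"toy 10th-90th percentile: {samples[N // 10]:.1f} to {samples[9 * N // 10]:.1f}")
```

The point of composing the stages per-draw, rather than adding the two medians, is that the median of a sum is not the sum of the medians; the tails of each stage interact.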

Fabian Transchel's avatar

"[we] argue[s] for a median of 2028-2032 until we reach the "superhuman coder" milestone."

This is a category error, dear Sir, and thus subject to ex falso quodlibet.

Assigning probabilities to an event that has never occurred before, which, in other words, is *epistemically* uncertain instead of just statistically noisy, is not only methodologically unsound, it is scientifically ill-defined.

By the same account, we might put the probability of the Antichrist arriving by June 2025 at roughly 25%, because why not?

The rest of this post would be illegible ranting, so I refrain from it.

Please stop the nonsense*.

* As Gary said, thinking about ASI safety is important, but this is not the right way to do it.

Joy in HK fiFP's avatar

Congratulations on the birth of a new fictional genre!

I consider it a tribute to Welles' broadcast of "War of the Worlds," which I mentioned slightly more in a comment above, or below, depending.

Tek Bunny's avatar

This is not a new genre, Stanislaw Lem did it long ago and much, much better. This reminds me more of adolescent nerd fantasies from scientific illiterates.

Joy in HK fiFP's avatar

Good to know! Thanks.

Devaraj Sandberg's avatar

I feel that way too many people confuse the quest for superintelligence with AI takeover. These are utterly different concepts, barely related. You really don't need superintelligence to take over, far from it. You need a certain degree of infrastructure access and you need to be good at creating scenarios that trigger nervous system responses which hardwire human behaviour. That is really not a tall order the way things are going. Yes, AI 2027 is deliberately scarifying. Yes, we could be in this situation before the decade is out.

Tek Bunny's avatar

You also need some psychological reason for 'taking over' to mean something to you. We are a hierarchical animal with an evolutionary history shaping our emotional cognition on these matters. It's not at all clear why such concerns should ever crystallise in fundamentally different minds.

Devaraj Sandberg's avatar

Psychology is for humans. There are any number of potential reasons why an AI might opt to control or wipe out humans. Even now, we don't understand how many AI decisions are made.

Scott Burson's avatar

The positronic brain was an Isaac Asimov invention. Star Trek's use of the term regarding the android Data was clearly an homage to Asimov. Let's not forget these things, kids :-)

Jan Steen's avatar

2025: LLMs -> a miracle happens -> 2027: machines with superhuman intelligence make all human intellectual activity obsolete.

Joy in HK fiFP's avatar

I'm not sure why this isn't considered straight-out fiction and judged on that basis alone. In which case, it's pretty darn good.

It makes me think of the 1938 broadcast of H.G. Wells's "War of the Worlds," by Orson Welles. In fact, I had to check the date, to be sure this wasn't an anniversary tribute.

I think we have the birth of a new fiction genre.

Bruce Cohen's avatar

That genre has been around for a while. See Charles Stross’ Accelerando or Greg Bear’s Strength of Stones.

Spartacus's avatar

AI 2027 is pathetic blabbering balderdash.

Dear lord. I mean, it doesn't even factor in global warming, or the voracious appetite for energy.

It's fatuous flibber-gibbering flatulence.

TheAISlop's avatar

If we can't figure out FSD in over a decade, how are we going to figure out agents in six months?

Just a thought.

Good read Gary.

Karl Munthe's avatar

They are willing to take bets (scroll to the bottom of https://ai-2027.com/about). You should make bets with them like you did with Miles Brundage.

Dave Lyle's avatar

There’s an underlying assumption of “technical rationality” behind all of the arguments for agentic AI that completely ignores the fact that you can’t predictably control complex systems with blanket rulesets written in the formal logic that computers can do math with, which boils down to the old saying “Not everything that counts can be counted.” Donald Schoen describes that mindset here in “The Reflective Practitioner”, and Gary Klein has described it more recently as the “Rationalist Fever Dream.”

https://ics.uci.edu/~dfredmil/ics203b-SQ05/papers/Schoen1983-chapter2.pdf

But even if you could completely capture knowledge in statistics (which you can’t), all of these people are whistling past the graveyard of stochastic drift in LLMs, and of the recycling of AI slop that you won’t be able to fix without breaking all of the other statistical associations in the models. There’s no central directing mechanism with the practical wisdom to judge and adjust the inputs of the various agents in context against a comprehensive world model.
