106 Comments
Tom
Sep 7 (edited)

Is GenAI the final sugar rush before the Great Simplification?

The tech Elite would like us to believe there is this new magical technology that will solve the major problems we face as a civilization, including disease, energy shortages, and climatological and ecological breakdown. But behind the scenes, many of them are constructing luxury bunkers in remote locations, which is why I say: pay attention to what they do, not what they say!

Instead of city-sized data centers, what it would have taken was a species-level awakening of wisdom, cooperation, and long-term thinking unprecedented in human history to seriously address these problems. It would have meant voluntarily choosing a simpler, smaller world to avoid a forced and catastrophic simplification later.

The fundamental predicament is this: Growth-based civilization is a temporary state. It is a bubble on the long chart of human history. This is the bubble made possible by the one-off “carbon pulse”, and it will pop. The only meaningful choice our species ever had was whether to deflate the bubble slowly with care or to let it burst violently.

Propagandized and manipulated by a rapacious Elite, it appears we have chosen the latter. The great work of the coming decades is no longer to "save" this system, but to create small pockets of resilience, to preserve knowledge, and to practice compassion—to be the stewards of the embers that will be needed in the long, simplified night that is to come.

Sadly, we chose the path of short-term thinking and hyper-individualism. The 1970s was the last time we had the energy surplus and the warning signs to make the choice. We chose more. We chose growth. We chose complexity.

And now, the bill is coming due—the GenAI bubble is a signpost on the road to a future that resembles our collective past more than it does a grand space opera.

Dean Hull

While I'm maybe not as dour as you, Tom, it is getting harder and harder to see how this literal ego-and-pride exercise of the Tech Elite doesn't result in ecological disaster, financial ruin, or both.

Claude COULOMBE

We must be wary of transhumanists and billionaire tech barons who develop ridiculous plans to colonize the universe when long-distance space travel is probably impossible. Otherwise, "Where are they?" as Fermi remarked. They are making a shameful ethical calculation, called "effective altruism," based on the future well-being of hypothetical descendants rather than on helping the men, women, and children who are suffering in their real lives now. In fact, they are seeking to morally justify enriching themselves without sharing, for the hypothetical good of a future humanity.

Arturo E. Hernandez

Why not simplification through transformation? AI may lead to small changes in us that lead to changes in everything. But the change comes through us, not from outside of us. Right now, AI presupposes that the big change will come from outside of us.

tjtibor

Well said. Simplification, however, is very difficult to describe and implement. Our interconnected world is, by definition, complex. It would be useful to learn about pockets of the world that could serve as examples of how a simpler, more sustainable way of life is possible.

Birgit Wahrenburg-Jähnke

Like Marine Protected Areas, which show that nature and biodiversity can recover more quickly than expected: https://mpatlas.org/

toolate

Complexity is not the problem. Being overly complicated is.

toolate

Think Irish monks before the Dark Ages.

Matt Scherer

I love the efforts by the GenAI boosters to try and explain away the MIT study by saying it has nothing to do with the technology itself; it's just that companies aren't using it right. That would be a perfectly valid argument if, say, "only" 25% or maybe even 50% of GenAI pilots were failing. But if 95% are failing, that means virtually no one can figure out how to use it effectively. And that, in turn, either means that Corporate America is collectively way stupider than anyone realizes (admittedly a non-zero probability) or that the technology itself has serious problems.

Joe

And we know it's failing for businesses because it's so unreliable. It's good enough for an individual to use for a query instead of a web search. Not good enough for real business use cases.

Brian Curtiss

Succinctly put and exactly on target.

Jonas Barnett

I'd characterize "Corporate anywhere" as less stupid and more just inveterate risk-takers. But that is what boards and investors have wanted. They want people who take bold risks for high rewards. I'd say in evolutionary terms, there are reasons for having slow-thinkers and more cautious folks over fast-thinking, high-risk, high-reward folks. However, our corporate decision-making structures are now skewed towards the latter type of people. To paraphrase Arthur C. Clarke, any sufficiently clustered risky behaviour is indistinguishable from mass stupidity.

Pramodh Mallipatna

In addition to the compute and energy costs, if the true cost of data were factored in, I think the foundation-model companies cannot be profitable for a long, long time. I have captured the math on the cost in this article:

The True Cost of AI: $1.5B, 500K Books, and a Broken Promise

https://open.substack.com/pub/pramodhmallipatna/p/the-true-cost-of-ai-15b-500000-books
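A quick sanity check on the headline figures (a back-of-the-envelope sketch, just dividing the numbers in the title):

```python
# Back-of-the-envelope: the settlement total spread across the books involved.
settlement_total = 1.5e9  # $1.5B, per the article title
num_works = 500_000       # 500K books, per the article title

print(f"${settlement_total / num_works:,.0f} per work")  # $3,000
```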

D Stone

Thank you Pramodh

Oleg Alexandrov

$1.5 billion to pay content creators is very little, especially for larger companies like Google, which have been a bit more careful to start with.

Bruce Cohen

That's just the coins they could shake out of the couches in the executive offices (but they're more likely to institute an across-the-board salary cut for employees and contract workers).

Oleg Alexandrov

Authors were awarded "$3,000 per work," or so I read. That is hefty. The core issue is that the market for AI is far larger than $1.5 billion, or even $100 billion.

Matt Kolbuc

Sam seems to be spiraling downwards into a reality of pure, insane delusion. Billions on Windsurf, then billions on some hardware device, and now there are reports OpenAI is going to start manufacturing its own GPUs / AI chips, so that's billions more.

These people are building out Manhattan-sized data centers, have thrown over $800 billion into this tech so far, and have yet to make a working Taco Bell ordering assistant. Are these big tech CEOs so delusional, with their heads so far in the clouds, that they still think this is a winning trajectory?

toolate

do you see them getting less rich in the process?

Larry Jewett

Altman and the other CEOs of the leading AI companies know that the US government will NEVER allow them to fail.

They are like the big banks in that regard.

One way or another (through multibillion-dollar government contracts, government bailouts, or government buyouts) these companies will be underwritten by the US taxpayers. Make no mistake: the American public will inevitably foot the bill for their profligacy, whether we like it or not.

For companies like OpenAI, making a profit would be nice, but it is not necessary to their continued existence.

Brian Curtiss

No way the government would NOT let them fail. These companies are not Ford. We honestly don't really need LLM AI at all. Let's remember why integration companies exist: lots of players in business systems, where you have to bridge gaps between them and integrate them. Will AI write an integration platform? Will it be able to take over work functions from analysts using financial software and brainpower to get the books right? Will it be able to spot trends and predict things the way humans can? Maybe. But if LLMs are the best we've got for the foreseeable future, it's going to be a while.

Steersman

LoL. Reminds me of Woody Allen's Sleeper, the Jewish robot tailor coming back with a suit several sizes too big ...

Mehdididit

Also consider that Peter Thiel, of Palantir, the supposed surveillance company of the future, is doing a "top secret" lecture series on the Antichrist in SF that everybody knows about.

Steersman

🙂 Learn something new every day, though it looks like the cat is out of the bag 🙂:

"How Peter Thiel’s Antichrist Fixation Adds Up for Me

The billionaire is giving a four-part lecture on the biblical bogeyman in San Francisco. Should we be scared?"

Bloomberg: https://www.bloomberg.com/opinion/articles/2025-09-05/peter-thiel-is-warning-silicon-valley-and-the-world-about-the-antichrist

Archive link: https://archive.ph/VXO6X

Larry Jewett

Can't spell Antichrist without "A" and "i"

Steersman

🙂 Though, speaking of Jungian archetypes, AI is probably less a Christ or Antichrist figure than, at least in the fevered imaginations of its salesforce, the Oracle at Delphi.

But I -- and no few others including Norbert Wiener, the progenitor of the whole field of cybernetics -- see it more like the Jewish golem. Y'all might have some interest in this book of his on the topic:

"GOD AND GOLEM, Inc. ; A Comment on Certain Points where Cybernetics Impinges on Religion"

https://monoskop.org/images/1/1f/Wiener_Norbert_God_and_Golem_A_Comment_on_Certain_Points_where_Cybernetics_Impinges_on_Religion.pdf

Of some related interest:

"Curse of the Ghetto Golems"; David Cole

"Golem tales always follow the same template: A Jew builds a monster of clay to destroy his enemies, but in the end the golem turns on its creator."

https://www.takimag.com/article/curse-of-the-ghetto-golems/

https://en.wikipedia.org/wiki/Golem

Larry Jewett

OK, so you can't spell Golem without L and M

Larry Jewett

Beware The AIntichrist

Larry Jewett

He ain't Christ, he's AItichrist

Larry Jewett

"Beware of false chatbots!"

Joe

Clammy Sam. Hopefully he's well on his way to being the most hated man in Silicon Valley.

Seth Talley

It should be noted that the Census Bureau's definition of AI is broader than most people would consider appropriate to the discussion at hand:

"For all questions referencing Artificial Intelligence (AI), the following definition is available as a pop-up: AI Definition: Computer systems and software that are able to perform tasks normally requiring human intelligence, such as decision-making, visual perception, speech recognition, and language processing."

This includes any machine vision used in part-picking or placing, a backbone of automated assembly for decades, and any speech-to-text transcription, a backbone of business phone systems for just as long. That fourteen percent peak adoption rate in June looks a lot less impressive when you realize the question covers Apple’s Visual Voicemail from iOS 1.0… and makes the two percent drop over eight weeks downright precipitous.
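For what it's worth, a back-of-the-envelope sketch of that decline (treating the "two percent" as two percentage points off the June peak, which is an assumption on my part):

```python
# Rough arithmetic on the Census figures cited above.
peak_rate = 0.14   # ~14% adoption at the June peak
drop = 0.02        # ~2 percentage points lost over eight weeks

print(f"Current rate: {peak_rate - drop:.0%}")                 # 12%
print(f"Relative decline: {drop / peak_rate:.0%} in 8 weeks")  # ~14% of adopters gone
```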

Mhaan

Nice. The sooner it goes down enough that it dies out entirely, the better for everyone!

D Stone

"The census data is just a flesh wound." -- The Black Knight

Geoff Anderson

*dead*

Matt Kolbuc

Well, what they're doing is simply unsustainable. I know they may be delusional enough to think money is infinite, but it's not, and at some point reality is going to check in with them. They've proven themselves incompetent through their unwillingness to pivot, instead just doubling down on LLMs and transformers, so at some point they will have to crash and burn.

Bruce Olsen

How did AI adoption collapse? Two ways: gradually, then suddenly.

The >250 group starts ramping down first, probably due to their longer budget review cycles and the (often) unrealistic pilots this group likes to fund, sustained by bribery (I mean success theater). Plus, this group's spending accelerated at the start of 2H24, which is about how long a large company will take to evaluate something significant like AI and decide whether to fund it permanently or toss it on the ash heap of failed IT initiatives.

The 100-250 group seems to maintain its spending the longest, but by the time 2H25 is underway, all the declining-adoption groups have arrived at "suddenly."

I wonder what value the 1-4 and 20-49 groups still find in it. Viewed as a research assistant prone to occasional bullshit, an AI such as ChatGPT can still save time when compiling data, even with the added burden of fact-checking and reviewing its work. If you aren't comfortable with writing, it could also offer some value.

But at all stages of working with one, you will need to remain aware that it really doesn't have a clue what you're going on about, and will sometimes make mistakes no human would make.

Joe

It seems that the larger the company, the greater the worry about the liability risk of using garbage/bullshit data from the AI for important decisions. So it's left to be merely something that's sort of helpful in some cases like writing an email. But how much will they spend on that??

Hans Sandberg

Made me think of Barron's "Burning Up" cover story on March 20, 2000. "When will the Internet Bubble burst? For scores of 'Net upstarts, that unpleasant popping sound is likely to be heard before the end of this year. Starved for cash, many of these companies will try to raise fresh funds by issuing more stock or bonds. But a lot of them won't succeed. As a result, they will be forced to sell out to stronger rivals or go out of business altogether. Already, many cash-strapped Internet firms are scrambling to find financing.

An exclusive study conducted for Barron's by the Internet stock evaluation firm Pegasus Research International indicates that at least 51 'Net firms will burn through their cash within the next 12 months. This amounts to a quarter of the 207 companies included in our study. Among the outfits likely to run out of funds soon are CDNow, Secure Computing, drkoop.com, Medscape, Infonautics, Intraware and Peapod. (For a full list, see Find Your 'Net Stock.)"

Dragon Field

On Salesforce's Q3 earnings conference call, the company stated that customer adoption of their AI product, Agentforce, had increased 60% Q-o-Q, but with limited revenue impact.

My wife works as a software procurement manager for an S&P 500 company, and I often ask her about their spending on AI tools. Her response has been that they have spent very little incremental money on AI software so far, outside some small spending on pilot projects (mostly personnel expenses). If a vendor claims to have AI functionalities and modules in the production software bundle, she just negotiates these add-ons as freebies. She also says that they now ask every vendor during negotiations whether they have AI in their tools; if not, it counts as a negative factor in their vendor selection process.

If this is also true in other large enterprises, the revenue and ROI from AI investment are still years away. Personally, I am very suspicious of the claims of AI revenue in many of the earnings calls I have attended.

Joe

You would love reading Ed Zitron's analysis. https://www.wheresyoured.at/

Frodo

I work in Sales at a Marketing Tech company. Every RFP I work on has AI mentioned, but it's a fraction of what companies are looking for in our industry.

Predictably, we have spent the last 12-24 months investing heavily into our AI capabilities and adding new AI functionality into our product suite. Our website is riddled with the obligatory "AI-Powered _____", "Do ___ with GenAI", and every other marketing slogan you can imagine.

It's early days for some of our new AI capabilities, but our operating costs associated with some of our functionality make it price-prohibitive to a majority of our client base.

I want to say that again: AI is apparently so expensive for us to operate that the costs we need to pass on to our customers are too high for anyone not in our large Enterprise cohort. I would imagine a similar dynamic is occurring at many software companies.

Michael

Yep. I sell for AWS…it’s a joke over here too.

We have huge AI quotas but any solution we sell barely generates any revenue, even if it’s a successful application of AI that benefits the customer.

David Andersen

Just ‘very suspicious’? It’s all hand waving and propaganda at this point.

Larry Jewett

HopenAI has a date with Reality

Stefano Boscutti

The bigger the bubble, the bigger the bust!

Jonathon

Have you seen Ed Zitron’s work? He has been sounding the alarm over the wonky economics of AI startups for some time now.

“… OpenAI, by my estimates, has only made around $5.26 billion this year (and will have trouble hitting its $12.7 billion revenue projection for 2025), and will likely lose more than $10 billion to do so.”

https://www.wheresyoured.at/ai-is-a-money-trap/
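Running the quoted numbers (a rough sketch of the ratios; these are Zitron's estimates, not audited figures):

```python
# Rough ratios from the figures quoted above.
revenue_so_far = 5.26e9  # estimated 2025 revenue to date
projection = 12.7e9      # projected 2025 revenue
est_loss = 10e9          # estimated loss (">$10 billion")

print(f"Revenue so far: {revenue_so_far / projection:.0%} of projection")  # ~41%
print(f"Loss per dollar of revenue: ${est_loss / revenue_so_far:.2f}")     # ~$1.90
```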

Joe

Yes. It's awesome.

Jim Ryan

I don't see how their revenue can keep increasing. There are serious limitations in their product, and I think CEOs are reading all the articles and papers about it, seeing the studies on ROI, and seeing no ROI for 95% of adopters. That doesn't make them think there is an issue with the way it is implemented rather than with the product itself. Do you think the recent economic downturn will make companies shy away from AI for now?

Joe

The problem is that their costs outgrow their revenue no matter what their revenue is. Scaling up will only increase their losses.
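A toy model of that dynamic (hypothetical per-query numbers, purely for illustration): if the marginal cost of serving a query exceeds the marginal revenue from it, every additional unit of scale deepens the hole.

```python
# Toy unit economics: a negative margin per query means losses grow with volume.
revenue_per_query = 0.002  # hypothetical $ earned per query
cost_per_query = 0.005     # hypothetical $ spent per query (compute, energy)

for queries in (1e9, 10e9, 100e9):
    loss = (cost_per_query - revenue_per_query) * queries
    print(f"{queries:.0e} queries -> ${loss:,.0f} lost")
```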

Joy in HK fiFP

I saw that statement about anticipating monetizing free ChatGPT. I can't imagine how eager the general public, now using it for free, will be to switch to paying a fee.

jonW2248

I think it's worse than that. "Monetizing free ChatGPT" doesn't mean making the people using it pay. It means selling their queries to advertisers and others, and also selling the right to influence what the chatbot tells those users. That is the true monetization of this technology, just like monetizing search or monetizing social media. Much more lucrative (and corrosive) than asking people to pay fairly for a fair product.

Joy in HK fiFP

Yes, thanks for pointing that out. I saw that possibility as well, but stuck with the easy pickings.

Jasmine R

They already allowed Google to index shared chats. That wasn't an accident. I think you're right about the direction that takes, and considering the amount of hypersensitive info people tell these chatbots (how do I find a young wife, how do I end my existence), any monetization would be deeply invasive.
