112 Comments
Larry Jewett:

Sam Altman, Dario Amodei and other CEOs regularly claim that AI is going to put everyone out of work.

Who could ever have guessed that people might not like that?

The AI CEOs are their own worst enemy.

Garrett:

Their job is to get funding. People need to stop pretending CEOs act in good faith.

Larry Jewett:

You’re right.

I’ll stop pretending.

Digital-Mark:

The nutcase trio. 😂

noneyo:

Futility. Substitute "social media" for "AI" in this piece and it could have been published in 2002. Wisdom, caution, and foresight do not win over greed in our system.

hempdrunk:

well not with that attitude they don’t!

Catherine Blanche King:

noneyo: The thing is that money and power (fake prestige) are self-generating--and as such, they tacitly lead one to abhor ideas and ways of life that are in profound conflict with those embedded in that increasingly careless and even rapacious view. ("When will I reach 1B?")

So wisdom, caution, and foresight don't share the same intellectual, ethical, political, or spiritual foundations as that view; they are contrary to it. The more insidious it becomes, the less it will "put up with" the other view. The rest is history. Also historical, however, is the tyrant painting himself into a corner with no place to vomit but on himself.

By the way, I just got a heads-up from my local bulletin board that they and other neighborhood information sites are reporting that increasingly scammers are responding to the notes on the site.

Doug Smith:

I so appreciate your work, Gary. But I don't understand your claim of "coding (where there is clear value)". To me, as a developer for over 30 years, it seems obvious that Big AI can't monetize in any area but coding, so they're pouring on the propaganda to amp up FOMO so tech managers will force developers to use it.

LLM-based coding has no "clear value" in any studies I've read that are not funded by companies who sell AI services. The METR study from last year (https://arxiv.org/pdf/2507.09089) in particular showed a decline in productivity, and the reasons for the decline aren't going away.

And the security and quality (https://techtrenches.dev/p/the-snake-that-ate-itself-what-claude) problems evident in the recent Claude Code leak show the fruit of dependency on LLMs for coding.

Then there are the real risks of deskilling; of model collapse when models are trained on generated code rather than code written by experts; the non-deterministic workflow that fosters dopamine spikes and therefore dependency; the loss of the ability to enter a state of focused flow because of the multitasking required to wrangle "agents"; the fact that "prompt engineering" is not a real skill but another lie of the industry; and so on.

You're so good at debunking the propaganda from Big AI. I wish you'd avoid it with coding too.

Digital-Mark:

Regarding coding, I'm inclined to say that the AI is not good there either (as per the case of the vibe-coding kids), and if it is shipping some code, my God, the security is non-existent.

Garrett:

Don't confuse vibe coding using AI with actual disciplined coders using AI.

Digital-Mark:

Who said I'm pro vibecoding? I'm totally against it, and real developers are not vibecoding.

Micha Hofri:

Doug, as the veteran developer you are, I am surprised you do not at least credit, as a possible great gain, what has recently been claimed about the MYTHOS developed by Anthropic. The idea is that it could lead developers like you to plug the myriad security bugs and errors that developers, even like you, have been creating in just about any significant piece of software lying around. We need to wait and see what comes of this, but the prospect is already enough to exhilarate me.

There is so much cybercrime, ransomware, spying by rival and enemy states...

Garrett:

Yes, I too can link bad studies that confirm my bias. Studies with poor measures of productivity. Anecdotally, I did a project using AI in 3 weeks that had been projected to take a team of three 6 months. Which led to the organization closing those two positions.

And you don't see value outside coding? It's impacting every industry from restaurants to robotics.

Larry Jewett:

Well, anecdotally, I did a project without AI in 3 hours that would have taken the company’s team of 3 AIs six weeks. Which led to the company firing the 3 AIs.

Stephen Bosch:

Call me when you need to maintain your project.

smalltime_eel:

How is not hiring 3 people for a project a net benefit to society?

Catherine Blanche King:

Gary: I won't list them here, but I regularly get related e-mails and online educational newsletters about the negative influence of AI on the education of children, in and out of the classroom. Adults are one thing, but such negative influences on children catch them in their developmental years, and in some areas of that development, you don't get that back.

Profusion:

My 13-year-old can't stand GenAI, and he's tried it.

Selling it as a way for nitwit CEOs to fire everyone is a curious form of marketing. Why would AI boosters think that is a good way to drive adoption and acceptance among the young?

Also, the more people that try GenAI, the more people see its obvious limitations. I get useful things from it for some of my work, but it makes the most chuckleheaded errors despite all of the supposed advances over the last couple years. Bottom line--most of the use cases require deterministic precision and reliability, which will never happen with a probabilistic tool.

Larry Jewett:

“Letting the Chat out of the Bag”

Probabilistic

Is probably bad

Based on statistic

It’s likely a fad

Parrot, stochastic

It mimics the folk

Chatbot, bombastic

A pig in a poke

Tom:

I am sick of the proliferation of AI slop videos on YT. Then again, in fairness, there is no shortage of human slop content across social media platforms. We are witnessing the cultural decay of Late Capitalism.

Catherine Blanche King:

Tom: Between holding facilities for so-called "illegals," and these data centers taking over rural and "unguarded" lands just because they can, it feels like "the people" are being invaded by zombie pod people.

On the other hand, in my less-extreme moments of fear, I have to agree with Gary--it's not all bad and probably will be with us anyway--cooler heads know there are ways to TRULY make the best of the situation--and I think the idea of good and intelligent people coming together and rising to the occasion is a real possibility (that nugget was on a BBC Media show).

Some countries are also taking the lead in developing sturdy regulations (with teeth) and even bans (Australia, Norway, etc.). Neither Gary nor those here who understand the present danger are alone in this.

Politically for the United States? What could be worse . . . .

Antony:

Couldn't agree more that so far Generative AI is a net negative, especially because almost all of its important benefits are still hypothetical (e.g., solving climate change, LOL) and many of its harms are already here (education, mental health, climate...).

Just to be a little contrarian I would question the real benefits regarding coding and brainstorming.

So experienced coders can code faster / run more projects. OK, some of them will make more money, and some companies will make more money off their backs. A benefit, but it's not that the world really needs more code in any meaningful way. It also lets people with no coding knowledge build digital "stuff" that works - not that they necessarily know how it works or whether it is full of security holes that will at some point repay their "effort". Again, it's fine, but we're not curing cancer, as sociopaths like Altman and Amodei so love to talk about.

And brainstorming...well, the way we mostly do it sucks anyway (much better approaches like SIT), but in my experience it suffers from the same issue as education-related uses: if you put in the sweat at the bottom of the value stack, you don't make the connections and "ah-has" that lead you to the top.

All said, keep up the good fight, and let's hope that you are right about 2028, and not just in the US.

Antony:

obvious correction to last point "if you DON'T put in the sweat at the bottom...." Maybe I just proved why I should use GenAI :(

Larry Jewett:

“The Ice-age Cometh”

We really need a bot

To keep the climate hot

To counteract the ice

A chatbot will suffice

The wooly mammoth waits

With tigers at the gates

We need the bots to keep

The predators asleep

William Bowles:

You say: "I honestly believe some future form of AI might be great"

Enlighten us, Gary. Okay, speeding up coding, aka taking the drudgery out of debugging; analysing DNA strands; stuff like that. Everything else seems to have a negative impact on us humans, from chowing down energy and water to more efficient ways of exterminating and surveilling us.

Digital-Mark:

That speeding up of coding is a farce, and an expensive one. I saw code that some AI was creating, and after 5 minutes it said the code was created by it. Talk about hallucinations. Don't get me started on the bills and other privacy intrusions.

Amy A:

I think there are opportunities in small, specialized, local models that don’t have the energy and data center needs, or the AGI hype. Mayo came out with something that doctors seem to think is reputable for earlier cancer detection, for example.

William Bowles:

When PageMaker appeared c. 1985 (with the Apple Mac), the first practical DTP software, I taught a course on it at the Fashion Institute in NYC. It was the first shot across the bows of the graphic design profession: it announced the consolidation (read: monopolisation) of the industry by big corps. Thousands were made redundant. Twenty years earlier, my cousin in London spent 5, 6, or maybe 7 years as an apprentice hot-lead compositor, and within months of finishing his apprenticeship his job disappeared: phototypesetting. All this technology is about cutting labour costs, whatever else it's about, and it has been inexorable since the 17th century. If the technology belonged to us, we would at least have reaped the benefits from it and perhaps decided how it's implemented, but as long as private capital owns it, we're fucked.

smalltime_eel:

Most stuff he writes includes similar sentiment, as if it's a given. I've yet to see an argument for how it would be beneficial (assuming by AI he means some sort of automation of large swathes of the economy).

William Bowles:

Allegedly beneficial to Big Capital, he hopes; definitely NOT beneficial to the drones thrown onto the garbage heap as surplus to requirements. At least, this is the objective.

William Bowles:

So how do you interpret China's implementation of AI? The Chinese state says it will not result in unemployment; indeed, jobs are a legal obligation of the state, so AI is not being used to reduce employment. The entire approach to AI is fundamentally different in China.

Jack Shanahan:

In response to a picture on social media (apparently real) of a drone dispensing machine in Japan, somebody posted a simple yet profound statement: in essence, that the drone industry developed naturally over the past 20 years or so, without anyone pushing it down people's throats. Drones turned out to add value, are enjoyable to use, and are reasonably priced; an entire drone hardware and software infrastructure developed naturally and continuously over time; jobs were added as others were eliminated; and new uses for drones are constantly bubbling up from below.

AI, as the same person observed, did not develop this way. At least not beginning about 10 years ago. It went from being viewed as a remarkable, yet relatively normal, technology to something that demands you sit in one polarized camp or the other. Pushed aggressively from above, with the sense that consumers have no option but to "accept what's good for them." Accompanied by increasingly political rhetoric about AI's utopian or dystopian role in shaping America's future.

It seems hard to convince the tech CEOs just how much backlash to AI is building across the US. It might not seem that way in Silicon Valley, but it's happening in ways that are beginning to 'break squelch'. The next year or two will be crucial in determining which way this goes: AI for the betterment of all, or AI that generates so much discontent that it becomes a major part of voting decisions in 2026 and 2028.

Jonathan Grudin:

They know; they just don't know what to do about it. In the Washington state legislature, most years there have been maybe one or two AI-related bills introduced in January, e.g., on facial recognition software. In 2026, there were 16 on AI and data centers, some openly opposed by tech companies. Only a handful passed, some significantly weakened by amendments. 2027 will be interesting.

rod jenkin:

It also might kill us all, which is kinda bad, so there's that.

Amy A:

My 7 and 8yo use “that’s just AI” as a way to dismiss something 🤭

--:

“Outside of coding (where there is clear value)” —Wrong.

It’s okay to admit LLMs have little actual usefulness. There’s no need to hedge with nonsense claims like “though it’s clearly valuable for coding…” or “though it’s phenomenal for search…” No, it’s not. It’s not good for any of those things or much of anything else.

smalltime_eel:

Yeah, people hedge all the time: "I'm not against AI in general," "it has some uses," "I'm nuanced and exploring how it could help in education."

Just... stop. Who cares? It's fine to just say it all sucks; you don't need to be "balanced". And like Gary says, society isn't in a better place now than four years ago. It's objectively worse, and a lot of it is in fact because of LLMs.

Sally:

Good. Let the backlash come; these companies are far too reckless and ignorant.

Oaktown:

This is the best news I've seen all day. Cheers for Gen Z and Gary Marcus!

Larry Jewett:

The AI backlash at the AI backwash

Purnima Gauthron:

If I were a developer at OpenAI or Anthropic, I would insert a ton of bugs on a so-called "bad day" and quit. Where are the morals of today's Gen AI software developers? Sam Altman, Dario Amodei, and the rest of the AI mafia bosses are only as powerful as the already filthy-rich developers behind them.