The biggest danger to employees is that their employers believe the hype and fire them.
Klarna has already been through this full cycle of believing the hype, firing hundreds of support staff, watching customer satisfaction go off the cliff, hiring back hundreds of people… one of the little-recognized benefits of the speed of AI business is that we’re getting to witness the entire Gartner Hype Cycle in the space of months, not decades.
Yeah, I forgot the part of the cycle where the CEO tries to pretend it wasn't his fault and he still deserves his outsized salary package in spite of this colossal blunder - but I suppose you're all familiar with that already.
Those firings were presented as a huge opportunity at board meetings with all around the table saying yes so as not to appear Luddite.
When the strategy goes in the ditch, everyone quietly nods and a rehiring takes place. The problem is that the most valuable people have already found better gigs and aren't coming back. The ones who respond are the ones nobody else wanted.
And the CEO already pocketed the bonus for the very temporary share price spike…
Seems like the scarier option is that AI will displace some workers in the inevitable enshittification schemes, and the quality and safety of most everything will just degrade in a race to the bottom.
In my area (medicine) it has become increasingly clear to me that in order to use AI well, I really need expert-level knowledge to quickly sort the wheat from the chaff. Without expert-level knowledge, the alternative is to double-check virtually everything. Some have suggested using other LLMs to do that, but I find the different models' errors too often agree with one another.
I fear that most will choose the shortcut of simply believing the LLM outputs.
I think your last sentence has already been proven correct. Last I read, well over 300 lawyers had brought bogus, LLM-fabricated case law before the courts and got caught by a judge for doing it. I'm sure there are even more who escaped detection by lazy, incompetent, or overworked judges.
Question for you: what do you think about the new trend of doctors asking your permission to record patient exams, using AI to turn patient statements into notes? On the positive side, it's good to free a doctor from a computer keyboard (now that all records are electronic) so they can instead focus on the patient.
On the other hand, I've had my medical records hacked 4 or 5 times, which effectively eviscerates HIPAA, and I am concerned about where these recordings are kept. If they are hacked, crooks will now also have our voice imprints, which can be used in AI deepfakes or hoovered up by Palantir and weaponized against perceived enemies by our new authoritarian rulers. Doctors' notes will also include even more of the personal information that used to be screened out of them. How will this affect a patient's willingness to be honest with their doctor about personal details that could make the difference between a good diagnosis and a bad one?
You raise a lot of things to think about, with no easy answers. Offloading cognitive load is attractive and has its advantages. I surely like my own doctor actually listening to me and not madly using his PC. But offloading this comes at a price. You named the possibility of errors and of hacking. Add to those the bigger issue that it may be dumbing your doctor down.
“Dumb your doctor down”?
Isn’t that a Queen song?
In software I've already witnessed and heard of plenty of pressure from suits, who have no technical ability or insight, to ship sloppy LLM code that appears *to them* to work. The reality is that decisions aren't made by people who have deep subject-matter expertise; they're made by people who have shallow expertise in choosing short-term gains with unknown long-term consequences.
But this is all likely temporary. Even in heavily consolidated and monopolized industries there is a threshold of enshittification beyond which customers will endure the pain of switching to another brand rather than putting up with useless untrustworthy crap. Giants have toppled in the past chasing quarterlies while ignoring warning signs that they've cut too many corners.
In your industry it will likely take legislative action since the stakes are so high and internal policy so tightly bound to regulation. In mine, things will just shuffle around. People will lose jobs at generic tech firm A and, after due panic and lamentation, join or found generic tech firm B making product Green which is just like product Blue they used to make, but green.
The worship of the God of Speed predates AI, though.
Move fast and break things.
The main problem occurs where two fields like medicine and software meet.
It’s difficult to switch to a different brand when you are dead.
Yes. And my view is that we should use software less, and use simpler, better-fitting software, wherever lives or anything else critical is at stake.
But this seems to be the minority view among software people, to put it lightly. There's a weird popular culture perception that machines are faster and more reliable than people, and this is often just not true. And the capacity of software to automate complexity (often poorly) creates an open invitation to generate endless and ever-shifting complexity in policy, since the perception is that software can alleviate the burden it causes (it often can't). See for example medical coding and billing systems.
Generally I want people looking at and relying on computers less and existing in real life more, but somehow the more software we create the more human time and attention it consumes and the less autonomy we have. Something has gone terribly wrong and “AI” is just the latest step down this road.
I argued exactly this here!
https://twodollarbill1.substack.com/p/companies-dont-care-if-ai-works
you're right
That's what I personally fear.
I think we are also seeing big companies use AI as an excuse for laying off people that they want to get rid of for the more usual reasons. This has the added benefit of doing a little corporate virtue signaling, declaring that the company is on the front lines of technology while avoiding having to mention the failures of projects, etc.
Bingo.
You're honestly not much better than the hype propagandists you tend to criticize. What proof do you have that some day 'this will probably change', or what justification for qualifying your disagreements with 'yet'?
amen
Spot on. I really have to wonder why ultimately Gary is a booster for the eventual development of AGI (just not primarily thru LLMs). What good is all of this to humanity? It will only put us all out of work, put us under constant surveillance and ultimately our fate will be left in the hands of a few trillionaires. I see nothing meaningful being done in any world government to prevent this age of techno feudalism.
That's the moral and pragmatic aspect, which is secondary unless someone gives even just a faint basis for AGI being a coherent concept.
So far I haven't seen anyone do so.
AGI (or its successor, ASI) only makes sense if one believes bots are the rightful heirs to (replacement for) the human race.
Unfortunately for us humans, that is precisely the philosophy that many technoligarchs subscribe to.
As Karen Hao has pointed out, AI is really a religion, and it is largely headed by religious extremists. Like all religious extremists, these particular ones have zero use for, and zero tolerance of, those outside their tiny group. To them, we are the ants at their picnic: we need to be eliminated (presumably with rAId).
Yeah, but again, that relies on the unmerited assumption that AGI/ASI are possible (coherent) to begin with.
You are right.
If it’s not possible, it’s all a moot point.
I guess I subscribe to the precautionary principle. Assume it is possible and work from that basis.
But I could be wrong. Have been before (once or twice)
The problem is that even if AGI and ASI are not possible, the current or future versions of AI could do a lot of damage to human society.
So, it makes sense to have a policy before it happens — even if it never happens.
A vendor sent us an email admitting that their AI isn’t as good as they thought, and then saying that it’s about to automate all the work! The problem? They don’t have great data; it’s mostly scraped from the web. If you ask for a market size, it gives you free junk market research. (I assume this vendor mostly sells to VCs, where inflated, made-up numbers are just great; Whoop will be Allbirds, but they’ll cash out before the crash.) It doesn’t matter how good the agents are if they can’t access the right data.
Even if one has the right data to begin with, in many cases that data needs to be updated on an ongoing basis.
Outdated data is useless for many applications. Shopping is a good example.
OpenAI’s grand plan for chatbot shopping flopped largely because the prices were not properly updated.
Given LLMs’ tendency to “hallucinate”, it’s not clear why anyone ever thought it was a good idea to have AI agents do the shopping for people.
It’s easy to imagine a scenario where an AI maxes out someone’s credit card buying completely unnecessary items.
As I see it, the primary goal in some of these AI companies should not be achieving artificial intelligence but achieving human intelligence.
Gary - this is the most grounded piece on AI and work I’ve read this year. Especially the “jagged” framing. It’s the right mental model.
The line that stood out most: “Don’t focus on replacing humans. Focus on how you can use AI to help the ones you’ve got.”
That’s not just good advice for employers. It’s an architectural principle.
The reason AI keeps disappointing in high-stakes environments - healthcare, finance, regulated industries - isn’t capability. It’s that we keep deploying AI without designing the human authority structure around it. The AI executes. Nobody governs.
What I’d add to your 9 reasons: the missing layer isn’t better AI. It’s a governance control plane - the infrastructure that sits between human authority and AI execution, enforcing consent, escalation and audit at runtime. Not in a policy document. In the system itself.
Aviation figured this out decades ago. Air Traffic Control doesn’t make pilots smarter. It makes the entire system governable.
Healthcare AI needs the equivalent.
We’re building it - specifically for healthcare, specifically starting in the home with elderly patients.
The model is simple:
AI executes. Humans govern.
Every action consent-bound.
Every decision traceable.
Clinical authority never delegated.
For full human-AI synergy we need infrastructure to make the new Op Model enforceable - not just aspirational.
Andy Squire - Founder, PatientCentricCare.AI · Safety OS™ · Runtime Governance Infrastructure for AI in Healthcare
This reeks of AI Slop
Newsjacking in the Age of AI: even more odious than newsjacking in the Age before the Age of AI.
100%
“starting in the home with elderly patients.
The model is simple:
AI executes…”
Yikes
Larry - you’ve named exactly why we built it.
“AI executes” is the dangerous version of this sentence - if it stops there.
The full model is:
AI executes. Humans govern. Patients are protected.
The governance infrastructure we’re building isn’t a compliance wrapper added after the fact. It’s the enforcement layer that makes patient protection structural, not theoretical.
I’m a Patient myself, so I want Patients to stay safe.
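For concreteness, here is a minimal sketch, in Python, of what a consent-bound, escalate-and-audit gate of this kind could look like. Every name in it is a hypothetical illustration of the model the comment describes (consent-bound actions, traceable decisions, clinical authority never delegated), not an actual PatientCentricCare.AI API.

```python
# Hypothetical sketch of a runtime governance gate; all names invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProposedAction:
    patient_id: str
    action_type: str   # e.g. "medication_reminder", "dosage_change"
    clinical: bool     # clinical actions may never auto-execute


@dataclass
class GovernanceGate:
    consents: dict                          # patient_id -> consented action types
    audit_log: list = field(default_factory=list)

    def authorize(self, action: ProposedAction) -> str:
        """Return 'execute', 'escalate', or 'deny', auditing every decision."""
        if action.clinical:
            decision = "escalate"           # clinical authority never delegated
        elif action.action_type in self.consents.get(action.patient_id, set()):
            decision = "execute"            # consent-bound execution
        else:
            decision = "deny"               # no consent on record
        self.audit_log.append({             # every decision traceable
            "time": datetime.now(timezone.utc).isoformat(),
            "patient": action.patient_id,
            "action": action.action_type,
            "decision": decision,
        })
        return decision


gate = GovernanceGate(consents={"p-001": {"medication_reminder"}})
print(gate.authorize(ProposedAction("p-001", "medication_reminder", clinical=False)))  # execute
print(gate.authorize(ProposedAction("p-001", "dosage_change", clinical=True)))         # escalate
```

The point of the sketch is only that the rules live in the execution path, not in a policy document: nothing the AI proposes runs without passing through the gate.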
They are helpers, sometimes. They can assist not the biggest companies but the one-person operations that cannot afford a hire: customer service, mail prioritizing, after-hours help. People who know what they're doing can get a big boost in productivity. And for now, that's it.
Since AI bots have been taking over phone trees, which now often have NO option to talk to a human being, even when the bots are incapable of assisting a customer or even understanding the issue at hand, I have noticed a serious decline in customer service and a huge increase in the amount of time I have to spend finding a solution to business errors. I DETEST THEM!!!! They are infuriating.
I will pay extra money any day for the products of a company that provides responsive, competent customer service over those of a bargain-basement company that walls itself off from customer feedback and its own egregious errors.
Bullshit!
solid response
Just listened to you on Alan Alda’s Clear and Vivid. Excellent episode. 👏🏻
I 100% agree with your absolutely rational 9 points. However, what scares me is that the sociopaths ruling our society and some of the biggest companies don't make rational decisions, as we can see these days with the brilliant idea of attacking Iran without a plan.
As a software engineer, I know that reducing a team of engineers by substituting most of them with stochastic parrots is going to be a technical, economic, and human disaster. But I also know that they are going to do it, because for decades CEOs have only been interested in the results of the current quarter.
So I'm not scared of AI, or rather, not scared of the LLMs they are imposing as a synonym for AI. I'm scared of the blitzkrieg against intelligence they have launched to redefine what intelligence is, to monopolize it and rent it to us, and to wield the best weapon of mass manipulation ever.
Gary, this type of fear-mongering language is unhelpful ('don't panic yet'). It is never helpful to advise someone to panic, whether now or later.
Readers rely on people like you for factual and measured discourse; please don't resort to inflammatory language.
Seems to me Gary is doing the opposite of what you say, and is instead responding to the overwhelming media hype about impending job losses now and in the immediate future. Where did Gary advise people to panic in this article?
The implication of the phrase 'don't panic (yet)' is that panic will make sense at a yet to be determined point. This is unhelpful.
Gary's point is essentially that current technology is not capable of replacing most jobs. That would be a more factual and less inflammatory way to say the same thing, in my opinion.
I am a big fan of this substack, so it was unexpected to see an implied call to panic (eventually) in the caption.
The future will bring many challenges and obstacles, and clear-eyed, rational response will always be the best approach.
Well, I read it a) as a counter-argument to all the AI boosters wanting you to panic by stoking your FOMO fears, but also b) as the famous H2G2 friendly warning, "Don't Panic".
AI-pocalypse now!
At this moment in time, AI adoption in coding is rewarding the biggest talkers, and not the people who can walk the walk.
It seems like something big is going to have to crash and burn before the talkers are told to shut up.
You can’t spell PANIC without “A” and “I”.
Or PAIN, for that matter
And without A and I, chatbots would be at a loss for words.
The 'don't panic yet' frame is right for the macro numbers — but the accountability gap in AI-mediated employment decisions is already here, not coming. When an AI system recommends hiring, firing, scheduling, or task allocation, the question isn't just whether aggregate employment holds. It's whether the affected person can answer: who authorized this decision about me, under what policy, and how do I contest it? The Klarna case is a useful data point precisely because the accountability chain is invisible. Klarna can claim efficiency gains. The displaced workers can't access the authorization record that explains what the system was permitted to do and whether it acted within scope. That's not a future governance problem. It's the current one. https://www.linkedin.com/pulse/governed-ai-proliferation-evidence-roi-building-trust-infrastructure-suw5c
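To make the 'authorization record' idea concrete, here is a minimal sketch of the fields such a record might carry so that an affected person could actually answer those three questions (who authorized it, under what policy, how to contest it). All names are my own hypothetical assumptions, not any real system's schema.

```python
# Hypothetical sketch of an accessible authorization record for an
# AI-mediated employment decision; every field name is invented.
from dataclasses import dataclass, asdict
import json


@dataclass
class AuthorizationRecord:
    decision_id: str       # stable identifier the affected person can cite
    subject: str           # who the decision is about
    authorized_by: str     # the accountable human or role
    policy_ref: str        # the policy clause that permitted the action
    permitted_scope: str   # what the system was allowed to do
    action_taken: str      # what it actually did
    contest_channel: str   # how the subject can appeal


record = AuthorizationRecord(
    decision_id="dec-2025-0142",
    subject="employee-8841",
    authorized_by="role:cx-operations-lead",
    policy_ref="workforce-policy/section-4.2",
    permitted_scope="recommend_only",
    action_taken="recommended_termination",
    contest_channel="appeals@example.com",
)
print(json.dumps(asdict(record), indent=2))
```

The contrast with the Klarna case is exactly that no such record is visible to the displaced worker: without it, "the system acted within scope" is an unfalsifiable claim.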
"don't panic yet" is doing SO much work in that headline... like what exactly does panic-worthy look like? because I feel like by the time we agree it's bad it'll already be too late to do anything
This is great advice "Focus on how you can use AI to help the ones you’ve got", but hard when near term apparent wins loom so large: https://technoist.substack.com/p/who-is-the-future-for
The 'modest ROI' finding is consistent with what I hear from people actually deploying this in orgs, not the ones presenting at conferences. The conference story is transformation, the actual story is productivity gains in specific tasks, friction everywhere else.
Where I'd push back slightly: the individual-level impact is already happening even if the aggregate numbers look calm. People who've gotten fluent with AI tools are doing the work of two. That doesn't show up as unemployment yet - it shows up as not replacing the person who left. The aggregate takes longer to crack.