They won't believe us but they should listen to the MythBusters. Never build a machine without a kill switch. And never let anything self modify (that's my bit).
Kind of the nature of evolution -- in man and machine ... 😉🙂
FYI This is the NYTimes article that Gary refers to, from this morning. (I have three more "gift articles" for the month.) GARY IS NOT KIDDING. I didn't share it at first precisely because it "scared the xxxx out of me."
There is another one on CHINA's stealing AI stuff and their network of fakery as they try to quash Chinese dissidents who have moved away from China. (I wonder why?)
But in my view from the wilderness, the Military MUST side with the better angels of those who presently occupy Congress and the Supreme Court. Here is the article in full:
https://www.nytimes.com/2026/02/24/us/politics/pentagon-anthropic.html?unlocked_article_code=1.O1A.2aOv.QidICvp7TIfo&smid=url-share
Ultimately the military does what the people above the military tell them. And AI doesn't know how to say “no”. That's the only issue here.
Note how the AI uses nukes 95% of the time, not 5% of the time. This STRONGLY suggests that what we have here are not “hallucinations”, but the opposite: the AI couldn't see any plausible path to keeping US hegemony (because all other weapons wouldn't work… the US has lost the arms race), thus it picks the approach that has the best chance of working.
Of course most humans would simply say “hey, deploying a plan that has a 1% chance of success is nuts even if the alternative is a pile of plans with a 0.1% success rate”… but it's possible to find humans who would decide to bet everything on that 1% chance. With AI you simply don't need to conduct that search: most LLMs pick the 1% success rate over the 0.1% success rate pretty often, and LLMs don't refuse to answer, because that's how they were programmed.
The key here is that the models are a reflection of their training data. So the output may be a reflection of conventional wisdom based on gaming. I'd be more worried if it were true military think tank data and planning data.
> Ultimately military does what people above military tell them.
Pete Hegseth is the *civilian* who sits above the military. Above him is Trump.
We are in serious trouble.
Absolutely, but my point is that AI doesn't move the needle all that much. It just makes it easier to do the same stupidity that would be done without AI, anyway.
Thank you for sharing the gift link!!
Thank you for sharing!
The irony is that the Anthropic models are a lot better than the others at saying "no", see Peter Gostev's "bullshit benchmark" for LLMs.
Well, I called Collins and King, my two senators... amazingly, I got a human being at Collins's Washington office. Told them please don't let Trump annihilate the planet. I did ask nicely.
Did you offer him the $100 million that Miriam Adelson gave Trump for his campaign coffers, so he would start a war with Iran?
If not, your vote is worthless.
AI slop goes from cringy to deeply fucked up in a war zone
this is by no means exhaustive it's off the top of my head but these are some of the things you can do right now to hijack and manipulate drones or autonomous firing solutions. as long as there's an AI there's a way to hack it.
The "False Positive" Friendly Fire
GPS Spoofing & Redirection
Target "Mirroring" (Visual Spoofing)
Meaconing (Replay Attack)
AI "Hallucination" Manipulation
The Sovereignty Loop
Electronic Warfare (EW) Hijacking
It's completely natural that amoral morons would put an amoral moron in control of the US military.
The pathocracy busy replicating, itself, until the wheels of the entire bus fall off. And we are supposed to just be quiet spectators to our own annihilation. Do we exceed their expectations, or go down in silence?
Hegseth is vastly worse than amoral: https://www.pbs.org/newshour/nation/what-to-know-about-the-archconservative-church-defense-secretary-pete-hegseth-attends
Well yes, but the US warmongers don't have any problem blowing away innocents the 'OLD' way, do they! So all 'AI' does is remove human culpability from the process. After all, all those men and women in tin huts with a joystick, eradicating figures on a monitor with gay abandon as their drones splatter people across the Caribbean Sea. It's all those men in B-52s dropping millions of tons of bombs on Vietnamese peasants from two miles up, using VisiCalc to calculate the tonnage needed to do it. We've been automating war for over a century. And isn't 'AI' the logical end product of a culture built on waging war?
the removal of culpability is something i have been meaning to write about. very serious issue.
And it's a false argument, because it's humans who still make the decisions to deploy the weapons and who design the software used to control them; it just creates the illusion that humans are no longer in control. Very useful propaganda.
Gary, if I may telescope a bit here: From the field of philosophy, there is the disturbing phenomenon of the subconscious harboring existential, even psychic and image-rooted ideas that the thinker/person lives somewhere between reality and no-where and so that thinker-person does NOT understand even themselves as "really real." And if one is not really real, what stake does one have in anything?
The present problematics in the neurosciences, however, often reflect just this idea, but from the thinker being immersed in it, and not yet as having understood it or its vastly influential cultural manifestations.
This philosophical comportment also shows up in those who have fewer humanizing influences in their lives but yet they are functional, and notably as long as THEY are in control of setting their own distorted and limited boundaries for everyone else to be "normative" about.
In brief, there is a careless nihilism that springs from our (earlier centuries') bouts with relativism that never was resolved and that leaves especially those with NO FIXATIVES in their individual informal and formal experiences and education. Such deeply philosophical ideas are, themselves, not mere concepts to understand, but rather become (as absented) the actual performative lenses through which one constantly interprets the world, others, and themselves in it.
Thinkers left with undeveloped egos are worse off than being blind, as we see in many (not all) techies. Suffering from the same "nowhere man" anti-place in the world, they don't have much of a chance to do anything but stare blankly at the idea of responsibility (or culpability), and they see the world as entirely transactional, with a low bar for, e.g., their ideas of the good or of excellent human behavior. (Trump is the poster boy for depicting this shallow interior mess.)
But (I sound like I am fawning, but NADA), your story of your mother's life and you in it speaks of lots of "fixatives" that probably and constantly went against a "Hegseth" or (too commonly) the absences we see in many tech and political mentalities. (Aristotle thought that good parents weren't just good to have, but essential for across-the-board healthy human development.)
As you probably know, however, one's philosophical viewpoint is to all the other significant human stuff as a bowl is to the salad it holds together. But I would be careful not to project onto others the influences you received from your own developmental background.
Lastly, and as an aside, you mentioned Harry Stack Sullivan--his work is excellent and central to the work of Bernard Lonergan whose body of work I have studied for years. Lonergan cites Sullivan in his major works--always good to bring forward those who do us all the favor of writing and leaving behind great insights for us to savor and try to live up to.
The outcome of the current Meta lawsuit will be an important precedent, no? Profit-hungry tech bros have been trying to hide behind IP and their man-made algorithms to escape accountability for far too long.
Do you think if the plaintiffs win that will open a door to bypass Section 230 protections in order to regulate social media platform algorithms too? Algorithms are not free speech.
Let me play the devil's advocate. Go watch the movie House of Dynamite, which is tightly based on the reality of how a nuclear launch on the US might happen. It's a scary movie, and it's scary precisely because a nuke is launched from "somewhere" in North Asia and we don't know who did it. In the movie (spoilers ahead) the humans are sort of frozen. They can't make decisions. The missile gets closer and closer to Chicago, and the humans have 25 total minutes to decide whether to launch a nuclear attack back. But can they? I totally get your fears of an AI nuclear war, and you are 100% right to bring them up, and thank you for doing that. But just to be complete: what about the opposite scenario, where the humans freeze and can't make a decision that, let's say, a machine might make better? What if AI in this scenario could locate the source of the attack and fire back, relieving the humans of the insane moral duty of incinerating millions, a duty which might be necessary to prevent further war but which, due to human frailty and emotion, the humans can't carry out?
Good points, and further evidence that we are the equivalent of 5 year olds playing with matches in a gas station. The only adults are also us. Some may have more awareness, and concern, but are not in control. What is to be done?
William Bowles: Timothy Snyder (writer/historian of political power) has a Substack with a free level where he offers some well-developed ideas about just these concerns. FYI
I'll check him out
What you say is certainly true, but this takes it to a very different level!
Not really. 1. It's labour saving and 2. Handing the slaughter over to an algorithm means even more people can be eradicated for even less effort. Like I said, we've been industrialising slaughter for over a century.
Thank you for putting this on everyone's radar, Professor. Doing my best (Mark_MH) to spread the word on BlueSky about this, too...
This is not exactly HA-HA FUNNY . . . but do I have this right? That TSPs (totally stupid people) are depending on AI to do the thinking that neither the TSPs nor AI is capable of doing well (that is, in a way where we all get to go on living).
Worse, there is a great "coming together" in this historic moment of (1) incompetent and dangerous AI+ with (2) TSPs who are not only S but already-evidenced HDVSs (hateful, degenerative, vindictive sociopaths). These are people who want to build endless prisons to "house" whomever they don't like, under flimsy excuses, where pretty soon the excuses will disappear and they'll just do it, and where what commonly happens next at "concentration camps" is just too close in history to ignore. (The list goes on.)
You have left off the pathocracy that we are living under, and the long-term goals they may, or may not, have.
It would at least have been a poetic tragedy if we were destroyed by a man-made intelligence. In reality, we'll destroy ourselves while roleplaying that fantasy. I suppose madness is another classic way to bow out.
Seems pretty obvious that Hegseth has every intention of using this technology for unethical and illegal means.
David: I think it's (quite simply) all about "I can so I will" power. Ethical and legal (they even say so openly) are for losers.
Wasn't this Trump's question to his advisors in his first administration? He also seems to like the idea of raining down massive nuclear destruction on perceived enemies. It fits with his MO of hitting back on any slight with Nx the power. If N. Korea launched an A-bomb, Trump would carpet the country with H-bombs.
Also a good point. Regardless of what they plan to do, they don’t want anyone pushing back.
The article I think says they used AI to remove Maduro. One effect of that is that the Venezuelan regime has liberalized, let many political prisoners go, agreed to reforms, and has agreed to liberalize the economy. Is that so horrible? Biden got none of those things done.
If you believe any of that you are a fool. The US destroyed any chance of Venezuela succeeding under Hugo Chavez because it is a global bully. 'Liberalizing the economy' in this case just means stealing all the oil and resources. Can't see how that will benefit the people of Venezuela.
Anthropic is to be commended for focusing on AI safety. If Anthropic's safety measures were deterministic at real-time exchange, Anthropic could turn off selective levels of guardrails for specific clients (e.g., DoD) while maintaining full guardrails for all other use cases/markets. Baking in probabilistic safety at training-time has worked well through 2025 but is limited and does not serve all use cases going forward.
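To make the distinction concrete, here is a minimal sketch of what deterministic, request-time guardrails with per-client levels could look like, as opposed to safety baked probabilistically into training. Every name here (GuardrailPolicy, BLOCKED_TOPICS, the "dod" client, the topic tags) is hypothetical and illustrative, not Anthropic's actual API or policy.

```python
# Hypothetical sketch: deterministic guardrails applied at request time,
# with different levels per client. All names are illustrative, not a real API.

BLOCKED_TOPICS = {
    "full":    {"weapons", "bio", "cyber", "targeting"},
    "reduced": {"bio"},   # e.g., a client with contractual carve-outs
    "off":     set(),
}

class GuardrailPolicy:
    def __init__(self, client_levels, default="full"):
        self.client_levels = client_levels  # maps client id -> guardrail level
        self.default = default

    def check(self, client_id, topic_tags):
        """Return True if the request passes; same input always gives same output."""
        level = self.client_levels.get(client_id, self.default)
        return not (BLOCKED_TOPICS[level] & set(topic_tags))

policy = GuardrailPolicy({"dod": "reduced"})
print(policy.check("dod", ["targeting"]))       # True: reduced level permits it
print(policy.check("consumer", ["targeting"]))  # False: full guardrails block it
```

The point of the sketch is only that a rule table like this is auditable and deterministic per input, unlike behavior learned during training.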
Hegseth's Department of War is the last thing in the world that should have guardrails removed.
On the other hand, under Biden, we got the Afghanistan withdrawal debacle, the Ukraine invasion debacle and the Gaza/Israel debacle. Under Trump and Hegseth Iran got bombed and Maduro removed from power. And no debacles.
Whataboutism is a fallacy and admission. Also, every word of that is a lie.
I was thinking along the lines of fixing the code around the core policy of safety. In cryptic ways, of course. Or is that even possible?
Hegseth and Trump have no clue. They just want anything they can get their hands on to further their fascist regime
Sounds a bit like Skynet... I hope I am wrong!
I guess the good news is that if we do have a nuclear holocaust, those who survive it won't be plagued by LLMs anymore, nor by computers or any other electronic device, for a few generations.
"Those who survive it?" I don't think cockroaches have any need for LLMs.
“There was time enough at last…”
Except those who survive in their bunkers 💥, like the one underneath where the East Wing was. Those left were to be “the ones to rebuild the planet or country”. So… it's looking like that would be the Trump regime et al. 😳
Why would anyone want to use a non-deterministic tool in a system that requires deterministic solutions? Is this about making money, again?
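The determinism gap is easy to see in a toy decoder: greedy decoding over fixed scores always gives the same answer, while temperature-style sampling does not. This is purely an illustrative sketch with made-up scores; real LLM decoding is far more involved.

```python
import random

# Toy scores a "model" might assign to three candidate tokens.
scores = {"a": 0.2, "b": 0.5, "c": 0.3}

def greedy(scores):
    # Always returns the highest-scoring token: deterministic.
    return max(scores, key=scores.get)

def sample(scores, rng):
    # Draws a token proportionally to its score: non-deterministic.
    tokens, weights = zip(*scores.items())
    return rng.choices(tokens, weights=weights)[0]

print(greedy(scores))                              # always "b"
rng = random.Random()
print({sample(scores, rng) for _ in range(1000)})  # varies run to run
```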
Anastasia: I wouldn't project your own intelligence and morality out on these people. . . doesn't work.
The Claude Code assistant is remarkably good at producing, if not deterministic, then at least relevant strategies for doing code, build, and test work, and it is able to self-correct.
At the end of the day, that's what matters. The world is too complex for a single correct answer anyway. No two people will give you the same answer, but they may converge on the same process after enough practice.
Just yesterday Claude tried to convince me that an empty string is a truthy value. So, I'm sorry, but no.
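For what it's worth, that particular claim takes one line to check: in Python (and likewise in JavaScript) an empty string is falsy.

```python
# An empty string is falsy; only a non-empty string is truthy.
print(bool(""))    # False
print(bool(" "))   # True: whitespace-only is still non-empty, hence truthy
```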
Maybe. When I work with Claude Code Opus 4.6, I get at least 2x as much done in the same amount of time. But I inspect its work very thoroughly, and I find a very large number of regressions.
Yes, it can screw up. Yes, it is worth it. And best of all, it often fixes my own screw-ups.
If you say so. But there's precious little data to back up any productivity increases at all from AI use.
But sure, let's say that's true and all developers are seeing this 2x increase. Is that worth destroying society and the environment? Because that is what is happening NOW, not in some imagined future.
Rigorous studies of developer productivity are very hard to do, and the tools keep changing for the better.
We are not destroying society and the environment. This is a vast exaggeration.
AI slop is like spam on a massive scale. Nothing we haven't dealt with before.
Economics will take care of electricity costs and regulations will take care of environmental costs.
We will adapt. Always did.
Then again, code gen is a lot more efficient than fake-cat-video gen. And more valuable, too.
More spam, more deepfakes, more scams, hundreds of billions of dollars spent that could go to any number of other uses. You say "economics will take care of it". Yes, eventually. How many will be harmed in the meantime? And for what? So we can get 2x as much crappy software?
The people running the major AI companies claim that all of these problems are to be solved by...AI. Just give us billions of dollars and let us make a huge mess and it will all work out in the end. Or so they claim.
I'm not buying it.
I hope that most sane people agree that launching a nuclear attack is a bad bad idea. Those lunatics that run our country now may not be among them and I don’t want Claude or any other AI to give them any ideas or validate their crazy ideas. AI models don’t have any place in making decisions of that magnitude regardless of how well they do on coding tasks (or anything else).
Yes, all sane people agree nuclear attacks are a very bad idea.
Continuously evaluating all threats and opportunities is a sane and required defense strategy.
These are not mutually exclusive.
Given the kakistocracy we currently have, even a so-so AI might be better at making decisions than the human "deciders".
Might I suggest that 'kakistocracy' doesn't go far enough in describing the current system. I suggest 'pathocracy' is a better fit. What do you think?
Yeah, people are going to be a much bigger problem than AI for the foreseeable future.
Until the AI gets rid of them with a tactical nuclear strike that is ...
As a child I was taught "Just because you can do something, it doesn't mean you should do it."
This now has a relevance I could never have dreamt of.
Can I now add another couple?
"Just because you think you're in control, it doesn't mean you are."
"If you survive, you may well wish you hadn't."
Good ones!