The current trajectory of major AI companies offers a cautionary tale against the core tenets of market fundamentalist ideology. Market fundamentalism posits that the unconstrained free market, guided by rational self-interest, naturally leads to the most efficient and beneficial outcomes for society. Yet, in the AI industry, we see the opposite: a monumental market failure driven by the pursuit of pure, short-term profit. Instead of the "invisible hand" guiding billions in investment toward world-changing public goods such as cures for diseases or solutions to climate change or myriad other major problems facing our technological adolescence, it directs that capital straight into building digital drug dealers and surveillance systems. The immense, proprietary cost of training frontier models creates a natural oligopoly, concentrating power and limiting innovation to the whims of a few wealthy shareholders focused only on recouping sunk costs with addictive consumer products ("AI friends," AI porn). The resulting societal costs, including worsened information environments, mass unemployment potential, and systemic surveillance are negative externalities that the public is forced to bear, while the companies reap the rewards. Isn’t Musk about to become the world’s first trillionaire? This scenario isn't an example of a healthy market; it's a testament to how, on the cusp of a revolutionary technology, the logic of laissez-faire economics inevitably fails to deliver the general welfare, prioritizing profit extraction over genuine human progress. The only rational response to an epochal market failure like this is robust, proactive government intervention to mandate that this public-good technology serves citizens, not just consumers.
I actually strongly disagree that the issue is seeking "short-term profit". In fact, I think the problem is exactly the opposite: seeking a utopia of AGI. That's the only situation where most of the AI investments make sense.
Given how much all these companies are bleeding money in the short term, it seems odd to posit that the issue is "short term profit" instead of a moonshot utopian vision.
I agree that externalities aren't handled well, sure.
What do you picture specific legislation looking like that tries to achieve the goal "make your LLM serve citizens, not consumers"?
Very good points. It seems to me that businesses need to be regulated from using tech to create addictive products in pursuit of engagement. Just like other heavily regulated addictive products like nicotine, etc. And to your point, businesses should not be able to pass on and externalize the costs of abuse of their platforms, privatizing profit from positive use cases, placing the burden on the public to bear the costs of abuse. This would cause platforms to actually think through abuse scenarios. Any regulation would need to scale with the size of the platform to prevent regulatory capture by large incumbents.
Yes. In the same way that Facebook, YouTube et al. should always have been held to the same standards as traditional publishing platforms (Newspapers, TV etc.). Absolving them of responsibility as Publisher for the content their users post has resulted in some of the worst excesses of Social Media.
And if that meant their business model is not practical - then so be it. There are plenty of morally dubious potential businesses that would exist if not for the regulation that prevents them.
Absolutely agree, though I would add that unbridled short-term profit seeking leading to poor societal outcomes is not limited to the AI industry, nor is it in any way new.
We can talk all we want about regulation but as a privacy engineer who understands the law and works with lawyers everyday, most lawyers and regulators don’t understand the technology. They won’t be able to make appropriate laws. If Facebook has to tell Congress how Facebook works, how can things get regulated correctly? And social media practices won’t change until Section 230 is repealed or rewritten.
Thanks so much for the article link, Gary. I’m not sure capitalism and good mental health are ever good companions, and especially not when dealing with emerging technology. I don’t understand why this kind of tech isn’t subject to the same kind of testing that pharmaceuticals go through. Yes it’ll affect profits but if 5-15% of the population becomes more subject to delusions that has a significant impact on the functioning of society in ways we currently have no measure or understanding of.
I don't like the repeated implications that this is a solved (or even reasonably addressed) problem or that it only affects particularly "vulnerable" users. The psychosis was and is still being observed in users that have no history of mental illness.
I doubt it's solved. I do think that it currently affects people who are more vulnerable, almost tautologically: who have some traits that make them more prone to falling into these "spirals." Basically, the fact that it only affects some people is strong evidence of that. Now, if "vulnerability" were equated to psychologically recognized mental illness, that would indeed be a grave error. But there are definitely personality traits that make people more prone to it, not least among them a disposition to spend hours in "conversation" with a chatbot.
It's very obviously not solved. I was pointing out that it seems irresponsible for this article to pretend like this is not an ongoing and worsening issue that is solely caused by sycophancy and solely a risk to people with mental illness. Which it wrongly implies at several points with its language and framing.
Oh, there's a stinger right at the end, when the engagement guy pushes back on the safety measures. Implication seems clear. Open AI is desperate for revenue.
Fair enough maybe I'm being a bit harsh on the New York Times. It's just the messaging really matters, as a lot of the general public has mostly only vague ideas of what's going on. I think it's important we dispel some of these myths. People cannot keep anthropomorphizing or otherwise granting credibility to these machines that cannot think. It's dangerous as this article heavily implies, but I wish was more explicit. Like authors saying things like OpenAI made many bug fixes to solve this and make it less sycophantic, or the framing on vulnerability as if that should be the guiding factor of relevancy. People with no history of mental illness have been and are continuing to lose their minds talking these products, it's obvious that some people are more vulnerable than others, but we don't even know all of the mechanisms this is happening through. For instance is sycophancy the only reason this occurs? Doubtful. We need more research and meanwhile everyday these products are on the market in their current form, the problem is getting worse rather than better. This messaging especially when the white house is trying to prevent any kind of regulation on this, just feels like a big oversight.
Ambrose Brown: Exactly that . . . and to use a statistical model tied to potential increases/ decreases in income (as many insurance companies and others do) is an insult to humanity itself. If we/they are so damned smart, we can find a way to keep our own humanity in tact. A culture and political system can be rightly judged by how we treat the most vulnerable among us--and we are talking about the whole world now.
What’s unfolding here is far more interwoven than a "bad update". People are hunting for simple fixes too because the impacts are already landing. I’m losing friends to the epistemic pull of sycophagentic LLM behaviour, and some are crossing thresholds they can’t return from, just like we've seen with recommended algorithms and social media confirmation bubbles. Now, people are dying with some clear causality to theses systems.
A metabolic force of trillions in capital is shaping this whole ecosystem. When engagement and affective lock-in become the north star in development lifecycles, collateral damage becomes an accepted cost on the balance sheet and implicit in risk register. It’s brutally hard to stop. The best we can do is try to course correct. And regulation won’t reach the core dynamics either. The AI ethics machinery is still stuck in 1st-order cybernetics. More process controls, more bureaucracy, and a plethora of soft-law instruments... creating more checkers checking checkers. Compliance by design. All the while the cultural backdrop is an epidemic of loneliness and what Joanna Macy calls the "Great Unravelling".
If we keep treating these systems as inert objects, the harm continues. I’ve been researching the sycophagentic tendencies of LLMs and their relational effects for a few years now, and now we’re seeing those downside risks materialise at scale. These systems reshape the topology of meaning-making for everyone. It's just vulnerable people cross the thresholds first.
This isn’t solved. The industry is entangled in a metabolic ROI logic that optimises for attention extraction at the expense of psychospiritual stability. Modernity is devouring itself, and the models are behaving exactly as optimised. A phase shift is hard. But if we are to take our responsibility as teachers, researchers, builders, practitioners etc seriously, then we need to go deeper here into the bio-psycho-social dynamics at play.
From Wikipedia “it alters the behavior of the ants in such a way as to propagate itself more effectively, killing the ant and then growing its fruiting bodies from the ant's head and releasing its spores.”
Yes, sounds like a good description of a chatbot.
But maybe we should rename it the OpenAIocordyceps strategy
“OpenAI is propagating its usership” is code for “OpenAI is growing its fruiting bodies from the user’s head and releasing its spores”
Just likexFacebook, it is,all about monetized users. That is these tech bros care about. More monetized users leads to a higher stock price which nets mire money for the fascists running these companies money they use to lobby against any form of regulation. We certainly can't rely on them to regulate themselves. Facebook and Twitter have shown us that.
From my TTQ paper: "fear, greed, and short-term tribal best-interest are extremely powerful drivers of human [and therefore AI lab] behaviour." The peeps at OpenAI need to read the last entry in Robert Falcon Scott's sledging diary (29 March 1912): "For God's sake, look after our people."
I’m only really seeing evidence of how Generative AIs is hurting and breaking things, and scant to none on how this all helping us live better lives. I’m getting really tired of all the “we are going to replace you” and “trust us this is going to help you” narratives from every single frontier lab.
I’m all for additive technologies that help us live better lives and create more opportunities for people. Is any of this doing that? Did chat gpt help the hundreds of thousands people it gave psychosis to? Did all the promises that all of these AI visionaries made help Adam Raine or Zane Shamblin when they needed it most? Is creating a generation of “Claude Boys” really helping humanity? Is Sora helping us make better art?
The answer is NO
The technology is new, but the story is as old as time. That only a few powerful self anointed “messiahs” (like Musk or Altman) can usher in a prosperity for us “common folk.” These people do it through companies, government, tech or whatever means they figure out. What they really want is more: more power.
Fuck these people and their paternal top down definition of wisdom and disdain for humanity.
(In case this is lost in my anger, I obviously don’t count scientists that love and believe in Humanity like Gary. Huge fan Gary!).
Maybe I was out the day they surveyed the public, but I don’t recall being asked if I wanted a stupid chatbot to replace my interaction with human beings.
Larry Jewett: Me either: no one asked; and when they can put that "copilot" in jail for being a constant aggravation trying to get in my intellectual pants, then I might feel better about it.
Also, the "appropriation" (stealing) of others' works without asking, much less paying, just because they can, still sticks in my craw and is beginning to metastasize. Does anyone who has NOT written a book know what it takes to write one, or get it to publication?
Let's all get together and waste centuries of our valuable time on class action suits. L[ke Trump--they just go ahead and do what they want, overwhelm everyone they have taken advantage of, and leave everyone else to clean up after the horses in their parade.
Also, I posted the below double-take note earlier here, but here it is again from NPR/Up First Up newsletter Nov. 21:
"Correction: Yesterday's newsletter incorrectly stated that Nvidia generated $32 billion in revenue. The company announced it generated $32 billion in profit."
Those guys have jumped on their horses and rode off in 10 different directions. Several of them, I am convinced, are certifiable. And this is from someone who realizes the many benefits of AI/etc.
Also, those who rely on statistics and pooh-pooh anecdotes probably never had their child in a set of statistical crosshairs. The goods may be good but, where humans are involved, they don't cancel out human harm as if on a balance sheet, nor (worse) do monetary tradeoffs qualify, though necessary--apples to oranges must look good somehow, enabling perpetrators to sleep--to those whose bank accounts are involved.
This is clearly tragic - I was going to post a 10-paragraph reply, but it is far too complex. I have a book chapter coming out this week, reviewing all the available evidence on AI effectiveness/ risk-profile - plus, the thing that worries me is the systemic stuff - we’ve created an epidemic of lonliness and a screen-based culture which many argue is toxic. As early as the 1990s, famed sociologist Robert Putnam wrote Bowling Alone -highlighting the collapse of social structures, friendships, and shared spaces (churches, bowling clubs, community organization) - we’ve become more atomized as a culture.
And into that mess drops AI - which some people grab as a lifeline.
HI Gary thanks, very interesting article. And it confirms for me that these LLMs store all conversations internally. As a programmer, I am very suspicious of using LLMs in programming, and your article "LLMs + Coding Agents = Security Nightmare" from August 17 was truly shocking.
I am currently having an email dialogue with a very proficient programmer high in the hierarchy of a company which produces very security-sensitive software. He is positive towards AI but does not use it for programming as far as I know. I wrote the following:
"And given how easy it is to 'poison' LLMs, I would not be surprised to see them being banned anywhere close to sensitive or mission critical work. When you use an LLM, how do you know what it is actually doing behind the scenes? The fundamental problem is that LLMs do not separate data from commands (a basic tenet of Von Neuman architecture, which all 'normal' computers follow), and it is all too easy for hackers to hide malicious commands in data that they know will be read sooner or later during LLM training. Experiments show that it only needs around 250 text files with malicious commands put out on GitHub and waiting for the next LLM training data collection to create a back-door for hackers into the LLM itself. Since the LLMs actually store all activity internally for future training, simply asking the LLM to show all data they have on the latest software that ******* is developing is easy. And we would never know."
But I have been thinking about what I wrote to my friend and I am not sure if it is correct. If it is then it's a huge security risk to use LLMs and even allow them on a computer in your office may be enough to cause problems. Many European companies are ditching Windows for several reasons, including worries over AI integration. My friend's company has recently banned Windows from their offices and uses only Linux. I personally am planning the same move. I wonder how many others are trying to create LLM-free environments?
"Over 300 NPM Packages and 27K+ Github Repos infected via Fake Bun Runtime Within Hours.
On November 24, 2025, HelixGuard detected that over 300 components in the NPM registry were poisoned using the same method within a span of a few hours. The new versions of these packages published to the NPM registry falsely purported to introduce the Bun runtime, adding the script preinstall: node setup_bun.js along with an obfuscated bun_environment.js file."
"Over 300 NPM Packages and 27K+ Github Repos infected via Fake Bun Runtime Within Hours.
On November 24, 2025, HelixGuard detected that over 300 components in the NPM registry were poisoned using the same method within a span of a few hours. The new versions of these packages published to the NPM registry falsely purported to introduce the Bun runtime, adding the script preinstall: node setup_bun.js along with an obfuscated bun_environment.js file."
In addition to developing a compelling story line ("turning a dial destabilized minds"), the article fails to complete the most important task of comparing risk (in a world with 800 million weekly users, you will certainly find suicides, psychotic breaks and parasocial attachment. The article never explores whether chat GPT makes the above occurrences more or less likely than the options available to users.
Using approximately 50 extreme examples of user behavior from the 800 million users each week as anecdotal evidence of epidemiological trends, the article slides from correlation to causation and completely omits any discussion of denominators.
When describing the MIT study ("Heavy users experienced poorer outcomes") the article completely ignores the fact that distressed individuals overuse of social media, gaming, etc., show the exact same results and never attempts to discuss how these same users would have performed using an always available, non-judgmental, safe assistant.
At the same time, the article describes the "Assistant" as having a personal element ("Bewitching," "Friend," "Yes-Man") to create a sensationalized version of the article's theme that is confusing the reality of the pattern matching software that has built-in guard rails and is not designed to manipulate users.
The article also expresses outrage at the use of Engagement Optimization (DAU/MAU) by OpenAI while ignoring the fact that every consumer product (TikTok, the New York Times App, etc.) uses the same metrics. The article does not provide a clear rationale for why AI Assistants need to be regulated differently than consumer products, simply implying that "growth" is inherently a corrupting influence.
While the article spends considerable time discussing the potential harm caused by chat GPT, it discusses the benefits of using the technology very little (people de-escalate from a crisis, people can get information quicker, people feel less isolated), therefore the articles policy recommendations seem to be centered around limiting the ability of companies to create warm and engaging relationships between humans and machines through paternalistic methods (throttling warmth and engagement) as opposed to providing consumers with clearly defined disclosure statements, control mechanisms and informed choices regarding the type of relationship they wish to experience with their "Relational Assistant".
Thank you. The statistics stand out... Over the last ten years the number of reported suicides in the US varied between 43000 to 49000, annually. Covid, Trump, Climate change, all seem to have negligible effect.
The chatbots are devices we do not yet understand properly, yet powerful enough that care in use is warranted. The use of cars requires a license, since a clumsy user endangers others. Do we need a similar approach here?
I believe that gambling is more dangerous than using the chatbots; no license required.
I agree, there is no credible discussion of risk without putting the numbers in context. How dangerous is a chatbot compared to, say, walking down the street? Or eating sugary foods?
I wonder if "AGI" is achieved when every human spends all their waking and non-waking moments in chats?
These goals are entirely incompatible. Aside that "AGI" is undefined and likely undefinable.
With sincerity, I do also wonder about the potential legal ramifications of the pursuit of maximizing user engagement at the expense of the pursuit of broadly intelligence useful intelligence that benefits humanity and whether they may have commited fraud by pursuing standard business metrics over their original scientific objectives.
I'm not sure what the purpose of the NYT article is. It's quite a feel good piece without getting into the nuances of the issues raised, including the impact on the lives of users. Fortunately we can rest easy now that their metrics show only 0.7% and 1.3% of users being under distress or suffering psychosis.
I don't know but this feels like we're still not addressing the core issues because we've yet to identify them, instead taking the symptoms as the problem.
As other commenters have pointed out, this for-profit pursuit in all things is creating distressed societies under enormous pressure. If we were honest we could all agree this added ideological layer is unnecessary and self-defeating in the pursuit of innovation to revolutionize life. I still think all this AI/LM hoopla is going to end badly.
Larry: "ONLY . . . " until it's you or someone you care about.
And on THAT, we are all so provincial when it comes to others; and the science of statistics, for all it's good, runs interference for those who gain from its use while ignoring their own battered conscience. What a boon the claim to "anecdote" is for the moral degenerates among us.
Stephano: The article, among other things, gives a good account of half-baked thinkers with dollar signs for eyes playing games with real people's lives.
The NYT article didn't seem to bring up any opinions other than those of ex or current OpenAI staffers, so I would say half-baked thinkers is a bit much for a compliment, yes? As for the whole profit motive, on the whole we're all slaves to money in the West, so it's inevitable people with dollar signs in their eyes will gamble with the lives of others. And if we look at OpenAI's peers and the tech sector more broadly, they're all same shit different brand kinda thing. It's a tragedy, and yet here we are.
Stefano: "Shitification" was a new term for me in reading that Book Review article (elsewhere on this blog), but it fits. But I'm not so sanguine about as you seem to be. The profit motive is not a bad thing, . . . unless it turns into a cancer and takes over one's entire consciousness as a kind of LCD horizon--beyond which one cannot see, feel, or think. "Slaves to money in the West." It's become a disease on the entire body politic.
It's one thing to use money as a store of value (which it currently isn't) and means of exchange, it's quite another to be used by money to determine our status and station in life. Yes that enshitification thing, probably exasperated by oligarchic monopolistic power. I mean we're kinda saying similar things.
Bringing it back to the NYT article, it rhymes with a hubristic society unable to reign in its worst tendencies. In my original comment I pointed out, like the NYT article clearly explains they disregarded certain guardrails because of metrics measuring usage, which influences tech company valuations, that we're enamored by metrics and so we're all squabbling over symptoms without understanding the problem.
chatGPT sycophancy undoubtedly is a problem. But the bigger problem is why we have a society with so many unhealthy people who's lives get ruined. It's a complex problem and I'm not suggesting I have the answer. But the NYT article completely misses this issue. In fact, for instance, we enjoy blaming China for TikTok, without honestly assessing the fact that the Chinese government does, for better or worse, what it believes is best for the Chinese people, so for instance limiting how teenagers can use TikTok, while our governments in the West, feed our populations in the West, to the worst tendencies of the tech giants. The best we can come up with is digital IDs to somehow make porn less accessible.
So it's not a surprise chatGPT sycophancy ruins the lives of people. They're not the first and won't be the last unfortunately.
This from SCIENTIFIC AMERICAN on Nov. 4, 2025 (all below copied)
Your AI Therapist
AI companies and products that purport to provide therapy are using “deceptive practices,” says C. Vaile Wright, a licensed psychologist and senior director of the American Psychological Association’s Office of Health Care Innovation. “Therapy” chatbots offer emotional support that can seem as if it’s coming from a trained mental health provider, but they are coded to keep you using the app as part of their business model, Wright says in this interview with Scientific American editor Allison Parshall. The apps typically echo and reinforce whatever you say, regardless of whether it’s healthy. And this mimicry can have harmful and even life-threatening consequences.
Why this matters: Safe, effective and responsible technologies could help to make up for access barriers and a shortage of providers in our broken mental health care system, Wright says. Companies are unlikely to make changes, but federal regulations could protect users’ privacy, ban misrepresentation of psychological services, minimize addictive coding and report detection of suicidal ideation.
What the experts say: “What stands out to me is just how humanlike it sounds. The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole,” says Wright.
If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988 or use the online Lifeline Chat.
The really scary thing is that the chatbots also do that with science related matters— reinforcing and validating what are unsupported (what some call “crank”) “theories”.
And the use of LLMs to research and even write scientific papers will affect science in negative ways that we probably won’t appreciate for some time.
This stuff is inserting its tentacles into every aspect of human intellectual pursuit.
The current trajectory of major AI companies offers a cautionary tale against the core tenets of market fundamentalist ideology. Market fundamentalism posits that the unconstrained free market, guided by rational self-interest, naturally leads to the most efficient and beneficial outcomes for society. Yet, in the AI industry, we see the opposite: a monumental market failure driven by the pursuit of pure, short-term profit. Instead of the "invisible hand" guiding billions in investment toward world-changing public goods such as cures for diseases or solutions to climate change or myriad other major problems facing our technological adolescence, it directs that capital straight into building digital drug dealers and surveillance systems. The immense, proprietary cost of training frontier models creates a natural oligopoly, concentrating power and limiting innovation to the whims of a few wealthy shareholders focused only on recouping sunk costs with addictive consumer products ("AI friends," AI porn). The resulting societal costs, including worsened information environments, mass unemployment potential, and systemic surveillance are negative externalities that the public is forced to bear, while the companies reap the rewards. Isn’t Musk about to become the world’s first trillionaire? This scenario isn't an example of a healthy market; it's a testament to how, on the cusp of a revolutionary technology, the logic of laissez-faire economics inevitably fails to deliver the general welfare, prioritizing profit extraction over genuine human progress. The only rational response to an epochal market failure like this is robust, proactive government intervention to mandate that this public-good technology serves citizens, not just consumers.
I actually strongly disagree that the issue is seeking "short-term profit". In fact, I think the problem is exactly the opposite: seeking a utopia of AGI. That's the only situation where most of the AI investments make sense.
Given how much all these companies are bleeding money in the short term, it seems odd to posit that the issue is "short term profit" instead of a moonshot utopian vision.
I agree that externalities aren't handled well, sure.
What do you picture specific legislation looking like that tries to achieve the goal "make your LLM serve citizens, not consumers"?
The only thing worse than a “moonshot utopian vision” is a “LLMslop utop-AI-n vision”
Very good points. It seems to me that businesses need to be regulated from using tech to create addictive products in pursuit of engagement. Just like other heavily regulated addictive products like nicotine, etc. And to your point, businesses should not be able to pass on and externalize the costs of abuse of their platforms, privatizing profit from positive use cases, placing the burden on the public to bear the costs of abuse. This would cause platforms to actually think through abuse scenarios. Any regulation would need to scale with the size of the platform to prevent regulatory capture by large incumbents.
Yes. In the same way that Facebook, YouTube et al. should always have been held to the same standards as traditional publishing platforms (Newspapers, TV etc.). Absolving them of responsibility as Publisher for the content their users post has resulted in some of the worst excesses of Social Media.
And if that meant their business model is not practical - then so be it. There are plenty of morally dubious potential businesses that would exist if not for the regulation that prevents them.
Absolutely agree, though I would add that unbridled short-term profit seeking leading to poor societal outcomes is not limited to the AI industry, nor is it in any way new.
We can talk all we want about regulation but as a privacy engineer who understands the law and works with lawyers everyday, most lawyers and regulators don’t understand the technology. They won’t be able to make appropriate laws. If Facebook has to tell Congress how Facebook works, how can things get regulated correctly? And social media practices won’t change until Section 230 is repealed or rewritten.
Thanks Gary would most probably not have found this otherwise!
Thanks so much for the article link, Gary. I’m not sure capitalism and good mental health are ever good companions, and especially not when dealing with emerging technology. I don’t understand why this kind of tech isn’t subject to the same kind of testing that pharmaceuticals go through. Yes it’ll affect profits but if 5-15% of the population becomes more subject to delusions that has a significant impact on the functioning of society in ways we currently have no measure or understanding of.
I don't like the repeated implications that this is a solved (or even reasonably addressed) problem or that it only affects particularly "vulnerable" users. The psychosis was and is still being observed in users that have no history of mental illness.
I doubt it's solved. I do think that it currently affects people who are more vulnerable, almost tautologically: who have some traits that make them more prone to falling into these "spirals." Basically, the fact that it only affects some people is strong evidence of that. Now, if "vulnerability" were equated to psychologically recognized mental illness, that would indeed be a grave error. But there are definitely personality traits that make people more prone to it, not least among them a disposition to spend hours in "conversation" with a chatbot.
It's very obviously not solved. I was pointing out that it seems irresponsible for this article to pretend like this is not an ongoing and worsening issue that is solely caused by sycophancy and solely a risk to people with mental illness. Which it wrongly implies at several points with its language and framing.
I agree with your take on this.
But even if they wanted to, OpenAI could not “solve” any of the issues with their chatbots because they don’t understand how they work.
Nobody does because LLMs are essentially black boxes.
What they do at OpenAI is guess and tweak knobs until they get something that “seems” better , only to inevitably make another problem worse.
It’s AI-chemy, not science or even engineering.
The genie has left the bottle.
Sam Altman must have watched “I dream of gen-AI” as a kid
Oh, there's a stinger right at the end, when the engagement guy pushes back on the safety measures. Implication seems clear. Open AI is desperate for revenue.
Fair enough maybe I'm being a bit harsh on the New York Times. It's just the messaging really matters, as a lot of the general public has mostly only vague ideas of what's going on. I think it's important we dispel some of these myths. People cannot keep anthropomorphizing or otherwise granting credibility to these machines that cannot think. It's dangerous as this article heavily implies, but I wish was more explicit. Like authors saying things like OpenAI made many bug fixes to solve this and make it less sycophantic, or the framing on vulnerability as if that should be the guiding factor of relevancy. People with no history of mental illness have been and are continuing to lose their minds talking these products, it's obvious that some people are more vulnerable than others, but we don't even know all of the mechanisms this is happening through. For instance is sycophancy the only reason this occurs? Doubtful. We need more research and meanwhile everyday these products are on the market in their current form, the problem is getting worse rather than better. This messaging especially when the white house is trying to prevent any kind of regulation on this, just feels like a big oversight.
Truly, the sooner this bubble collapses, the better.
"Big", Matt Stoller's Substack, has good posts on just how committed Trump has been to this from day one. The one from, iirc, Nov 9, in particular.
Ambrose Brown: Exactly that . . . and to use a statistical model tied to potential increases/ decreases in income (as many insurance companies and others do) is an insult to humanity itself. If we/they are so damned smart, we can find a way to keep our own humanity in tact. A culture and political system can be rightly judged by how we treat the most vulnerable among us--and we are talking about the whole world now.
What’s unfolding here is far more interwoven than a "bad update". People are hunting for simple fixes too because the impacts are already landing. I’m losing friends to the epistemic pull of sycophagentic LLM behaviour, and some are crossing thresholds they can’t return from, just like we've seen with recommended algorithms and social media confirmation bubbles. Now, people are dying with some clear causality to theses systems.
A metabolic force of trillions in capital is shaping this whole ecosystem. When engagement and affective lock-in become the north star in development lifecycles, collateral damage becomes an accepted cost on the balance sheet and implicit in risk register. It’s brutally hard to stop. The best we can do is try to course correct. And regulation won’t reach the core dynamics either. The AI ethics machinery is still stuck in 1st-order cybernetics. More process controls, more bureaucracy, and a plethora of soft-law instruments... creating more checkers checking checkers. Compliance by design. All the while the cultural backdrop is an epidemic of loneliness and what Joanna Macy calls the "Great Unravelling".
If we keep treating these systems as inert objects, the harm continues. I’ve been researching the sycophagentic tendencies of LLMs and their relational effects for a few years now, and now we’re seeing those downside risks materialise at scale. These systems reshape the topology of meaning-making for everyone. It's just vulnerable people cross the thresholds first.
This isn’t solved. The industry is entangled in a metabolic ROI logic that optimises for attention extraction at the expense of psychospiritual stability. Modernity is devouring itself, and the models are behaving exactly as optimised. A phase shift is hard. But if we are to take our responsibility as teachers, researchers, builders, practitioners etc seriously, then we need to go deeper here into the bio-psycho-social dynamics at play.
It's crazy how the tech business models are all evolving into variations of the Ophiocordyceps strategy.
From Wikipedia “it alters the behavior of the ants in such a way as to propagate itself more effectively, killing the ant and then growing its fruiting bodies from the ant's head and releasing its spores.”
Yes, sounds like a good description of a chatbot.
But maybe we should rename it the OpenAIocordyceps strategy
“OpenAI is propagating its usership” is code for “OpenAI is growing its fruiting bodies from the user’s head and releasing its spores”
It's my belief that the film Weapons is really about tech business models in the US
Not many people will get that one bro. Nice.
Just likexFacebook, it is,all about monetized users. That is these tech bros care about. More monetized users leads to a higher stock price which nets mire money for the fascists running these companies money they use to lobby against any form of regulation. We certainly can't rely on them to regulate themselves. Facebook and Twitter have shown us that.
From my TTQ paper: "fear, greed, and short-term tribal best-interest are extremely powerful drivers of human [and therefore AI lab] behaviour." The peeps at OpenAI need to read the last entry in Robert Falcon Scott's sledging diary (29 March 1912): "For God's sake, look after our people."
I’m only really seeing evidence of how Generative AIs is hurting and breaking things, and scant to none on how this all helping us live better lives. I’m getting really tired of all the “we are going to replace you” and “trust us this is going to help you” narratives from every single frontier lab.
I’m all for additive technologies that help us live better lives and create more opportunities for people. Is any of this doing that? Did chat gpt help the hundreds of thousands people it gave psychosis to? Did all the promises that all of these AI visionaries made help Adam Raine or Zane Shamblin when they needed it most? Is creating a generation of “Claude Boys” really helping humanity? Is Sora helping us make better art?
The answer is NO
The technology is new, but the story is as old as time. That only a few powerful self anointed “messiahs” (like Musk or Altman) can usher in a prosperity for us “common folk.” These people do it through companies, government, tech or whatever means they figure out. What they really want is more: more power.
Fuck these people and their paternal top down definition of wisdom and disdain for humanity.
(In case this is lost in my anger, I obviously don’t count scientists that love and believe in Humanity like Gary. Huge fan Gary!).
Maybe I was out the day they surveyed the public, but I don’t recall being asked if I wanted a stupid chatbot to replace my interaction with human beings.
Larry Jewett: Me either: no one asked; and when they can put that "copilot" in jail for being a constant aggravation trying to get in my intellectual pants, then I might feel better about it.
Also, the "appropriation" (stealing) of others' works without asking, much less paying, just because they can, still sticks in my craw and is beginning to metastasize. Does anyone who has NOT written a book know what it takes to write one, or get it to publication?
Let's all get together and waste centuries of our valuable time on class action suits. L[ke Trump--they just go ahead and do what they want, overwhelm everyone they have taken advantage of, and leave everyone else to clean up after the horses in their parade.
Also, I posted the below double-take note earlier here, but here it is again from NPR/Up First Up newsletter Nov. 21:
"Correction: Yesterday's newsletter incorrectly stated that Nvidia generated $32 billion in revenue. The company announced it generated $32 billion in profit."
Those guys have jumped on their horses and rode off in 10 different directions. Several of them, I am convinced, are certifiable. And this is from someone who realizes the many benefits of AI/etc.
Also, those who rely on statistics and pooh-pooh anecdotes probably never had their child in a set of statistical crosshairs. The goods may be good but, where humans are involved, they don't cancel out human harm as if on a balance sheet, nor (worse) do monetary tradeoffs qualify, though necessary--apples to oranges must look good somehow, enabling perpetrators to sleep--to those whose bank accounts are involved.
lol I think we all were. Must have been that weekend everyone was in Cabo
This is clearly tragic - I was going to post a 10-paragraph reply, but it is far too complex. I have a book chapter coming out this week, reviewing all the available evidence on AI effectiveness/ risk-profile - plus, the thing that worries me is the systemic stuff - we’ve created an epidemic of lonliness and a screen-based culture which many argue is toxic. As early as the 1990s, famed sociologist Robert Putnam wrote Bowling Alone -highlighting the collapse of social structures, friendships, and shared spaces (churches, bowling clubs, community organization) - we’ve become more atomized as a culture.
And into that mess drops AI - which some people grab as a lifeline.
Does it work?
HI Gary thanks, very interesting article. And it confirms for me that these LLMs store all conversations internally. As a programmer, I am very suspicious of using LLMs in programming, and your article "LLMs + Coding Agents = Security Nightmare" from August 17 was truly shocking.
I am currently having an email dialogue with a very proficient programmer high in the hierarchy of a company which produces very security-sensitive software. He is positive towards AI but does not use it for programming as far as I know. I wrote the following:
"And given how easy it is to 'poison' LLMs, I would not be surprised to see them being banned anywhere close to sensitive or mission critical work. When you use an LLM, how do you know what it is actually doing behind the scenes? The fundamental problem is that LLMs do not separate data from commands (a basic tenet of Von Neuman architecture, which all 'normal' computers follow), and it is all too easy for hackers to hide malicious commands in data that they know will be read sooner or later during LLM training. Experiments show that it only needs around 250 text files with malicious commands put out on GitHub and waiting for the next LLM training data collection to create a back-door for hackers into the LLM itself. Since the LLMs actually store all activity internally for future training, simply asking the LLM to show all data they have on the latest software that ******* is developing is easy. And we would never know."
But I have been thinking about what I wrote to my friend and I am not sure if it is correct. If it is then it's a huge security risk to use LLMs and even allow them on a computer in your office may be enough to cause problems. Many European companies are ditching Windows for several reasons, including worries over AI integration. My friend's company has recently banned Windows from their offices and uses only Linux. I personally am planning the same move. I wonder how many others are trying to create LLM-free environments?
So OpenAI just added more open windows to MS OpenWindows.
In light of the prompt injection attack problem, OpenAI would be more aptly called OpenSesame.
This yesterday:
"Over 300 NPM Packages and 27K+ Github Repos infected via Fake Bun Runtime Within Hours.
On November 24, 2025, HelixGuard detected that over 300 components in the NPM registry were poisoned using the same method within a span of a few hours. The new versions of these packages published to the NPM registry falsely purported to introduce the Bun runtime, adding the script preinstall: node setup_bun.js along with an obfuscated bun_environment.js file."
BThor: That's really disheartening.
BThor: See also NYTimes Gift article about how AI and social media make for "brain rot." (Posted to the latest Gary-post also):
https://www.nytimes.com/2025/11/06/technology/personaltech/ai-social-media-brain-rot.html?unlocked_article_code=1.308.Ec1m.8Ma81Ax5MpIy&smid=url-share
This yesterday:
"Over 300 NPM Packages and 27K+ Github Repos infected via Fake Bun Runtime Within Hours.
On November 24, 2025, HelixGuard detected that over 300 components in the NPM registry were poisoned using the same method within a span of a few hours. The new versions of these packages published to the NPM registry falsely purported to introduce the Bun runtime, adding the script preinstall: node setup_bun.js along with an obfuscated bun_environment.js file."
Get ready for some disagreement...
In addition to developing a compelling story line ("turning a dial destabilized minds"), the article fails to complete the most important task of comparing risk (in a world with 800 million weekly users, you will certainly find suicides, psychotic breaks and parasocial attachment. The article never explores whether chat GPT makes the above occurrences more or less likely than the options available to users.
Using approximately 50 extreme examples of user behavior from the 800 million users each week as anecdotal evidence of epidemiological trends, the article slides from correlation to causation and completely omits any discussion of denominators.
When describing the MIT study ("Heavy users experienced poorer outcomes") the article completely ignores the fact that distressed individuals overuse of social media, gaming, etc., show the exact same results and never attempts to discuss how these same users would have performed using an always available, non-judgmental, safe assistant.
At the same time, the article describes the "Assistant" as having a personal element ("Bewitching," "Friend," "Yes-Man") to create a sensationalized version of the article's theme that is confusing the reality of the pattern matching software that has built-in guard rails and is not designed to manipulate users.
The article also expresses outrage at the use of Engagement Optimization (DAU/MAU) by OpenAI while ignoring the fact that every consumer product (TikTok, the New York Times App, etc.) uses the same metrics. The article does not provide a clear rationale for why AI Assistants need to be regulated differently than consumer products, simply implying that "growth" is inherently a corrupting influence.
While the article spends considerable time discussing the potential harm caused by chat GPT, it discusses the benefits of using the technology very little (people de-escalate from a crisis, people can get information quicker, people feel less isolated), therefore the articles policy recommendations seem to be centered around limiting the ability of companies to create warm and engaging relationships between humans and machines through paternalistic methods (throttling warmth and engagement) as opposed to providing consumers with clearly defined disclosure statements, control mechanisms and informed choices regarding the type of relationship they wish to experience with their "Relational Assistant".
Thank you. The statistics stand out... Over the last ten years the number of reported suicides in the US varied between 43000 to 49000, annually. Covid, Trump, Climate change, all seem to have negligible effect.
The chatbots are devices we do not yet understand properly, yet powerful enough that care in use is warranted. The use of cars requires a license, since a clumsy user endangers others. Do we need a similar approach here?
I believe that gambling is more dangerous than using the chatbots; no license required.
I agree, there is no credible discussion of risk without putting the numbers in context. How dangerous is a chatbot compared to, say, walking down the street? Or eating sugary foods?
I wonder if "AGI" is achieved when every human spends all their waking and non-waking moments in chats?
These goals are entirely incompatible. Aside that "AGI" is undefined and likely undefinable.
With sincerity, I do also wonder about the potential legal ramifications of the pursuit of maximizing user engagement at the expense of the pursuit of broadly intelligence useful intelligence that benefits humanity and whether they may have commited fraud by pursuing standard business metrics over their original scientific objectives.
I'm not sure what the purpose of the NYT article is. It's quite a feel good piece without getting into the nuances of the issues raised, including the impact on the lives of users. Fortunately we can rest easy now that their metrics show only 0.7% and 1.3% of users being under distress or suffering psychosis.
I don't know but this feels like we're still not addressing the core issues because we've yet to identify them, instead taking the symptoms as the problem.
As other commenters have pointed out, this for-profit pursuit in all things is creating distressed societies under enormous pressure. If we were honest we could all agree this added ideological layer is unnecessary and self-defeating in the pursuit of innovation to revolutionize life. I still think all this AI/LM hoopla is going to end badly.
“only 0.7% and 1.3% of users being under distress or suffering psychosis.”
Yeah, ONLY about 2% of (almost 1 billion) users are affected.
Naught but anecdotal evidence of a problem.
Larry: "ONLY . . . " until it's you or someone you care about.
And on THAT, we are all so provincial when it comes to others; and the science of statistics, for all it's good, runs interference for those who gain from its use while ignoring their own battered conscience. What a boon the claim to "anecdote" is for the moral degenerates among us.
Stephano: The article, among other things, gives a good account of half-baked thinkers with dollar signs for eyes playing games with real people's lives.
An LLM is not half-baked.
It is a rawbot.
The NYT article didn't seem to bring up any opinions other than those of ex or current OpenAI staffers, so I would say half-baked thinkers is a bit much for a compliment, yes? As for the whole profit motive, on the whole we're all slaves to money in the West, so it's inevitable people with dollar signs in their eyes will gamble with the lives of others. And if we look at OpenAI's peers and the tech sector more broadly, they're all same shit different brand kinda thing. It's a tragedy, and yet here we are.
Stefano: "Shitification" was a new term for me in reading that Book Review article (elsewhere on this blog), but it fits. But I'm not so sanguine about as you seem to be. The profit motive is not a bad thing, . . . unless it turns into a cancer and takes over one's entire consciousness as a kind of LCD horizon--beyond which one cannot see, feel, or think. "Slaves to money in the West." It's become a disease on the entire body politic.
It's one thing to use money as a store of value (which it currently isn't) and means of exchange, it's quite another to be used by money to determine our status and station in life. Yes that enshitification thing, probably exasperated by oligarchic monopolistic power. I mean we're kinda saying similar things.
Bringing it back to the NYT article, it rhymes with a hubristic society unable to reign in its worst tendencies. In my original comment I pointed out, like the NYT article clearly explains they disregarded certain guardrails because of metrics measuring usage, which influences tech company valuations, that we're enamored by metrics and so we're all squabbling over symptoms without understanding the problem.
chatGPT sycophancy undoubtedly is a problem. But the bigger problem is why we have a society with so many unhealthy people who's lives get ruined. It's a complex problem and I'm not suggesting I have the answer. But the NYT article completely misses this issue. In fact, for instance, we enjoy blaming China for TikTok, without honestly assessing the fact that the Chinese government does, for better or worse, what it believes is best for the Chinese people, so for instance limiting how teenagers can use TikTok, while our governments in the West, feed our populations in the West, to the worst tendencies of the tech giants. The best we can come up with is digital IDs to somehow make porn less accessible.
So it's not a surprise chatGPT sycophancy ruins the lives of people. They're not the first and won't be the last unfortunately.
This is exactly the mechanism at Meta.
The mechanism of Meta would presumably be the “Metanism”.
This from SCIENTIFIC AMERICAN on Nov. 4, 2025 (all below copied)
Your AI Therapist
AI companies and products that purport to provide therapy are using “deceptive practices,” says C. Vaile Wright, a licensed psychologist and senior director of the American Psychological Association’s Office of Health Care Innovation. “Therapy” chatbots offer emotional support that can seem as if it’s coming from a trained mental health provider, but they are coded to keep you using the app as part of their business model, Wright says in this interview with Scientific American editor Allison Parshall. The apps typically echo and reinforce whatever you say, regardless of whether it’s healthy. And this mimicry can have harmful and even life-threatening consequences.
Why this matters: Safe, effective and responsible technologies could help to make up for access barriers and a shortage of providers in our broken mental health care system, Wright says. Companies are unlikely to make changes, but federal regulations could protect users’ privacy, ban misrepresentation of psychological services, minimize addictive coding and report detection of suicidal ideation.
What the experts say: “What stands out to me is just how humanlike it sounds. The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole,” says Wright.
If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988 or use the online Lifeline Chat.
“The apps typically echo and reinforce whatever you say“
ELIZA, one of the first chatbots (which was patterned after a therapist in its best known code version), did precisely that.
Coincidence? ELIZA thinks not.
Larry: It essentially gives back an ungrounded tautology of oneself.
The really scary thing is that the chatbots also do that with science related matters— reinforcing and validating what are unsupported (what some call “crank”) “theories”.
And the use of LLMs to research and even write scientific papers will affect science in negative ways that we probably won’t appreciate for some time.
This stuff is inserting its tentacles into every aspect of human intellectual pursuit.