101 Comments
Joy in HK fiFP's avatar

After spending way too much time with an AI assistant earlier this evening, which was totally inept, to put it as graciously as possible, the loss of true productivity, as opposed to, say, investor funding, is likely to cause a huge rise in the economic bubble as well as human blood pressure, and failure to accomplish even simple goals. For some agencies, such as governmental ones, the loss falls almost entirely on the customer/consumer side, with a loss of trust and ability to comply with necessary obligations.

In for-profit businesses, at some point when enough customers 'can't get no satisfaction,' the company may well lose a whole lot of customers entirely.

Right now, a big selling point would be to offer human customer service reps, who are not tied to an automated script and can make intelligent decisions by having some degree of discretion. We had that not that long ago that it's still in the memory of most adults. If this automated customer service disaster continues apace, we are in for a great deal of misery that will have no outlet or redress, leading to serious frustration, anger, and possibly more.

We are supposed to think the proliferation of this is good?

Stephen Schiff's avatar

My concern is that "AI" "Assistants" will proliferate to such an extent so as to make it the norm, thereby eliminating the distinction between responsive and non-responsive "service".

There's a precedent: Outsourcing of call centers to companies lacking in fluency.

Joy in HK fiFP's avatar

I do see a difference between the call centers and the AI assistants, perhaps not on the manpower front, but on the comprehensibility front. Although I have had both good and terrible experiences with call-center outcomes, the latter have, more often than not, been due to company policies and the limited discretion the representatives are allowed. I'll gladly fight my way to a call-center representative with some discretion to move beyond the script, over the AI, any time.

Jeremy Bickel's avatar

Who's responsible in those cases? It's hard to pin blame on a nameless software engineer or LLM training data annotator.

Bruce Cohen's avatar

I think there’s a lot of responsibility to go around. Reading the papers and announcements coming out of the AI companies I see what looks like a huge case of groupthink across the whole industry. The number of safety/risk groups muzzled or shut down and the exit posts of senior engineers and scientists support this view.

Ken Kovar's avatar

Working with an AI is going to be a learning curve. And even the mighty Walmart AI customer service agent is less than perfect 😍

Alan King's avatar

Yes it’s a learning curve — but don’t believe the hype. The LLMs are extremely useful — especially when it is important to go broad. I have gotten some very good outcomes from following a suggestion, then continuing by going deep. You have to be able to go deep when using this tool.

Ken Kovar's avatar

That's a good point. I'm actually just starting to use them, but I think interacting with them can be a better way to use them than having them "spit out" canned answers like essays.

skierpage's avatar

I've had human CSRs flat out lie to me ("You can order that car on our web site") and make excuses ("Yes, the product code on the electric toothbrush doesn't work on our support site; you're supposed to save the product code on the box it came in"). Companies that don't prioritize customer service are the problem.

Joy in HK fiFP's avatar

I agree, but at least it is possible to imagine that a human can understand plain English, and not just keep asking you to be more specific and give details when that is exactly what you have already done, repeatedly.

Bruce Cohen's avatar

OTOH, the main reason I’ve kept CenturyLink as my ISP and landline provider is that if I call customer support I can quickly get to a human who understands the technology and quickly recognizes that I do too. I’ve already cycled the power and that didn’t help, so what now?

C. King's avatar

. . . skierpage: And then there are the huge companies who know what to say and how to say it. And that's it.

Jeremy Bickel's avatar

Consider what ineptitude, covered by buttery smooth speech, would do in the hands of those who make an artful living out of lies. Like some politicians and debate addicts, their GenAI might turn into a demon of manipulation, to great, great (monetary) value.

Paul Dongha's avatar

Spot on. Thank you Gary.

Bruce Cohen's avatar

Excellent analysis, Gary. How can we have a “winner” when we can’t even define winning beyond the number of data centers we build and chips we sell?

AI Governance Lead ⚡'s avatar

This! I’m always saying this!

TheAISlop's avatar

Gary, you got this. Who cares who has the better gen AI? There is no clear winner, and nearly 10 US and Chinese companies make "good enough" gen AI. The strategic battle is over which sovereign can build AI that makes weapons and destroys things faster. That front is concerning for the US.

Bruce Cohen's avatar

By definition GenAI is “good enough” and no better. That's exactly why it is so limited in its effect on both business (not a big productivity win) and society (potentially serious harm to vulnerable people like children).

Xian's avatar
Dec 13 (edited)

Cannot agree more.

Every real improvement in human history has inevitably come with an explosion of energy use. None of the real human growth happened without being tightly tied to energy. Extend home lighting in Africa by two more hours, and a family's income can rise by another 20%. That's not due to technology development alone; they are simply empowered with energy.

I am personally concerned that AI is a bubble. It requires soaring electricity demand while the output is zero. Work that we used to regard as something that should and must be done by humans is now considered perfectly fine for AI to do. Otherwise, the US wouldn't have fired 1.1 million people in 2025.

Maybe if we look back 20 years from now, it will have passed a tipping point, generated some extra energy, and really made people's lives better. But at least one generation will have gone by then…

https://medium.com/design-bootcamp/the-energy-test-why-ai-fails-the-only-metric-that-matters-9249596dcbb3

skierpage's avatar

> the output is zero

Spare us the hysterical overreaction. Companies and people are spending billions to use "AI", therefore they are finding value in it.

Xian's avatar
Dec 14 (edited)

Hmmm, based on Gary’s previous post and the same news that I read, Meta, Amazon, Microsoft, Google, and Tesla will have invested around 560 billion dollars in AI-related capital expenditure since the beginning of 2024 while generating only around 35 billion dollars in AI-related revenue. It hardly makes ends meet.

If you mean that I spend $10 to buy an AI service from company A, then company A earns $10 from me. Hmmm… is anything generated during this process, let alone that soaring demand for electricity? Any difference from me purchasing a physical good, like a table? If I buy a wooden table, at least one or two carpenters were involved in making it.
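The gap in the figures the commenter cites can be sanity-checked with one line of arithmetic (the $560B and $35B numbers are the commenter's claims, taken at face value here, not verified):

```python
# Figures cited in the comment above (commenter's numbers, taken at face value).
capex_b = 560    # claimed AI-related capital expenditure since early 2024, in $B
revenue_b = 35   # claimed AI-related revenue over the same period, in $B

coverage = revenue_b / capex_b          # fraction of capex covered by revenue
shortfall = capex_b - revenue_b         # uncovered spending, in $B

print(f"Revenue covers {coverage:.1%} of capex; shortfall is ${shortfall}B")
```

On those numbers, revenue covers roughly one-sixteenth of the spending, which is the sense in which it "hardly makes ends meet."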

Martin Machacek's avatar

AFAICT AI companies are selling their services at loss. If they priced them to make any profit, many current Gen AI use cases would cease to be economically interesting. So, I think it is fair to say that economic output of Gen AI, as measured by profit of AI providers, is currently negative. AI may save money to some users, but at the moment that is largely subsidized by venture capital.

Oleg Alexandrov's avatar

There is always a lag between what you spend and what you get. The important question is if the methods can be made accurate enough and profitable enough with more time, money and work. That will take a few years to figure out.

Comment deleted (Dec 22)
Oleg Alexandrov's avatar

LLMs are predictive statistics, only useful for making hypotheses. What is needed is more infrastructure for verification, searching, and invoking simulators, to handle the things LLMs cannot.

There is a lot of work that will help with such augmentation, and we are barely getting started.
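The augmentation described above can be sketched as a generate-then-verify loop. This is a toy illustration, not any particular system: the "LLM" is a stub that emits fixed candidates, and the verifier is a trivial divisibility check standing in for a simulator or test suite.

```python
def llm_propose(question):
    """Stand-in for an LLM: emits candidate hypotheses.
    (A real system would sample these from a model; fixed here for illustration.)"""
    return [5, 7, 9, 3, 8]

def verify(question, candidate):
    """External check the LLM cannot reliably do itself: a simulator, test
    suite, or prover. Toy version: does `candidate` divide `question` evenly?"""
    return question % candidate == 0

def answer(question):
    # Treat LLM output only as hypotheses; verification makes the decision.
    for candidate in llm_propose(question):
        if verify(question, candidate):
            return candidate
    return None  # no verified answer beats a confident wrong one

print(answer(12))  # → 3, the first candidate that passes verification
```

The design point is that the model only proposes; an external checker disposes, so an unverifiable guess is rejected rather than returned with confidence.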

Mircea Popescu's avatar

"People are buying the snake oil, therefore they are finding value in it"

Even the snake oil salesmen aren't making a profit! What are we doing here?

Bruce Cohen's avatar

I doubt the snakes are happy either.

Bruce Cohen's avatar

I take it you weren’t around for the dotcom bust or the Great Financial Freeze. Financial bubbles are always a risk with a buildup of investment beyond a reasonable expectation of returns. FOMO doesn’t usually return much profit.

skierpage's avatar

I was around for the 1929 stock market crash and both Great Britain's 1840 railroad mania and USA's railroad panic of 1873. OP didn't say the eminently defensible "Nobody's making any money" or "This is obviously a bubble." They said "the output is zero", which is ridiculous exaggeration. If you make blanket absolute statements, you're probably lying.

Aaron Turner's avatar

"If we are victorious in one more battle with the Romans, we shall be utterly ruined."

- King Pyrrhus of Epirus, 279 BCE

Oleg Alexandrov's avatar

This is not a zero-sum game. There's a lot of money to be made even with incremental improvements, and the products are getting more useful.

Herbert Roitblat's avatar

I agree with the thrust of your post. Far from winner-take-all, I would suggest that there is no winner (mostly what you said) and there is nothing to be won from generative AI. These models have asymptoted. Did anyone notice that OpenAI brought out ChatGPT 5.2? Did it make any kind of real difference to anyone?

Frankly, what worries me about regulation is not controlling the technology, it pretty well limits itself. There are still people promising that with just a little more scale, the technology will cross a chasm and become intelligent in the same way that Pinocchio desired to become a real boy. Well, there is no Good Fairy to help OpenAI. A very different approach to AI is needed to be more than a token guesser.

What worries me is artificial and natural stupidity, not artificial intelligence. People attribute anthropomorphic properties to word guessers. They think that they are intelligent while the incumbents spend millions of dollars a day hiring "consultants" to write down their thinking patterns so that the models can repeat them. It is the people doing the thinking, not the machine.

There are vulnerable populations (maybe even most of us) that need protection from the unscrupulous use of AI. There are unintended consequences to pretending to be intelligent and caring, but this is pretense. The regulations are needed to protect against the misuse of these machines, particularly their deceptive misuse. The technology itself, now and for the foreseeable future, is simply not the source of the harm that misuse can cause. And I'm not even talking about the environmental impact of all of this scaling effort. Those are real and present dangers. The possibility of SkyNet is not.

Jan Steen's avatar

What kind of regulation would be desirable? I can think of myriad things, but let's single out just one for now. I hate it when a chatbot is made to impersonate a human being. Like this: "I am Steve. I am here to help. What is your problem?"

There is no 'Steve' there. It's a lie. If you ask me, it should be made illegal to suggest that you are connected to a human being when in reality you are interacting with a computer.

C. King's avatar

Jan Steen: What you say about lying and a person being there (or not), . . . and I feel that emptiness all the way down to my toenails . . . is the core of the problem; and then it branches and whirls and twists out from there like a drunken Jackson Pollock painting.

And then it does some true good in many circumstances. But wait a minute . . . that's exactly why we need to put in place some serious and intelligent human control and regulation.

But the tech power brokers act like teenagers whose dad won't give them the car keys, just to be mean, of course, and not out of knowing what is right--for them and everyone else. Like the Trump circus with their lower-court judges--how dare they think they can control me--as if it has to do with them-against-someone-else, instead of doing what is good for everyone involved. (That ultra-subjectivity is evident in almost every article I read, particularly from the right.)

We got this way from a series of oversights and events that have been documented over and over again by historians and educators . . . it didn't happen in a vacuum, it's decades and so generations old now, and so it's going to take some vision accompanied with spine and power to get us onto a better way forward. I keep looking . . .

Jeremy Bickel's avatar

Don't keep looking where the so-called blunders keep happening.

It's not silly antics that could be fixed by your diligent investigation or best actions. It's purposeful manipulation, wrapped in a facade of ineptitude.

^ Look ^

| Elsewhere |

C. King's avatar

Jeremy Bickel: Of course, not my "diligent investigation or best actions."

I was referring to the emergence of leaders who seemed to have come forward in the last centuries to steer correctly, even as many still have their own sets of flaws.

Also, I do believe in tightening current laws and their applications--but I don't want us (particularly in the U.S.) to get to where we open reeducation camps, or undergo genocide, or start throwing "dangerous" people out of high-rise windows, or sink into a nihilistic attitude. On the other hand, we know that people do undergo change. Setting the conditions for change to occur is no guarantee, but it is probably a good thing to do, and each of us has to do that in our own sphere of experience.

What would you suggest?

Jeremy Bickel's avatar

Language is tough, sometimes. Our best efforts - yours, mine, and the politicians' - are barely useful. We fall down, and many people who do don't get back up again! And so, when we try to help, we often hurt, and when we try to restrict, we're often helping our adversary by closing a door we'll need at some moment out of our present line of sight (but our adversary, standing over there, both saw it and got ready!). We're steered, because the forces of this world are not flesh and blood, nor do they have our own high level of limited foresight and knowledge.

I suggest running to Master Jesus. I always do. And these particular problems - AI and politician's greedy, power-hungry, earnest, well-intentioned thoughtlessness (and more overt crimes) - are related to beliefs about actions and the delusion that happens because of our driving lusts.

I remember a certain British prince from a couple decades ago or so saying that if he were reincarnated, he'd want to come back as a killer mosquito to kill off most of the world's population; he was probably being a petulant teenager who thought his outrageous statement aligned overall with what he was, indeed, being taught.

Our righteousness doesn't work. Our insight is too limited. So we make our own beings to serve us, and we will serve them as idols to our own genius; we'll think we're very advanced toward our awokening to Enlightenment about how we can surely replace God's authority if we put our minds to it.

I think that's exactly what all this is about. It's dangerous and has the potential for great good, both. I intend to use it in line with my God's direction; so do Satanists.

What do you think?

C. King's avatar

Jeremy Bickel: Well, we do what we can.

Jon Rowlands's avatar

The parallel to cold war arms race hastening the collapse of the USSR is new and disturbing.

Bryan McCormick's avatar

I posted Gemini's analysis of its own catastrophic model failure, prompted by some sort of patch put in on December 4. It basically nailed it, even naming the only winner in this race that isn't really one: DeepMind. We already knew this. There are only one or perhaps two people who really stand to lose everything if this race for money dominance does not work -- Sam Altman for sure and, to a lesser degree, Jensen Huang. I am probably being far too cynical when I say that these two people may be driving policy for the rest of us solely for their own selfish, very short-term, greedy ends. The very simple math is that none of this adds up to anything sensible, and they have both been trying to outrace the markets before they figure that out. Almost like it's their own raid on Nakatomi Plaza's vault. The fear mongering about the AI race is nothing more than distraction -- the helicopter on the roof. Ho Ho Ho.

AI Governance Lead ⚡'s avatar

You’re not wrong on this. I feel that Jensen and Sam have been trying to ‘make the market’ more so than corner it.

Fukitol's avatar

Yeah, well, if you're fully bought-in to the singularity cult idea that LLMs are soon going to start self-improving and then follow the plot of every pop culture sci fi story ever, the "winner takes all" argument makes sense.

The line just goes up bro. It always goes up. Ignore the fact that it's been going mostly sideways. That just means it's winding up for an even bigger go up event. Can I get you another cup of kool aid? Did you want the red or the blue flavor?

C. King's avatar

Fukitol: Also, given what we know, and don't know, about human consciousness and development, coupled with the great difference between human and AI self-improvement absent that knowledge, we'd better think about, e.g., Frankenstein's ability, or even tendency, to self-improve--what might we expect of that scenario?

Fukitol's avatar

That's funny, because Frankenstein's monster is yet another iteration of the golem myth, like all robot/AI stories. All we have *ever* guessed about what would happen if we created life in our own image is "it will go terribly wrong, somehow."

But, these stories almost invariably assume independent goal formation in the golem(/etc.). Being incorrigible anthropomorphisers, we just assume the monster has a will and needs of its own, especially if it can talk.

The monster we've actually created has no such thing, and for all the hand-wringing that it will magically manifest a will and what might happen if it did... IRL nobody has any idea how to give it one if we wanted to. LLMs categorically cannot form goals without direction and can't pursue them without constant guidance, else they decohere destructively. Witness all the "for the first two weeks productivity was off the charts, then it deleted our production databases and left a suicide note" stories about agentic programming.

I'm not saying it's impossible. I don't know whether it's possible. I just don't assume it'll happen automatically, because I see no reason to assume that. In the meanwhile, it's premature to guess what some hypothetical LLM+ would do with self-improvement. We can't even begin to ask "what does 'better' mean to this thing."

Bruce Cohen's avatar

I’ve been around the AI research community long enough to have learned how little artificial neural nets are like organic neural nets, and how unlikely it is that just throwing neurons into a bowl will produce any difference in function. Scaling didn’t work well with CYC, and there’s quite a lot of evidence it’s not working for LLMs.

C. King's avatar

Fukitol: "What does 'better' mean to this thing," indeed. And even for us, there is that pesky "thisness" factor where "better" is more often than not, dependent on the unknown of an emergent complexity of several levels and gradients of human understanding and history. This leads me to think that those here who are talking about getting the right "fit" to needs that actually can be fulfilled have it right. The so-called bubble, then, is made of pipe dreams rather than realistic expectations.

Oaktown's avatar

So glad you're finally getting the attention you richly deserve, Gary, particularly from people like Steve Eisman, who gave you the perfect moniker: AI realist.

Cheers!!

Nisal Periyapperuma's avatar

Thanks for cutting through the noise and the propaganda!

Erick's avatar

I think you underestimate the appetites of US and China. They're competing for the global market. Whoever gets the Americas, Asia, Europe etc. online first with either US or China's tech becomes the dominant player in AI for decades to come.

John Michael Thomas's avatar

I also noticed this fallacy immediately when I read it.

And I agree, this is the kind of belief that I'm not sure anyone would adopt unless there was someone intentionally hyping up the FUD for their own benefit. It strains credulity to think that there are any competent business leaders who aren't extremely aware that winner-takes-all just isn't a thing in modern business.

Trump seems to be somewhat susceptible to this kind of manipulation. I don't think he's unique in this by any means; it literally always happens. But because he's so aggressive in pursuing his plans, it means that the same manipulative voices that might have small influence in other administrations end up with outsized impact now.

And though I expect that it will be corrected at least some by Congress, there will be some fallout until then. (In fact, the EO may force Congress to finally act on a national AI legislation they've been dragging their heels on; by making an end run around them, Trump may have just pushed them to defy him).

C. King's avatar

To: John Michael Thomas: Read the above link (in my post) to a brief TRUTHOUT online magazine article--it talks about the hordes of lobbyists hired by the big-tech people to go to the recent big meeting of State representatives. Congresspeople and the people of the USA (and their wants and needs as citizens of this democratic culture) are literally being drowned by the power of cash, promises, and other bells and whistles of the tech industry moguls.

toolate's avatar

I think you are missing the forest for the trees

AI is being used as part of a global surveillance network and information control network.

China race is a ruse.

Larry Jewett's avatar

“Open IAs”(Intelligent Agents)

IAs hide

In open sight

Hence elide

A freedom fight

Agents do

Their job for free

Hiding view

Of you and me

Viewing biz

Is China cause

China is

As China does

C. King's avatar

toolate: It's not the money but the huge political difference (at least up to this point in time) between China and the USA, at least insofar as "we" are still a democracy, which isn't a given as it used to be. Surveillance, surveillance, surveillance . . . as you also refer to in your note, is the extant problem, which is a given for China both as their political background and history of quasi-totalitarian control, AND the fact that (from my understanding of it) China is fundamentally tribal in a way that civilized democracies become systematized to break away from (the negatives of tribal forces).

On those two points (political totalitarian background and tribal ideas about who deserves to live and who doesn't), China is a solid centuries-old problem (as is Putin). On the other hand, American money brokers and their historical/political ignorance are the problem of our time in democratic cultures, which have (by definition) lost their way in education for "keeping" democracy healthy.

If so, the power brokers in China still see themselves as politically in a zero-sum game--my tribe or yours which, again, impacts on surveillance, not to mention everything else. None of it good for the US or other State actors who lean towards open cultures and democracies, and who understand the basic conflictive movements that are happening globally between tribal and civil orders (human rights, rule of law, etc., and their potential loss.) Those who grew up in democratic "air" are apparently and ignorantly selling not only the chickens, but the farm that grew them.

One odd thing is that cultures based in freedoms, civil rights, and diversity seem to produce the creative spirit that China seems not to have, knows it doesn't have, and apparently remains envious of. Though that also seems to be changing, I suggest, insofar as the global information culture (and with it, that enhancement of spirit) has reached into every mind on the planet, and in many cases it has tended to break down the totalitarian consciousness "inherited" by their leaders, which China and Putin and others still push on their people. The revolutionary spirit might be pushed into hiding, but it never really goes away. (I think it is our one abiding hope.)

If I am correct in this matter, the upshot of that is the irony that China (insofar as it remains on the negative aspects of a tribal top-down vector politically--i.e., over-control of its citizens), wants what in fact arrived on the scene via the positive aspects and sources of a free culture (from below upwards) which of course is no bastion of perfection either.

This is quite general and comes from a basic dialectical vector analysis. The differences have nothing to do with being "Chinese" or "American" but rather with being human, and human beings are influenced deeply and intellectually by their political culture and its history. It's also a problem of confusing one's cultural history with one's political history, where, of course, these are intertwined and cannot be otherwise, but they are still not the same thing. One can love what everyone understands as Chinese or American culture, while understanding the dialectics of politics as an entirely different and dangerous animal.

Just a few thoughts on too-late's and others' notes here, and if I understood what they mean.

Kevin Cahill's avatar

Actually, Trump's Russia policy is better than that of the Democratic party.

Oaktown's avatar

" ... because of the speed at which chips like GPUs (a key component of that infrastructure) depreciate, it may be that the real winner is whichever country doesn’t overextend itself to the point of financial ruin, in a foolish effort to win a race that can’t be won.

"All the more so if LLMs turn out to be a dud, or if LLMs are replaced by smaller, more efficient systems that don’t demand such immense amounts of infrastructure."

I think we saw a preview of this with the release of DeepSeek. It seems excessive riches more often result in waste, bad investments, and lazy thinking, just as the old saw implies: "necessity is the mother of invention."