102 Comments
Gary Marcus's avatar

it’s a huge huge negative signal that any other prospective investor has to take very very seriously

Oleg Alexandrov's avatar

Investors are responsible for their investing. Many of them are fabulously rich, because they are good at their game. The early ones will do well.

Alex Tolley's avatar

But many investors are regular people with mutual funds and 401Ks. They rely on fund managers. But the industry is very herdlike, as we saw in prior deep downturns - 1987, 2000, the 2008 financial meltdown... So much of the S&P's growth and the wider economy is built upon the success of the 7 [?] giant tech firms, predicated on AI having a big ROI payoff. The recent good results of MSFT and FB are attempts to show that they were fine and would be unaffected by any AI bust. I am not sure that is true, as these corps require a healthy economy to stay profitable.

Oleg Alexandrov's avatar

How to properly regulate this is the perpetual challenge. Investors are advised to invest in index funds, for the long term, and have a balanced portfolio. Historically the stock market delivers, but it can be rough.

Oleg Alexandrov's avatar

What is amusing to me is that the do-gooders with no skin in the game and who were convinced for a long time this was all a fad are the ones who fret so much about investors.

Amy A's avatar
Feb 5 (edited)

I’ve asked on several articles in The Information for them to lay out the financial risks more clearly, so far with no luck. They reported that Anthropic expects more revenue but also a longer delay to profitability, with no follow-up.

Thomas Schmid's avatar

Well, why don't you read Ed Zitron? He's all about "where's the money", in excruciating detail, with sources and citations.

Amy A's avatar

I do! It seems like major tech industry publications should catch on at some point and ask good questions, no?

Thomas Schmid's avatar

A very good point. Nobody dares to rock the boat? Or does anybody who might dare to ask good, simple questions get reminded by superiors about who is paying his/her salary?

Terry Bollinger's avatar

The degree and intensity with which this lot has managed to put off the inevitable collapse of their empire is quite remarkable. The collapse is inevitable due to their product being intrinsically incapable of doing what the majority of investors think it is capable of: True cognition, versus clever but extremely unreliable retrieval of largely stolen intellectual property.

Even this week, NVIDIA's CEO has been blabbering somewhat incoherently -- and in language worthy of a Heaven's Gate devotee -- about how important it is for big investors and entire governments to recognize that LLM AI is still an itty-bitty, poor baby that folks must coddle with trillions more in investments to help the poor thing reach its full potential. After factoring out the deception of casual intellectual property theft, that full potential still seems, even after several years of promising a correction, to be nothing more than the astonishing ability to count arms and legs correctly.

To paraphrase an old saying, why has this scam "Boldly gone where no scam has gone before"? That's easy: What they created, starting in particular with the Attention mechanism, is the full and global automation of the confidence artist: the ability to bring up just the right facts and with just the right confidence to persuade others, pretty please, to hand over their bank accounts just one more time, since this time, surely, without doubt, their latest contribution will finally result in the release of the promised fountains of gold.

Craig Yirush's avatar

Love that - unreliable retrieval of stolen IP not true cognition!

La Rue Rue's avatar

The Great Reset is very much a continuation of the original fascist project under Benito Mussolini, in which 20th century industrial plutocracy sought to accelerate its production by reshaping living beings into regimented and obedient units of human capital.

Cranky Frankie's avatar

The proponents say that every clock cycle is sold. Every bit of new capacity is spoken for, they claim. This differs from the fiber overbuild in the 2000s where like 5% of the capacity was producing revenue with the rest dark.

In an interesting bit of news, Amazon has produced a "template" tool for businesses to implement AI where the biz has no expertise in data mining. I assume this will grow, market wise, to a front end service model as a bolt on enhancement to whatever your enterprise does. The consultants selling the middleware will do a land office business.

Example: High rise office building looking to reduce utility expense hires a firm to install sensors and controls then turns the whole thing over to AI. It'll rise or fall with provable ROI.

Terry Bollinger's avatar

The difference between the fiber overbuild in 2000 and the server farm overbuild in the mid-2020s is that LLM provides the most effective way to waste computing energy, processing, hardware, and human mental attention ever devised. This strategy ensures maximum use of any new capacity constructed, even while simultaneously ensuring that none of this added capacity accomplishes anything dramatically new.

The same data one can find for free in Wikipedia and other sources is encoded in a fashion that triggers the human gambling and gaming reflexes: How can I design just the right prompt to access the Infinite Intelligence at the end of the tunnel, and thus be showered in riches while others struggle to find the right Key to success?

Alas, like the lawless game shows of the 1950s (e.g., The $64,000 Question), all of the higher-level prizes are bogus. Ask for a game plan to make you a billion dollars, and, with enough game play — with enough prompt engineering — you might persuade the LLM to return a copy of Apple's already-public business strategy from a few years ago. But what you will never get is the real prize you are seeking: An actual, deeply insightful strategy created by an entity with superhuman intelligence.

That illusion of access to superintelligence is just the con, but it's also one that more than a few billionaires believe -- and want to believe -- with all their heart. For them, especially the older ones, the belief that superintelligence is "just around the corner" has this tantalizing implication that their goal of overnight, superhuman medical advances and them never "really" dying is also just around the corner. The idea of superintelligence at their beck and call tickles their deepest fears and hopes, and so opens their pocketbooks in ways that no ordinary money-only investment could ever achieve. Terence Tao becomes their shining beacon of hope, even though the only custom LLM setup that ever solved a math theorem not previously solved by a human required months of constant attention and correction from the best and most disciplined mathematician in the world.

For most folks, however, this hope triggers a much simpler response: Accessing and running server farms to and beyond capacity, hoping always for something more than just rehashed Wikipedia and Reddit, but never getting it. As long as they never see the actual cost of the power they are burning, they'll keep doing it.

So, in that sense, the server farm overbuild of the mid-2020s is very different from the fiber overbuild around 2000. The new overbuild will also end much more sadly, since the cheap, stripped-down, and temperature-fragile chips NVIDIA shovels into these farms will never last long enough to be repurposed for anything better.

This will not end well.

Cranky Frankie's avatar

By definition not everyone is a genius. Maybe being able to pose marginally crafted queries and get fairly accurate responses, based on things that are already known somewhere - just not by you - will offer value. Yes, the same expertise is available somewhere, but probably for a fee.

An example might be a logistics coordinator exploring different shipping methods. Important factors would be dimensions, weight capacity, possible routes, and other variables. AI might be a decent way to tee up all that information, filtered for ease of evaluation and agnostic as to specific vendors. Will it always be right? Probably not. Will it yield strategies that might not otherwise be considered? Probably. The result might easily be shaving time or cost from product manufacture and delivery.

Small ball but important to productivity.

Terry Bollinger's avatar

I’ve spent most of my life trying hard to accomplish just that: Help develop intelligent machines that assist everyone in becoming more of what they can be. Each of us is unique; each of us is important; each of us has a contribution no one else can make. Computers should always help that potential, not undermine it.

One of my favorite challenges to robotics researchers was this: Can you make a robot that can actually help a small group of isolated people in a dangerous environment? If it can’t help people in a dire situation, what’s the point of calling it a robot? It’s just another tool or vehicle.

LLMs absolutely are able to help folks find useful items. That’s not my point. My point is that the LLM version of helping comes with a sneaky but devastating long-term cost, one much like the cost of believing that an exceptionally knowledgeable and smooth-talking confidence artist has the skills needed to perform actual surgery on you.

Cranky Frankie's avatar

High-powered computing analyzing images stands to improve quality in ways that humans, who suffer fatigue and inattention, might never match.

Ex: cameras in the automobile paint booth continuously scanning for flaws and correcting paint flow and application accordingly. Yes it's a manufacturing process but saving a few reworks or, worse, cars sold where the customer is unhappy, is worth something.

smalltime_eel's avatar

I've felt lately that LLM chatbots are actually just entertainment platforms instead of tools.

Matty's avatar

Sold to whom? None of the AI companies are even close to generating a profit on their AI investments, and they might go bankrupt any second.

John Konopka's avatar

This is the great question. At the Alphabet earnings phone call yesterday they said they had ~750M active users of Gemini. No more details than that. What is an active user? Asking a question once a week? Performing a search in Chrome with Gemini? They also didn’t say if any of them were paying customers or just using it for free.

They mentioned the tie in with Apple but studiously avoided any discussion of who is paying for this. Even when asked directly they didn’t answer. The press is suggesting that Apple is paying. I wonder if Alphabet is paying Apple for access to ~1.5B customers who will use Gemini and maybe be exposed to ads.

It is a massive puzzle that supposedly smart people are spending incredible sums on something that might never pay out.

Alphabet says they will spend ~$185B this year on CAPEX. I read that Apple will spend ~$13B this year on CAPEX. $13B is a huge amount of money, but it seems tiny by comparison. Makes Apple look like misers.

Terry Bollinger's avatar

The need to quote impressive-sounding but empty stats on LLM adoption is driving a remarkable number of infuriatingly annoying product updates, such as Gmail suddenly terminating its useful and convenient automatic email categorization feature unless you “choose” to turn on their new summarization, which then also gets explicit permission to “learn” from everything you write or read.

It wasn't that long ago that weaponized intimidation to get hold of your most personal data was considered bad and even felonious behavior. Ah, those were the good old days!

Thomas Schmid's avatar

"they said they had ~750M active users of Gemini"

Well, they shove it atop any and every Google search we do, so no wonder the numbers get so high. But I mostly just give it a quick glance and then try to find a good answer to my search below the ads. Quoting Ed Zitron: if Google hadn't deliberately worsened its search capabilities since 2020, none of this LLM-hype-based expenditure would have happened. Me directly "paying" them for this debacle won't happen, that's for sure.

Cranky Frankie's avatar

I don't know who the customers are. When the honchos of the datacenter world are interviewed this is their claim. Maybe the revenue is just cash being burned by AGI wannabes, consistent with the circular investment/revenue diagram above.

The socials have actual users and take in exabytes every second from them. They seem profitable. Their data has to go somewhere and be served back from somewhere - maybe from these datacenters, at least for now. They are building their own, so they must be in a hurry to add capacity.

Jim Ryan's avatar

What is going to happen when people need to start paying full price for all the "tokens" from things like Claude? They are just like a drug dealer: keep the price low or free until people are hooked, then jack up the price.

Denver Fletcher's avatar

Re the astonishing ability to count arms and legs, they still don't reliably get the number of fingers and toes right.

Pasavel R's avatar

Crystal clear. This is the analysis as an outcome of cognitive intelligence, applied to pseudo-AI based on brute-force next-token linguistic intelligence.

Marc Schluper's avatar

If investors base their bets on unreliable information it's their mistake.

If you want to help the investors, create clarity.

It does not help pointing out that AGI has not arrived, or that true cognition is an illusion. Nor is it helpful to describe what we have now as extremely unreliable retrieval. And no, it's not stolen intellectual property (just as my reading a book on software development and then using the knowledge to develop software does not make me a thief, and you are not a thief because you went to school and learned something you use in your profession).

The retrieval is actually remarkably good. Not perfect, sure, but better than anything we had. (Just ask Google.) Add insanely good pattern matching and pattern completion capabilities and it is pretty clear we can leverage this new technology to improve what we collectively produce.

We can make this case without using fuzzy words like cognition, understanding, intelligence. I have no way of verifying anyone understands what I just wrote. Your employer cannot be 100% sure you understand them, and yet they hired you. We live in a world without absolute answers and yet we are doing fine.

Terry Bollinger's avatar

Heh! It's always the same, isn't it? "It's the fault of those darned humans!"

An absolutely critical part of efficient problem-solving -- I hope you find that phrase less ambiguous than "cognition" -- is extensive, multi-level reuse and adaptation of past solutions. That's been kind of a career-long theme for me, and I still feel like I cannot emphasize it enough. It is also the greatest strength of well-applied LLM systems, with Terence Tao again being an especially interesting (and extremely rare) example. The ability of LLM systems to correctly (sometimes) recognize how some piece of software might fit some casual user's needs can be spectacular and impressive.

What you might want to consider a lot more carefully for the sloppy LLM version of software reuse is that stringing the resulting pieces together using simple probabilities and interpolations -- "Hey, 1000 programmers did this, so I'll recommend it, too!" -- is the software equivalent of giving your DNA a hefty dose of gamma radiation every time you use it to build new proteins.

Mutational reuse of well-designed code with a strong emphasis on making the user as happy and confident as possible about the result works great as a short-term strategy for getting happy bumps. However, as with mutating radiation in biology, the long-term result is pretty catastrophic and always fatal. Eventually, everything collapses -- as Microsoft seems to be finding out this week and last.

Yes, I went to college, and yes, I learned. What I did not do was peek over other students' shoulders every time I took a test. Look carefully at how LLMs work, and that is all they do. They just dress it up with "artificial intelligence" lingo. Also, if you think no theft is going on, you need to look a lot closer at the stories of small content producers who get every shred of uniqueness in their personal products copied and then mass-distributed as information "learned" by an LLM server farm.

Marc Schluper's avatar

LLMs peeking over other students' shoulders? That is a far cry from reality. They actually use a clever way of storing vast amounts of information and enabling retrieval of relevant knowledge, which we can use to our advantage. We have done the same with our limited brains for eons.

And those "small content producers" - I wonder how did they get their "new" information? Wasn't it by studying? Observing? Analyzing? Learning from others? Combining existing ideas? Did that make them thieves? Was that "shred of uniqueness" anything but a gift? Please realize we even use proper language when people "get" an idea - they got it, like a gift. And then they shared it. Good for everybody. And if they get many of these "shreds of uniqueness" we call them gifted.

Thomas Schmid's avatar

"enable retrieval of relevant knowledge" = use statistics and some thrown-in heuristics to determine what the next piece of data would most likely be?

LLMs do *not* understand anything, they make probabilistic guesses.

"We have done the same with our limited brain": with the small but utterly important difference that we *understand* why A and B lead to C. Even our cave-dwelling ancestors observed, learned, and understood cause and effect.
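To make the "probabilistic guesses" point concrete, here is a deliberately tiny sketch (a toy bigram model over a made-up sentence, nothing like a real LLM's scale): it predicts the next word purely from co-occurrence counts, with no notion of cause and effect.

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- purely illustrative.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: raw statistics, no understanding.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most frequent continuation.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" followed "the" most often)
```

Real LLMs do this over tokens with billions of learned parameters rather than raw counts, but the output is still a probability distribution over "what comes next", which is the distinction being drawn above.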

AlexT's avatar

If you read a book and then reproduce its content in exchange for payment, you go to jail, yes? Fair use applies when you use what you learned, not when you duplicate the learning material. So yes, it is precisely stolen intellectual property.

ExplodeMeow's avatar

Yes; I've long used the term "High Scores, Low Ability" to describe it.

Originally I applied it to humans: Hong Kong's rigid education system and insane "cram school culture" have "manufactured" a cohort of individuals with flawless exam results yet utterly incapable of handling real-world work.

LLMs are like memorizing answers using the most efficient method without truly grasping the knowledge.

I could easily recite famous formulas like "E=mc²", but I'm clearly not as brilliant as Einstein.

While not entirely useless, calling such "AI" a "higher intelligence" is fraudulent.

I fully understand why Gary Marcus and others are so furious with OpenAI.

Regarding copyright and theft issues, I'm not an artist... but morally, I unconditionally support them, this is the most legitimate reason to oppose these fraudsters.

Thomas Schmid's avatar

"others are so furious with OpenAI": your points, plus the combination of conning and grifting by hailed "leaders of the industry", applauded by the claqueurs in the press and news agencies.

Stephen Schiff's avatar

Warren Buffett is reputed to have said that one ought not invest in something one doesn't understand. Had investors followed that advice, the value of cryptocurrency and LLM companies would be nearly zero.

Alternatively, one might consider the integrity of the proponents. Let's see: Donald Trump, Elon Musk, Peter Thiel... need I say more?

Tim Koors's avatar

Short explanation of AI circular investment, thanks to GamersNexus:

NVDA has money from GPU sales. OpenAI needs money.

NVDA agreed to invest $100 billion in OpenAI so it could

use the money to purchase/lease NVDA GPUs that haven't been made

to put in datacenters that haven't been built

that will be powered by electricity that hasn't come online

to rent to users that haven't subscribed

to provide features that haven't been developed

What could go wrong?
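The circularity above can be written out as a toy ledger (using only the $100 billion figure claimed in this thread; the rest is an illustrative simplification, not real accounting):

```python
# NVDA invests in OpenAI; OpenAI spends that same cash on NVDA GPUs.
nvda_investment_in_openai = 100e9       # $100B, the figure cited above
openai_gpu_purchases_from_nvda = 100e9  # recycled right back

# NVDA can book the GPU sales as revenue...
nvda_booked_revenue = openai_gpu_purchases_from_nvda

# ...but the net new cash arriving from actual end customers is zero.
external_cash_in = openai_gpu_purchases_from_nvda - nvda_investment_in_openai

print(f"Revenue NVDA reports: ${nvda_booked_revenue / 1e9:.0f}B")
print(f"New money from outside customers: ${external_cash_in / 1e9:.0f}B")
```

Same dollars, counted as "revenue" on one side of the loop -- which is why the circular-investment diagram worries people.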

Richard Self's avatar

Interestingly, no one has mentioned Stargate in the last week, which must surely die now.

Thomas Schmid's avatar

Well, there is still SoftBank, providing some $20B of funding without having it in the first place... so borrow, baby, borrow.

Gas Axe's avatar

As I was scrolling I saw a post that Palantir is also taking a hit.

Jim Ryan's avatar

Good. Hope they go under

Bryan McCormick's avatar

Also - Google is planning to spend HALF its total annual revenue on expanding data centers. Add that to the nearly 100% of annual revenue for Meta. This is classic tulpenmanie (tulip mania) - but at least back then the cows got to eat the bulbs.

Oleg Alexandrov's avatar

For Google it makes sense; for Meta, not as much. We already saw the first victim in xAI, which got bailed out. Google and Microsoft are cloud providers; they win in either case. Little startups who want to strike gold are in danger.

Bryan McCormick's avatar

100% disagree. I would appreciate if you stopped commenting on my comments thank you

Oleg Alexandrov's avatar

Have an argument if you disagree. This is a public forum. No special protection.

Bryan McCormick's avatar

Gary - you know that Sam will go crying for the "promised" backstop to Big Uncle. AKA - US!

Xian's avatar

who will bear the cost of AI’s expansion? Question: does that spike electricity bill have anything to do with the data center? Or not?

Nutmeg's avatar

https://x.com/gnoble79/status/2019519905364631577

I just learned something that should terrify every AI investor:

Six major large language models were tested on real freelance work - the kind actual humans get paid to do on Upwork.

Not homework. Not summaries. Real commercial tasks that generate real revenue.

Building video games. Creating presentations from rough notes. Architectural schematics.

The BEST performing AI completed tasks well enough to get paid 2.5% of the time.

The worst? 0.3%.

Think about that.

Lois Obrien's avatar

Nothing would be more pleasurable to me personally than an AI bubble bursting. I’m so tired of being advertised to and having AI pushed upon me that I neither want nor need in my life. My concern is for the Earth, which has been disturbed by their massive complexes that require large amounts of water. Odd that they keep trying to build them so close to the Great Lakes, Wisconsin and Michigan for example, isn’t it?

Greg Tuck's avatar

The next stage is the postponement or cancellation of data centre builds, which has to follow as the scaling mania fades. Then this stops simply being about tech stocks and starts affecting building and services companies and real estate. Of course, if anyone is actually making a profit from AI services (as opposed to chip sales) this won't happen, but I've yet to see any hard data that anyone is.

David Cotton's avatar

$100bn is nothing compared to Alphabet / Google announcing they're spending ~$185bn on capex in 2026. Some of that will go on their own tensor chips, since they don't use Nvidia chips.

I just don't see how Pichai can expect to ever see a positive return from the amount being spent. He's lost his mind.

Sure they have loads of very profitable businesses and a near monopoly on search advertising.

There's no reason to think "AI" / LLMs / DeepMind / Gemini / Other will ever get close to the revenue needed to sustain this capex. It's also extinction for OpenAI and anyone else, because they can't compete with Google, which is prepared to lose the amount of money it likely will from this spending.

It's enormous economic distortion; most of that $185bn could be returned to stockholders, who might actually have something useful they can do with it!

Oleg Alexandrov's avatar

Google persevered with Waymo, and now we see the fruits of that.

The current AI is a step change compared to before, and no sign the innovation around those techniques will stop. This is a long-term game.

David Cotton's avatar

Waymo isn't profitable, so what "fruits"? Per HN: the economic side of Waymo still makes absolutely no sense to me. Uber, with no heavy capital investment, has been pretty much unprofitable for over a decade. The car companies are low margin. The taxi companies are low margin. Alphabet has something like 30% profit margins.

Waymo's business proposition is 1) to own a very capital-intensive fleet of bespoke cars with no resale value, 2) employ an extremely expensive team of engineers to develop autonomous driving for them, 3) use this incredible capital investment to try to undercut an industry that is already barely or not at all profitable and earn back their 10s of billions of dollars in investment (and 10s of billions more of costs in the future), one taxi ride at a time.

And that's completely ignoring their issue with surges, as their robot car supply is inherently fixed - they either don't intend to handle surges, or intend to have low utilization of their fleet.

Oleg Alexandrov's avatar

This is a long-term investment. Skeptics spent a decade saying it wouldn't work, and will spend a decade saying it won't make a profit.

Waymo does not want to own a taxi business. It wants to sell its software to all car companies. The cost of the extra hardware will go down, the R&D will get amortized, and people will want a car that is safer than driven by hand.

It will take time to get there. This is a massive shift, and nothing is quick.

eg's avatar

The less you know about a subject, the more impressive an LLM seems to you to be on it.

Hence the grift’s success in the vast and deep oceans of human ignorance.

Dane Disimino's avatar

CapEx has to meet the ROIC train soon.