72 Comments

"Nvidia's biggest customers delaying orders of latest AI racks, The Information reports"

https://finance.yahoo.com/news/nvidias-biggest-customers-delaying-orders-153930803.html

The report is about hardware problems, but here is an easily falsifiable prediction: the vast over-provisioning of redundant AI training datacenters worldwide will peak in 2025, similar to how mass-production factories were overbuilt in the 1920s and Internet infrastructure was overbuilt in the 1990s.

It will not end well, and the coming crash will be dubbed the "AI Bubble".

I do wonder what will come after.

We saw the internet infrastructure left behind become the basis for cloud services in general. What will become of these super power-hungry AI data centres if they’re not working on…AI stuff?

Microsoft and OpenAI want to take down Google's ad monopoly by moving people from search to generated answers. For that, GenAI suffices, as humans are easily fooled by 'good language' with 'okay' reliability. They need that energy-wasting compute for it. Hello, more natural disasters.

The stupid thing is, Google has fucked up their search engine so badly that the only thing MS would have to do to steal Google's audience is recruit all the search engineers who quit or were fired from Google over the predatory new approach to search, and tell them to turn Bing into the best possible search engine they can, with no management interference.

Pigs will be spotted doing Mach 2 over O'Hare first, but I'm pretty sure that would work.

Truth.

Indeed, generative AI inference - answers for customers - requires a tiny fraction of the oomph that one-time training of the large language models requires. So yes - for many years to come, cheap search can become better informed with generative AI enhancements.

Actually, the price of more efficient training of transformers (still very expensive at scale, even parallelized) is more expensive inference (generation), which in turn costs far more per query than Google search results. How that is going to be paid for, I have no idea; in the reasoning cases, even the $200/month subscription to OpenAI loses them money. We simply don't know yet how the economics is going to turn out.

And "better informed" is not necessarily the case. But *more convincing* and *easier to consume* results (regardless of actual quality, good or bad) will probably win people over.
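
One back-of-the-envelope way to see the cost gap (every constant below is an assumed placeholder, not a measured figure) is to compare the price of a generated answer with that of a classic search query:

```python
# Illustrative only: every constant here is an assumption, not a measurement.
GPU_COST_PER_HOUR = 2.50        # assumed hourly price of one inference GPU
TOKENS_PER_SECOND = 100         # assumed generation throughput on that GPU
TOKENS_PER_ANSWER = 500         # assumed length of one generated answer
SEARCH_COST_PER_QUERY = 0.0002  # assumed cost of serving a classic search query

seconds_per_answer = TOKENS_PER_ANSWER / TOKENS_PER_SECOND
cost_per_answer = GPU_COST_PER_HOUR / 3600 * seconds_per_answer

print(f"generated answer: ${cost_per_answer:.4f} per query")
print(f"classic search:   ${SEARCH_COST_PER_QUERY:.4f} per query")
print(f"ratio: {cost_per_answer / SEARCH_COST_PER_QUERY:.0f}x")
```

Under these made-up numbers a generated answer comes out roughly an order of magnitude more expensive than a search result; the real ratio depends entirely on figures the labs do not publish.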

Pivot to military contracting: censorship, propaganda and surveillance industry. Hopefully just a (dystopic) thought.

nah, that's what is going to happen. the only two things LLMs are reliably good at are... distillation of text and sentiment analysis.

go figure

On the other hand, LLMs are awesome at regurgitating knowledge and analysis on almost any subject, given a prompt with perfect context and low reasoning expectations.

yeah, i call that distillation of text

but also, due to the hallucination problem, you can't ever truly trust it, which is a big issue

Draw a lesson from the overbuilt Internet infrastructure from the 1990s.

The 1990s dot-com Sun servers got junked eventually, but in the early 2000s Google bought up the many, many miles of unused "dark" fiber-optic intercity communication cables at fire-sale prices for later good use.

Snark: some crypto king will design a proof-of-work algorithm that requires CUDA and Nvidia hardware to mint coins.

Ultra-super-duper-mega-High-Definition GTA 6.

Yep.

Reportedly, real-time image generation is becoming competitive with, if not better than, images prepared by ray tracing for gamers.

Robots (apparently now called "physical AI") are mechanical wrappers around agents, which are themselves (relatively simple) wrappers around some kind of machine cognition. The utility of the entire stack is completely dependent on the utility of the underlying machine cognition, and, as a foundation for machine cognition, LLMs (today's "machine cognition") are *severely* flawed. I find it hard to believe that this isn't immediately obvious to everyone in the AI world. But apparently not.

You clearly do not have a financial stake in LLMs.

not only is it not completely obvious; when you talk to them, they are convinced we will have something 'super' intelligent 'soon' that will 'take all the jobs' and 'destroy the planet'

i just keep asking my friends who work in this field: if these things are true, then why doesn't AI add any benefits to my own life personally (i have been trying), or more broadly to the economy?

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair

The other problem with agents, besides hallucinations, is **interoperability**. This is now the domain of communication, not just computing, and standards/protocols matter, as in the 7-layer OSI model. We need an "OSI model" for layers 5-6-7. There are developments there for sure, but the companies will need to collaborate a la IEEE, rather than winner-take-all as is typical in computing.
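
To make the interoperability point concrete, here is a minimal sketch of the kind of shared message envelope an agent-to-agent protocol would have to standardize; every field name is hypothetical, not drawn from any existing standard:

```python
# Hypothetical agent-to-agent message envelope: the fields are invented for
# illustration, not taken from any real protocol.
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    sender: str      # globally unique agent identity
    recipient: str   # target agent or service
    intent: str      # standardized verb, e.g. "book", "query", "cancel"
    payload: dict    # intent-specific arguments
    reply_to: str    # correlation id so multi-step exchanges can be threaded

# Interoperability means any vendor's agent can parse this off the wire:
msg = AgentMessage("agent://alice", "agent://calendar-service",
                   "book", {"slot": "2025-02-01T10:00"}, "req-42")
print(json.dumps(asdict(msg)))
```

Agreeing on the envelope is the easy part; the OSI-style layering above it (shared vocabularies of intents, negotiation, error semantics) is where the IEEE-style collaboration would actually be needed.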

It's pretty clear, from the first headless-in-sand image, that Dall-e was well named. Salvador would be proud, or livid, one of the two.

One wonders, not enough for me to try, but let me know if you do, just what "topless in the sand" might look like. I'm guessing it could pass off this same image for that prompt, as well.

Honestly, that first illustration is the best piece of AI art I have ever seen, albeit unintentionally, if you consider the prompt part of the painting. Magritte couldn't have come up with a better answer if asked to paint to that prompt.

Except the prompt was to draw humans, and there is not a single pixel of humanity in the image.

No humans, no heads (in sand or anywhere else) and 4 entities instead of 3.

The only thing right was the sand part.

1 out of 4 correct is a fail.

The only reason the generated image is “unexpected” is that it’s just wrong.

It would be virtually impossible to guess the prompt from the generated image.

— for a person, at least. Maybe a bot with bizarre “logic” could guess it, but from a meaning standpoint, the head in the sand joke has been completely botched.

I suggested its namesake instead, Dalí. But I can also see a bit of Magritte in there! Well spotted.

You either need common sense, or you need to create a microworld. I know someone who uses GPT for his home automation. The prompt is written such that the model can really only produce a very small set of commands for the home automation, and the home automation itself is pretty robust under nonsense. That way the domain is narrow enough that the system becomes reliable. Reminds me of a likewise successful domain shrinkage in the 1990s.
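
A minimal sketch of that microworld pattern (the command list and the `ask_llm` helper are hypothetical stand-ins, not the actual setup described): constrain the model to a whitelist and treat anything else as a no-op:

```python
# Hypothetical microworld wrapper: the LLM may only emit whitelisted commands,
# and anything else is discarded, so hallucinations degrade to harmless no-ops.
ALLOWED_COMMANDS = {
    "lights_on", "lights_off",
    "heating_up", "heating_down",
    "blinds_open", "blinds_close",
}

PROMPT_TEMPLATE = (
    "You control a home. Respond with exactly one of these commands and "
    "nothing else: {commands}.\nUser request: {request}"
)

def interpret(request: str, ask_llm) -> str | None:
    """ask_llm is any callable str -> str, e.g. a chat-completion wrapper."""
    prompt = PROMPT_TEMPLATE.format(
        commands=", ".join(sorted(ALLOWED_COMMANDS)), request=request)
    reply = ask_llm(prompt).strip().lower()
    return reply if reply in ALLOWED_COMMANDS else None  # reject nonsense

# Examples with stubbed lambdas standing in for the real LLM call:
print(interpret("it's dark in here", lambda _: "lights_on"))     # lights_on
print(interpret("order me a pizza", lambda _: "call_pizzeria"))  # None
```

The reliability comes from the shape of the domain, not the model: even a confidently hallucinated command falls outside the whitelist and does nothing.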

If they're going to apply LLM technology, AI companies need to find problem domains in which truth is not a major component. Unfortunately, they aren't the lucrative ones. Oh well.

Truth is domain-dependent. How to provide honest grounding in each domain will be a lot of work, but I think it is doable for simpler kinds of agents with near-term approaches.

Truths are domain-dependent. Truth is a universal, one with which LLMs struggle. As for truth being “doable”, pretty much everything is “doable” in AI. Truth will not be done soon by LLMs.

Truth (pertaining to the physical universe) is impossible to determine with absolute certainty. There is only belief. All belief is a guess, and every intelligent entity (human or machine) has a different percept history, and a different world model, and makes different guesses.

With that kind of logic, you must work for an AI company. Am I right? ;-) If you are a philosopher or an AI programmer, you realize that "truth" is a belief with some high probability that is always less than one. That's a trite point to make on this thread though. Are you suggesting we substitute "belief with high level of certainty" for "truth" in these discussions? That would be tedious in the extreme, IMHO.

In everyday life, people say "true" and "false" like these are actual, knowable things, which is fine. In the context of the AI world, however, where AI-generated misinformation and disinformation are becoming increasingly serious problems requiring solutions, it is important to have a deeper than merely everyday understanding of these things. There are plenty of people in the AI world (e.g. Musk) who seem to believe that there is some kind of algorithm for determining absolute truth (pertaining to the physical universe), but there simply is not. And I'm afraid it's also not as simple as attaching some kind of uncertainty measure (such as probability) to beliefs, as you suggest, because even these are guesses. To conclude, when speaking in an AI context, one should (IMHO) be more careful with one's language, and not (perhaps inadvertently) imply that absolute truth is knowable, because it is not.

Sure, but LLMs are not telling lies because truth is not absolute. They are telling lies because belief, and its level, are not part of their model. As you point out, it may not be part of Musk's mental model either.

Truth is also a process of refinement. Any single algorithm will fail eventually. Any truth may be subjective beyond a certain point, or there may not exist enough data or models to find it.

So you start with tools that are good at doing certain kinds of work most of the time, then you improve the tools with new approaches, etc., while fully aware that there will never be a perfect outcome.

Not LLMs alone, of course. Truth in each domain needs a model of that domain, such as in math, where a formal verifier is needed.

But it appears easier to first create a draft and then refine and truth-check it than to produce something perfectly true and correct from the start.

If we could computerize that truth-checker you propose, we could use that as a basis for some future AI and dump this silly LLM crap. Let us know when you invent it as we and the world will beat a path to your door.

A truth checker can tell you if something is true or not. You still need to create a solution before you can check it.

How to create a solution is not easy. People usually do it by imitation, based on prior experience and searching around, and then refine it till it is right.
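
As a sketch of that generate-then-check loop (the `propose` and `verify` callables are hypothetical placeholders for a generator and a domain-specific truth checker):

```python
# Hypothetical generate-verify-refine loop: draft first, then let a
# domain-specific checker accept the draft or send back feedback.
def solve(task: str, propose, verify, max_rounds: int = 5):
    """propose: (task, feedback) -> candidate; verify: candidate -> (ok, feedback)."""
    feedback = None
    for _ in range(max_rounds):
        candidate = propose(task, feedback)  # imitate / search / draft
        ok, feedback = verify(candidate)     # e.g. a formal verifier in math
        if ok:
            return candidate
    return None  # no verified solution within budget

# Toy usage: the "generator" emits drafts, the checker rejects odd numbers.
drafts = iter([3, 7, 8])
print(solve("find an even number",
            lambda task, fb: next(drafts),
            lambda c: (c % 2 == 0, "must be even")))  # -> 8
```

The point of the split is exactly the one made above: checking a candidate is usually far easier than producing a correct one from scratch.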

It's so hard to tell what's for real and what is hucksterism. Thanks for the article.

This is all correct. Agents will be huge, and agents will be very hard to get right.

As before, however, principled solutions do not exist.

The solutions used for a household robot to make sure it does not crash and understands its chores are very different from those for a coding agent, for example.

Do you try to write comments that are as banal as possible?

Because you really do get close to the absolute Heisenberg limit of banality every single time. It's quite impressive.

For example, what's your opinion on whether principled AI is possible?

To you, this looks like a banality, yet the issue has been fiercely debated by leading people in the field. The business world is now betting close to half a trillion dollars that their answer is the right one.

The range of opinions on AI is wide. There are the hype people, who claim AI will become sentient in a year and kill us all. There are pragmatic people like myself, with "banal" comments like the one above. And there are skeptics who think this is all a scam.

Where do you stand, and what exactly do you disagree about? As it is, you act like a troll, bringing nothing to the conversation.

This 'Tasks' thing is the worst thing OpenAI has ever released. If they don't recall it quickly, people won't believe the next overhyped release.

Au contraire, they absolutely still will.

I did a research review of the state of agents and, like you, Marcus, concluded that while limited, carefully crafted applications exist for them today, the broader vision will require considerable work: https://www.agentsdecoded.com/p/research-roundup-can-ai-agents-actually

AI art generators do not have a model of the world. They can only do local generalization along the lines of things they have seen. That alone is a big deal, but not enough for AGI.

Try using Google's AI on their Pixel. Ugh.

Amazing that it can't do what Google's Assistant has been able to do for years.

(But yeah, you're right -- this will happen eventually.)

Agents will be able to help with your schedule! Of course, after they have taken your job, and every job you are qualified for, you won't have much to schedule, but still!

Schrödinger's AI capabilities:

it's simultaneously going to be superintelligent soon, destroy the planet, and take all the jobs, and yet it can't even schedule an appointment reliably (but coming soon™)

I'm sorry, but as much as I try, I cannot understand why people would want to become that heavily assisted... Can we not manage the most down-to-Earth aspects of our lives, private at least, but even professional? Elderly people take pride in still being able to do one thing or another that is mundane for younger people, and in the future we would have a lot (?) of young people eager to let agents take care of their lives? What kind of world would that be? And what would these humans do with the "time won"? Prompt GenAI to draw, sketch and create in their place?

I can see AI agents augmenting soldiers' abilities, helping them to stay alive and to fight better, and the same for firefighters... What many don't seem to realize is that once you stop exercising a faculty (be it creativity, alertness, personal organization, ...), you very soon become totally unable to ever perform it again... Besides, given the usual quality and reliability of today's software ("the customer is the beta tester"), I can easily imagine how chaotic this world would become...

It’s people with “solutions” looking for problems in the hope they can make lots of money. To me, none of it looks compelling, or fun, or interesting, or likely to advance the human race or planet earth in any way. Just the opposite in fact.

I feel the same way. And for this latest example, why ask ChatGPT when you have a clock app on your phone? The time you spend prompting it could be spent setting the alarm yourself. As long as you set your date and time right and have your phone on with the volume up, it's going to go off reliably, unlike ChatGPT reminders, it would seem.

I see the potential (long term, once we get a reliable AI technology, whatever it may be) for agents to be an assistive solution. For instance, I have ADD, and maintaining my schedule is a hard task for me. An agent might make that easier. But that's a smaller market than everyone getting a load of specialized agents to do all their routine work, and riding herd on a bunch of agents strikes me as hard work even if they're reliable. Getting agents to work together is going to be much harder than getting individual agents to work.
