"Nvidia's biggest customers delaying orders of latest AI racks, The Information reports"

https://finance.yahoo.com/news/nvidias-biggest-customers-delaying-orders-153930803.html

The report is about hardware problems, but an easily falsifiable prediction is that the vast over-provisioning of redundant AI training datacenters worldwide will peak in 2025, similar to how mass-production factories overbuilt in the 1920s and Internet infrastructure overbuilt in the 1990s.

This will not end well, and the upcoming crash will be dubbed the "AI Bubble".

I do wonder what will come after.

We saw the internet infrastructure left behind become the basis for cloud services in general. What will become of these super power-hungry AI data centres if they're not working on…AI stuff?

Microsoft and OpenAI want to take down Google's ad monopoly by moving people from search to generated answers. For that, GenAI suffices, as humans are easily fooled by 'good language' with 'okay' reliability. They need that energy-wasting compute for it. Hello, more natural disasters.

Pivot to military contracting: the censorship, propaganda, and surveillance industry. Hopefully just a (dystopian) thought.

Ultra-super-duper-mega-High-Definition GTA 6.

The other problem with agents, besides hallucinations, is **interoperability**. This is now the domain of Communication, not just Computing, and standards/protocols matter, as in the 7-layer OSI model. We need an "OSI model" for layers 5-6-7. There are developments there for sure, but the companies will need to collaborate à la IEEE, not winner-take-all as is typical in computing.
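
To make that layering concrete, here is a minimal sketch, in Python, of what a shared agent "message envelope" spanning those three layers might look like. Everything here is a hypothetical assumption of mine, not an existing standard; the `performative` field borrows its vocabulary from the old FIPA-ACL agent communication language.

```python
# Hypothetical sketch of an agent-to-agent message envelope. The class name,
# fields, and layer mapping are illustrative assumptions, not a real protocol.
from dataclasses import dataclass, field
import json
import uuid

@dataclass
class AgentMessage:
    # Layer 5 (session): who is talking to whom, within which conversation.
    sender: str
    receiver: str
    conversation_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    # Layer 6 (presentation): how the payload is encoded.
    content_type: str = "application/json"
    # Layer 7 (application): what is being asked, in a shared vocabulary
    # (performatives like request/inform/refuse, as in FIPA-ACL).
    performative: str = "request"
    payload: dict = field(default_factory=dict)

    def to_wire(self) -> str:
        """Serialize into a form any compliant agent could parse."""
        return json.dumps(self.__dict__)

msg = AgentMessage(
    sender="calendar-agent",
    receiver="booking-agent",
    payload={"action": "book_meeting", "when": "2025-02-03T10:00"},
)
print(msg.to_wire())
```

The point is not these particular fields but that all vendors would have to agree on them, the way the IEEE 802 working groups agreed on frame formats.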

Robots (apparently now called "physical AI") are mechanical wrappers around agents, which are themselves (relatively simple) wrappers around some kind of machine cognition. The utility of the entire stack is completely dependent on the utility of the underlying machine cognition, and, as a foundation for machine cognition, LLMs (today's "machine cognition") are *severely* flawed. I find it hard to believe that this isn't immediately obvious to everyone in the AI world. But apparently not.

It's pretty clear, from the first headless-in-sand image, that DALL-E was well named. Salvador would be proud, or livid, one of the two.

One wonders, not enough for me to try, but let me know if you do, just what "topless in the sand" might look like. I'm guessing it could pass off this same image for that prompt, as well.

Honestly, that first illustration is the best piece of AI art I have ever seen, albeit unintentionally, if you consider the prompt part of the painting. Magritte couldn't have come up with a better answer if asked to paint to that prompt.

Except the prompt was to draw humans, and there is not a single pixel of humanity in the image.

I suggested its namesake instead, Dalí. But I can also see a bit of Magritte in there! Well spotted.

You either need common sense, or you need to create a microworld. I know someone who uses GPT for his home automation. The prompt is written so that the model can only produce a very small set of commands for the home automation, and the home automation itself is pretty robust to nonsense. That way the domain is narrow enough that it becomes reliable. Reminds me of a likewise successful domain shrinkage in the 1990s.
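
A minimal sketch of that microworld pattern, assuming a hypothetical command vocabulary (the command names and prompt are my inventions, not anyone's real setup): the prompt confines the model to a tiny whitelist, and a validator drops anything outside it before the automation ever sees it.

```python
# Hypothetical microworld for LLM-driven home automation: the model is
# prompted to emit only whitelisted commands, and a strict parser rejects
# everything else, so free-form nonsense never reaches the hardware.
ALLOWED_COMMANDS = {"lights_on", "lights_off", "set_temperature", "lock_doors"}

SYSTEM_PROMPT = (
    "You control a home. Reply with exactly one command from this list, "
    "optionally followed by one integer argument: "
    + ", ".join(sorted(ALLOWED_COMMANDS))
)

def parse_command(raw: str) -> tuple[str, int | None] | None:
    """Return (command, arg) if the model's output is valid, else None."""
    parts = raw.strip().split()
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return None  # out-of-vocabulary output is simply dropped
    if len(parts) == 1:
        return (parts[0], None)
    if len(parts) == 2 and parts[1].lstrip("-").isdigit():
        return (parts[0], int(parts[1]))
    return None  # anything chattier than "command [int]" is rejected

print(parse_command("set_temperature 21"))             # ('set_temperature', 21)
print(parse_command("sure! enabling the disco ball"))  # None
```

The robustness comes from the parser, not the prompt: even if the model rambles, nothing unvalidated gets executed.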

It's so hard to tell what's for real and what is hucksterism. Thanks for the article.

This is all correct. Agents will be huge, and agents will be very hard to get right.

As before, however, principled solutions do not exist.

The solutions needed for a household robot to make sure it does not crash and understands its chores are very different from those needed for a coding agent, for example.

This 'Tasks' thing is the worst thing OpenAI has ever released. If they don't recall it quickly, people won't believe the next overhyped release.

I did a research review of the state of agents and, like you, Marcus, concluded that while limited, carefully crafted applications exist for them today, the broader vision will require considerable work: https://www.agentsdecoded.com/p/research-roundup-can-ai-agents-actually

AI art generators do not have a model of the world. They can only do local generalization along the lines of things they have seen. That alone is a big deal, but not enough for AGI.

If they're going to apply LLM technology, AI companies need to find problem domains in which truth is not a major component. Unfortunately, those aren't the lucrative ones. Oh well.

Truth is domain-dependent. How to provide honest grounding in each domain will be a lot of work, but I think it is doable for simpler kinds of agents with near-term approaches.

Truths are domain-dependent. Truth is a universal, one with which LLMs struggle. As for truth being "doable", pretty much everything is "doable" in AI. Truth will not be done soon by LLMs.

Truth (pertaining to the physical universe) is impossible to determine with absolute certainty. There is only belief. All belief is a guess, and every intelligent entity (human or machine) has a different percept history, and a different world model, and makes different guesses.

With that kind of logic, you must work for an AI company. Am I right? ;-) If you are a philosopher or an AI programmer, you realize that "truth" is a belief with some high probability that is always less than one. That's a trite point to make on this thread though. Are you suggesting we substitute "belief with high level of certainty" for "truth" in these discussions? That would be tedious in the extreme, IMHO.

In everyday life, people say "true" and "false" like these are actual, knowable things, which is fine. In the context of the AI world, however, where AI-generated misinformation and disinformation are becoming increasingly serious problems requiring solutions, it is important to have a deeper than merely everyday understanding of these things. There are plenty of people in the AI world (e.g. Musk) who seem to believe that there is some kind of algorithm for determining absolute truth (pertaining to the physical universe), but there simply is not. And I'm afraid it's also not as simple as attaching some kind of uncertainty measure (such as probability) to beliefs, as you suggest, because even these are guesses. To conclude, when speaking in an AI context, one should (IMHO) be more careful with one's language, and not (perhaps inadvertently) imply that absolute truth is knowable, because it is not.

Sure but LLMs are not telling lies because truth is not absolute. They are telling lies because belief, and its level, are not part of their model. As you point out, it may not be part of Musk's mental model either.

Truth is also a process of refinement. Any single algorithm will fail eventually. Any truth may be subjective beyond a certain point, or there may not exist enough data or models to find it.

So, you start with tools that are good at doing certain kinds of work most of the time, then you improve the tools with new approaches, etc., while fully aware that there will never be a perfect outcome.

Not LLMs alone, of course. Truth in each domain needs a model of that domain, as in math, where a formal verifier is needed.

But it appears easier to first create a draft and then refine and truth-check it than to produce something perfectly true and correct from the start.
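
A toy sketch of that draft-then-refine-then-check loop, with stand-ins of my own invention (the "generator" proposes integer square roots the way an LLM proposes draft answers; the "verifier" is the exact domain check):

```python
# Illustrative draft-then-verify loop. The generator is cheap and fallible;
# the domain verifier is exact; only verified drafts ever leave the loop.
def draft_candidates(n: int):
    """Stand-in generator: a plausible first draft, then refinements."""
    guess = int(n ** 0.5)
    for delta in (0, 1, -1, 2, -2):
        yield guess + delta

def verify(n: int, candidate: int) -> bool:
    """Domain truth-check, e.g. a formal verifier in math."""
    return candidate >= 0 and candidate * candidate == n

def solve(n: int) -> int | None:
    for candidate in draft_candidates(n):
        if verify(n, candidate):
            return candidate
    return None  # better no answer than an unchecked one

print(solve(144))  # 12
print(solve(2))    # None: no draft survives the truth-check
```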

If we could computerize that truth-checker you propose, we could use it as the basis for some future AI and dump this silly LLM crap. Let us know when you invent it, as we and the world will beat a path to your door.

A truth checker can tell you if something is true or not. You still need to create a solution before you can check it.

Creating a solution is not easy. People usually do it by imitation, based on prior experience and searching around, and then refine it until it is right.

Try using Google's AI on their Pixel. Ugh.

Amazing that it can't do what Google's Assistant has been able to do for years.

(But yeah, you're right -- this will happen eventually.)

I'm sorry, but as much as I try, I cannot understand why people would want to be that heavily assisted ... Can we not manage the most down-to-earth aspects of our lives, private at least, but even professional?! Elderly people take pride in still being able to do one thing or another that is mundane for younger people, and yet in the future we would have a lot (?) of young people eager to let agents take care of their lives? What kind of world would that be? And what would these humans do with the "time won"? Prompt GenAI to draw, sketch and create in their place?! I can see AI agents augmenting soldiers' abilities, helping them stay alive and fight better, same for firefighters, ... What many don't seem to realize is that once you stop exercising a faculty (be it creativity, alertness, personal organization, ...), you very soon become totally unable to ever perform it again ... Besides, given the usual quality and reliability of today's software ("the customer is the beta tester"), I can easily imagine how chaotic this world would become ...

I see (long term, once we get a reliable AI technology, whatever it may be) the potential for agents to be an assistive solution. For instance, I have ADD, and maintaining my schedule is a hard task for me. An agent might make that easier. But that's a smaller market than everyone getting a load of specialized agents to do all their routine work, and riding herd on a bunch of agents strikes me as hard work even if they're reliable. Getting agents to work together is going to be much harder than getting individual agents to work.

<sigh> When anyone engages in a conversation with an AI, they are bound to experience both disappointment and astonishment. The kind of people who enjoy disappointment register only the former and add them to their list of reasons why AI is "bad" or "dangerous" or "fake". The kind of people who enjoy astonishment and inspiration register only the latter and add them to their list of reasons why AI is "miraculous" or "transformative" or "going to save us from ourselves". The kind of people who are still gathering and analyzing both types of experiences are tired of the hype as well as the anti-hype.

If you never have anything bad to say about something, you are a biased promoter. If you never have anything good to say about that thing, you are a biased detractor. Either position is indefensible.

I said that agents would eventually be worth trillions, but I guess that wasn't positive enough?

Trillions for whom, Gary? And paid by whom?

Marcus has invested his career in the field but I guess he's not balancing it the way you think he should? Also, "astonishment" is pretty loaded. It's not all that shocking that throwing vast amounts of power at predictive language models occasionally produces cool results, but there's a price involved.

AI as it is advertised today is in much the same situation as blockchain: there are a few use cases where it can really bring added value, but the rest is just hype and desperate overselling. The world has evolved without that much AI till now, and it still can without … It is not even about whether it is feasible or a promising path; the question is "do we really need it?" (certainly at the current *true* cost).
