33 Comments
Marcel van Driel:

I’m Dutch and approve of this message

Notorious P.A.T.:

Do your people eat something called "Dutch crunch bread"? It is somewhat popular here in the USA, so I am wondering if it really is Dutch.

Paul van Gool:

This time it was no hallucination. It copied information from an unreliable website; that false information was written by a human.

It shows you must always check facts when using AI.

Marko T. Manninen:

It also shows increasing overtrust in GenAI, even though people know it hallucinates. Rush, laziness, and not knowing how to use the tools and validate the output all pile up into errors. Eventually all this unchecked generated material is used to train models. What should we expect?

Larry Jewett:

So, check facts and check for sanity?

Other than that, LMMs are totally trustworthy.

Larry Jewett:

That’s a combination of LLMs and M&Ms

Larry Jewett:

Personally, I prefer M&Ms

Art:

"It shows you always must check facts when using AI" — and we'll land in the endless loop. 😜

Jed Serrano:

Keep rubbing how right you are in their faces, Gary. Someone has to stand up to these technocrats who are doing the orange man's bidding, 1984-style!

Andre Vellino:

The problem is that AIs hallucinate (in part) because they are trained on (in part) false human information. Is it any surprise that AIs trained (in part) on content from Reddit are (sometimes) not telling the truth?

Stephen Schiff:

Exactly. LLMs reflect their training sets, and with O(10^12) items in the set there is no possibility of comprehensive human curation.
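
As a back-of-the-envelope check on that (assuming, purely for illustration, ten seconds of human review per item and 2,000 working hours per person-year):

```python
items = 1e12                # order of training-set size from the comment above
seconds_per_item = 10       # assumed human review time per item
person_year = 2000 * 3600   # seconds of full-time work in one person-year
print(items * seconds_per_item / person_year)  # ~1.4 million person-years
```

Even with generous assumptions, comprehensive human curation is off the table by orders of magnitude.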

Larry Jewett:

There is no cure for LLMitis.

It’s terminal.

Glen:

Most people interact with LLMs over HTTP rather than a terminal.

:D

Sorry, could not resist.

Larry Jewett:

So, it’s “I-P-in-al”? (i.e., urinal)

Gerard:

Haha… In a way, this is good news—clear examples of the obvious for those indifferent to AI hallucinations.

The house of cards built on AI hype and reckless thinking keeps marching forward. After “AI agents” and “thinking models,” the latest buzz is around distilled models (mini or lite)—adding yet another layer of failure, as if algorithmic bias wasn’t already enough.

Welcome to the equivalent of using the trainee instead of the “master” for high-volume, high-stakes applications. The justification? Cost savings and efficiency!

https://ai-cosmos.hashnode.dev/understanding-the-risks-behind-distilled-ai-models
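
For context, "distilled" here means training a small student model to imitate a larger teacher's output distribution rather than the raw data. A minimal sketch of the classic soft-target loss (NumPy; illustrative only, not taken from the linked article):

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions (the Hinton et al. soft-target loss)."""
    p = softmax(teacher_logits, T)    # teacher's soft targets
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, 0.2]             # hypothetical "master" logits
student = [2.5, 1.2, 0.4]             # hypothetical "trainee" logits
print(distillation_loss(teacher, student))
```

The student can only ever approximate the teacher, so the teacher's mistakes are inherited and the approximation adds errors of its own - which is the "trainee instead of the master" risk the comment describes.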

Larry Jewett:

Yes, training bots with bots (with bots (with bots?))

It’s a bot-tomless pit

Larry Jewett:

Also known as “bot-tle collapse”

Larry Jewett:

Shitstillation: boiling off all human “contamination” until only pure botshit remains.

To be bottled up and sold as “The Magic Elixir of Reason”

Larry Jewett:

The real question is would you EAT cheese made by a bot?

Notorious P.A.T.:

I'm sure it would contain the minimum daily requirement of glue.

Larry Jewett:

And crushed glass

Larry Jewett:

Bot cheese?

David Z. Morris:

10/10, no notes

Larry Jewett:

Beyond showing that LLMs are unreliable for facts, the Google “AI generated” ad demonstrates something else beyond any doubt: LLMs can and do plagiarize word for word.

https://www.yahoo.com/tech/google-super-bowl-ad-accidentally-174532928.html

This and the New York Times examples of word-for-word copies should disabuse any reasoning person of the claims by computer “scientists” that “LLMs don’t work that way.”

Stephen Schiff:

Let me guess: they used AI to fact-check the ad.

Larry Jewett:

It’s bots all the way down

Joy in HK fiFP:

Looks like AI might just have become the "(Cheez)Whiz" Kid!

MarkS:

> Me, February 2023: The problem with LLMs is that they hallucinate, and their errors can be hard to catch.

> Google, Feb 2025, almost exactly two years to the day later

Human errors, on the other hand, are often pretty easy to catch :)

Jon Aarbakke:

LLMs are here to stay, tho'

Chara:

Maybe Gemini wants that to be true.

Saty Chary:

Lol - 'exoplanets' v.2.0 [I'm referring to the Bard launch incident] (https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/google-bard-makes-factual-error-about-james-webb-space-telescope)

Strung-together words and actual facts are not always identical. Almost anyone living in the US or Europe (for example) would have caught that - so how did it get all the way to almost being broadcast? Shameful.

Those 'raw' PageRanked tf-idf links don't seem so bad after all, because there is no misleading BS between those links and humans.

EVERY LLM-based service is forever in danger of producing such crap - even 'agentic' ones, except for those that look up curated facts before responding [RAG].
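
Since the comment mentions RAG in passing, here is a minimal sketch of "looking up curated facts before responding" (the fact store, the keyword-overlap scoring, and the prompt format are toy stand-ins; a real system would use an embedding index and an actual LLM call):

```python
# Minimal retrieval-augmented generation (RAG) sketch, not a real pipeline.
CURATED_FACTS = [
    "Gouda is a Dutch cheese named after the city of Gouda.",
    "Cheddar originated in the English village of Cheddar, Somerset.",
]

def retrieve(query, k=1):
    """Rank curated facts by naive word overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        CURATED_FACTS,
        key=lambda fact: len(words & set(fact.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query):
    """Ground the model: instruct it to answer only from retrieved facts."""
    facts = "\n".join(retrieve(query))
    return (
        f"Using only these facts:\n{facts}\n"
        f"Question: {query}\n"
        "If the facts don't cover it, say you don't know."
    )

print(build_prompt("Where does Gouda cheese come from?"))
```

Grounding reduces hallucination but doesn't eliminate it: the model can still misread or ignore the retrieved facts.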

Sufeitzy:

AIs based on GPT models use a parameter called “temperature” to increase or decrease the determinism of prompt responses. There’s no mystery, no magic, no “misunderstanding”, no errors, and no deception.

The sooner people grasp this fact, the sooner they’ll use LLMs in ways that are reliable.

Humans make terrible non-deterministic decisions all the time. Recall the Dove ad in which a Black woman who used soap morphed into a white woman. No AI, no nefarious nameless entities. Just non-deterministic stupidity, and the lack of someone in authority double-checking the work.
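
For what it's worth, here is a minimal sketch of what temperature actually does during sampling (plain NumPy, illustrative only, not any vendor's implementation): the logits are divided by the temperature before the softmax, so a temperature near 0 approaches greedy, deterministic decoding, while higher values flatten the distribution and make outputs more variable.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Temperature-scaled sampling over a toy vocabulary.

    temperature -> 0 approaches argmax (deterministic);
    temperature > 1 flattens the distribution (more random).
    """
    rng = rng or np.random.default_rng()
    if temperature <= 0:
        return int(np.argmax(logits))        # greedy: fully deterministic
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                   # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]  # made-up logits for a 4-token vocabulary
print([sample_next_token(logits, 0.0) for _ in range(5)])  # always token 0
print([sample_next_token(logits, 1.5) for _ in range(5)])  # varies per run
```

Note, though, that temperature controls randomness, not truthfulness: a model at temperature 0 is deterministic but can still deterministically repeat a false source.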

Comment deleted (Feb 10)

Larry Jewett:

That’s because they are all bots
