Sometimes history does repeat itself. Witness:
Me, December 2022: The problem with LLMs is that they hallucinate, and their errors can be hard to catch.
Google, Feb 2023: No problemo …. oops.
§
Me, February 2025: The problem with LLMs is that they hallucinate, and their errors can be hard to catch.
Google, Feb 2025, two years later almost to the day: No problemo …. oops.
So much for exponential progress.
I’m Dutch and approve of this message.
This time it was no hallucination: the model copied the claim from an unreliable website, where the false information had been written by a human.
It shows that you must always check the facts when using AI.