Discussion about this post

Dana F. Blankenhorn

I suspect the Doobie Brothers said it best, years ago: "What a fool believes he sees, no wise man has the power to reason away."

Dakara

I came across something else some might find of interest: a formal proof that LLMs will always hallucinate, even with perfect data and limitless compute.

"... we present a fundamental result that hallucination is inevitable for any computable LLM, regardless of model architecture, learning algorithms, prompting techniques, or training data."

It's included in my recent post on why hallucinations are provably unsolvable: https://www.mindprison.cc/p/ai-hallucinations-provably-unsolvable
