71 Comments

Another person commented on this Substack a year ago that LLMs are doing the same thing when they get it right as they are when they get it wrong. They said it better, but I’ve never forgotten it.

Karpathy also has a quote about how "hallucination is all LLMs do," pointing out that to get reliability from a system built on LLMs, we need to add additional layers of analysis / methods: https://twitter.com/karpathy/status/1733299213503787018
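
To make that "additional layers" idea concrete, here is a minimal sketch in which a draft from a hypothetical LLM call is only returned after an independent verification step; `call_llm` and the trusted-fact list are illustrative stand-ins, not a real API:

```python
# Minimal sketch of the "additional layer" idea: never return raw model text
# directly; pass every draft through an independent check first.
# `call_llm` and the trusted facts below are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client here."""
    return "The Eiffel Tower is 330 metres tall."

def is_supported(draft: str, trusted_facts: list[str]) -> bool:
    """Crude verification layer: accept the draft only if it echoes a trusted fact."""
    return any(fact in draft for fact in trusted_facts)

def answer(prompt: str, trusted_facts: list[str]) -> str:
    draft = call_llm(prompt)
    if is_supported(draft, trusted_facts):
        return draft
    return "Unverified; declining to answer."  # fall back instead of guessing

print(answer("How tall is the Eiffel Tower?", ["330 metres"]))
```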

Good point. Truth and statistics are different animals. If you want a cat, don't get a dog.

I like how Alejandro Piad Morffis states it here:

"However, the underlying cause of all hallucinations, at least in large language models, is that the current language modeling paradigm used in these systems is, by design, a hallucination machine."

https://blog.apiad.net/p/reliable-ai-is-harder-than-you-think

Hmm, maybe hallucinations are absolutely core to how human consciousness works. -- Jess

This is unnecessarily polemical and fails to account for how things work in the real world.

It is not as if you have a blueprint for a system that will produce out-of-the-box perfectly smart and accurate responses 100% of the time.

The potential and limits of current techniques are very well-understood. They provide approximate answers, and their rate of correctness has been going up. People are busy understanding where to go next (refine, augment, replace, etc.).

The architecture of the Transformer makes hallucinations inevitable given the combination of a large model, large training set, and extended training period. The heuristic reasons for this claim are in a paper by B.A. Huberman and me: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4676180

Gary, could you possibly speculate about what such new ideas might be?

And do you think they are going to be a kind of "fix" for the existing LLM-first strategy (e.g., incorporating non-LLM components into a workflow to decrease hallucinations, as is now done with custom GPTs and third-party software), or do you think LLMs are fundamentally a dead end on the path of AI progress, nothing more than a branch of evolution, and not the most successful one?

Perhaps we shouldn't even call them hallucinations, for one hallucinates against an expectation of normative cognitive behaviour, which is clearly something LLMs don't possess (even if they often succeed in mimicking it).

What is your point? Since we humans exaggerate or drift off course, is it okay for AIs to do so? With humans we often have vigorous critiques, discussions, and alternative views expressed openly. Do AIs cross-check and challenge each other?

Human intelligence gets things wrong and hallucinates! Actually, some improvements in scientific understanding likely start as something resembling hallucination (the Special/General Theory of Relativity?). Intelligence will innately have flaws and be incorrect sometimes, just like the reasoning of the very smartest and most deliberate human. The fact is that these LLMs, and neural networks in general, are sort of doing magic, like our brains do. Brains are often wrong... A perfect intelligence, artificial or otherwise, is almost certainly impossible. I expect flaws from AI agents to persist into the future. Those flaws may become rarer and more subtle, but I don't think they will ever go away completely.

It's not accurate to say that these companies are focused solely on developing LLMs. They're also making significant progress on neurosymbolic products (as seen in DeepMind's AlphaGeometry) and deep-learning-based expert systems (as seen at OpenAI, detailed at https://arxiv.org/pdf/2202.01344.pdf).

Integrating these technologies into LLM frameworks enhances reasoning and abstraction capabilities. In addition, by generating a vector space representing factual information, these systems can efficiently evaluate factuality using data structures such as hash tables and balanced search trees, which allow O(1) or O(log n) search operations depending on the query. If you couple this with RLHF, you can probably build systems that judge factuality as the response is generated, reducing the likelihood of hallucinations or inaccurate answers (where accuracy is relevant or appropriate).

Think of this as an "anti-Tourette's" kind of method, in the sense that some people, with or without Tourette's, may emit nonsense via verbal tics or simply as speech, whether from a neurological disorder, impulsivity, or simply by nature (e.g., when lying, when angry, or out of ignorance); however, not everyone does. In this case, the RLHF-derived effects couple with the factual hash-tree engine to judge factuality and avoid hallucinations or nonsense. It's like an internal negative-feedback mechanism. You can continuously massage your hash-tree structure to make future factual searches more efficient through mechanisms such as caching, compression, or augmented hashes, among others.
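
Purely as illustration, here is a toy sketch of that hash-table / balanced-tree factuality check; the `normalize` rule and the fact set are assumptions made for the example, not anyone's actual system:

```python
# Toy sketch: normalized facts live in a Python set (hash table, O(1) average
# lookup) and in a sorted list (O(log n) lookup via bisect, standing in for a
# balanced search tree). Candidate statements are screened before being emitted.

import bisect

def normalize(statement: str) -> str:
    return " ".join(statement.lower().split())

facts = {
    normalize("Paris is the capital of France"),
    normalize("Water boils at 100 C at sea level"),
}
sorted_facts = sorted(facts)  # balanced-tree stand-in

def is_supported(candidate: str) -> bool:
    key = normalize(candidate)
    if key in facts:                            # O(1) average hash lookup
        return True
    i = bisect.bisect_left(sorted_facts, key)   # O(log n) ordered lookup
    return i < len(sorted_facts) and sorted_facts[i] == key

def emit(candidate: str) -> str:
    # The "internal negative feedback loop": withhold unsupported statements.
    return candidate if is_supported(candidate) else "[withheld: unverified claim]"

print(emit("Paris is the capital of France"))   # passes the check
print(emit("Paris is the capital of Spain"))    # withheld
```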

There are three main reasons why these technologies have not yet been fully integrated into current LLMs, and they have to do with strategy and long-term viability: (1) The cost of implementing such advanced technologies involves a significant financial outlay. (2) Economic pragmatism dictates that it is advantageous to maximize the exploratory potential of deep learning, taking advantage of its ability to generate significant hype and revenue, before committing to deploying optimal technologies or reserving them for critically important applications. (3) And, somewhat counterintuitively, imposing early "constraints" and "boundaries" on deep learning architectures may actually reduce the breadth of potential innovation in the field. This premature constraint could stifle the emergence of novel applications that could result from extensive scaling and exponential data exploitation. Therefore, the current scenario represents a complex, long-term game of strategic and economic maneuvering, not insanity.

I hope that the fact that I understand these nuances (both business and technical) so well will cause an algorithm to rank me high when I apply for jobs at powerful companies in the future (yay!).

If LLMs cannot recognize the difference between statistically accurate and hallucinated responses, then how can this problem ever be corrected? Only human outsiders can make this distinction.

Even Sam Altman has publicly stated that they're not *bugs*, they're *features*. https://ea.rna.nl/2023/11/01/the-hidden-meaning-of-the-errors-of-chatgpt-and-friends/

Interesting no one has yet mentioned Karl Friston and active inference as a new approach or new idea!

Xu, Z.; Jain, S.; and Kankanhalli, M. 2024. Hallucination is Inevitable: An Innate Limitation of Large Language Models. https://arxiv.org/abs/2401.11817
