
With that kind of logic, you must work for an AI company. Am I right? ;-) If you are a philosopher or an AI programmer, you realize that "truth" is a belief with some high probability that is always less than one. That's a trite point to make on this thread, though. Are you suggesting we substitute "belief with a high level of certainty" for "truth" in these discussions? That would be tedious in the extreme, IMHO.


In everyday life, people say "true" and "false" as though these were actual, knowable things, which is fine. In the AI world, however, where AI-generated misinformation and disinformation are becoming increasingly serious problems requiring solutions, it is important to have a deeper understanding of these concepts than the merely everyday one. Plenty of people in the AI world (e.g. Musk) seem to believe there is some kind of algorithm for determining absolute truth about the physical universe, but there simply is not. And I'm afraid it's also not as simple as attaching some kind of uncertainty measure (such as a probability) to beliefs, as you suggest, because those measures are themselves guesses. To conclude: when speaking in an AI context, one should (IMHO) be more careful with one's language and not, perhaps inadvertently, imply that absolute truth is knowable, because it is not.


Sure, but LLMs are not telling lies because truth is not absolute. They are telling lies because belief, and degrees of belief, are not part of their model. As you point out, they may not be part of Musk's mental model either.


Agreed. The fundamental flaw of LLMs is that their "internal world models", such as they are, are extremely poor (very broad, possibly, but also very shallow). Poor world models mean they have (at best) a very poor understanding of any concept pertaining to the real world, and therefore (at best) very weak reasoning abilities pertaining to the real world. And so LLM-based cognition is, and always will be, severely limited.


True, but I would go further than "poor world models", since that leaves open the possibility that they will get better with more training data. The point is that these world models were never designed to model truth in the first place. That they produce truth more often than falsity is simply a by-product of the fact that the world's text they are trained on is biased toward truth. Perhaps "non-truth-based models" is a better term.


The answers given by LLMs are like the outcome of a popularity contest.

But, unfortunately, sometimes the truth is not popular.

Computer “scientists” keep reinventing the same flawed methods. Google’s PageRank system for search was also designed as a popularity contest.
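For concreteness, here is a toy sketch of the power-iteration idea behind PageRank (purely illustrative, not Google's actual implementation): a page's score depends only on how many other pages link to it, weighted by their own scores, with no notion of whether the page's content is true.

```python
# Toy PageRank by power iteration: rank pages purely by (damped) link popularity.
# Illustrative sketch only -- not Google's production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# "A" gets the most (and best-ranked) inbound links, so it wins the popularity
# contest, whether or not its content is accurate.
print(pagerank({"A": ["B"], "B": ["A", "C"], "C": ["A"]}))
```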


True, but it's the most popular sequence of words, not the most popular opinion or belief. Big difference. PageRank was very successful, just as LLMs are now, but both will easily be eclipsed by AI technology that understands what it reads.
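To make "most popular sequence of words" concrete, here is a minimal greedy-decoding sketch over a hand-made toy distribution (hypothetical numbers, not a real model): at each step the decoder keeps whichever next token the training text made most probable, with no representation of belief or truth.

```python
# Hand-made next-token probabilities standing in for statistics an LLM might
# learn from web text. Hypothetical numbers, for illustration only.
toy_next_token_probs = {
    ("the", "capital", "of", "australia", "is"): {
        "sydney": 0.55,     # the popular continuation in the toy data
        "canberra": 0.40,   # the true answer
        "melbourne": 0.05,
    },
}

def greedy_next(context):
    """Return the single most probable next token for the given context."""
    dist = toy_next_token_probs[tuple(context)]
    return max(dist, key=dist.get)

# The decoder chooses the most popular word sequence, not the best-justified belief.
print(greedy_next(["the", "capital", "of", "australia", "is"]))  # -> "sydney"
```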


LLMs alone will be severely limited, yes. Good world models are important, where we can get them. In poorly defined areas, fitting a neural net to lots of data (beyond just text) will likely be as good as it gets for quite a while.


Truth is also a process of refinement. Any single algorithm will fail eventually. Any truth may be subjective beyond a certain point, or there may not exist enough data or models to find it.

So you start with tools that are good at doing certain kinds of work most of the time, then you improve the tools with new approaches, and so on, while remaining fully aware that there will never be a perfect outcome.
