Discussion about this post

Herbert Roitblat:

I don't want to get into Musk's personality or politics, but I do want to ask how an LLM could rigorously adhere to the truth when it has no representation of the truth. How could it tell whether it is rigorously adhering to the truth? LLMs learn token probabilities given a context. All of the information they have about the token strings is in the probability distribution. If you want a system that adheres to the truth, then it must have some way of deciding whether it is or is not adhering to the truth. Popularity and predictability are not adequate to decide truth.
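A minimal sketch of the point being made here, assuming the Hugging Face transformers library and the public "gpt2" checkpoint (neither is named in the comment; they are used purely for illustration): at each step the model's entire output is a probability distribution over next tokens, and nothing in that distribution marks a continuation as true or false.

```python
# Sketch: an LLM's output at each step is a probability distribution over
# the next token. The distribution ranks continuations by predictability,
# not by truth. Assumes the Hugging Face transformers library and the
# public "gpt2" checkpoint, for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "The capital of Australia is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the next token given the context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The top-ranked continuations are whatever was most predictable in the
# training data -- a popularity ranking, not a truth judgment.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12s}  p={prob.item():.3f}")
```

The model's interface is just this distribution; deciding whether the most probable continuation is true would require checking against something outside the distribution, which is exactly the gap the comment is pointing at.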

Here is what I suggest as a place to start. https://herbertroitblat.substack.com/p/the-self-curation-challenge-for-the

Paul Topping:

Good job. Hoisted by his own LLM. Not a hallucination.

