Discussion about this post

Amy A:

Another person commented on this Substack a year ago that LLMs are doing the same thing when they get it right as when they get it wrong. They said it better, but I’ve never forgotten it.

Jimmy:

Karpathy also has a quote about how "hallucination is all LLMs do," pointing out that to get reliability from a system built on LLMs, we need to add additional layers of analysis / methods: https://twitter.com/karpathy/status/1733299213503787018
