Discussion about this post

Sufeitzy:

Sad.

Just a note on hallucination:

I tell my teams:

If you want deterministic replies, ask the tool to write a program.

If you want interpretive replies, ask the tool for direct output.

It’s directly analogous to Daniel Kahneman’s “Thinking, Fast and Slow”: algorithm versus constructed recall.
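
A minimal sketch of the contrast, in Python. The `ask_llm` helper is a made-up placeholder for whatever chat client you use, not a real library call; the point is that the model's direct answers are sampled, while a program the model writes for you runs deterministically:

```python
# Sketch contrasting the two modes. `ask_llm` is a hypothetical stand-in
# for your chat client of choice; it is not a real API.

def ask_llm(prompt: str) -> str:
    """Placeholder: call your model here. Output is sampled, so it varies."""
    raise NotImplementedError

# Interpretive mode: the answer itself is sampled text.
#   answer = ask_llm("What is 17% of 2,340?")   # may differ run to run

# Deterministic mode: ask the model to *write a program* once, then run
# the program yourself. Say the model hands back this function:
def percent_of(rate: float, base: float) -> float:
    return rate * base / 100

print(percent_of(17, 2_340))  # 397.8, the same answer on every run
```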

LLMs have built-in non-determinacy. I know it’s called “hallucination,” but you can’t get rid of it unless you turn the temperature to zero inside the mechanism, which you can’t do directly from a chat interface.
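
For anyone with API access rather than a chat window, here is what turning the temperature to zero looks like. A sketch assuming the OpenAI Python SDK (`openai>=1.0`) and an `OPENAI_API_KEY` in the environment; the model name is illustrative, and note that `temperature=0` only makes decoding greedy rather than guaranteeing bit-identical outputs:

```python
# Sketch: pinning down sampling via an API instead of a chat window.
# Assumes the OpenAI Python SDK (openai>=1.0) with OPENAI_API_KEY set;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",          # illustrative model name
    messages=[{"role": "user", "content": "What is 17% of 2,340?"}],
    temperature=0,                # greedy decoding: always pick the top token
    seed=42,                      # best-effort reproducibility hint
)
print(resp.choices[0].message.content)

# Caveat: temperature=0 removes sampling randomness, but floating-point
# nondeterminism on the serving side can still vary outputs slightly.
```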

AGI is, as my father used to say, a fig-newton of the imagination.

Saty Chary:

Hi Gary! When all is said and done, every LLM ever built is about producing something out of nothing: intelligence out of a pile of numbers and math calculations over them. We’ve seen this movie before :)
