
Despite the claims coming from their salesmen and saleswomen, LLMs don’t actually understand the real world; they simply recreate superficial patterns present in their training data.

“AI video generators like OpenAI's Sora don't grasp basic physics, study finds”

https://the-decoder.com/ai-video-generators-like-openais-sora-dont-grasp-basic-physics-study-finds/

Hard to see how something like Sora is going to “solve physics” when it has no grasp of even rudimentary physical concepts.


LLMs are reminiscent of Clever Hans, the “mathematical horse” that got correct answers to arithmetic problems by picking up on subtle behavioral cues given (perhaps unwittingly) by its owner.

And like Clever Hans, “Clever LLMs” have fooled (unwittingly, of course) a lot of intelligent people.

But Clever Hans actually WAS clever (just not in the way everyone thought).

The same cannot be said for LLMs, which are simply outputting patterns based on statistics.
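To make the “patterns from statistics” point concrete, here is a toy sketch: a word-level bigram sampler that produces locally fluent text purely from co-occurrence counts. Everything here (the corpus, the follows table, the generate function) is invented for illustration; real LLMs are vastly larger neural networks, but they share the same next-token-prediction objective.

```python
from collections import Counter, defaultdict
import random

# Tiny made-up corpus about objects falling. The model will never "know"
# what a ball or a glass is; it only sees which words follow which.
corpus = (
    "the ball falls to the ground . the ball bounces off the ground . "
    "the glass falls to the floor . the glass shatters on the floor ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=12):
    """Sample a continuation, one word at a time, from the bigram counts."""
    words, cur = [start], start
    for _ in range(n):
        options = follows[cur]
        if not options:
            break
        # Pick the next word in proportion to how often it followed `cur`.
        cur = random.choices(list(options), weights=list(options.values()))[0]
        words.append(cur)
    return " ".join(words)

print(generate("the"))
# e.g. "the glass falls to the ground . the ball falls to the floor ..."
# Novel recombinations the corpus never contained: fluent, sometimes
# physically plausible, but produced with zero model of the world.
```

The output reads sensibly word to word, and that is the whole trick: statistical continuation with no understanding behind it.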


There is an interesting psychological aspect at work: even after the evidence makes the trick plain, people don’t wish to admit that they were fooled by a horse, so they keep defending the horse manure.
