
Hi Gary! True 'understanding' wouldn't need words, images, sounds, equations, rules, or data; it would just need direct experience with the world, which requires a body. LLMs (all AI, in fact) are, in contrast, entirely dependent on them (words, etc.): no symbols, no dice. That disparity is evident over and over. It is nothing but "understanding" the world entirely in terms of human-originated symbols, with zero grounding for any of it. At best it's 'understanding' patterns in the input, without regard to underlying meaning.

'One hand' makes sense to entities that themselves have hands, not to disembodied calculations.

More generally, "making sense" has 'sense' in it for a reason. Common sense is sensed, not grokked via a knowledge-base, not inferred solely from abstract reasoning.


Right, "making sense" for us thinking humans has "sense" in it. But that's a typo as far as the LLMs are concerned... the proper spelling is the way Big Tech thinks of it: "making cents."

😎


Birgitte, bingo :)
