
I've never liked the term "symbol grounding." It seems to imply that all it takes is some sort of grounding module to suddenly give everything meaning. As I see it, this is the central issue: a symbol isn't really a symbol unless it is attached to its meaning, which requires a world model. Until some AI contains a substantial world model, along with the machinery to use it and extend it on the fly, there will be no AGI. Since LLMs do not even attempt to build a world model, beyond one based on word order, I doubt they will get anywhere close to AGI.

Comment removed (Feb 14, 2024)

Disagree. It's all just patching. Until an AI can learn on its own, we won't get far with LLMs. Humans use language to communicate, but their cognition is not based on language. This matters: any AI centered on language will always be at a severe disadvantage when it comes to reproducing human cognition.
