Discussion about this post

Pseudodoxia

People need to get ahead of the curve and understand that "world models" as touted are just the next pitch to VCs with no substantive conceptual progress. We don't know how to construct world models that are functionally like a human's (if it's even coherent to describe human cognition in those pretty dualistic terms) because we don't know how to model embodiment (simulating bodies in video games is a farce).

Symbolic AI faced intractable problems with world modelling because it was the wrong substrate; LLMs are too, for different reasons. Failing to learn from that history would be as reprehensible as Silicon Valley's current strategy.

Lurtz

I believe LLMs got so hyped because they felt like a big shortcut. Anyone can understand that building world models to a level that would actually revolutionize artificial intelligence is damn near impossible, just as neurosymbolic AI is hard because it has to be genuinely robust and actually follow the rules. It requires very hard work. Building true artificial intelligence is extremely difficult. We knew this before the current AI hype.

LLMs felt like cheating because it initially just seemed to "grow" and develop "emergent" capabilities. I am fairly certain this is why many people got so hooked on the idea. It felt like: "Maybe we don't actually have to do it the hard way. We can just make it bigger and it will probably start evolving by itself".

LLMs are, as became apparent quite early on, an illusion of intelligence produced by scaling a simple concept to an absurd degree. It's sad to me that even the brilliant people working on these technologies largely ignored this and instead got high on the "what if?"

