Discussion about this post

Saty Chary:

Great article!

Again, all this can be traced to a 'lack of common sense', which in turn stems from training/'learning' via data - in other words, the robot has no first-hand, physical, ongoing, EXPERIENCE with the physical world it inhabits. The most common things that we say, e.g. 'put things away', mean inherently nothing to a robot.

We humans deal directly with the environment, form models of it in our minds (however incorrect/incomplete/arbitrary/... they might be), invent symbols (language, math) to externalize our models, and communicate our models to each other via those symbols. This is precisely how we have come such a long way, collectively, from our cave past.

The problem is that, in current AI, language is used to 'communicate' with (really, train all at once) an **entity that is not set up to form its own models directly from the environment**! IMO this is the #1 problem with any/all of AI [neuro, symbolic, RL...].

Corey:

Very good article. In your recent conversation with Michael Shermer, you likened AI researchers to alchemists, arguing that they've racked up some achievements but don't really understand the thing they're trying to re-create. The analogy that occurs to me is cargo cults; you can make a bamboo plane with all the external features carefully rendered, but if you don't understand what a plane is or how it works then you'll never make it fly.

Still, I can't quite reconcile this argument with the examples I've seen of LLMs explaining jokes or writing original stories based on short prompts, for example. Is it possible that something akin to conceptual understanding has emerged there? Or am I just being taken in by the magic trick?
