Discussion about this post

Saty Chary

Hi Gary, thanks for the excellent write-up! The Moravec 'coffee-making' challenge remains alive and well; Optimus (or even a BD robot) isn't about to solve it anytime soon.

An embodied presence by itself will not result in general intelligence; there needs to be a matched 'embrainment' - a brain design that permits a native representation of the world, with which the system can imagine, expect, reason, etc.

Instead, if the robot uses ML, it's simply computing outputs based on learned patterns in input data, which amounts to operating in a derivative computational world while being in the real physical one! There is no perception, no cognition, of the real world - because there is no innate experiencing of it.

Sure, it will work in a structured, mostly static, unchanging environment (a narrow/deep 'win' of sorts, in keeping with the history of AI) - but an average home is anything but.

Robustness in intelligence can only result from a design that can deal with exceptions to the norm (within limits - the more the limits, the less capable the system).

JJHW

I have always thought of humanoid robots as the ornithopters of the robotics world. For a true AGI you need far more neurons and far denser connections than we can achieve in silico at present. I think the best we can do for now is to use NNs for perception and systems such as OpenCog and Open-NARS for higher-level reasoning, with a society-of-mind / drives approach.
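[Editor's note: a minimal sketch of the division of labor JJHW describes, purely as an illustration: a neural network handles perception and emits symbolic facts, and a tiny forward-chaining reasoner stands in for a higher-level system like OpenCog or Open-NARS. Every class, method, and rule name below is hypothetical; nothing here uses the actual OpenCog or Open-NARS APIs.]

```python
# Hypothetical sketch: neural net for perception, symbolic engine for reasoning.
# None of these names correspond to real OpenCog / Open-NARS interfaces.

class PerceptionNN:
    """Stand-in for a trained vision model that emits symbolic facts."""

    def classify(self, image) -> set[str]:
        # A real system would run inference on the image here; canned output
        # keeps the sketch self-contained.
        return {"object(mug)", "on(mug, counter)"}


class ForwardChainer:
    """Toy forward-chaining reasoner standing in for a higher-level system."""

    def __init__(self):
        # Each rule pairs a set of premises with a conclusion it licenses.
        self.rules = [
            ({"object(mug)", "on(mug, counter)"}, "graspable(mug)"),
        ]

    def infer(self, facts: set[str]) -> set[str]:
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived


if __name__ == "__main__":
    percepts = PerceptionNN().classify(image=None)
    print(ForwardChainer().infer(percepts))  # includes "graspable(mug)"
```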
