15 Comments

Hi Gary, thanks for the excellent write-up! The 'coffee-making' challenge (Wozniak's test, a textbook instance of Moravec's paradox) remains alive and well; Optimus (or even a Boston Dynamics robot) isn't about to solve it anytime soon.

An embodied presence by itself will not result in general intelligence; there needs to be matched 'embrainment': a brain design that permits native representation of the world, with which the system can imagine, expect, reason, etc.

Instead, if the robot uses ML, it's simply computing outputs based on learned patterns in input data, which amounts to operating in a derivative computational world while being in the real physical one! There is no perception or cognition of the real world, because there is no innate experiencing of it.

Sure, it will work in a structured, mostly static, unchanging environment (a narrow/deep 'win' of sorts, in keeping with the history of AI) - but an average home is anything but.

Robustness in intelligence can only result from a design that can deal with exceptions to the norm (within limits: the tighter the limits, the less capable the system).


I have always thought of humanoid robots as the ornithopters of the robotics world. For a true AGI you need far more neurons and far denser connections than we can achieve in silico at present. I think the best we can do for now is use NNs for perception and systems such as OpenCog and Open-NARS for higher-level reasoning, with a society-of-mind / drives approach; a toy sketch of that split follows.
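To make that division of labor concrete, here is a minimal, purely hypothetical sketch: a stubbed-out neural net supplies percepts, and a toy rule base stands in for a symbolic reasoner such as OpenCog or Open-NARS. None of the names below are real APIs from those projects.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    label: str         # e.g. "cup", produced by the perception NN
    confidence: float  # the NN's output probability

def perceive(image) -> list[Percept]:
    """Stand-in for a trained vision model; a real system would run
    a detector here and threshold its outputs."""
    return [Percept("cup", 0.93), Percept("table", 0.88)]

# A toy rule base playing the role of the symbolic reasoner.
RULES = {
    ("cup", "table"): "cup is graspable from above",
}

def reason(percepts: list[Percept]) -> list[str]:
    """Fire every rule whose premises are all confidently perceived."""
    labels = {p.label for p in percepts if p.confidence > 0.5}
    return [conclusion for premises, conclusion in RULES.items()
            if set(premises) <= labels]

print(reason(perceive(image=None)))  # ['cup is graspable from above']
```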


Augmented Inference does some amazing things in structured environments like the game of Go and protein folding. Unless humans some day decide that they just want to lie back and be fed sweet drinks and masturbated, they probably aren't going to put up with clumsy articulated metal and plastic getting in their way. I'm still waiting to see a use case for a humanoid robot other than cuteness. Anyone?


Rather than contribute to groupthink, I will respectfully say that the above tweets and quotes sound witty and educated, but are mathematically wrong (maybe Hubicki's is somewhat neutral), and model what is occurring very inaccurately.

In the 1960s Herbert Simon, an economist (although today he would fall under some category in the information/math/computer fields), gave a lecture series at MIT which was then published in a small book, "The Sciences of the Artificial". It's been many years since I've read that book, but there was a story in it about two watchmakers, Hora and Tempus. They both made similar watches of about 1000 (I think) parts each, but the phone rang frequently in their shops, and when they answered it, whatever they were building had to be put down and fell apart. One of the watchmakers prospered and one went out of business (I forget which name goes with which, so I will call them Bob and Bill). What happened? Bill made watches out of subassemblies of 10 parts each, while Bob made complete watches without the 'inefficiency' of subassemblies. Yet Bill was the one who survived, while Bob went out of business. When the phone rings, you lose at most the work of placing up to 10 parts in a subassembly, versus potentially almost all the work of placing all the parts in a whole watch. If there is only a 1% chance of the phone ringing each time either watchmaker adds a part, this translates into Bob taking *thousands of times* as much time and effort to make a watch as Bill; the arithmetic is sketched below.
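Simon's point survives a back-of-the-envelope check. Under the simplifying assumption that an interruption destroys only the in-progress (sub)assembly, the expected number of part placements needed to get n placements in a row, each surviving with probability s = 1 - p, is the textbook "n consecutive successes" result E(n) = (1 - s^n) / ((1 - s) * s^n). A short Python sketch (the 111-assembly breakdown just follows from building a 1000-part watch out of 10-part units; this is an illustration, not Simon's exact model):

```python
# Expected part placements to complete n uninterrupted placements, when
# each placement is interrupted with probability p and an interruption
# loses the whole in-progress (sub)assembly:
#   E(n) = (1 - s**n) / ((1 - s) * s**n),  where s = 1 - p.
def expected_placements(n: int, p: float = 0.01) -> float:
    s = 1.0 - p
    return (1.0 - s**n) / ((1.0 - s) * s**n)

# Bob assembles all 1000 parts in one go.
bob = expected_placements(1000)

# Bill builds the same watch as 111 ten-part assemblies:
# 100 subassemblies, 10 assemblies of those, and 1 final watch.
bill = 111 * expected_placements(10)

print(f"Bob:   {bob:12,.0f} placements per watch")   # ~2,300,000
print(f"Bill:  {bill:12,.0f} placements per watch")  # ~1,200
print(f"ratio: {bob / bill:12,.0f}x")                # ~2,000x
```

Simon's own accounting puts the ratio near 4000; either way, the subassembly strategy wins by thousands of times.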

As I wrote in the comments to the previous column on this topic, you will not readily get AGI/robust robotics/human-like abilities until you solve causality (which will solve compositionality), the spatial binding problem, the temporal binding problem, and the grounding problem (which is *not* solved simply by having a robot body; it's a bit more complex than that), and put Spelke-style core knowledge into the architecture. Is anyone at Tesla thinking about this as a coherent whole? (Probably not; but are people at universities with "AI == deep learning" departments thinking about it either?)

But… it does not matter. Like Bill, Tesla will build its robots and make them better and better as each aspect is improved. Cognition will come. They will change the design a number of times, and yes, eventually it will perform at an AGI level. And yes, the Tesla robot (and other companies' robots too) will change society in a way much more profound than we can imagine.

It was wonderful to watch the Tesla robot march across the stage 😊


I watched the start of Musk's April 2022 TED interview that you mentioned. He indeed says self-driving probably (even more strongly than 'might') requires 'generic AI', after opening with something like "sometimes I'm wrong" about his 2015 prediction that one could drive a self-driving car from LA to NY by now. But he also follows that up immediately with the statement that he is convinced Tesla will crack it within a year, and then adds something like 'but maybe in a year it turns out to be wrong again'.


It is funny how "experts" are cited saying, for example, that the robot had to be carried, while completely ignoring the second robot, which could in fact walk.

From things like this, everybody has to draw their own conclusions. Maybe the "experts" are more frightened than they want to show. This is starting exactly as it did in the automobile and space industries. And look where that got them.


That is a fair interpretation of what we saw. The Q&A session on Optimus was actually the most insightful part. There we saw Musk's extreme optimism and his computationalist position on display. On the one hand, he was realistic, saying that he did not know exactly where the development of Optimus would end up and that it would take 5 or 10 years to unlock its potential. On the other hand, he said that it would be able to make art, deal with emotion, and converse, and that they were going to build in local safeguards to turn it off that are not accessible through the internet. My take is that we should be grateful that there is someone out there who will eventually demonstrate, against his own expectation, that computationalism is false.


Honestly, I think he might have a micro-dosing problem. His behavior has just been SO WEIRD this year; the most likely explanation is probably the same for Elon Musk as for anyone else: drugs and/or alcohol. He is saying so many ridiculous things, like the reason Tesla should build AGI is because they are (checks notes) "a publicly traded company and if you don't like Tesla's AI you can buy their stock and fire [Elon]." Did he forget about ALL THE OTHER COMPANIES HE IS COMPETING WITH, save OpenAI? And even they are really Microsoft's lab in terms of who pays the bills. Only a drunk person or someone totally absent mentally would embarrass himself by being so utterly unprepared; he was so clearly winging it. If it weren't for his insane compensation package, there really isn't much reason for him to stick around Tesla, since he hasn't even been chairman since the SEC forced him to resign from that role years ago.


Don't forget that Elon has a use case nobody else has. He wants to go to Mars. A bunch of autonomous robots would certainly be one way of preparing a station before humans arrive. Whether humanoid is the right form factor for that needs to be shown, but Mars would certainly be free of many of the safety requirements Earth-bound robots need to adhere to, especially before the arrival of humans.
