7 Comments
Sep 29, 2022 · Liked by Gary Marcus

Great column -- very relevant and to the point.

I agree.... but... I enjoyed Tesla AI Day last year (even the dancing robot :) and I've arranged my schedule so there's no work this Friday, to watch the next Tesla AI Day. Wonderful feeling.... (sort of like walking into Radio Shack many, many years ago, when they sold discrete components and with $20 I could buy parts and solder and build anything that afternoon.... although what I built turned out to be garbage, it was fine for a kid). Who knows what Elon Musk is going to build with the tools he has available. It won't be AGI. (But I'm sure there will be insane brute-force computing, a lot of cool designs, a better future for all of us, etc....)

Robust robotics will not *readily* occur (there are other ways to achieve it too) until you solve causality (compositionality is then solved automatically), the spatial binding problem, the temporal binding problem, and the grounding problem, and put Spelke-style core knowledge into a massively parallel architecture. Tesla will not achieve or demonstrate any of that on Friday.... but I'm still looking forward to it.

The need to create hype is once again overwhelming the need to make progress. The AI world is becoming like a business that can only focus on short-term goals.

I love your newsletter. Please get a proofreader.

Agreed that robots are hard, and very time-consuming. Those of us who work on AGI in concert with robotics have the inside joke that our goal is the software (AGI), but we spend 90% of our time on robotics, making things work the way we want them to, before we even apply our software.

Generally, what is shown is about as important as (or even less important than) what is not shown. A human dressed as a robot doing stuff shows us quite explicitly that there is not yet a robot, not by a mile. And then some.

Statistical AI belongs to the ‘data-driven rule-based systems in disguise’ class.

Statistical AI is now hitting a wall. Combining it with symbolic AI, which hit its own wall earlier, may push the wall back a bit.

But the problem lies much deeper.

As with all AI so far, this too will be (at best) a narrow/deep 'win': it might be useful in structured environments, where the 'frame' is a feature rather than a problem, and where possessing 'grounded meaning' isn't part of the job description.
