25 Comments
Aug 18, 2022 · Liked by Gary Marcus

Great article!

Again, all this can be traced to a 'lack of common sense', which in turn stems from training/'learning' via data - in other words, the robot has no first-hand, physical, ongoing EXPERIENCE with the physical world it inhabits. The most common things that we say, e.g. 'put things away', mean inherently nothing to a robot.

We humans deal directly with the environment, form models of it in our minds (however incorrect/incomplete/arbitrary they might be), invent symbols (language, math) to externalize our models, and communicate our models to each other via those symbols. This is precisely how we have come such a long way, collectively, from our cave past.

The problem is, in current AI, language is used to 'communicate' (train all at once, really) with an **entity that is not set up to form its own models directly from the environment**! IMO this is the #1 problem with any/all of AI [neuro, symbolic, RL...].

Aug 18, 2022 · Liked by Gary Marcus

Well written again. What I find very interesting is that, regardless of the facts, convictions (about true AI being just around the corner) remain unchanged. Of course that rhymes with how people's handling of facts is driven more by their convictions than by the facts themselves. Psychology has produced quite a few insights here, especially over the last 20 years or so.

There is a simple bottom line: there is no chance in hell that we will get the AI people believe we will soon have on the basis of massive amounts of classical (machine) logic. It remains true that humanity is convinced our intelligence has a lot to do with our (limited) ability to do logic, but the fact is that we're better at frisbee than at logic (Andy Clark). That doesn't mean that this cultural conviction (already noted by Dreyfus in his takedown of the first wave of symbolic AI) is going away soon.

Aug 18, 2022 · Liked by Gary Marcus

Very good article. In your recent conversation with Michael Shermer, you likened AI researchers to alchemists, arguing that they've racked up some achievements but don't really understand the thing they're trying to re-create. The analogy that occurs to me is cargo cults; you can make a bamboo plane with all the external features carefully rendered, but if you don't understand what a plane is or how it works then you'll never make it fly.

Still, I can't quite reconcile this argument with the examples I've seen of LLMs explaining jokes or writing original stories from short prompts. Is it possible that something akin to conceptual understanding has emerged there? Or am I just being taken in by the magic trick?


Don't let perfect be the enemy of good enough.

The fallback solution is to use robots for narrow tasks and in highly constrained environments. Then expand the task list and "improve" the environment to make it more machine-friendly - like a shop floor.

My coffee maker can now make hot chocolate, soup or tea. My power drill can do drilling, polishing, screw-driving and mixing. From simple hand tools to power tools to attachments to ... smart tools that can perform more tasks in more environments.

The moon shot's benefit wasn't what we found on the moon, it was the tech developed to get there. Certainly that is true here as well. I still wouldn't let a robot drive my car or feed my pet from a voice command - without supervision. ;-)


Ah, it's Google's hype machine at it again. But while robots are still unreliable and untrustworthy in the chaotic and wet environment of reality, using them in highly specialized and controlled environments for very specific tasks (e.g. manufacturing) is already in the works and has been going well. Ironically, the latter's success might be the chief inducer of the illusion: "hey, we can build robot arms that put cars together according to very carefully curated instruction sets, so we can totally build full-body humanoid robots that put dishes in the dishwasher when told to in natural language that can mean any number of things to the robot". A fertile assumption, but a fruitless endeavor this shall be, I'm afraid.

Intelligence and consciousness are emergent properties of physical structures and attributes --- this is an ancient idea that has been getting revived in recent AI research. More people, researchers and money-makers and hype-planners alike, need to hear it.

What should vs. should not be done is not only a debate about ethics; it is also about legality and the framework for understanding the impact of not just projects like this, but technology in general. Everything has a "move fast and let's see if we break anything" phase at the first glimmer, and then gradually simmers into a "let's stop moving and look at what we have left behind in our trail" moment, and I feel that PaLM-SayCan and similar projects are at a point where they are not yet moving too fast to be reined in. So reining them in at the first possible chance --- and by that I mean ASAP --- is probably a wise move before they *actually* break anything.

And ironically, if we start to legislate, regulate, and enforce regulations on such unreliable AIs now, we may actually avert the fate of having to surrender all AI research accomplishments to a zeal similar to that of the Butlerian Jihad from "Dune". And such zeal will arise *not* because AIs are too smart or anything, but because they are not smart / conscious / contextually aware enough to know what they are actually doing to the humans and environments around them.


Is the right question here really "To build or not to build?", or is it the less provocative, less exciting, more yawn-inducing, but perhaps correct one: "What regulations shall we start thinking about to ensure that these robots do not harm us?" For instance, why is a bot used on a suicide hotline? Enforceable laws should be passed that prohibit such use, just as there is regulation prohibiting many, many things when a child is involved in the mix.

As for "edge cases" -- such problems are a BIG problem only if the ambition is to build something that is expected to do everything. Driverless cars that will drive anywhere a human can (and places they can't) may not be possible, but surely there is immense value in a driverless car for many situations where variables are far less daunting, perhaps, and edge cases not as crippling (you take a good nap as your car drives through a long streth of highway, for instance).


Can you envision a time when the Road to Human Interaction We Can Trust displaces the Road to AI We Can Trust? After all, if our AI models are meant to copy our human models, is there any room for improvement? Francois Chollet, only two or three years ago, suggested that AI could be measured using psychometric tests. Can even human intelligence be so measured? I'd say no. But I am open to argument about it ;-)


It’s why humans take so many years to ‘mature’, and even then, don’t. How often do we make mistakes misinterpreting what someone else has said? Why would we assume a robot using a form of ML ‘intelligence’ would do better? Even parity would be a problem.


I’ve been around long enough to remember Doug Lenat’s promise (1982?) to have a system (Cyc), within 2 years, with the common sense knowledge of a 2-year-old. At the time - the father of a 2-year-old - I told my students (4th-year AI) that “Doug should get out more - spend time with 2-year-olds”. I feel the same about those building language processors and calling them ‘Understanders’.


I just published on Opus Research a fleshed-out version of my reaction to this piece, which you can find here: https://opusresearch.net/wordpress/2022/08/26/on-ais-non-conundrum-conundrum/
