25 Comments
Aug 18, 2022 · Liked by Gary Marcus

Great article!

Again, all this can be traced to a 'lack of common sense', which in turn stems from training/'learning' via data - in other words, the robot has no first-hand, physical, ongoing EXPERIENCE with the physical world it inhabits. The most common things that we say, e.g. 'put things away', mean inherently nothing to a robot.

We humans deal directly with the environment, form models of it in our minds (however incorrect/incomplete/arbitrary/... they might be), invent symbols (language, math) to externalize those models, and communicate them to each other via those symbols. This is precisely how we have come such a long way, collectively, from our cave past.

The problem is, in current AI, language is used to 'communicate' (train all at once, really) with an **entity that is not set up to form its own models directly from the environment**! IMO this is the #1 problem with any/all of AI [neuro, symbolic, RL...].

author

agreed


Experience is life and DNA; that's why, in my formula for AGI - NLU, NLP, multi-modal AI, human-in-the-loop (RL) - the latter is indispensable. I have a law: consciousness is not separated from its carrier - https://docs.google.com/presentation/d/1VCjOHOSostUrtxieZvOjaWuTNCT59DMF/edit?usp=sharing&ouid=107490631134624151107&rtpof=true&sd=true

Aug 19, 2022 · edited Aug 19, 2022

Good point. In nature, intelligence is coupled with survival and reproduction, via a body. Brains create consciousness (assuming they do), so there is a knower/experiencer/Self/I who experiences the world. A zombie brain is an oxymoron :)


What about that assumption - that brains create consciousness? I doubt the brain even exists as an entity beyond being a physiological organ of the nervous system that consumes glucose and generates ATP to support all the life processes. Imagine just one fertilized human cell: it's already conscious as a human. It's simply a fractal that repeats in structure - atom, cell, organism, planet, solar system, galaxy, universe. All the cells in an organism just work in parallel.


A fertilized embryo does have cells that are complex machines, and are autopoietic - but them being even a "little" conscious... hmm :) Structures and phenomena do span giant scales (e.g. intergalactic webs of gases); not sure we'd attribute consciousness to them, though - but maybe I'm not thinking broadly enough, lol. Also, such things are hard/impossible to verify experimentally.

Your slides - most are in the Cyrillic alphabet :(


Transcription, splicing of exons, and translation form a biological and logical machine, and that is the life process. No matter the scale - nature as a whole - life is conscious in nature :) And the concepts of nature and life are universal regardless of anything (experiments, opinions, languages); that's science.


Nice!

Aug 18, 2022 · Liked by Gary Marcus

Well written again. What I find very interesting is that, regardless of the facts, convictions (that nearly true AI is just around the corner) remain unchanged. Of course that rhymes with how people's handling of facts is driven more by their convictions than by the facts themselves. Psychology has produced quite a few insights here, especially over the last 20 years or so.

There is a simple bottom line: there is no chance in hell that we will get the AI people believe we will soon have on the basis of massive amounts of classical (machine) logic. Humanity remains convinced that our intelligence has a lot to do with our (limited) ability to do logic, but the fact is that we're better at frisbee than at logic (Andy Clark). That doesn't mean that this cultural conviction (already noted by Dreyfus in his takedown of the first wave of symbolic AI) is going away soon.

Aug 18, 2022 · Liked by Gary Marcus

Very good article. In your recent conversation with Michael Shermer, you likened AI researchers to alchemists, arguing that they've racked up some achievements but don't really understand the thing they're trying to re-create. The analogy that occurs to me is cargo cults; you can make a bamboo plane with all the external features carefully rendered, but if you don't understand what a plane is or how it works then you'll never make it fly.

Still, I can't quite reconcile this argument with the examples I've seen of LLMs explaining jokes or writing original stories based on short prompts, for example. Is it possible that something akin to conceptual understanding has emerged there? Or am I just being taken in by the magic trick?

author

On the jokes: we have a numerator (the number of jokes explained) but not a denominator (how many were tried); the disclosure of what was actually tried was close to nil. My guess is that it is indeed more like a trick, and not very generalizable. Nobody at Google responded when I said all this publicly on Twitter.
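
To make the numerator/denominator point concrete, here is a minimal sketch with made-up figures (neither number was disclosed, so every value below is an assumption, not a reported result):

```python
# Minimal sketch: the same disclosed success count implies very different
# success rates depending on the undisclosed number of attempts.
# All figures here are hypothetical; none were reported.

def success_rate(explained: int, attempted: int) -> float:
    """Fraction of attempted jokes that were explained correctly."""
    return explained / attempted

explained = 5  # jokes shown as correctly explained (assumed figure)
for attempted in (5, 20, 100):  # candidate undisclosed denominators
    print(f"{explained}/{attempted} explained -> {success_rate(explained, attempted):.0%}")
# 5/5 -> 100%, 5/20 -> 25%, 5/100 -> 5%: without the denominator,
# cherry-picked examples say little about how well the ability generalizes.
```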

author

ooh great analogy!


Corey, interesting question about what might be going on in LLMs. It could be that their complex/deep structure does lead to some form of concept building, but one devoid of fundamental, 'grounded' concepts, since their sole source is the text corpus fed into them.


Don't let perfect be the enemy of good enough.

The fallback solution is to use robots for narrow tasks and in highly constrained environments. Then expand the task list and "improve" the environment to make it more machine-friendly - like a shop floor.

My coffee maker can now make hot chocolate, soup or tea. My power drill can do drilling, polishing, screw-driving and mixing. From simple hand tools to power tools to attachments to ... smart tools that can perform more tasks in more environments.

The moon shot's benefit wasn't what we found on the moon; it was the tech developed to get there. Certainly that is true here as well. I still wouldn't let a robot drive my car or feed my pet from a voice command - without supervision. ;-)


Ah, it's *the* Google hype-machine at it again. But while robots are still unreliable and untrustworthy in the chaotic, wet environment of reality, using them in highly specialized and controlled environments for very specific tasks (e.g. manufacturing) is already in the works and has been going well. Ironically, the latter's success might be the chief inducer of the illusion: "hey, we can build robot arms that put cars together according to very carefully curated instruction sets, so we can totally build full-body humanoid robots that put dishes in the dishwasher when told to in natural language that can mean any number of things to the robot." A fertile assumption, but a fruitless endeavor this shall be, I'm afraid.

Intelligence and consciousness are emergent properties of physical structures and attributes; this is an ancient idea that has been getting revived in recent AI research. More people - researchers, money-makers, and hype-planners alike - need to hear it.

What should versus should not be done is not only a debate about ethics; it is also about legality and the framework for understanding the impact not just of projects like this, but of technology in general. Everything has a "move fast and let's see if we break anything" phase at the first glimmer of promise, and then gradually simmers into a "let's stop moving and look at what we have left behind in our trail" moment. I feel that PaLM-SayCan and similar projects are at a point where they are not yet moving too fast to be reined in. So reining them in at the first possible chance, and by that I mean ASAP, is probably a wise move before they *actually* break anything.

And ironically, if we start to legislate, regulate, and enforce the regulations on such unreliable AIs now, we may actually avert the fate of having to surrender all AI research accomplishments to a zeal similar to that of the Butlerian Jihad from "Dune". And such zeal would arise *not* because AIs are too smart or anything, but because they are not smart / conscious / contextually aware enough to know what they are actually doing to the humans and environments around them.


Is the right question here really "To build or not to build?", or is it the less provocative, less exciting, more yawn-inducing, but perhaps correct one: "What regulations should we start thinking about to ensure that these robots do not harm us?" For instance, why is a bot used on a suicide hotline? Enforceable laws should be passed that prohibit such use, just as there is regulation prohibiting many, many things when a child is involved in the mix.

As for "edge cases" - such problems are a BIG problem only if the ambition is to build something that is expected to do everything. Driverless cars that will drive anywhere a human can (and places they can't) may not be possible, but surely there is immense value in a driverless car for the many situations where the variables are far less daunting and the edge cases not as crippling (you take a good nap as your car drives through a long stretch of highway, for instance).

Aug 19, 2022 · Liked by Gary Marcus

I agree about the need to think about the regulatory framework now. Otherwise we will get a world where lots of 75% systems are deployed to replace humans, and we humans will have to learn to work around the remaining 25%: make sure to use specific phrasing when asking Rosie to lift grandma, don't let the auto-drive run in snow, never ask the autodoc about self-harm when feeling low.


Exactly. The tail wagging the helpless dog. And we don't want to be such a dog, do we?

Aug 19, 2022 · Liked by Gary Marcus

Thinking through the regulation is good, but enacting it is the hard part. I just finished a discussion draft of a paper for WeRobot, doing a case study of legislation regarding sidewalk delivery bots, mobile backpacks, and automated vehicles. The key lesson was that the company developing the tech gets the first crack at drafting the legislative framework. Hoping it helps inform this discussion of how we regulate robots when they can cause harm.


Very true. I think regulation is (very) hard, but one can deliver success in degrees. The proposition that something should not be done is fraught with what I believe are nearly unanswerable questions. Can we really stop Google from doing X? What is X? What does "stop" mean? Who gets a say in what gets stopped, etc.?

PS: I'd love to read the paper.


Can you envision a time when the Road to Human Interaction We Can Trust displaces the Road to AI We Can Trust? After all, if our AI models are meant to copy our human models, is there any room for improvement? Francois Chollet, only two or three years ago, suggested that AI could be measured using psychometric tests. Can even human intelligence be measured that way? I'd say no. But I am open to argument about it ;-)


It's why humans take so many years to 'mature', and even then, don't. How often do we make mistakes misinterpreting what someone else has said? Why would we assume a robot using a form of ML 'intelligence' would do better? Even parity would be a problem.


I've been around long enough to remember Doug Lenat's promise (1982?) to have a system (Cyc), within 2 years, with the common-sense knowledge of a 2-year-old. At the time, as the father of a 2-year-old, I told my students (4th-year AI) that "Doug should get out more - spend time with 2-year-olds". I feel the same about those building language processors and calling them 'Understanders'.


I just published on Opus Research a fleshed-out version of my reaction to this piece, which you can find here: https://opusresearch.net/wordpress/2022/08/26/on-ais-non-conundrum-conundrum/

Comment deleted
author

I think a humanoid robot powered thusly would probably not last long in the market. There will likely be a huge market for clueless chatbots, though. And “driverless cars” are in risky betas.
