68 Comments
Feb 13 · Liked by Gary Marcus

Statistics over understanding. Hammer meets nail. That's it in a nutshell. Fans of Generative AI also hide behind its mystery and opacity as if to say, "We can't look inside so perhaps it is really doing more than just statistics", "Humans learn statistically. Our AIs are learning like humans", or "Perhaps the world is really just statistics all the way down".

Feb 13 · edited Feb 13 · Liked by Gary Marcus

Dear Gary, that's a lovely way to put it, 'statistics over understanding'!

The statistics are derivative in nature, dependent on word order, pixel order, syllable/tone order..., which have no *inherent* meaning [foreign languages are foreign when the symbols and utterances mean nothing to those who weren't taught their meaning; same w/ music, math, chemical formulae, nautical charts, circuit diagrams, floor plans...].

Symbols have no inherent meaning; they only have shared meaning. And we can impart such meaning to an AI that shares the world with us, as we do with humans and animals that share the world with us physically. Everything else is just DATA, i.e., DOA.

Feb 14 · Liked by Gary Marcus

It's also very interesting to talk about chess with ChatGPT. "Can you play chess?" ChatGPT says yes, let's go. It can explain all the rules, opening principles, movement of the pieces, and tactics such as pins and forks.

It even spits out some correct moves, using correct notation. But sooner or later it will make moves that are illegal (such as jumping over the opponent's pieces with a rook), despite being able to explain eloquently that rooks can't jump over pieces. It lacks the understanding that that is exactly what it just did.
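
To make that concrete: a real chess library validates every proposed move against the actual board state before accepting it, a hard rule check that an LLM, predicting plausible-looking notation token by token, never performs. A minimal sketch, assuming the third-party python-chess package (the position and the illegal rook move are invented for illustration):

```python
import chess

# Set up a game and play a couple of opening moves.
board = chess.Board()
board.push_san("e4")
board.push_san("e5")

# A rook move that would require jumping over White's own a2 pawn.
proposed = "Ra3"

try:
    board.push_san(proposed)  # push_san rejects moves that break the rules
    print(f"{proposed} is legal here")
except ValueError:            # python-chess raises a ValueError subclass
    print(f"{proposed} is illegal: rooks cannot jump over pieces")
```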


This is kind of funny: AI can do some things that humans can't, but it also makes mistakes that no human would make. Smarter than many, yet dumber than all.


I so appreciate these illustrated posts about AI, because the non-technically-minded can grasp the main point: that these fancy algorithmic agglomerations are still failing in very fundamental ways. Controversies over when AGI might arrive and whether open-source models are dangerous are important, but those more esoteric topics also read like sci-fi to the general reader. This one does not. Another article I'm happy to cross-post for my readers!

Feb 13 · Liked by Gary Marcus

If we give developers $7T, THEN will the program be able to generate images of people writing with their left hand?


That first defensive e/acc screenshot, though: apparently the AI was "teached" better than him... and this is who is designing our future? Good game, humanity.

Feb 13 · Liked by Gary Marcus

You are so clearly right on this front, I'm curious why you think people are trying to say otherwise. Is it because that's how they can (try to) raise $7T? Because they actually have no clue how to build AGI? Because they think LLMs can ultimately get the same result (from a practical standpoint) as AGI?

Feb 17 · Liked by Gary Marcus

LLMs are a bit like my kids. They are unpredictable, don't do what you tell them, and are very hard to get to behave correctly. And they occasionally break things.

Feb 16 · Liked by Gary Marcus

I can't wait to create the right-handed writer/guitarist and wrong-clock videos generated by OpenAI Sora.


Two quotes popped into my head when I read this:

"When we're trying to sell it we call it AI and when we're trying to make it work we call it pattern recognition." -- my "elevator speech" to Honeywell management during the DARPA Grand Challenge.

"Your *other* left foot." -- My drill sergeant in USAF OTS.


The other left :D


I am almost at the point of giving up trying to have philosophical conversations with the engineering-minded (and the business-minded, and most "scientists"). Too demanding, and I am not getting paid... I get it, though: they want to build things and get things done, not question their assumptions (unless forced to, and even then it's tough).

However, there is no excuse for so-called "scientists" being unwilling to be philosophical, since the best scientists in history were also philosophers (Einstein, Werner Heisenberg, etc.). Now they are more often a kind of technician: engineers, bureaucrats, businessmen, the politically savvy, working in their extreme specialities, not (at least professionally) questioning the most basic assumptions about reality, the nature of intelligence, self, understanding, consciousness, what exists, the goal of life, etc. "Who has time for that, except the 'theory of the leisure class'?" as one academic humorously put it.

And meanwhile academic philosophy has gotten lost (psychology too, with its assumptions hardened into dogma), stuck in a rut, trying to be a handmaiden to science, or merely doing conceptual analysis in hyper-specialties no one can understand, irrelevant to living life, losing its way from the original "love of wisdom" the Greeks knew...

The fact is, no one really knows what intelligence or understanding are. But you have to start somewhere. In engineering, you start from the bad assumptions you have and see what happens, which is what we are seeing now (they just need to be more honest about it); in science, you need to question the assumptions; and in philosophy, it goes even deeper, to the source, where we are in the realm of the Unknown, not a comfortable place for many (including the players mentioned above). I see that "somewhere" as empirical, grounded in direct ("inner") experience, if we are really to get anywhere regarding intelligence, understanding, and awareness... and few dare to venture there.

But I've already said too much.


The post uses universals ("they have never...") and other indications that these posts are opinions with an axe to grind, not scientific exposition.

The examples show problems that can be exposed. It would be more interesting to see both the power and the limitations that GenAI, like all techniques and technologies, has.

"always tried to use statistics as a proxy for deeper understanding"

What would we mean by deeper understanding... do I or you have deep understanding?

Yes I do, but actually people only have "deep understanding" at the time they defend their thesis, and only about the topic they studied... It is hard to stay up to date and comprehensive enough to be that much of an expert.

Maybe not for you, but aggregating human knowledge and interacting with it through conversational prompts has become a productive thing for hundreds of millions of people.


Maybe it is only the strict filtering mechanisms built into the AI by the developers to reduce bizarre outputs, thereby forcing the AI to rely more on statistically prevalent images.
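
If that conjecture held, the effect would be easy to simulate: an output filter keyed to the model's own probabilities suppresses rare modes even further. A hypothetical sketch in Python; the prior, the threshold, and the handedness labels are all invented for illustration:

```python
import random

# Invented numbers: a generator whose training prior is heavily skewed
# toward one mode, plus a filter that rejects any sample the model's
# own prior scores as unlikely (i.e., "bizarre").
PRIOR = {"right-handed": 0.95, "left-handed": 0.05}

def generate():
    # Draw one sample according to the skewed training prior.
    return random.choices(list(PRIOR), weights=list(PRIOR.values()))[0]

def passes_filter(sample, threshold=0.10):
    # Reject anything below the plausibility threshold.
    return PRIOR[sample] >= threshold

samples = [generate() for _ in range(10_000)]
kept = [s for s in samples if passes_filter(s)]
print(kept.count("left-handed") / len(kept))  # 0.0: the rare mode vanishes
```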


Gary, intuitively I agree with all your arguments, but I have been playing devil's advocate with myself.

Without referring to current empirical evidence, what are, in your view, the most succinct fundamental reasons why LLMs will never reach "understanding", which I interpret as the ability to robustly reason and apply logic? That is, have we hit the limits of the current paradigm, such that bigger may get better but will never come close to flawless?

If LLMs can roughly be understood as statistical memory machines that can adequately represent and reproduce the knowledge in their training data, would it be plausible that perfect data for a specific domain, containing all required knowledge and reasoning pathways for that domain (e.g., known medicine), leads to robust reasoning? Just as training a simple regression on real-world data from Newtonian experiments yields a near-perfect ML model for that physics domain, even without theoretical conceptual understanding? So, in a sense, it gains some implied understanding?
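
The Newtonian analogy is easy to make concrete. A minimal sketch, assuming numpy; the data is simulated, and the point is only that ordinary least squares recovers F = m·a within the sampled domain with no physics built in:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 10.0, 500)            # masses (kg)
a = rng.uniform(0.1, 5.0, 500)             # accelerations (m/s^2)
F = m * a * rng.lognormal(0.0, 0.01, 500)  # noisy force "measurements" (N)

# Fit log F = p*log m + q*log a by plain least squares.
# The exponents p and q are free parameters; no law is assumed.
X = np.column_stack([np.log(m), np.log(a)])
(p, q), *_ = np.linalg.lstsq(X, np.log(F), rcond=None)
print(p, q)  # both ~1.0: the fit "discovers" F = m*a for this domain
```

Whether that counts as implied understanding is exactly the question: the fitted model is excellent inside the sampled domain, but it carries no guarantee about anything outside it.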
