72 Comments

Deep learning (neural networks) started as alchemy and has now naturally progressed into a religion.

For a mediocre deep learning skeptic, you sure are right a lot.

The hype surrounding this is near-astonishing to me. We have YouTubers predicting AGI by 2027. Based on LLMs? Meanwhile, the proliferation of errors and fakery continues unabated: https://petapixel.com/2024/03/07/recreating-iconic-photos-with-ai-image-generators/

What happens when generative AI starts making copies based upon its own copies?

And this is what so many want to use as the foundation for AGI...?
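The "copies of copies" worry has a name in the literature: model collapse. A toy sketch makes the mechanism easy to see. This is a hedged illustration, not anyone's actual training pipeline: a Gaussian stands in for a generative model, which we fit to data and then retrain, each generation, on the previous model's own samples.

```python
import random
import statistics

# Toy "model collapse" sketch: a Gaussian stands in for a generative model.
random.seed(0)

data = [random.gauss(0.0, 1.0) for _ in range(50)]  # the real-world data

for generation in range(30):
    # "Train": estimate the model's parameters from the current dataset.
    mu_hat = statistics.mean(data)
    sigma_hat = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mu={mu_hat:+.3f}, sigma={sigma_hat:.3f}")
    # "Generate": replace the dataset with the model's own outputs, so each
    # generation only ever sees what the previous generation could produce.
    data = [random.gauss(mu_hat, sigma_hat) for _ in range(50)]
```

Run it a few times with different seeds: the parameters drift away from the true distribution, because tail mass the model fails to capture in one generation can never come back in the next, and over enough generations the estimated variance tends toward collapse.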

“The problems with induction noted above stem not from experience per se, but from the attempt to ground knowledge and inference in experience exclusively, which is precisely what machine learning approaches to AI do. We should not be surprised, then, that all the problems of induction bedevil machine learning and data-centric approaches to AI. Data are just observed facts, stored in computers for accessibility. And observed facts, no matter how much we analyze them, don’t get us to general understanding or intelligence.”

Excerpt from Erik J. Larson, The Myth of Artificial Intelligence: https://books.apple.com/us/book/the-myth-of-artificial-intelligence/id1551746330

As for getting to "general understanding or intelligence": neither symbol manipulation, nor bigger data sets, nor more parameters, nor anything else of that sort will get us there. At the *very least*, inference about the physical world is required. Necessary, but not sufficient. Inference implies inferring the intentions of living beings. As does quantum physics.

Mar 10 · Liked by Gary Marcus

The deeply annoying thing is we just went through this Frankfurtian Bullshit with IBM's Watson.

When are these people going to learn the Map is not the Territory, the Narrative is not the Message?

The idea that scaling will solve these problems is flawed, IMO. Sure, scaling helps with some issues, but if we look at real-world intelligence, it's undeniable that the average 5-year-old has a far better and deeper understanding of the "real world" without having been trained on the sum total of documents available on the Internet. GPT-4 has been scaled up the wazoo, far more than any human in fact, and still doesn't understand that six-fingered humans just aren't a thing.

Mar 10 · edited Mar 10 · Liked by Gary Marcus

The problem with the scaling laws is the y-axis: "LLM loss" does not correlate with "general intelligence".
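To make the y-axis point concrete, here is a minimal sketch of the kind of curve being extrapolated. The functional form is the published Chinchilla-style fit, loss = E + A/N^alpha + B/D^beta, but the constants below are illustrative placeholders, not anyone's fitted values.

```python
# Chinchilla-style scaling curve: pretraining loss as a function of
# parameter count N and training tokens D. All constants here are
# illustrative placeholders, not fitted values from any paper.
E = 1.7                   # "irreducible" loss term
A, ALPHA = 400.0, 0.34    # parameter-scaling term
B, BETA = 400.0, 0.28     # data-scaling term

def llm_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Using a commonly cited ~20 tokens-per-parameter training ratio:
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params: predicted loss {llm_loss(n, 20 * n):.3f}")
```

Under the formula, loss falls smoothly forever as you scale, but nothing on that y-axis measures grounding, reasoning, or truthfulness, which is exactly the problem.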

Mar 10 · Liked by Gary Marcus

"I still think we need a paradigm shift" - on it! ;-)

Mar 16 · Liked by Gary Marcus

Autonomous vehicles have caused deaths.

Commercial robots have caused deaths.

One hallucinating LLM has led to a death.

RAG-augmented LLMs are still churning out outrageous hallucinations.

Mix these LLMs with autonomous humanoid robots and this could go really badly.

Matthew McConaughey in The Wolf of Wall Street: "Fugayzi, fugazi. It's a whazy. It's a woozie. It's fairy dust. It doesn't exist. It's never landed. It is no matter. It's not on the elemental chart. It's not fucking real."

You said this one twice:

“Deep learning is at its best when all we need are rough-ready results, where stakes are low and perfect results optional.” Still true.

Great list BTW 👍

Hi Gary, nice! Your predictions will 'stand' for as long as there is no fundamental change in the approach. Adding more data isn't a fundamental change. Adding more forms of data (images, audio, ...) isn't a fundamental change either. AI won't build "world models" by ingesting videos, and robots will not leap over the Moravec hump simply by being interfaced with LLMs.

It is fashionable to wonder about cat-like intelligence, dog-like intelligence, the intelligence of babies, etc. None of them does what it does by crunching data or doing symbolic reasoning. Core aspects of biological intelligence lie beyond explicit digital computation.

Well said. I was at Google until 2017, when the corporate mantra was "AI permeates everything we do."

ChatGPT is, in fact, very useful for certain things. More things than the "expert systems" that were ballyhooed 30 years ago. Probably in 10 years there will be even more things. But replacing humans? Forget it.

"Now with AI!" is the modern "New and improved!" on the product packaging.

Hubris of Silicon Valley tech bros continues to scale up faster than 333 red balloons. Still true.

Assume you are completely correct in your assessment of current neural network approaches: do you see any enduring use case(s) for the technology after the white-hot hype has died down?

Mar 14 · edited Mar 14

And now we have hallucinating hardware.

Quite fascinating how stitching together robotics, YOLO, a more up-to-date LLM, and text-to-speech can make for a cool, if slightly terrifying, demo 😅

Would be interesting to see how Figure 01 or a Boston Dynamics counterpart would respond to a masterfully staged optical illusion, like a _picture_ of a dish rack rather than an actual one. Or worse, deepfake instructions via a screen (e.g., a hacked Google Nest Hub).

https://youtu.be/GiKvPJSOUmE
