54 Comments

Deep learning (neural networks) started as alchemy and has now naturally progressed into a religion.


LOL


"started as alchemy": please develop the theme. I suspect you would wd do it well.


Well, there isn't much to develop in terms of a theme. Neural networks were originally created as machine analogues of human neurons; there isn't a solid mathematical theory behind them. Apart from the back-propagation algorithm, pretty much every advance in neural networks is empirically based, and that is how alchemy works: without theory or understanding.

Comment removed

Neural networks are not good even for perception, due to a fundamental unreliability problem that is baked into the algorithms and cannot be fixed without completely changing the paradigm. The unreliability manifests itself as adversarial examples, where introducing an amount of noise imperceptible to a human leads to a drastic change in the network's output. This hinders the application of neural networks in the real world, where errors have high costs. The problem is exacerbated by the black-box nature of neural networks: when a network makes an error, it is not obvious why, or how to prevent it from making the same error again. Simply adding the problematic case to the training data doesn't help, partly because retraining a big network is expensive, but more importantly because, due to the unreliability problem, there is no guarantee the network won't make the same mistake on a slight variation of the problematic case.
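
A minimal sketch of the phenomenon, assuming PyTorch/torchvision and a stock pretrained classifier (the "image" here is just a random tensor to keep the example self-contained; the same gradient-sign trick is what produces real adversarial images):

```python
# FGSM (fast gradient sign method) sketch: a tiny perturbation, invisible to
# a human, is often enough to flip a trained network's prediction.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)    # stand-in "image"
y = model(x).argmax(dim=1)                            # the model's own label

loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.01                                            # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1)         # imperceptible nudge

print("original  :", model(x).argmax(dim=1).item())
print("perturbed :", model(x_adv).argmax(dim=1).item())  # frequently differs
```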

As for LLMs, problem solving and reasoning is a very long topic, and we don't even have a good definition or understanding of reasoning. So I am just going to give an example: if LLMs are unable even to learn the rules of simple arithmetic (like addition and multiplication, but with large numbers, to rule out memorization), how can we talk about any reasoning abilities?
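
A sketch of that probe, assuming only the standard library; `ask_model` is a hypothetical placeholder for whatever LLM API is being tested, and the ground truth comes from Python's exact big-integer arithmetic:

```python
# Generate additions/multiplications with operands too large to have been
# memorized, and score the model against exact integer arithmetic.
import random

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: wire this up to the LLM under test.
    raise NotImplementedError

def probe(n_trials: int = 100, digits: int = 15) -> float:
    correct = 0
    for _ in range(n_trials):
        a = random.randrange(10 ** (digits - 1), 10 ** digits)
        b = random.randrange(10 ** (digits - 1), 10 ** digits)
        op, fn = random.choice([("+", int.__add__), ("*", int.__mul__)])
        reply = ask_model(f"Compute {a} {op} {b}. Answer with the number only.")
        correct += reply.strip() == str(fn(a, b))
    return correct / n_trials   # fraction of exactly correct answers
```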

Comment removed

LLMs only create the impression of being capable of reasoning by employing huge amounts of compute, memory, and training data. If they were indeed capable of reasoning, they would have no problem learning the simple rules of arithmetic. They can't, because they don't reason; they mostly memorize, and it's impossible to memorize every combination of digits in an arithmetic operation, because there are infinitely many. Here's another simple example: a computer can play tic-tac-toe in two ways. It can reason about the current position on the board, or it can simply memorize all possible board states (which for a 3x3 board is a fairly small number) along with the corresponding correct moves. Just by observing the computer's play you might think it is reasoning, when in fact it might be looking up the correct move for the current board state in a table.
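
A sketch of that second kind of player, assuming nothing beyond the standard library: precompute the optimal move for every reachable board with plain minimax, then "play" by pure table lookup. From the outside, the lookup player is indistinguishable from one that reasons about the position.

```python
# Precompute a lookup table of "correct" tic-tac-toe moves, then play from it.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def best_move(board, player):
    # Minimax from X's perspective: +1 X wins, -1 O wins, 0 draw.
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(board) if c == "."]
    if not moves:
        return 0, None
    results = []
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = best_move(child, "O" if player == "X" else "X")
        results.append((score, m))
    return max(results) if player == "X" else min(results)

table = {}  # board string -> memorized "correct" move

def fill(board="." * 9, player="X"):
    if winner(board) or "." not in board or board in table:
        return
    _, move = best_move(board, player)
    table[board] = move
    for m in (i for i, c in enumerate(board) if c == "."):
        fill(board[:m] + player + board[m + 1:], "O" if player == "X" else "X")

fill()
print(len(table), "positions memorized; lookup for the empty board:", table["." * 9])
```

For a 3x3 board the table holds only a few thousand entries; arithmetic over arbitrarily large numbers has no such table.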

Comment removed

The problem is that the real world is an open domain, with an infinite number of rare events and edge cases, so it's impossible to painstakingly train a system to perform well without reasoning, especially when that system is unreliable, as is the case with neural networks. The only way to solve the problem is to have an internal model of the world and do reasoning over it, just as you mentioned.

Comment removed

For a mediocre deep learning skeptic, you sure are right a lot.


ROFL! Indeed. Great job, Gary.


The hype surrounding this is near-astonishing to me. We have YouTubers predicting AGI by 2027. Based on LLMs? Meanwhile, the proliferation of errors and fakery continues unabated > https://petapixel.com/2024/03/07/recreating-iconic-photos-with-ai-image-generators/

What happens when generative AI starts making copies based upon its own copies?

This is what so many want to base a foundation for AGI upon...?


Worse - we have YouTubers predicting human-level AGI by September!

https://www.youtube.com/watch?v=pUye38cooOE


Oh, yes, that guy. He was actually the one to whom I was referring! I'd call his predictions "optimistic," to put it kindly.


There is some recent research that attempts to answer your second-to-last question. It doesn't look good, and it will potentially hand a competitive advantage to the large companies, like OpenAI, that had access to the pristine, pre-GenAI Internet:

https://arxiv.org/abs/2305.17493

https://arxiv.org/abs/2307.01850
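
To make the "copies of copies" worry concrete, here is a toy sketch (not the setup of either paper) assuming only NumPy: repeatedly fit a Gaussian to samples drawn from the previous generation's fit, and the learned distribution narrows until the tails are gone.

```python
# Toy model-collapse demo: each generation is "trained" only on the previous
# generation's output, so sampling noise compounds and the variance drifts
# toward zero.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                              # generation 0: the real data
for gen in range(1, 51):
    samples = rng.normal(mu, sigma, size=20)      # train on the last model's output
    mu, sigma = samples.mean(), samples.std()     # fit the next "model"
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```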


“The problems with induction noted above stem not from experience per se, but from the attempt to ground knowledge and inference in experience exclusively, which is precisely what machine learning approaches to AI do. We should not be surprised, then, that all the problems of induction bedevil machine learning and data-centric approaches to AI. Data are just observed facts, stored in computers for accessibility. And observed facts, no matter how much we analyze them, don’t get us to general understanding or intelligence.”

Excerpt from The Myth of Artificial Intelligence by Erik J. Larson: https://books.apple.com/us/book/the-myth-of-artificial-intelligence/id1551746330

As to getting to “general understanding or intelligence”, neither will symbol manipulation, bigger data sets, more parameters, or anything else. At the *very least*, inference about the physical world is required. Necessary, but not sufficient. Inference implies inferring the intentions of living beings. As does quantum physics.


The deeply annoying thing is we just went through this Frankfurtian Bullshit with IBM's Watson.

When are these people going to learn the Map is not the Territory, the Narrative is not the Message?


The idea that scaling will solve these problems is flawed, IMO. Surely, scaling helps some issues, but if we look at real world intelligence, it's undeniable that the average 5-year-old has a far better and deeper understanding of the "real world" while also not having been trained on the sum total of documents available on the Internet. GPT4 has been scaled up the wazoo, far more than any human, in fact, and still doesn't understand that six-fingered humans just aren't a thing.


Alas, it doesn’t really matter, as perception that it does understand has pummeled actual facts to pulp. It’s more than an uphill battle to try to explain this to anyone who cares to listen/learn 😔


The problem with the scaling laws is the y-axis: "LLM loss" does not correlate with "general intelligence".
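
For reference, that y-axis is next-token cross-entropy, typically fit by a power law of the Chinchilla form (Hoffmann et al., 2022), with N the parameter count, D the number of training tokens, and E the irreducible entropy of the text:

$$ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

The equation only promises that the loss creeps toward E as N and D grow; it says nothing about reasoning or understanding.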


"I still think we need a paradigm shift" - on it! ;-)


Autonomous vehicles have caused deaths.

Commercial robots have caused deaths.

One hallucinating LLM has led to a death.

RAG augmented LLMs are still churning out outrageous hallucinations.

Mix these LLMs with autonomous humanoid robots and this could go really badly.


For sure (I was quoted by Cade Metz in the NYT a few days ago briefly making a similar point).


Matthew McConaughey in the Wolf of Wall Street: "Fugayzi, fugazi. It's a whazy. It's a woozie. It's fairy dust. It doesn't exist. It's never landed. It is no matter. It's not on the elemental chart. It's not fucking real. "


You said this one twice:

“Deep learning is at its best when all we need are rough-ready results, where stakes are low and perfect results optional.“ Still true.

Great list BTW 👍


Also this has a couple typos:

Neurosymbolic might be a promising alternative. Pending/still true, and DepMind just has a nice Nature paper on a neurosymbolic system, AlhpaGeometry.


Hi Gary, nice! Your predictions will 'stand' for as long as there is no fundamental change in the approach. Adding more data isn't a fundamental change. Adding more forms of data (images, audio ...) isn't either. AI won't build "world models" by ingesting videos, and robots will not leap over the Moravec hump simply by getting interfaced with LLMs.

It is fashionable to wonder about cat-like intelligence or dog-like intelligence or the intelligence of babies etc. None of them do what they do by crunching data or doing symbolic reasoning. Core aspects of biological intelligence lie beyond explicit digital computation.

Comment removed

It's not an argument to make; it's evidence: billions of life forms exhibit intelligence, with zero chips, registers, etc. involved. Where is the evidence that digital computation plays a role? Show it.

You can't prove a negative. The burden of proof is on those who claim the two are equivalent.

By the way, I have no interest in arguing any further, no time to waste - if you forgot, we already did that a while back.

Comment removed

Indeed.

If you remember, I brought up 'direct experience' over and over again, earlier. And, that requires a body, by definition. There is more here: https://www.researchgate.net/publication/378189521_The_Embodied_Intelligent_Elephant_in_the_Room

Cheers.

Comment removed

Totally! Learning from failure is a fundamental way biological machines work; some of it is even hardwired (e.g. the 'rooting' behavior of infants, or how a butterfly on its own unfurls its wet wings and figures out how to fly away).

And, there is utility for sure, in all that we have created so far, even with the incomplete knowledge we have.

Comment removed

Bingo. I collect 'animal intelligence' clips as a hobby - they are amusing, amazing, and serve to illustrate the wide variety of 'intelligence', almost all of which involve no explicit symbol manipulation.

Comment removed

Hi John, omg! I'd love a casual chat sometime... I looked you up, and VOTEC as well - wow.

Indeed, an animal intelligence 'repository' of sorts would be a treasure trove. Among other things, it would illustrate the analog, structure->phenomena basis of core intelligence, in sharp contrast to symbol processing, which the AI community has insisted is the only approach (via the Physical Symbol System Hypothesis).

You might like these: https://www.researchgate.net/publication/378189521_The_Embodied_Intelligent_Elephant_in_the_Room and https://www.researchgate.net/publication/358886020_A_Physical_Structural_Perspective_of_Intelligence and https://www.researchgate.net/publication/346786737_Intelligence_-_Consider_This_and_Respond


Well said. I was at Google until 2017, when the corporate mantra was "AI permeates everything we do."

ChatGPT is, in fact, very useful for certain things. More things than the "expert systems" that were ballyhoo'ed 30 years ago. Probably in 10 years there will be even more things. But replacing humans? Forget it.

"Now with AI!" is the modern "New and improved!" on the product packaging.


Hubris of Silicon Valley tech bros continues to scale up faster than 333 red balloons. Still true.


Assume you are completely correct in your assessment of the current neural network approaches: do you see any enduring use case(s) for the current approach/technology, after the current white-hot hype has died down?


And now we have hallucinating hardware.

Quite fascinating how stitching together robotics, YOLO, a more up-to-date LLM, and text-to-speech can make for a cool, if slightly terrifying, demo 😅

Would be interesting to see how Figure 01 or a Boston Dynamics counterpart would respond to a masterfully set-up optical illusion, like a _picture_ of a dish rack rather than an actual one. Or worse, deepfake instructions via a screen (e.g. a hacked Google Nest Hub).

https://youtu.be/GiKvPJSOUmE
