54 Comments

It's fascinating that the authors conclude that GPT-2 "may be our best model of language representations in the brain" when really what they have is a 25% correlation between one layer of GPT-2 and one aspect of the data. If they mean "our best (simulated) model," then I guess they might have a point, although it's hard to know what being the best AI model of any cognitive process is worth at this point. If they mean "best model (period)," that's quite the claim.
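
To make concrete what a figure like that rests on: results of this kind typically come from encoding models that regress brain recordings onto one layer's activations and report the held-out correlation. Below is a minimal Python sketch of that style of analysis. The layer index is arbitrary and the "brain" data is a synthetic stand-in (real studies use fMRI or ECoG recordings), so this illustrates the procedure, not the paper's actual pipeline.

```python
# Minimal sketch: correlate one GPT-2 layer's activations with (simulated)
# brain responses via a ridge encoding model. Synthetic data throughout.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

# Toy stimulus set; real studies use narrative reading/listening corpora.
subjects = ["The cat", "A dog", "The child", "My friend", "The teacher"]
verbs = ["sees", "likes", "chases", "finds"]
objects = ["the ball", "a bird", "the book", "some food"]
sentences = [f"{s} {v} {o}." for s in subjects for v in verbs for o in objects]

LAYER = 8  # "one layer of GPT-2"; which layer fits best is an empirical question

feats = []
for sent in sentences:
    inputs = tokenizer(sent, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[LAYER] has shape [1, seq_len, 768]; mean-pool over tokens
    feats.append(out.hidden_states[LAYER][0].mean(dim=0).numpy())
X = np.stack(feats)

# Synthetic stand-in for a brain response: a noisy linear readout of the
# features. In a real study, y would be recorded responses to the stimuli.
rng = np.random.default_rng(0)
signal = X @ rng.normal(size=X.shape[1])
y = signal + rng.normal(scale=2.0 * signal.std(), size=len(X))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pred = Ridge(alpha=10.0).fit(X_tr, y_tr).predict(X_te)
print(f"held-out correlation for layer {LAYER}: r = {np.corrcoef(pred, y_te)[0, 1]:.2f}")
```

With real neural data, that held-out r is exactly the kind of number (0.25 in the case being discussed) that gets summarized as an "X% correlation".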

Oct 16, 2022 · Liked by Gary Marcus

Another good post to point out problems with the AI reporting out there.

You make it your business to point out errors, especially with respect to unsupported claims. But such 'facts' do not convince people. As psychological research has shown, it works the other way around: convictions influence what we accept (or even notice) as 'facts' far more than facts change our convictions. AI hype, like many other human convictions (especially extreme ones), is rather fact- and logic-resistant.

What AI hype thus illustrates, ironically enough, is not so much the power of digital AI as the weakness of humans.

Our convictions stem from reinforcement, indeed a bit like ML: what we hear or experience often, or hear from a close contact, is what sticks. That is not so different from the 'learning' of ML, whether unsupervised or supervised. That analogy leads ML/AI believers to assume it must be possible to build something with the same 'power' that we have. Symbolic AI's hype was likewise built on a conviction, namely that intelligence was deep down based on logic and facts (a conviction that resulted from "2500 years of footnotes to Plato"). At some point, the lack of progress will break that assumption too. You're just recognising it earlier than most, and that is not a nice situation to be in. Ignorance is ...

Oct 16, 2022 · edited Oct 16, 2022 · Liked by Gary Marcus

Hi Gary, another great article; thank you for pulling so many diverse pieces together! The misguided optimism and outright errors regarding the amazing qualities of AI stem from just one thing: conflating a symbol with its meaning! Words, x-rays, videos "mean" something to us when we look at them (or hear them, touch them...) because we have our own understanding of them that is apart from the symbols themselves.

But no form of AI to date has an innate representation of anything! Innate representation is, by definition, only possible when there is nothing between the system and its environment that would re-represent, abstract, narrow, or simplify the world.

AI's problem is us!!


I have spent my whole career researching and building Human Language Technology. "Speech is just around the corner" (that is, speech recognition software accurate and fast enough to go mainstream very soon) was something I heard repeatedly starting from the late 80s, and every year since, so that for more than 20 years we lived in what seemed to be perpetual disappointment. And then, suddenly it seems, in the early 2010s, the problem was solved! Dictation is now almost better than human transcription. So, yes, we are not there yet on many, many AI fronts, and I agree that lots of charlatans are making much unnecessary noise, but we will get there. As for those who are earnestly impatient or naively optimistic? We need them to keep hope alive and the money coming in to finance the important work being done.


Thanks Gary. Sensible stuff. In the 80s we had promises of true AI (referred to as 'strong' AI), whose internal workings were to embody world models of increasing completeness. In contrast, useful behaviours based on statistical (later 'big data') processing were called 'weak' AI. It is sobering to read some of the predictions from 80s evangelists. Incredible then, and incredible now.


Thanks, Gary, for the great post. You're welcome to visit my blog: t.me/natural_language_explainability


PS: I've always found this takedown of Chomsky solid and fair: https://norvig.com/chomsky.html


"we don’t yet have any serious candidates"

We do have exciting alternative approaches that merit serious consideration by the mainstream. In my opinion, deep learning is the biggest red herring on the road to AGI in the history of AI. Symbolic AI is a close second. Neither, alone or combined, will play any role in cracking AGI. AGI researchers should immediately abandon deep learning and every other kind of gradient-based optimization model, and start focusing on winner-take-all, spike-timing-dependent plasticity (STDP) models.
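
To make that proposal concrete, here is a toy numpy sketch of the mechanism being advocated: a layer of neurons competing via hard winner-take-all, with the winner's synapses updated by a pair-based STDP rule. Every number in it (network size, time constants, learning rates, the two input patterns) is an illustrative assumption, not something taken from the paper below.

```python
# Toy sketch of winner-take-all + pair-based STDP learning two input patterns.
import numpy as np

rng = np.random.default_rng(42)

N_IN, N_OUT = 50, 4            # input afferents, competing output neurons
T_STEPS, DT = 200, 1.0         # timesteps per presentation, ms per step
TAU_TRACE = 20.0               # decay constant of presynaptic traces (ms)
A_PLUS, A_MINUS = 0.02, 0.01   # potentiation / depression step sizes

W = rng.uniform(0.2, 0.8, size=(N_OUT, N_IN))  # synaptic weights

def present(rates_hz, learn=True):
    """Present one input rate pattern; return spike counts per output neuron."""
    trace = np.zeros(N_IN)                 # presynaptic eligibility traces
    counts = np.zeros(N_OUT, dtype=int)
    for _ in range(T_STEPS):
        pre = rng.random(N_IN) < rates_hz * DT / 1000.0  # Poisson input spikes
        trace = trace * np.exp(-DT / TAU_TRACE) + pre    # decay, then add
        drive = W @ pre                    # instantaneous synaptic drive
        winner = int(np.argmax(drive))     # hard winner-take-all competition
        if drive[winner] > 0:              # the winner alone gets to spike
            counts[winner] += 1
            if learn:
                # Pair-based STDP, winner only: synapses whose inputs fired
                # recently (high trace) are potentiated; quiet synapses are
                # slightly depressed (a common heterosynaptic simplification).
                W[winner] += A_PLUS * trace - A_MINUS * (trace < 0.01)
                np.clip(W[winner], 0.0, 1.0, out=W[winner])
    return counts

# Two input patterns: opposite halves of the afferents fire at 60 Hz.
half = N_IN // 2
pat_a = np.where(np.arange(N_IN) < half, 60.0, 2.0)
pat_b = np.where(np.arange(N_IN) >= half, 60.0, 2.0)

for _ in range(20):                        # unsupervised training, no gradients
    present(pat_a)
    present(pat_b)

print("spike counts on A:", present(pat_a, learn=False))
print("spike counts on B:", present(pat_b, learn=False))
```

Note there is no loss function and no gradient anywhere: neurons specialize to patterns purely through local spike timing and competition, which is the design point being argued for.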

Read this recent paper for a start. This is the true future of AI.

Columnar Learning Networks for Multisensory Spatiotemporal Learning

https://onlinelibrary.wiley.com/doi/epdf/10.1002/aisy.202200179


"we don’t yet have any serious candidates" - I do not have an application to install, only a hint about an algorithm. The question is - whether to develop it as it is or think more? If you don't have time for the whole article what about just one section, the first half of it?
