Discussion about this post

James Murnau (aka Tim James)

It's fascinating that the authors conclude that GPT-2 "may be our best model of language representations in the brain" when really what they have is a 25% correlation between one layer of GPT-2 and one aspect of the data. If they mean "our best (simulated) model," then I guess they might have a point, although it's hard to know what being the best AI model of any cognitive process is worth at this point. If they mean "best model (period)," that's quite the claim.

Gerben Wierda

Another good post to point out problems with the AI reporting out there.

You make it your business to point out errors, especially with respect to unsupported claims. But such 'facts' do not convince people. As psychological research has shown, it works the other way around: convictions influence what we accept (or even notice) as 'facts' far more than facts change our convictions. AI hype, like many other human convictions (especially extreme ones), is rather fact- and logic-resistant.

What AI hype thus illustrates is, ironically enough, not so much the power of digital AI as the weakness of humans.

Our convictions stem from reinforcement, indeed a bit like ML. For us, it is about what we hear or experience often, or what we hear from a close contact. That is not so different from the 'learning' of ML (unsupervised/supervised). That analogy leads ML/AI believers to assume it must be possible to build something with the same 'power' we have. Symbolic AI's hype was likewise built on an assumption/conviction: that intelligence was, deep down, based on logic and facts (a conviction that resulted from "2500 years of footnotes to Plato"). At some point, the lack of progress will break that assumption. You're just recognising it earlier than most, and that is not a nice situation to be in. Ignorance is ...

52 more comments...
