Discussion about this post

Rebel Science

Wow. Interesting read. I'm much more optimistic than Marcus even though I disagree with his neuro-symbolic approach. It's always darkest before dawn. The AI field is on the cusp of another one of Kuhn's proverbial paradigm shifts. I think the time has come for a few brave maverick thinkers to throw the whole field out the window and start anew.

The AI community's obsession with language is a perfect example of putting the cart before the horse. Perceptual generalization should come first. It is the most important component of intelligence, but the community as a whole has largely ignored it. The representationalist approach, which deep learning embodies, is the notion that *everything* in the world must be represented in the system. It should be the first thing to be thrown out, I'm sorry to say. Corner cases and adversarial patterns have proved deadly to DL, something the autonomous vehicle industry found out the hard way after betting over $100 billion on DL. Combining DL with symbolic AI will not solve this problem.

Consider that a lowly honeybee's brain has fewer than 1 million neurons, and yet the bee can navigate and survive in highly complex 3D environments. It can do so because it can generalize. It has to, because its tiny brain cannot possibly store millions of learned representations of all the objects and patterns it might encounter in its lifetime. In other words, generalization is precisely what is required when scaling is too costly or not an option. Emulating the generalizing ability of a bee's tiny brain would be tantamount to solving AGI, in my opinion. Cracking generalized perception alone would be a monumental achievement; scaling up and adding motor control, goal-oriented behavior, and even a language-learning capability would be a breeze in comparison.

The exciting thing is that one does not need expensive supercomputers to achieve true perceptual generalization. There's no reason it cannot be demonstrated on a desktop computer with a few thousand neurons; scaling can come later. I think a breakthrough can happen at any time, because some of us AGI researchers see the current AI paradigm merely as an example of what not to do. We're taking a different route altogether. Systematic generalization is a growing subfield of AI. My prediction is that cracking AGI at the insect level can happen within 10 years. Scaling to human-level intelligence and beyond will then be mostly an engineering problem with a known solution.

AGI is a race and only the best approach will win. Good luck to all participants.

Venkat Srinivasan

Gary, I meant to respond to this when it first arrived. This is a particularly well-written piece. Couldn't agree with you more about databases of machine-interpretable knowledge, and about the need for hand-crafted knowledge combined with learning from data where that makes sense. This is exactly what I characterized as 'computational abstractions' in one of my papers.
