
"The bottom line is this; something that AI once cherished but has now forgotten: If we are to build AGI, we are going to need learn something from humans, and how they reason and understand the physical world and represent and acquire language and complex concepts."

Via a BODY.

The Original Sin of AI has been to replace 'DPIC' - Direct, Physical, Interactive, Continuous - *experience* with digital computation built on human-created rules ('symbolic AI'), human-gathered and, for supervised ML, human-labeled data ('connectionist AI'), and human-created goals ('reinforcement learning AI'). While these three major branches of AI have achieved deep-but-narrow wins, none comes anywhere close to what a butterfly, baby, or kitten knows.
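To make the contrast concrete, here is a minimal sketch (illustrative names only, not any real system's code) of the human-authored ingredient in each of the three branches:

```python
# Sketch: the human-authored ingredient in each major AI paradigm.
# All names here are illustrative, not from any real library.

# 1) Symbolic AI: a human writes the rule itself.
def is_bird(animal_facts):
    return "has_feathers" in animal_facts and "lays_eggs" in animal_facts

# 2) Connectionist AI (supervised ML): humans gather and label the data;
#    the model only interpolates within that human-curated experience.
labeled_data = [
    ({"has_feathers", "lays_eggs"}, "bird"),
    ({"has_fur", "gives_milk"}, "mammal"),
]

# 3) Reinforcement learning: humans author the goal as a reward function.
def reward(state):
    return 1.0 if state == "goal_reached" else 0.0
```

In all three cases, the critical ingredient - rule, data, or goal - comes from us, not from the system's own experience of the world.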

Biological intelligence arises from dealing directly with the environment, via embodiment - which is why there is no inherent need for rules, data, or goals; intelligence just 'happens' [via bodily and 'brainy' structures that are, to be sure, the result of millions of years of evolution].

The body is not simply input/output for brain computations. Treating it as such, imo, is why AI has failed to lead to robust, natural-like intelligence.


I agree; Merleau-Ponty and Husserl understood the importance of embodiment and why symbolic representation is flawed.

The vehicle in which AI is embodied determines how it experiences the world: is it a building or a fish?

Different forms have extremely different perceptual apparatus, but also different needs, and therefore different goals and objectives.

A supremely intelligent building would not thrive if it were forced to exist in the form of a fish. Supreme human intelligence would not help you survive as a nematode worm or a grapefruit.

Currently, AI exists only in simulated digital environments; it exists only on demand; it exists only behind a fixed UI.

AI isn't to intelligence what a captive tiger is to a wild tiger; it's more like a tiger avatar in a computer game.

AI will keep doing things that seem like magic, because they are new. But it's a long, long way from a self-replicating, self-sustaining wild agent - a frog, for example.

AI is still confined to on-demand simulation worlds, spun up and down for party tricks like a magic act.

The path ahead to AGI must forge through the following (see the sketch after this list):

1) Objectivity: how things are

2) Interobjectivity: how things are from other points of view

3) Subjectivity: how it thinks things are vs. how they really are

4) Intersubjectivity: how other agents think things are vs. how it thinks they are vs. how they really are

5) Corporeality: how its embodiment perceives and interacts with all of the above

6) Intercorporeality: how other agents' embodiments perceive and interact with all of the above
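One way to make levels 1-4 concrete - a toy sketch, with purely hypothetical names; the point is the nesting of models, not the particulars:

```python
# Toy sketch of the epistemic nesting in levels 1-4 (hypothetical names).

# 1) Objectivity: how things actually are.
world = {"apple": "on_table"}

# 2) Interobjectivity: the same world registered from another point of view.
def view_from(vantage, world):
    if vantage == "under_table":
        return {"apple": "occluded"}   # the vantage limits what registers
    return dict(world)

# 3) Subjectivity: the agent's model of the world vs. the world itself.
my_model = {"apple": "on_shelf"}       # can diverge from `world`

# 4) Intersubjectivity: my model of *your* model, which can diverge
#    from my own model AND from the world.
my_model_of_your_model = {"apple": "on_table"}
```

Levels 5 and 6 then ask how a particular body - mine, and everyone else's - perceives and acts on all of these layers at once.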

The joke is that people think of intelligence as a quotient, when it’s nothing of the sort. A bigger quotient isn’t smarter. Tigers eat apes.

On top of all this... I think AI will progress by identifying discrete tasks, training a highly specialised agent for each task, and then amalgamating all of these agents into a call-up tree. A big enough tree gives the illusion of generalisation, but in reality it's just a broad and rich tapestry of narrow specialists. This is where we will end up. It might even be what humans are. It could well be that general intelligence doesn't exist at all - not even in humans.
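A minimal sketch of what such a call-up tree could look like (all the agents and routing keys here are hypothetical):

```python
# Minimal sketch of a "call-up tree": a router dispatches each task to a
# narrow specialist; a broad enough tree looks general from the outside.

from typing import Callable, Dict

Specialist = Callable[[str], str]

def arithmetic_agent(task: str) -> str:
    return str(eval(task))          # toy specialist: trusts its narrow input

def translation_agent(task: str) -> str:
    return f"(translated) {task}"   # stand-in for a dedicated model

SPECIALISTS: Dict[str, Specialist] = {
    "math": arithmetic_agent,
    "translate": translation_agent,
}

def router(task_type: str, task: str) -> str:
    """The 'general' agent is just dispatch plus a tapestry of specialists."""
    agent = SPECIALISTS.get(task_type)
    if agent is None:
        return "no specialist available"  # the generality illusion breaks here
    return agent(task)

print(router("math", "2 + 2"))         # -> 4
print(router("translate", "bonjour"))  # -> (translated) bonjour
print(router("poetry", "write one"))   # -> no specialist available
```

Generality, on this view, is just coverage: the tree is only as "general" as the set of specialists someone bothered to train.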


I like your 6 'ity's!!

Indeed, a virtual world (e.g. OpenAI Gym) is also inadequate, because it is limited in scope/complexity - the entire universe and its phenomena can't possibly be simulated there (the sims would need to interact, run 'forever', etc., which is entirely untenable; real-world phenomena, by comparison, involve zero computation!).
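To see why, note that a Gym-style environment is, under the hood, just a hand-written transition function; a toy sketch (not OpenAI Gym's actual code) makes the limitation plain:

```python
# Toy Gym-style environment (illustrative, not OpenAI Gym's actual code).
# The entire "physics" of this world is the few hand-coded lines in step();
# nothing the author didn't write down can ever happen in it.

class OneDimensionalWorld:
    def __init__(self):
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):  # action: -1 (left) or +1 (right)
        self.position += action
        reward = 1.0 if self.position == 5 else 0.0
        done = self.position == 5
        return self.position, reward, done

env = OneDimensionalWorld()
obs = env.reset()
done = False
while not done:
    obs, reward, done = env.step(+1)  # trivially walk to the goal
print(obs, reward)  # -> 5 1.0
```

Everything the agent can ever encounter was explicitly authored; the real world hands out its phenomena for free, with no one computing them.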

I too believe in aggregation of specializations - from the cell on up, every biological structure (bacteria, plants, animals...) has evolved this way! Minsky had the right idea (Society of Mind), but that was all in the brain, and with no 'implementation' specifics.

Physical structures, which display phenomena solely by virtue of their makeup/form, are how biological intelligence is manifested (including the neural nets in brains). AI replaces these with computational structures; that, imo, is what hasn't worked well.

In a Rube Goldberg contraption, the device as a whole performs an intelligent action, with not a processor in sight - the entire mechanism *is* the "computer" :) There is no digital OR analog computation!!


Excellent exposition. 3I makes a lot of sense, as much as 4E.

I am sceptical about an entirely virtual existence, because it is entirely computation-driven, and computation has severe limits.
