I agree; Merleau-Ponty and Husserl understood the importance of embodiment and why symbolic representation is flawed.
The vehicle in which AI is embodied determines how it experiences the world: is it a building or a fish?
Different forms have extremely different perceptions, but also different needs, and therefore different goals and objectives.
A supremely intelligent building would not thrive if it were forced to exist in the form of a fish. Supreme human intelligence would not help you survive as a nematode worm or a grapefruit.
Currently, AI exists only in simulated digital environments: it exists only on demand, behind a fixed UI.
AI isn’t to intelligence what a captive tiger is to a wild tiger; it’s more like a tiger avatar in a computer game.
AI will keep doing things that seem like magic, because they are new. But it’s a long, long way from a self-replicating, self-sustaining wild agent like, for example, a frog.
AI is still confined to on-demand simulation worlds, spun up and down for party tricks like a magic act.
The path ahead to AGI must forge through:
1) Objectivity: how things are
2) Interobjectivity: how things are from other points of view
3) Subjectivity: how it thinks things are vs. how they really are
4) Intersubjectivity: how other agents think things are vs. how it thinks they are vs. how they really are
5) Corporeality: how its embodiment perceives and interacts with all of the above
6) Intercorporeality: how other agents' embodiments perceive and interact with all of the above
The joke is that people think of intelligence as a quotient, when it’s nothing of the sort. A bigger quotient isn’t smarter. Tigers eat apes.
On top of all this... I think AI will progress by identifying discrete tasks, training a highly specialised agent for each task, and then amalgamating all of these agents into a call-up tree (see the sketch below). A big enough tree gives the illusion of generalisation, but in reality it's just a broad and rich tapestry of narrow specialists. This is where we will end up. It might even be what humans are. It could well be that general intelligence doesn't exist at all, not even in humans.
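A toy sketch of that call-up tree, in Python. Everything here (Specialist, Router, the slash-delimited task paths) is a hypothetical illustration of the idea, not any existing framework's API: the root appears to handle "anything", while every leaf stays narrow.

```python
from typing import Callable, Dict, Union

class Specialist:
    """A narrow agent trained for exactly one task."""
    def __init__(self, name: str, handler: Callable[[str], str]):
        self.name = name
        self.handler = handler

    def run(self, query: str) -> str:
        return self.handler(query)

# A node in the tree is either another router or a leaf specialist.
Node = Union["Router", Specialist]

class Router:
    """Dispatches a slash-delimited task path down to a leaf specialist."""
    def __init__(self) -> None:
        self.children: Dict[str, Node] = {}

    def register(self, tag: str, node: Node) -> None:
        self.children[tag] = node

    def dispatch(self, path: str, query: str) -> str:
        # Peel off the next path segment, e.g. "maths/arithmetic" -> "maths".
        head, _, rest = path.partition("/")
        child = self.children[head]
        if isinstance(child, Specialist):
            return child.run(query)
        return child.dispatch(rest, query)

# Usage: a tiny two-level tree that looks "general" from the outside.
root = Router()
maths = Router()
maths.register("arithmetic",
               Specialist("adder", lambda q: str(sum(int(t) for t in q.split("+")))))
root.register("maths", maths)
root.register("echo", Specialist("echo", lambda q: q))

print(root.dispatch("maths/arithmetic", "2+3"))  # -> 5
print(root.dispatch("echo", "hello"))            # -> hello
```

The illusion of generality lives entirely in the routing: adding breadth means adding leaves, never making any single node smarter.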
I like your 6 'ity's!!
Indeed, a virtual world (e.g. OpenAI Gym etc.) is also inadequate, because it's limited in scope and complexity: the entire universe and its phenomena can't possibly be simulated there (the sims would need to interact, run 'forever', etc., which is entirely untenable; real-world phenomena, in comparison, involve zero computation!).
I too believe in aggregation of specializations: from the cell on up, every biological structure (bacteria, plants, animals...) has evolved this way! Minsky had the right idea (Society of Mind), but that was all in the brain, and with no implementation specifics.
Physical structures, which display phenomena solely by virtue of their makeup/form, are how biological intelligence is manifested (including neural nets in brains). AI replaces these with computational structures, and that is what hasn't worked well, imo.
In a Rube Goldberg contraption, the device as a whole performs an intelligent action with not a processor in sight; the entire mechanism *is* the "computer" :) There is no digital OR analog computation!!
I wonder what you think about this article: https://www.thephilosopher1923.org/post/artificial-bodies-and-the-promise-of-abstraction.
Excellent exposition. 3I makes a lot of sense, as much as 4E.
Am sceptical about entirely virtual existence, because that is entirely computation-driven, and computation has severe limits.