Hi Gary,
I fully agree.
Case in point: deep learning architectures are designed. For example, BERT is bi-directional while GPT is uni-directional. This difference is not learned but preset ('inborn'), and it shapes what gets learned.
But it is interesting to ask whether critical aspects of compositional cognition, e.g. the 'logistics of access' it requires, can be learned from a more basic architecture or need to be preset.
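To make the 'preset, not learned' point concrete, here is a minimal sketch (my own illustration in NumPy; the helper name is hypothetical) of how a bidirectional, BERT-style attention mask versus a causal, GPT-style mask is fixed by the designer before any training data is seen:

```python
# Illustrative only: the visibility pattern of attention is set by the architect,
# not learned from data. True means "position i may attend to position j".
import numpy as np

def attention_mask(seq_len: int, bidirectional: bool) -> np.ndarray:
    if bidirectional:
        # BERT-style: every token can attend to every other token
        return np.ones((seq_len, seq_len), dtype=bool)
    # GPT-style: each token can attend only to itself and earlier tokens
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

print(attention_mask(4, bidirectional=True).astype(int))   # all ones
print(attention_mask(4, bidirectional=False).astype(int))  # lower-triangular
```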
Best,
Frank van der Velde
agreed! and nice to you, here.
Great to be here.
If you look at this issue from an evolutionary perspective, you might say that it would be 'stupid' of evolution not to give an individual a head start in life.
Otherwise, you would have generation after generation of painstaking, trial-and-error-like development, and yet nothing to show for it when an individual is born.
Of course, learning is crucial, and what deep learning has shown is impressive. Indeed, one of the reasons why Watson failed could be that it did not have the benefit of 'hands-on' experience (it takes more than a lot of information to become a good doctor). So, statistics matters.
But before I would accept that it is the only thing, I would like to see the statistics behind, say, SpongeBob SquarePants. What is there in the environment of a young child that lets it so easily relate to a character like this?
Hi Gary, excellent article that lays out the two 'sides' :) Indeed, nurture won't be useful without nature.
Also - Bloom's Taxonomy offers a quite useful, graduated/hierarchical list of capabilities, that can serve to create tests against which to assess AI mastery. AI thus far has been stuck at the bottommost level :) :(
Also, it seems to me that human learning stands apart from all others', on account of our innate abilities to represent happenings directly, i.e. gain body-based "experience", AND to represent things (direct experiences, objective knowledge...) symbolically as well. This duality lets us glide back and forth, lets us symbolize our knowledge and experience for others to pick up, and conversely, lets us benefit from others' symbolizations (going back 1000s of years!). Other animals seem more limited in the 'direct <-> symbolic' mapping.
link to that? don’t know it (or which Bloom)
This for ex (searching brings up more verb lists in each category): https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/
interesting that gpt can often create, yet never understand
Indeed!! Same with the current crop of image generators.
I see it this way: GPT-3, DALL·E, etc. create only (in fact, just compute!) word order and RGB pixel values respectively, having learned nothing but word order and RGB pixel values - with zero additional understanding of the input or the output; it is we humans, given our world experience, who make sense of the computed word ordering and pixel colors :) :) Would love your thoughts on this.
Still reading this but another myth might be:
Deep learning is so powerful it will learn whatever innate knowledge it needs to know anyway.
that was lecun’s position — until he included both an innate architecture and an innate motivational system in his model 🤣
Let's note the self-serving assumption that's being smuggled in by the "innateness" corner: that an AI always starts with a null baseline and then is fed raw data over time. But what if a system starts from a much richer baseline -- that is, it is able to do things as soon as you turn it on -- and then "learns" on top of what it started with? And what if a meta-AI system, fed with data on how each of the baseline replicas (all starting from the same non-nil baseline) reacted to incoming real-world challenges, were able to learn from those systems (for instance, how each handles the various edge cases it encounters) and abstract across them, so that the new baseline is augmented accordingly? If we want to refer to these new baselines as "innate" states, what meaning does "innateness" have other than the baseline of a ready-to-perform (as well as ready-to-learn) data-driven system?
IMHO, the "richer baseline" is the innateness we're talking about. Although an AI does need to be able to start right out of the box, it makes sense for it to also learn, building on its baseline in order to better serve its purpose. My kitchen robot should be able to work once assembled but needs to spend some time orienting itself to my kitchen and learning my preferences before it is ready to make my morning coffee.
I'm not exactly sure what you mean by a meta-AI system but, in terms of my kitchen robot example, we could certainly have all the robots learn from each other. Easy to do if they have an internet connection or have their knowledge held in the cloud. And that learned common knowledge could become innate in future robots of the same product line. In yet another scenario, if I visit your house and like how your kitchen robot behaves, perhaps I could export its learning to my robot. Might have to buy you dinner though. The possibilities are endless.
Indeed, and hence my tentative contention that maybe (and I don't know), it all boils down to data -- whether fed from scratch or fed on top of a hardened baseline of prior learnings.
By meta-AI system I simply mean an AI system whose domain of inquiry is not images or text or sound but the behavior of a set of more primary AI systems: all running on the same neural-net architecture, all starting from the same initial data set, but each facing different challenges (edge cases or not), and delivering results that are then annotated for the meta-AI system, so that it can learn how to train these AI systems without being tethered to a single thread of experiences.
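A rough, hypothetical sketch of that loop (all names and toy data structures here are illustrative, not any real system): replicas share one baseline, each handles its own stream of challenges, and a meta step pools the logged outcomes into an augmented baseline for the next generation.

```python
# Toy illustration only: replicas start from the same baseline, each faces
# different challenges, and the meta step folds their failures back into an
# augmented baseline for the next generation of replicas.

def run_replica(baseline: set, challenges: list) -> list:
    """A replica 'handles' a challenge if its baseline already covers it; log every outcome."""
    return [(c, c in baseline) for c in challenges]

def meta_update(baseline: set, logs: list) -> set:
    """The meta step abstracts over all replica logs and augments the shared baseline."""
    failures = {c for log in logs for (c, ok) in log if not ok}
    return baseline | failures  # the next generation starts from a richer 'innate' state

baseline = {"boil water", "grind beans"}
logs = [run_replica(baseline, ["boil water", "steam milk"]),
        run_replica(baseline, ["grind beans", "clean filter"])]
print(meta_update(baseline, logs))  # baseline augmented with the challenges the replicas missed
```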
I agree that no rational person who is even remotely aware of findings in neuroscience and cognitive psychology would disagree that both learning and innateness are part of the answer. I also like how you state the obvious: that learning cannot be explained without admitting that the "learning mechanisms" (or at least many/most?) must be innate - lest we run into an infinite regress. That's an excellent point.
To me, the debate is not whether we have both learning and innateness. To me the question is how much of each, and what kind of each - i.e., what is it that we learn, and what is it that we do not - and cannot, in a lifetime - learn? These are the questions that need to be answered. A chimpanzee living with humans will never be more intelligent than a baby left alone in the jungle. The question thus is: what is it that is innate? What mechanisms and what universal logic seem to be built in that ALLOW US to learn the rest?
Isn't that the issue?
And thanks for another great article.
Someone please enlighten me on how I can locate the semantic content of this: "learning cannot be explained without admitting that the 'learning mechanisms'... must be innate." In other words: We can't explain x unless we admit that x exists a priori.
As to this question: "What mechanisms and what universal logic seems to be built in that ALLOWS US to learn the rest," I would very much love an answer, or even an outline of an answer, to that question.
Marcus writes...
"Why am I into both learning and innateness? In large part because I care about what make humans unique."
One take on what makes humans unique...
1) We are made of thought.
2) Thought operates by dividing a single unified reality into conceptual parts.
3) This conceptual division process is the source of both our genius and insanity.
GENIUS: We can rearrange the conceptual parts in our minds to create visions of reality which don't yet exist, that is, we can be creative.
INSANITY: This thought-generated conceptual division process creates a human experience in which we feel divided from reality, divided from each other, and even divided within our own minds. This experience of isolation generates fear, which is in turn the source of most human problems.
The best example of this marriage between genius and insanity may be nuclear weapons. What makes us unique? No other species possesses both the brilliance and insanity necessary to engineer its own extinction.
We are made of thought. Thought operates by a process of division. And the rest of the human story flows from there. This is the innate foundation upon which AI is being developed.
The mechanism of autonomous learning is innate. We can consider this as a kind of knowledge, represented by algorithms and implemented as "hardware". All other knowledge is acquired in the learning process.
Learning is signal transduction in biology or System 1 in cognitive sciences.
Also, it is a process of understanding and memorization - or, in other words, sensing, perception, feeling and intuition. Thanks for the great post.
And intuition is innateness or instinct.
All three words being, how shall we call them, black boxes that explain nothing. They are metaphysical posits, which explains why, 60 years on, we have not moved one iota towards replicating what a human being can do. Until those black boxes have been opened and explained to the degree that would enable us to emulate them, please don't tell me that we have made any scientific advances. ✌️
Agreed that we should expect the human brain and body to be primed for learning and that there is no reason to pretend that we can insulate learning from that priming. The language of nativism and innateness is something I don't tend to embrace (too much confusion ensues in my neck of the intellectual woods).
I am curious where Gary stands on Michael Tomasello's interventions into language acquisition/human communication.
not sure what you mean by his “interventions”
Interventions just means what he has contributed to the debates around the topic of how humans learn and what the evolutionary context was for such learning. Please bear with me because this is way outside my field. But a while back I read a long Scientific American article which detailed Tomasello's challenge to the "universal grammar" idea that Chomsky introduced and Pinker refined.
My (shaky) impression is that on the topic of "AGI" in general, either a hardcore Chomskyite OR a hardcore Tomasellian would agree with your premise that what humans can do innately and what they learn is best understood as integrated. So far as I can tell, the (very dumb) idea that there is a kind of zero-sum game between innate and learned capacities derives from propositions about machine learning that are then extrapolated to humans (i.e., the endemic problem of anthropomorphization).
So I'm raising the Tomasello "intervention" as a side question: b/c I am genuinely interested in whether you think it's useful to contemplate the subtly different emphases between Tomasello and Chomsky/Pinker from the standpoint of theorizing what "AGI" might entail or require.
I guess i would use the word contribution. I think that Tomasello’s key contribution has been to focus on the value of theory of mind towards acquiring language, and that certainly has value. (I don’t think his arguments against nativism per se are particularly strong.)
It may just be me but I never understood the importance of this debate. Isn't it obvious that in order for the brain to learn anything, it must first exist? There can be no learning without an existing learning mechanism designed for that purpose. Why is there a debate about this? Should not the discussion be about what is the best innate mechanism for generalized intelligence?
In this light, I see nothing in either deep learning or symbolic AI that can properly generalize. The ability to generalize is fundamental to intelligence and must be innate to it. That is, a truly intelligent system must be designed from the ground up with generalization in mind. This includes the design of its sensors and effectors. Many biologists are aware of the complex design of biological sensors, especially the retina and the cochlea. Nothing must be taken for granted.
PS. Even insects generalize.
I would like to add another myth: "Innateness" is a word that means something useful rather than simply a stand-in for: 'We are not sure what is going on, really, other than that there is something going on beyond just gathering data and creating layers and connections'. In other words, stating that there is something "innate" is not much more sophisticated than what Descartes called "soul" or Kant "noumenal". I would like to suggest that the beef with "innateness" starts and ends with the fact that people who can't define it insist that it has a precise, operationalizable meaning, when it just doesn't.
sorry that you weren’t able to read the footnote where i addressed this issue. browser issue maybe?
You mean the one where you state that innateness is the "idea that some important aspects of mental structure are partly shaped by inherited information"? That sort of proves my point -- no? A Popper, a Popper, my Kingdom for a Popper.
ah so did read but chose to elide “is, information that was present before ontogeny (e.g., in DNA)—”; if you can change the DNA and get the same results, the hypothesis is plainly disconfirmed. and so is your irritating condescension.
Has anyone done experiments trying to disconfirm anything that you assert (and failed at trying to disconfirm)? More crucially, does your theory predict anything observable? I don't think anyone can do the former (my point is that yours and Chomsky et al.'s are metaphysical and not scientific statements), and would be eager to see evidence of the latter. I will leave alone for now the issue that your theory does not have any explanatory heft. 'Babies don't need to see 1,000 images of balls to say 'Ball' when they see one: they are born with some genetically inherited knowledge and some wiring that allows them to learn without gobs of data'. Ok: you point to a phenomenon that no one is denying is real. That's good. But you sure are NOT saving it by pointing us to "innateness" or by stating that it is "information that was present before ontogeny".
this will be my final reply to you. there’s a considerable and growing literature on developmental neuroscience showing detailed development of neural circuitry that can’t be attributed to learning. that literature wouldn’t exist if the innateness hypothesis were wrong; and its existence is predicted by the hypothesis. if the hypothesis were wrong, and such evidence could not be found, the hypothesis would be refuted. there are also of course deprivation experiments in which some aspects of neural structure grow normally, etc.
And this will be my final reply to you: the "innateness hypothesis" is no hypothesis because it doesn't say anything more than this: 'There is something going on here more than tabula rasa learning'. No one disagrees with that, and godspeed to all research out there that will help us get to a point where we can not only explain (and not just state the obvious) but even perhaps operationalize what they have learned to build better systems. Meantime, I don't think that pushing back against hype ("AI" is here and we know how to solve all problems) is a bad thing (and I value that in what you do), but I do find the truculence against folks who are doing good work unnecessary and a bit irritating. Peace out!✌️
1. "best nearby candidate" - almost a quote from my definition for intelligence.
2. "both learning and innateness" - newborns do not know about innateness, but they have brains and may learn about innateness eventually.
3. minor note - "humans unique" - is it about both sapiens and neanderthalensis?