1. If you believe in innateness, you don’t believe in learning.
2. If you believe in learning, you should oppose innateness.
3. The more things are learned, the less we need innateness.
4. If you believe in innateness, you have to believe in innate, domain-specific knowledge.
5. If you believe that the only things that are innate are domain-general, you aren’t really a nativist.
Recent exchanges I have had on Twitter lead me to believe that each of these risible myths is alive and well, even here in late 2022; let me assure you that they are all nonsense. For once I will spare you links to the guilty.
Suffice it to say that none of them follows logically. Whatever the empirical facts of biology turn out to be, it should be obvious even from the armchair that one could in principle believe, for example, in both innateness and learning.
And in fact, literally everyone I know does. None of the myths relates to a position that anyone in the real world holds. For example, I am not aware of anybody who believes in innateness and yet denies that learning is important. Certainly all the outspoken nativists I know (Chomsky, Pinker, Spelke, Gallistel, myself, the late Jerry Fodor, etc.) are perfectly happy to recognize that some sorts of learning take place, even as we all also think that some important part of human mental machinery is built-in, in the sense of naturally arising via the processes of developmental biology, independently of specific experience.1
Take Chomsky, often characterized as the ultimate arch-nativist. Chomsky’s argument for a Language Acquisition Device is, actually, on close inspection, an argument for (a specialized) learning mechanism. Chomsky’s LAD is not, mind you, simple stimulus-response reinforcement, deep reinforcement learning, back-propagation, or anything else currently fashionable, but it is still a form of learning, in which various parameters of a child’s mind are set in specific ways, based on data.
Now Chomsky’s notion of learning may, perhaps, be more like a newborn chick’s imprinting than Skinnerian conditioning, but it is still learning just the same: setting parameters based on data. That’s what learning is, tuning parameters to data, and even Chomsky thinks we do some of that. (He just happens to think that innate constraints play a role in the process that tunes the parameters.)
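To make the parameter-setting picture concrete, here is a minimal toy sketch (my own illustration, not anything from Chomsky): a learner is born with a fixed hypothesis space, here a single binary head-direction parameter, and uses data only to choose between the built-in options.

```python
# Toy illustration (mine, not Chomsky's actual proposal): the hypothesis
# space -- a single binary "head-direction" parameter -- is innate; only
# the choice between its two values is driven by experience.

def set_head_direction(sentences):
    """Set the innate head-direction parameter from (verb, object) position pairs."""
    votes = {"head-initial": 0, "head-final": 0}
    for verb_pos, object_pos in sentences:
        if verb_pos < object_pos:
            votes["head-initial"] += 1   # e.g., English: "eat apples"
        else:
            votes["head-final"] += 1     # e.g., Japanese: "ringo o taberu"
    return max(votes, key=votes.get)

english_like = [(0, 1), (2, 3), (1, 2)]  # verb precedes object
print(set_head_direction(english_like))  # -> "head-initial"
```

Everything interesting in this little learner is innate except the final setting, which comes from data; that mix, rather than a choice between extremes, is the point.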
Likewise, Fodor’s argument in The Language of Thought, that concepts must be acquired via some set of innate primitives and an innate combinatorial apparatus, doesn’t deny learning. It just holds that whatever concepts are acquired (which is to say learned) are learned via application of the aforementioned combinatorial apparatus.
Pinker’s theory of semantic bootstrapping, likewise, is in part a theory of how some modest bits of innate knowledge about language universals might allow a child to fill in—learn—the rest. Spelke’s Core Knowledge theory is a theory about how Core Knowledge bootstraps other knowledge, and so on. Gallistel is as nativist as they come, and wrote a whole book called The Organization of Learning.
In fact, the more you like learning, the more you should embrace innateness, because part of what you should be embracing is what Peter Marler called innately-guided learning, a rubric that describes each of the theories I just discussed. The more of that you have, the more you can probably learn.
A beautiful example of innately-guided learning is the “Garcia effect”, otherwise known as learned taste aversion. Taste something and get sick soon afterwards, and you will avoid that taste for a very long time. The neural mechanisms for that represent an innate, domain-specific adaptation for learning a very specific type of information. (The imprinting of a chick on its best nearby candidate for mother is another well-known case.)
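One way to see what is distinctive here is a minimal sketch, under my own simplifying assumptions rather than any fitted model: a Rescorla-Wagner-style learner whose learning rates are innate and domain-specific, so a single taste-illness pairing does almost all the work, while a light-illness pairing barely registers, echoing Garcia’s classic finding. The numbers are hand-picked for the example.

```python
# Illustrative sketch (hand-picked numbers, not a fitted model): a
# Rescorla-Wagner-style delta rule in which the *associability* of each
# cue-outcome pairing is innate and domain-specific.

INNATE_ASSOCIABILITY = {           # hypothetical built-in priors
    ("taste", "illness"): 0.9,     # privileged pairing: learnable in one shot
    ("light", "illness"): 0.02,    # unprivileged pairing: barely learnable
}

def learn(cue, outcome, trials, w=0.0, lam=1.0):
    """Return associative strength after the given number of pairings."""
    alpha = INNATE_ASSOCIABILITY[(cue, outcome)]
    for _ in range(trials):
        w += alpha * (lam - w)     # standard delta-rule update
    return w

print(learn("taste", "illness", trials=1))  # ~0.90: near-complete after one trial
print(learn("light", "illness", trials=1))  # ~0.02: almost nothing
```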
My own guess, based on my reading of the ever-controversial psychology literature, is that humans are likely innately endowed with many learning mechanisms, each with different properties. Some might be tailored to language, others to observational learning, classical conditioning, hypothesis testing, cost-benefit analysis, and so on, perhaps dynamically combined and recombined in various ways in real time. The very variety of innate learning mechanisms may be central to what makes us humans special.
§
In case it is not obvious, I am a card-carrying nativist, as enamored with innateness as I am with learning. All of my work, on child language acquisition, generalization in neural networks, my book on developmental biology, etc., has really been about both: about trying to understand how biology (or, in the case of machines, prior knowledge) and learning work together.
Why am I into both learning and innateness? In large part because I care about what makes humans unique. In keeping with the above, the answer almost surely lies with some set of (not-yet-understood) learning mechanisms that are themselves innate. As Terrace and Petitto and others discovered empirically, if you raise a chimp among humans, you don’t get a human mind. Chimps lack the correct learning mechanisms; they can’t transform linguistic input into human-level understanding. If chimps and humans were born with the same learning mechanisms, you would expect to find more convergence between them than we actually observe. (Yes, their brains are smaller, but human children have smaller brains than adults, yet outperform the adults at learning language. Size may be part of it, but size isn’t everything. What counts is what you do with the neural material you have.) Knowing more about exactly what it is beyond size that makes our minds unique would tell us an immense amount, both about ourselves and about how we might want to build our AI.
§
Not wishing to belabor that which ought to be obvious, I leave countering the remaining myths as an exercise for the reader. I trust that they are not challenging to refute. (The trickiest is the idea that if you are a nativist you necessarily have to believe in innate, domain-specific knowledge; hint: you don’t. To join the club, all you have to do is think that some nontrivial stuff is innate. My own guess: we have a rich mixture of many innate mechanisms, some domain-specific, some domain-general.)
I only raise all this because so many people periodically forget a fundamental fact: learning and innateness are not and need not be mortal enemies.
Except for advanced learning mechanisms that are themselves learned (like the value of spaced practice, or ways of analyzing a book, e.g., starting from a perusal of the table of contents), most learning mechanisms themselves are probably innate.
If you believe in learning, you ought to believe that your learning mechanisms come from somewhere. On pain of infinite regress, at least one of those learning mechanisms must be innate. So unless you believe in magic, you are already, like it or not, a nativist.
Coda
Good news for those who find any of the above upsetting: You can always redefine your problems away! For example, you can maximally narrow the definition of information and argue that DNA doesn’t contain information (somebody actually tried to tell me this a few days ago) or that DNA contains no information about phenotypes (somebody else tried to tell me this the next day). Alternatively, you can take a maximally broad definition of learning and include under that umbrella literally anything that might be innate by dint of evolution, and call that learning too! (Honestly, someone last week tried to tell me that the structure of the hand was learned; I am not making this stuff up!)
But wait, that’s not all. You can also broaden the term empiricism to include every possible process, and narrow the term nativism to only include literal neural blueprints for entire brains. Or even do both at once! Narrow nativism AND broaden empiricism, and you are guaranteed to have everything line up on your side of the argument! Schopenhauer couldn’t have said it better.
1. A somewhat more technical way of putting the innateness hypothesis is to say that some important aspects of mental structure are partly shaped by the influence of inherited information—that is, information that was present before ontogeny (e.g., in DNA)—on the structure of the phenotype, as opposed to a view in which all mental structure is shaped by experience, with no influence from genetic contributions.
Hi Gary,
I fully agree.
Case in point: deep learning architectures are designed. E.g. BERT is bi-directional, GPT uni-directional. This difference is not learned but preset ('inborn') to influence learning.
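To make that concrete, here is a minimal sketch (illustrative only, not the actual BERT or GPT code) of how such a preset, 'inborn' difference looks: the attention mask is fixed by the designer before any training happens.

```python
# Minimal sketch (illustrative, not the actual BERT/GPT code): the mask
# that makes GPT-style models uni-directional and BERT-style models
# bi-directional is wired in before training starts -- architecture,
# not anything learned.
import numpy as np

def attention_mask(seq_len, causal):
    if causal:  # GPT-style: each position attends only to itself and the past
        return np.tril(np.ones((seq_len, seq_len)))
    return np.ones((seq_len, seq_len))  # BERT-style: every position sees every other

print(attention_mask(4, causal=True))   # lower-triangular: preset uni-directionality
print(attention_mask(4, causal=False))  # all ones: preset bi-directionality
```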
But it is interesting to ask if critical aspects of compositional cognition, e.g. the 'logistics of access' it requires, can be learned from a more basic architecture or need to be preset.
Best,
Frank van der Velde
Hi Gary, excellent article that lays out the two 'sides' :) Indeed, nurture won't be useful without nature.
Also - Bloom's Taxonomy offers a quite useful, graduated/hierarchical list of capabilities that can serve to create tests against which to assess AI mastery. AI thus far has been stuck at the bottommost level :) :(
Also, it seems to me that human learning stands apart from all others', on account of our innate abilities to represent happenings directly, i.e., gain body-based "experience", AND to represent things (direct experiences, objective knowledge...) symbolically as well. This duality lets us glide back and forth, lets us symbolize our knowledge and experience for others to pick up, and conversely lets us benefit from others' symbolizations (going back thousands of years!). Other animals seem more limited in the 'direct <-> symbolic' mapping.