This interesting Nature Communications article by Anthony M. Zador came up in my Twitter feed: A critique of pure learning and what artificial neural networks can learn from animal brains:
Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.
The behavior of the vast majority of animals is primarily driven by instinct, that is, innate behavior, with learning serving more as a fine-tuning mechanism. For simple animals, such as insects, innate behavior is almost the whole story. Zador points out, for example, that spiders are born ready to hunt.
By the time we get to mammals, learning is responsible for a larger share of behavior, but mouse and squirrel behavior remains mostly innate. We have a tendency to view ourselves as an exception, and we are, to an extent: our behavior is far more malleable, more subject to revision by learning, than that of the typical mammal.
But a lot more human behavior is innate than most of us are comfortable acknowledging. We have a hard time seeing it because we’re looking from within the species. We talk about “general” intelligence as though ours were an example of it, but our intelligence is tightly wound to the needs of a social primate species.
I’m a bit surprised that the artificial intelligence field needs to be told that natural neural networks are not born blank slates. Although rather than blank-slate philosophy, this might simply reflect the desire of engineers to ensure that the learning-algorithm well has been thoroughly tapped.
But it seems like the next generation of ANNs will require a new approach. Zador points out how limited our current ANNs actually are.
We cannot build a machine capable of building a nest, or stalking prey, or loading a dishwasher. In many ways, AI is far from achieving the intelligence of a dog or a mouse, or even of a spider, and it does not appear that merely scaling up current approaches will achieve these goals.
Nature’s secret sauce appears to be this innate wiring. But a big question is where this innate wiring comes from. It has to come from the genome, in some manner. But Zador points out that the information capacity of the genome is far smaller, by several orders of magnitude, than what is needed to specify the wiring for a brain.
Although for simple creatures, like C. elegans worms, it is plausible for the genome to actually specify the wiring of the entire nervous system, in the case of more complex animals, particularly humans, it has to be about specifying rules for wiring during development. Interestingly, human genomes are relatively small compared to many others in the animal kingdom, such as fish, suggesting that the genomic bottleneck may actually have some adaptive value.
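To make the mismatch concrete, here’s a back-of-envelope calculation. The specific counts are rough, commonly cited estimates of my own choosing, not figures from Zador’s paper: roughly 3 billion base pairs in the genome, 86 billion neurons, and 100 trillion synapses.

```python
import math

# Rough, commonly cited estimates (all assumptions for illustration):
GENOME_BASE_PAIRS = 3e9   # ~3 billion base pairs in the human genome
BITS_PER_BASE = 2         # 4 possible bases -> 2 bits each
NEURONS = 86e9            # ~86 billion neurons
SYNAPSES = 1e14           # ~100 trillion synapses

genome_bits = GENOME_BASE_PAIRS * BITS_PER_BASE

# Naively listing the wiring: each synapse must at least name its
# target neuron, which takes log2(NEURONS) bits.
bits_per_synapse = math.log2(NEURONS)
wiring_bits = SYNAPSES * bits_per_synapse

print(f"genome capacity:      ~{genome_bits:.1e} bits")
print(f"explicit wiring spec: ~{wiring_bits:.1e} bits")
print(f"shortfall:            ~{wiring_bits / genome_bits:.0f}x")
```

On these numbers the genome holds on the order of 10^9 bits while an explicit wiring list would need on the order of 10^15, a shortfall of five to six orders of magnitude, which is why the genome must encode compressed wiring rules rather than the wiring diagram itself.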
This means that brain circuits should show repeating patterns, a canonical circuit that many neuroscientists search for. I’m reminded of the hypothesis of cortical columns, which seems similar to the idea of a canonical circuit. If so, though, it would only apply to the cortex itself.
But aside from the cerebellum, most of the neurons in the brain are in the cortex. Of the 86 billion neurons in the human brain, 69 billion are in the cerebellum, 16 billion in the cortex, and all the subcortical and brainstem neurons fall in that last billion or so. I would think the subcortical and brainstem regions are the ones with the most innate wiring, meaning these are the regions where many of the genomic wiring rules would have to apply. But detailed rules for a billion neurons are easier to conceive of than rules for 86 billion.
Zador points out that, from a technological perspective, ANNs learn by encoding the structure of statistical regularities from the incoming data into their network. In the animal version, evolution can be viewed as an “outer” loop in which long-term regularities get encoded across generations, with an “inner” loop of the animal learning during its individual lifetime, though the outer loop operates only indirectly, through the genome.
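The two loops can be caricatured in a few lines of Python. This is a toy of my own construction, not a model from the paper: the “genome” here encodes only an innate starting weight, the inner loop is a brief lifetime of learning, and the outer loop selects innate weights that make that learning pay off.

```python
import random

random.seed(0)

TARGET = 3.0  # the environmental regularity to be learned: y = 3x

def lifetime_loss(w0, lr=0.5):
    """Inner loop: a short lifetime of learning from innate weight w0."""
    w = w0
    for x in (0.5, -0.8, 0.3, -0.6, 0.9):  # a fixed set of experiences
        err = w * x - TARGET * x           # prediction error against y = 3x
        w -= lr * err * x                  # gradient step on squared error
    return (w - TARGET) ** 2               # how well this lifetime ended up

# Outer loop: mutate the innate weight and keep variants whose lifetime
# learning works out better. Only w0 is "inherited" across generations.
w0 = 0.0
for _ in range(200):
    candidate = w0 + random.gauss(0, 0.5)       # mutation
    if lifetime_loss(candidate) < lifetime_loss(w0):
        w0 = candidate                          # selection

print(f"evolved innate weight: {w0:.2f}")       # drifts toward TARGET
```

The point of the caricature is Zador’s: the outer loop never touches the learned weight directly. It can only shape the innate starting point, and good starting points are exactly what make the inner loop’s learning rapid.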
Anyway, it seems like there’s a lot to be learned about building a mind by studying how the human genome codes for and leads to the development of neural wiring. Essentially, our base programming comes from this process.
But it apparently remains controversial whether AI research still has things to learn from biological systems. It’s often said that the relationship of AI to brains is like the one between planes and birds: engineers could only learn so much from bird flight.
But Zador points out that this misses important capabilities we want from an AI. While a plane can fly faster and higher than any bird, it can’t dive into the water and catch a fish, swoop down on a mouse, or hover next to a flower. Computer systems already surpass humans in many specific tasks, but fail miserably in many others, such as language, reasoning, common sense, spatial navigation, or object manipulation, that are trivially simple for us.
If Zador’s right, and it’s hard for me to imagine he isn’t, then AI research still has a lot to learn from biological systems. Frankly, I’m a bit surprised this is controversial. As in many endeavors, intractable problems often become easier if we just broaden the scope of our investigation.
Unless, of course, there’s something about this I’m missing?