Machine learning and the need for innate foundations

This interesting Nature Communications article by Anthony M. Zador came up in my Twitter feed: A critique of pure learning and what artificial neural networks can learn from animal brains:

Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.

The behavior of the vast majority of animals is primarily driven by instinct, that is, innate behavior, with learning being more of a fine-tuning mechanism. For simple animals, such as insects, the innate behavior is almost the whole thing. Zador points out, for example, that spiders are born ready to hunt.

By the time we get to mammals, learning is responsible for a larger share of behavior, but mouse and squirrel behavior remains mostly innate. We have a tendency to view ourselves as an exception, and we are, to an extent. Our behavior is far more malleable, far more subject to revision through learning, than that of the typical mammal.

But a lot more human behavior is innate than most of us are comfortable acknowledging. We have a hard time seeing it because we’re looking from within the species. We talk about “general” intelligence as though ours were an example of it. But our intelligence is tightly bound to the needs of a social primate species.

I’m a bit surprised that the artificial intelligence field needs to be told that natural neural networks are not born blank slates. Although, rather than blank-slate philosophy, this might simply represent the desire of engineers to ensure that the learning-algorithm well has been thoroughly tapped.

But it seems like the next generation of ANNs will require a new approach.  Zador points out how limited our current ANNs actually are.

We cannot build a machine capable of building a nest, or stalking prey, or loading a dishwasher. In many ways, AI is far from achieving the intelligence of a dog or a mouse, or even of a spider, and it does not appear that merely scaling up current approaches will achieve these goals.

Nature’s secret sauce appears to be this innate wiring.  But a big question is where this innate wiring comes from.  It has to come from the genome, in some manner.  But Zador points out that the information capacity of the genome is far smaller, by several orders of magnitude, than what is needed to specify the wiring for a brain.

Although for simple creatures, like C. elegans worms, it is plausible for the genome to actually specify the wiring of their entire nervous system, in the case of more complex animals, particularly humans, it has to be about specifying rules for wiring during development. Interestingly, human genomes are relatively small compared to many others in the animal kingdom, such as fish, indicating that the genome information bottleneck may actually have some adaptive value.
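To get a feel for the compression involved, here’s a toy sketch (my own illustration, not anything from Zador’s paper, with numbers picked purely for convenience): listing every potential connection explicitly takes one entry per neuron pair, while a small table of connection probabilities keyed by cell type is orders of magnitude smaller, yet can still generate a full stochastic wiring diagram during “development”.

```python
import numpy as np

# Toy numbers, chosen only for illustration (not real biology).
n_neurons = 100_000          # a hypothetical small brain
n_types = 50                 # a hypothetical number of cell types

# Explicit wiring diagram: one bit per potential connection.
explicit_bits = n_neurons ** 2

# Rule-based wiring: a connection probability for every pair of cell types,
# stored at 32 bits each.
rule_bits = (n_types ** 2) * 32

print(f"explicit: {explicit_bits:.2e} bits, rule-based: {rule_bits:.2e} bits")

# The compact rule table can still generate a full (stochastic) wiring
# diagram during "development": assign each neuron a type, then sample its
# connections from the type-pair probability table.
rng = np.random.default_rng(0)
types = rng.integers(0, n_types, size=n_neurons)
p = rng.uniform(0, 0.01, size=(n_types, n_types))

pre = 0  # wire up one example neuron
connected = rng.random(n_neurons) < p[types[pre], types]
print(f"neuron {pre} connects to {connected.sum()} of {n_neurons} neurons")
```

The point isn’t the specific numbers, just that rules scale with the number of cell types rather than the number of neurons.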

This means that brain circuits should show repeating patterns, a canonical circuit that many neuroscientists search for. I’m reminded of the hypothesis of cortical columns, which seems similar to the idea of a canonical structure. If so, though, it would only apply to the cortex itself.

But aside from the cerebellum, most of the neurons in the brain are in the cortex. Of the 86 billion neurons in the human brain, roughly 69 billion are in the cerebellum and 16 billion in the cortex, with all the subcortical and brainstem neurons falling in the last billion or so. I would think the subcortical and brainstem regions are the ones with the most innate wiring, meaning these are the regions a lot of the genomic wiring rules would have to apply to, but detailed rules for a billion neurons seem easier to conceive of than for 86 billion.

Zador points out that, from a technological perspective, ANNs learn by encoding the structure of statistical regularities from the incoming data into their network.  In the animal versions, evolution could be viewed as an “outer” loop where long term regularities get encoded across generations, and an “inner” loop of the animal learning during its individual lifetime.  Although the outer loop only happens indirectly through the genome.
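In machine learning terms, that sounds a lot like nested optimization. Here’s a minimal toy sketch (my own, not anything from the paper) in which an outer evolutionary loop selects innate starting weights and the inner loop is an individual’s lifetime learning; selection only ever sees behavior after learning, so what gets encoded in the “genome” is a good starting point rather than the final behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
env = rng.normal(size=8)  # stand-in for the environment's long-term regularities

def lifetime_learning(innate, env, steps=20, lr=0.1):
    """Inner loop: fine-tune behavior from the innate starting point."""
    w = innate.copy()
    for _ in range(steps):
        w -= lr * (w - env)          # move toward what experience provides
    return w

def fitness(w, env):
    return -np.sum((w - env) ** 2)   # behavior closer to the environment = fitter

# Outer loop: selection acts only on the innate (genomic) starting points,
# but what gets scored is behavior *after* lifetime learning.
population = [rng.normal(size=8) for _ in range(30)]
for generation in range(50):
    scores = [fitness(lifetime_learning(g, env), env) for g in population]
    survivors = [population[i] for i in np.argsort(scores)[-10:]]
    population = [s + 0.05 * rng.normal(size=8) for s in survivors for _ in range(3)]

best = max(fitness(lifetime_learning(g, env), env) for g in population)
print("best post-learning fitness:", best)
```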

Anyway, it seems like there’s a lot to be learned about building a mind by studying how the human genome codes for and leads to the development of neural wiring.  Essentially, our base programming comes from this process.

But apparently it remains controversial that AI research still has things to learn from biological systems.  It’s often said that the relationship of AI to brains is like the one between planes and birds.  Engineers could only learn so much from bird flight.

But Zador points out that this misses important capabilities we want from an AI.  While a plane can fly faster and higher than any bird, it can’t dive into the water and catch a fish, swoop down on a mouse, or hover next to a flower.  Computer systems already surpass humans in many specific tasks, but fail miserably in many others, such as language, reasoning, common sense, spatial navigation, or object manipulation, that are trivially simple for us.

If Zador’s right, and it’s hard for me to imagine he isn’t, then AI research still has a lot to learn from biological systems.  Frankly, I’m a bit surprised this is controversial.  As in many endeavors, intractable problems often become easier if we just broaden the scope of our investigation.

Unless, of course, there’s something about this I’m missing?

36 thoughts on “Machine learning and the need for innate foundations”

  1. I know nothing about all this, Mike, so forgive me, although it struck me that the human brain is subject to neoteny (a kind of developmental regression), meaning it’s given to the further creation and expansion of its own physical networks post-partum and as it is subjected to contact with the environment. Do these artificial neural networks allow for their own form of neoteny, or are they enclosed systems capable only of learning within predefined, given pathways?


    1. That’s an interesting question, Hariod. In general, for the ANNs I’ve seen and read about, the size and depth of the network are set by the designer, along with the learning algorithm, and then the data starts feeding in. In that regard, they’re relatively static compared to the biological versions.

      As Zador describes, biological systems have a lot of innate wiring. But neoteny amounts to a delay in that innate wiring taking place, a delay that allows individual learning to affect the final result. This is particularly relevant for humans, who, due to the relation between a newborn’s head size and the size of the birth canal, are born earlier in their development cycle than is typical for mammals.

      It also gets to an aspect that Zador discusses but I didn’t get into in the post, that the line between innate development and learning is a blurry one. Is wiring that happens based on events in the womb development, or is it learning?

      I’m not aware of anyone doing developmental wiring of an ANN in the first place, much less any involving neoteny. Of course, that doesn’t mean someone somewhere isn’t doing it.


  2. “But Zador points out that the information capacity of the genome is far smaller, by several orders of magnitude, than what is needed to specify the wiring for a brain.”

    There is the notion of algorithmic complexity, which allows a small bundle of information to be expressed as a much larger system given a complex process doing that expression. It probably means we can’t look to the DNA alone as the “wiring diagram” but need to consider the entire genetic system.

    “Computer systems already surpass humans in many specific tasks, but fail miserably in many others, such as language, reasoning, common sense, spatial navigation, or object manipulation, that are trivially simple for us.”

    Maybe part of our growth as a species involves recognizing the limits of our machines. Ever since the Industrial Age our species has had a severe case of techno-lust, and it seems to have taken us down some nasty paths. Maybe we need to stop looking for machines to solve our problems.


    1. ” It probably means we can’t look to the DNA alone as the “wiring diagram” but need to consider the entire genetic system.”

      Not quite sure if I follow what you mean here. Isn’t DNA a complete picture of the genome? Or do you mean we need to include the cellular machinery and overall environment? If so, I think that’s right. DNA should be thought of more as a recipe than a blueprint.

      “Maybe we need to stop looking for machines to solve our problems.”

      I don’t know. Machines are basically tools. Adam Rutherford made the point that humans have been obligatory tool users for hundreds of thousands, possibly millions of years. We can’t really go back to the straight natural lifestyle. It seems like a matter of just how far we want to take that tool use.

      Of course, maybe the future will be a Dune type scenario, where thinking machines are taboo. I tend to doubt it. New species wide taboos seem unlikely to be consistently observed by everyone, at least unless the innate neural wiring is modified to ensure it.


      1. “DNA should be thought of more as a recipe than a blueprint.”

        Exactly.

        “We can’t really go back to the straight natural lifestyle.”

        Oh, good heavens, no!

        I’m not one to blame the tool. When I said, “We need to stop looking for machines to solve our problems,” I was referring to us, not the machines. We often look to technology to solve problems that are really up to us to solve socially, that’s all I meant.

        “Of course, maybe the future will be a Dune type scenario, where thinking machines are taboo.”

        Assuming true thinking machines (i.e. AGI) are even possible. 😀

        I always read the Dune scenario as similar to our reaction to inventing (and using) nukes. There was a general consensus that was a bridge too far. IIRC, in Dune thinking machines caused some kind of war or something and the resulting taboo was a reaction.

        Something like that could happen and might put AGI in the same category as nukes — just too horrific to use.

        Remember how, in Babylon 5, it was recognized that telepaths were a new species fundamentally inimical to homo sapiens? There was a kind of “them or us” situation. The Terminator movies expressed a similar idea, “them or us.”

        We couldn’t wait to jump into the deep end of computers and the internet, and while it’s given us many benefits, it’s also given us many problems and headaches. Maybe if we’d been a little more cautious, we could have had the benefits without so many of the problems.

        Just imagine if tech companies had taken the time to make their buggy products more bulletproof.

        But homo sap … well, the name says it all, doesn’t it. Leap, don’t look!


        1. “Assuming true thinking machines (i.e. AGI) are even possible.”

          I’m increasingly disliking the AGI (artificial general intelligence) phrase. It ignores how specialized our own intelligence is. But on whether a machine intelligence with the same capabilities as a human is possible, it depends on whether you accept that we are machines, just evolved ones.

          “IIRC, in Dune thinking machines caused some kind of war or something and the resulting taboo was a reaction.”

          Frank Herbert’s original conception seems to have been more subtle. It was a war, but a religious one instigated to free humanity from its decadent dependence on machines. His son, Brian Herbert, along with Kevin J. Anderson, when they depicted it, turned it into a revolt against machines that had enslaved humanity. I can understand why. A revolt against slave masters makes for a better story.

          But from what I’ve heard, even in the Dune universe, some groups use computers anyway in a clandestine manner.

          “Just imagine if tech companies had taken the time to make their buggy products more bulletproof.”

          Based on my reading of history, the disruptions from new technology are pretty much inevitable and unpredictable. I don’t know of many technologies which haven’t come with both positive and negative consequences. I doubt Gutenberg could have predicted that his invention would lead to the Reformation, Counter-Reformation, Scientific Revolution, or pornography.

          I don’t think we yet know all the long term effects of the internet. I read a prediction somewhere that it would lead to the demise of the nation-state (although that might have been just anarchist wishful thinking).


          1. “But on whether a machine intelligence with the same capabilities as a human is possible, it depends on whether you accept that we are machines, just evolved ones.”

            Not necessarily. Perhaps it turns out that the only way to make a machine like us is the old-fashioned way.

            “I don’t know of many technologies which haven’t come with both positive and negative consequences. I doubt Gutenberg could have predicted that his invention would lead to the Reformation, Counter-Reformation, Scientific Revolution, or pornography.”

            You seem to be assuming pornography is necessarily a bad thing. It’s been around pretty much as long as humans, and more than one person has observed that any new technology seems to get applied to sex as soon as possible. (There is also the fact that pornography led the way in making the internet commercially viable.)

            Some technologies are relatively benign because they don’t have the capacity to significantly alter the world or society. Others, atomic power, for example, have much more power to change things.

            “I don’t think we yet know all the long term effects of the internet.”

            We know quite a bit about its consequences so far. Many believe Facebook and Twitter were instrumental in the political mess we find ourselves in.

            But again: It’s not the tool, it’s the tool users. The problems are on us.


          2. “Perhaps it turns out that the only way to make a machine like us is the old-fashioned way.”

            Only time will tell, but betting against our ability to reproduce evolved capabilities doesn’t seem to have fared well historically.

            “You seem to be assuming pornography is necessarily a bad thing.”

            You might note that I listed it right by the scientific revolution. It’s just meant as an example of a major unforeseen consequence. Although knowing what we know now, it seems like an inevitable one.

            Nuclear power is definitely a difficult case. As our power increases, the ability to destroy ourselves increases with it. If we ever get fusion or antimatter, it will be worse. It’s rarely mentioned in space opera, but the ability to accelerate spaceships to the speeds in most of those stories provides the kinetic energy for incredible acts of destruction. (The Expanse does eventually acknowledge this.)

            “Many believe Facebook and Twitter were instrumental in the political mess we find ourselves in.”

            I can’t say I trust Facebook or Twitter, but I also think they get scapegoated a lot. People want to find easy answers for the political situation, but Trump is a symptom, one that’s cropping up in many countries. Until our message to people left behind by globalization is something other than, “Sucks to be you,” we’re going to continue seeing these types of figures.


          3. “Only time will tell, but betting against our ability to reproduce evolved capabilities doesn’t seem to have fared well historically.”

            True, but brains are unique enough that such logic may not apply.

            (For some reason it reminds me of my field tech days when sometimes customers would complain: “It was working fine yesterday!” Yeah, well, your light bulbs work just fine… until they don’t. Past history carries some weight, but it’s not a limit.)

            “It’s just meant as an example of a major unforeseen consequence.”

            Ah, okay. I was thrown by the sentence preceding that list: “I don’t know of many technologies which haven’t come with both positive and negative consequences.” I just assumed, then, the list had both types of consequences. The others were clearly positives, so… [shrug]

            “It’s rarely mentioned in space opera, but the ability to accelerate spaceships to the speeds in most of those stories provides the kinetic energy for incredible acts of destruction.”

            Indeed. Even basic near-space capability could allow someone to toss an asteroid at the Earth. (Or just deflect one of those near misses.)

            “I can’t say I trust Facebook or Twitter, but I also think they get scapegoated a lot.”

            In my view the former has no value at all and considerable negative value. I believe everyone should delete their Facebook account (I did many years ago). I personally don’t find much value in the latter and have never bothered to have an account. Most of it just seems like noise to me.

            And I stand by what I said: Facebook and Twitter were instrumental in the current mess, both because of their own policies and because of malicious actors (and because we tend to jump blindly into the deep end of things like this).

            But obviously the root causes are social, political, and economic.

            (I don’t know how much reading you do about computer and internet security issues, but I’ve had a long interest, and I find it appalling how bad our software is.)


          4. “The others were clearly positives, so…”

            LOLS! Depends on who you ask. I doubt the leaders of the counter-reformation saw the reformation as a positive, or vice-versa. And a lot of people in both camps had their issues with the scientific revolution.

            “I don’t know how much reading you do about computer and internet security issues,”

            My job forces me to be cognizant of it. Security updates are a never ending headache. Every product has vulnerabilities, not to mention underlying framework libraries and other infrastructure components. It’s just a matter of whether they’ve been discovered yet. A significant part of modern IT is upgrading stuff, not to get new features, but to make sure we’re still supported in terms of security. And when upgrades aren’t feasible, we firewall and isolate as much as possible.

            And now they’re finding vulnerabilities in processors themselves. I’ve known since my assembly programming days that processors have bugs, so I guess it shouldn’t be surprising. But it’s a serious PITA.

            In some ways it’s like the antibiotic resistance issue. A never ending game of just trying to stay ahead.


          5. “Depends on who you ask.”

            Well, I think you know me well enough by now to know that the opinions of one group don’t carry a lot of water with me. I think it would be hard to make a case that the printing press resulted in any notable social damage. Disruption, certainly, but in the larger scope a very clear social benefit.

            Contrast that with a technology like nuclear power (or networked computers) where there is a much stronger case for social damage.

            And, again, just to be clear, I’m not arguing against the tools, but against our tendency to jump into the deep end of shiny new toys. We’re monkeys playing with dynamite and a lighter.

            “Every product has vulnerabilities, not to mention underlying framework libraries and other infrastructure components.”

            It’s the careless, gaping nature of vulnerabilities that appalls me. A great deal of it comes from bad programming practices. It is possible — albeit not easy — to create nearly bullet-proof code. The real problem is that doing it cuts into profits, so we all get to pay the price.

            “And now they’re finding vulnerabilities in processors themselves.”

            Indeed. And some of them aren’t even bugs, per se. Rowhammer, for example, is a consequence of micro-electronics. Others, like Spectre, are consequences of trying to increase processor speed.

            This is an example of what I mean about jumping into the deep end without looking. We rushed into the internet age, placed tons of eggs in that basket, only to find out none of it is very robust. A day hardly goes by that I don’t read about a new data breach or malware or phishing scheme.

            “In some ways it’s like the antibiotic resistance issue. A never ending game of just trying to stay ahead.”

            Some of it is classic arms race, but it didn’t have to be this way. The movies have created the idea that any hacker with the right skills and equipment can hack anything and make any computer do anything. As you know, that’s utter Hollywood BS. We can build robust safe systems… if we have the will to do so.

            (I suspect some of the problems are sheer incompetence. Programming is hard. There was an era where “computer programming” was the field with the bucks and lots and lots of unqualified people became “programmers” — although I hate to dignify them with that term. Worse, many managers aren’t qualified to tell a good programmer from a crap one. Seriously, I can’t believe the incompetence of some of the “programmers” I’ve worked with over the years. Makes me weep.)


          6. The problem is that the incentive structures are all screwed up. You’re right. Writing secure code is certainly possible. But it takes training, effective code reviews, and a good quality assurance team, most of which are typically absent, at least aside from industrial applications where life safety or high expense might be at stake. Of course, the breaches are turning out to be increasingly expensive, but it’s a cost that’s hard to take into account during development, which typically takes place under tight business deadlines.

            And a lot of code in use today is open source, coded and reviewed by people in their spare time. WordPress is a prime example. Anyone self-hosting who isn’t regularly updating their installation has a lot of vulnerabilities. The Equifax breach came because they were running an old version of the Apache Struts Java framework. Upgrading those frameworks often breaks code, so upgrades are time-consuming and resource-intensive.

            I’m not sure what the solution is. Just saying we all need to be more competent and careful, without reference to the factors that get in the way, doesn’t seem like actionable advice. This is one of the reasons most businesses are getting out of custom development, a recognition that most organizations aren’t prepared to do it well. Not that many of the vendors are much better.


          7. “I’m not sure what the solution is.”

            There isn’t really a good one. Once we open Pandora’s box, it’s pretty much game over for being smart about it to begin with.

            “Just saying we all need to be more competent and careful, without reference to the factors that get in the way, doesn’t seem like actionable advice.”

            Whether intended or not, that reads like shade. If you have a few hours, I’ll give you plenty of “factors that get in the way.”

            That said, there may be an element of the liberal progressive desire that we, somehow, become better people than we are. The problem, of course, is how the hell do you get folks to go along with the program. But I can dream.


    2. Can’t help noticing you wear glasses.

      What level of technological development counts as good for the human condition, beyond which is mere “techno-lust”?

      Also, perhaps we should wait to see what machine cognition is capable of before hastily announcing fundamental limits and there being nothing to see behind the curtain?


    3. “But Zador points out that the information capacity of the genome is far smaller, by several orders of magnitude, than what is needed to specify the wiring for a brain.”

      That’s because the genome doesn’t hold most of that information. The environment does. And throwing in the rest of the cellular machinery isn’t the answer, IMO.

      The genome (and company) encodes a few basic facts, like (the practical import of) the optics of the eye and the acoustics of the ear. But most importantly, the genome promotes a sense of importance. A healthy adult human of the opposite sex is fascinating; the number of blades of grass in a lawn is boring. The ginormous mammalian and then human cortex probably allowed a bunch of scripts – like some analogue of the spider’s web-building instructions – to fade away in favor of learning tricks from parents and other conspecifics, and also learning to experiment, observe, and improve one’s techniques.


      1. Definitely, the genome should be thought of as a recipe, not a blueprint. It works through and with the environment. The cellular machinery aspect was mentioned because that’s the immediate environment. The broader environment emerges from when and where the genes are expressed (via epigenetics) and by what the resulting proteins do. But selective benefits of a particular gene may not arise until we’re in the extended environment, what Dawkins calls the extended phenotype.


  3. One would think that the four stages of competence apply here. They are (from bottom to top): unconscious incompetence, followed by conscious incompetence, then conscious competence, and finally unconscious competence. This hierarchy, which seems to have some validity, puts consciousness in a position of less than dominance. We all seem to learn to walk, talk, and any number of other complex skills without conscious effort. Clearly we have learning abilities that skirt primarily conscious routes.


    1. I think it’s good to understand those stages. They give clues as to consciousness’ role.
      Consciousness does seem like a crucial component of global operant learning. But as you note, once the behavior is learned, consciousness usually isn’t needed any longer. These are all clues that will probably eventually be important for human-level ANN learning.


  4. I may have brought up Leslie Valiant before. He regards evolution itself as a kind of learning algorithm. That can account for a lot of the innate stuff. This corresponds, I think, to the outer loop you mention.

    I also think Hoffman has something to say about this. That our perceptions and resulting behaviors are mainly hacks of various sorts, tuned to fitness rather than reality, aligns with the idea that it may take much less to encode a hack in genes than a more reality-based strategy.

    Finally, with humans and other more advanced animals, epigenetics needs to be looked at. A big part of the import of the delayed PFC maturation in humans is that humans continue to wire their brains for an extended period after birth, and this wiring comes from cultural and societal interaction with what has been innately encoded in the genes. I don’t think this is just simply learning by another name. It is actually a dynamic interaction. Language is the best example, since seemingly everyone is encoded with rules to learn any language, but we grow up learning not the language we are taught but the one we are exposed to during critical periods in brain development. Somehow we seamlessly learn the rules with little or no deliberate instruction.


    1. I don’t recall you mentioning Valiant before, but it definitely sounds like he and Zador are on the same page. It makes sense to me. Natural selection optimizes patterns for successful reproduction, and we can see that long term optimization as a type of learning.

      I think Hoffman’s right about the hacks. But I still think he oversells the concept. What determines whether the hacks are useful or not? What makes them adaptive? Yes, there’s no guarantee what is learned accurately represents the world as it is, but to be effective, it has to have some kind of relation to that reality. There has to be something different between hacks that work and those that don’t.

      No argument from me that epigenetics are part of the puzzle. It’s not just the genes, but when and under what circumstances they get expressed, and epigenetics, at least as I understand it, is where that happens.

      I agree that the developmental wiring and individual learning are a dynamic interaction. There’s a sub-field called neuroanthropology, which examines how culture influences early brain development, particularly development during critical periods, which often isn’t subsequently reversible.

      Language acquisition is a good example of innate wiring. We come prewired to learn language, which is why babies do it unassisted as long as they hear others talking. But if for any reason a child is deprived of that stimulation, the relevant circuitry apparently gets repurposed and past a certain age, they largely lose the ability to understand language, or at least understand it well.


      1. “What determines whether the hacks are useful or not? What makes them adaptive? Yes, there’s no guarantee what is learned accurately represents the world as it is, but to be effective, it has to have some kind of relation to that reality. ”

        Hacks are useful if they improve fitness. It’s natural selection. Effectiveness is measured by the actions that come from the perceptions. If the perceptions lead to actions that improve fitness they will be selected for. It’s Darwinism.

        The question usually raised is whether reality-based strategies will improve fitness more than interface strategies (his term, which approximately means hack). Since hack strategies will almost always be faster and less costly, which usually provides a fitness advantage, they win out.

        However, he does write this:

        “We use evolutionary games to show that natural selection does not favor veridical perceptions. This does not entail that all cognitive faculties are not reliable. Each faculty must be examined on its own to determine how it might be shaped by natural selection.

        Whereas in perception the selection pressures are almost uniformly away from veridicality, perhaps in math and logic the pressures are not so univocal, and partial accuracy is allowed.”


        1. “We use evolutionary games to show that natural selection does not favor veridical perceptions. This does not entail that all cognitive faculties are not reliable.”

          This to me seems contradictory. In the first sentence, he’s saying that natural selection doesn’t select for reliable perceptions (which is all “veridical” means). In the second, he’s saying but that doesn’t mean they’re not reliable.

          Again, I’m not aware of anyone arguing that our perceptions reliably reflect reality. They often don’t. But Hoffman is taking that and over interpreting it, making an unearned leap in logic, one where he removes an explanatory principle and doesn’t replace it with anything else. He’s also ignoring that our inner senses are often no more reliable than the outer ones.

          We have no choice but to use the sensory data we get to come up with the most predictive models we can. Maybe even the most successful ones are ultimately all wrong, but it’s not like we get access to ultimate reality to make a comparison. All we ever get are more predictive models.


          1. I don’t see it as contradictory at all.

            He is saying: Natural selection does not select for reliable perceptions. It selects for fitness. While “the selection pressures are almost uniformly away from veridicality”, some cognitive faculties, depending on the faculty and the selection pressures, may be partially reliable. It is just a more nuanced position.


          2. What’s totally consistent is that perceptions are selected for fitness just like any other biological capability. That wouldn’t seem to be controversial.

            The part that is controversial is the counter-intuitive idea that perceptions selected for fitness would generally not be reliable guides to objective reality. They can be predictive and useful and, in that narrow sense, reliable without being reliable in the broader sense of being a faithful representation of what really exists.


          3. I guess my issue is that I don’t think we ever get more than predictive models. All we can ever do is replace less predictive models with more predictive ones. We never get more than that.

            An assertion of a radically different reality behind those models is an assertion of a new model. We should judge the new model by the same standard we judge the others: how predictive is it?


          4. I’m not sure what is the new model and what is the old model that you are comparing.

            Hoffman’s interface model of perception says nothing about ultimate reality. It just says that our perceptions are unreliable guides to objective reality. It makes no statements about what is objective reality.

            You may be confusing that with conscious realism which does take a stand on the nature of objective reality. Hoffman says these two theories are distinct.


          5. Probably because whoever is interviewing him wants to move quickly to the conscious realism argument.

            I consider conscious realism to be more a philosophical position, somewhat like the physicalist position that the world is composed of matter (or something not mind). Models of how to predict things and operate in reality can be built equally well from both philosophical vantage points. The nature of underlying reality is really irrelevant to science.


  5. BTW, did you see this?

    https://www.quantamagazine.org/a-mathematical-model-unlocks-the-secrets-of-vision-20190821/

    It begins:

    This is the great mystery of human vision: Vivid pictures of the world appear before our mind’s eye, yet the brain’s visual system receives very little information from the world itself. Much of what we “see” we conjure in our heads.

    “A lot of the things you think you see you’re actually making up,” said Lai-Sang Young, a mathematician at New York University. “You don’t actually see them.”

    And this:

    While the cortex and the retina are connected by relatively few neurons, the cortex itself is dense with nerve cells. For every 10 LGN neurons that snake back from the retina, there are 4,000 neurons in just the initial “input layer” of the visual cortex — and many more in the rest of it. This discrepancy suggests that the brain heavily processes the little visual data it does receive.

    “The visual cortex has a mind of its own,” Shapley said.


    1. Thanks! I’ve seen the article in my feeds but haven’t read it yet. But that’s why I often say that to perceive is to predict. Perception is taking in a relatively small amount of information and predicting what’s there. It’s why we’re prone to visual illusions. People see what they expect to see.

      It’s also why eyewitness testimony is so unreliable. Given time, people’s error correction circuitry will usually allow them to perceive things more accurately, but often events happen too fast for that.

      It’s also a stark reminder that our sensory experience is a construction, the mechanics of which we have no access to, no matter how hard we try to introspect it.


      1. I don’t know much about how AI visual processing works, but I wonder if it is taking in too much information as input. That 400-to-1 ratio just looks incredibly out of balance, but maybe it makes sense if we are primarily scanning for actionable items.


        1. Zador discussed how sometimes a smaller network can be more accurate than a larger one, and that larger ones seem to need more data before they’re effective.

          It’s also worth remembering how tiny our central high resolution visual field is. The eyes are constantly moving around in saccadic movements. The spikes to the LGN and V1 should represent what is in our visual field in that instant. That and the meaning the brain extracts from its visual stream probably takes a lot of substrate.

          Still, there’s something like 100 million photoreceptors in the retina, but only a million or so axons in the optic nerve. A lot of consolidation seems to happen even before the signal leaves the retina.
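          If it helps to picture that consolidation, here’s a rough, scaled-down sketch (my own analogy, nothing from the article): pooling every 100 “photoreceptor” samples into one output, roughly the 100-to-1 ratio between photoreceptors and optic nerve axons. Real retinal processing is far more sophisticated than averaging, of course.

```python
import numpy as np

# Scaled-down toy: 1 million "photoreceptor" samples pooled 100:1 into
# 10,000 "ganglion cell" outputs, as a crude stand-in for retinal consolidation.
rng = np.random.default_rng(0)
photoreceptors = rng.random(1_000_000)
ganglion = photoreceptors.reshape(-1, 100).mean(axis=1)
print(photoreceptors.size, "->", ganglion.size)   # 1000000 -> 10000
```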


  6. I’m not going to wade through a bunch of specializations and peacock’s tail of jargon in disciplines not my own, but flying through till the end I saw the same Conclusion that I arrived at with my little piece “Rapunzel, Rapunzel, LET DOWN your hair”…on Intelligence as the chief Model of Agency known across all species of terrestrial origin (Kant’s: Rational Being). I’ll drop the link for convenience, if you’d care to peruse it, quite brief all told.

    https://wordpress.com/view/durandusvonmeissen.wordpress.com

    Like myself, you may not wish to dither with the paragraphs preceding the last…so just scan the first till Intelligence is compared between Organic and Artificial…the last half of the essay or so, for the same Conclusion as your own, methinks.


    1. Thanks. I’d say I’m more optimistic overall about artificial intelligence than you are. I do think we’ll eventually be able to reproduce any intelligence in nature. At that point, although we might call it artificial intelligence, it will really be more engineered intelligence.

      Not that we’ll necessarily find it productive to mass produce systems with all the idiosyncrasies of biological minds. There’s a lot about us that is only true due to our evolutionary history.


