The necessity of dexterity for civilization

Today’s SMBC highlights something about humanity that is often overlooked, something that any extraterrestrial intelligence that builds a civilization would have to have.

Click through for hover-text and red button caption.
Source: Saturday Morning Breakfast Cereal – The Mammal Conspiracy

We often talk about the intelligence of dolphins, whales, cephalopods, elephants, and other species.  But something all of these species lack is an ability to alter and control their environment, at least in any detailed fashion, a capability that is at the heart of building a civilization.  When you think about the evolutionary steps that were necessary for humans to have the dexterity that we do, it starts to look like we were the beneficiaries of a very lucky sequence of events.

First, there needed to be a three dimensional environment, like the interlocking tree branches that made the primate body plan adaptive.  Second, the primate line needed to evolve an intelligent branch (the great apes).  Third, there needed to be a change in environment that led to some of those apes coming down from the trees into tall grasslands, where walking upright was adaptive, freeing their hands for work other than locomotion or hanging.

Only then do we have the stage set for human intelligence to evolve.  Of course, it’s completely conceivable for alternate factors to lead to the evolution of those capabilities.  But the fact that, despite a number of relatively intelligent species in the animal kingdom, it’s only happened once on Earth should give us pause before concluding that it’s at all common for a civilization building species to evolve.

Intelligence and dexterity aren’t the only factors, by the way.  Mastery of fire as a tool also seems crucial, something that seems to rule out water-dwelling species like cephalopods, who, if they lived longer, might have a decent chance at manipulating their environment.

Fermi’s paradox asks: if extraterrestrial civilizations are common, why weren’t we colonized long ago?  The rarity of the combination of intelligence and dexterity might provide a pretty grounded answer to that question, and that’s before we even consider the likelihood of other evolutionary milestones, such as sexual reproduction or multicellular life.

So, when thinking about the evolution of human intelligence, be grateful for the existence of jungles and grasslands.  Without them, we might not be here, at least not with enough intelligence to discuss our evolution.

Posted in Zeitgeist | 9 Comments

The problems with philosophical zombies

In any online conversation about consciousness, sooner or later someone is going to bring up philosophical zombies as an argument that consciousness, or at least some portion of it, is non-physical.  The Stanford Encyclopedia of Philosophy introduces the p-zombie concept as follows:

Zombies in philosophy are imaginary creatures designed to illuminate problems about consciousness and its relation to the physical world. Unlike those in films or witchcraft, they are exactly like us in all physical respects but without conscious experiences: by definition there is ‘nothing it is like’ to be a zombie. Yet zombies behave just like us, and some even spend a lot of time discussing consciousness.

Few people, if any, think zombies actually exist. But many hold they are at least conceivable, and some that they are possible. It seems that if zombies really are possible, then physicalism is false and some kind of dualism is true. For many philosophers that is the chief importance of the zombie idea.

This is the classic version, one that is identical, atom for atom, to a conscious being, but has no conscious experience.

The biggest problem with p-zombies is that the premise of the idea presupposes its purported conclusion: that some aspect of the mind is non-physical.  If you remove the assumption of some form of substance dualism, the concept collapses.  It becomes incoherent, a proposition similar to asserting that we can sum 2+2 and not get 4.

So, right off the bat, this classic version of the thought experiment seems like a failure, a circular argument, and for a long time that’s pretty much all the thought I gave to it.  But I recently realized that classic p-zombies have a deeper problem.  Even if you fully accept the dualism premise, it has another assumption, one that does more damage and ultimately makes the concept incoherent.

For the p-zombie concept to work, conscious experience must be an epiphenomenon, something that exists completely separate and apart from the causal framework that produces behavior.  If consciousness is not an epiphenomenon, then its absence would make a difference in the p-zombie’s behavior, which is exactly what is not supposed to happen with a p-zombie.

Here’s the problem.  We know epiphenomenalism is false.  How?  Well, if it’s true, then how can we discuss conscious experience?  Somehow, the language centers of our brains send signals to the motor cortex that drive our speech muscles to make sounds relevant to it.  Somehow signals are sent to my fingers so I can type this blog post, or similar signals are sent to your fingers if you decide to comment on it.

Whatever else it might be, conscious experience must be part of the causal framework that eventually leads to behavior.  It has causal influence on the language centers of the brain if nowhere else, but that’s enough to have causal effects in the world.  Epiphenomenalism cannot be true.

Without epiphenomenalism, it seems like the classic premise of the p-zombie collapses, even for dualists.

Now, maybe we can rescue the zombie concept somewhat if we retreat a bit from the classic conception and instead think about behavioral zombies.  Unlike the classic version, b-zombies are allowed to be physically different from a conscious version of the being.  It’s only in behavior that this kind of zombie is indistinguishable.

A computerized b-zombie seems trivial if we only need to momentarily fool an observer.  However, the inability of any automated chat-bot system to legitimately pass even the most common (and weak) form of the Turing test demonstrates that the difficulty quickly escalates.  (In the most commonly pursued version of the test, success is fooling only 30% of human judges after five minutes of conversation.)  Reliably fooling reasonably sophisticated observers for days, weeks, or months is not possible with any current technology.

The difficulty here is that the longer the b-zombie can keep up the charade, the higher the probability that it isn’t actually a charade, that it is in fact implementing some alternate architecture for consciousness.  Of course, to a substance dualist, physically implemented consciousness isn’t real consciousness.  It’s a facade that mimics the results (including the ability to discuss conscious experience) but doesn’t include the actual qualia associated with it, no matter how much the zombie might insist that it does.

So unlike classic p-zombies, b-zombies are more logically coherent.  They avoid the problem with epiphenomenalism, since they can replace the putative non-physical aspect of consciousness with a physical implementation.  But the conceptual existence of a b-zombie doesn’t have the same implications against physicalism, since even if consciousness is fully physical, it’s possible that an alternate architecture might produce conscious-seeming behavior without conscious experience.

However, as with any conscious system, external observers could never actually access the putative b-zombie’s internal subjective experience, assuming it had one, no matter how much they knew about its internals.  Which means there would be no objective criteria that could ever be used to know whether a successful b-zombie was actually a zombie or a conscious being.  (This was largely Alan Turing’s point when he first proposed the Turing test.)

This last point tends to make me view the idea of zombies overall as fairly pointless.  It’s the classic problem of other minds.  We can never know for sure that anyone other than ourselves is conscious.  It seems reasonable to conclude that other mentally complete humans are, but everything else is up for debate.  We’re forced to rely on our intuitions for babies, animals, or any other system that might act conscious-like.

Of course, caution is called for.  Historically, those intuitions have often led us astray.  Humans once saw consciousness in all kinds of things: rivers, volcanoes, storms, and many other phenomena whose effects often seemed arbitrary and capricious, leading us to conclude that there was some god or spirit behind them.  We have to take care that our intuitions are well informed.

But consciousness, once we do establish that it can’t be an epiphenomenon, that it is definitely part of the framework that produces behavior, must have evolved because it had some adaptive value.  That implies that our use of behavior to assess its presence or absence is a sound one, as long as that assessment is rigorous.

Unless of course I’m missing something?

Posted in Mind and AI | 45 Comments

The range of conscious systems and the hard problem

This is the fifth and final post in a series inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’.  The previous posts were:

In the first post of this series, I noted that F&M (Feinberg and Mallatt) were not attempting to explain human level consciousness, and that bears repeating.  When talking about animal consciousness, we have to be cautious that we don’t project the full breadth of human experience on them.

While a zebrafish has sensory consciousness with its millions of neurons, its conscious experience is not in the same league as a dog’s with 160 million neurons in the dog’s cerebrum alone, much less that of humans with our average of 21 billion cerebral neurons.

Space Invaders c. 1978
Source: Wikipedia

One analogy that might illustrate the differences here is to compare 1970s era video games, running on systems with a few kilobytes of memory, to video games in the 1990s running on megabytes of memory, and then again to modern video games with gigabytes to work with.  In all cases, you experience a game, but the 1970s variety were blocky dots interacting with each other (think Pong or Space Invaders), the 1990s versions were cartoons (early versions of Mortal Kombat), and the modern version is like being immersed in a live action movie (the latest versions of Call of Duty).

Call of Duty c. 2014
Source: Wikipedia

Not that the zebrafish perceives its experience as low resolution, since even its ability to perceive is itself low resolution.  It only perceives reality in the way it can, and isn’t aware of the detail and meaning it misses.

All that said, the zebrafish and many other species do model their environment (exteroceptive awareness), their internal body states (interoceptive awareness), and their reflexive reactions to what’s in the models (affective awareness).  These models give them an inner world, which, to the degree it’s effective, enables them to survive in their outer world.

A lot of human mental processing takes place sub-consciously, which might tempt us to wonder how conscious any of the zebrafish’s processing really is.  But when human consciousness is compromised by injury or disease, we become severely incapacitated, unable to navigate the world and take care of ourselves in any sustained manner.  Organisms like zebrafish and lampreys can do those things, indicating that consciousness is crucial, and that they have some form of it.

Considering all this has also made me realize that what we call self awareness isn’t an either-or thing, being either fully present or absent.  Modelling the environment seems pointless if you don’t have at least a rudimentary representation of your own physical existence and its relation to that environment.  Add in awareness of internal body states and emotional reactions, and at least incipient self awareness seems like an integral aspect of consciousness, even the most primitive kind.

(When I first started this blog, I was open to the possibility that self awareness was something only a few species had, mostly due to the results of the mirror test.  But I now think the mirror test is more about intelligence than self awareness, measuring an animal’s ability to understand that it’s seeing itself in the mirror.)

All of which seems to indicate that many of the differences in consciousness between us and species such as lampreys are matters of degree rather than sharp distinctions.  Of course, the difference between the earliest conscious creatures and pre-conscious ones is also not a sharp one.  There was likely never a first conscious creature, just increasingly sophisticated senses and reflexes, gradually morphing into model driven actions, until there were creatures we’d consider to have primitive consciousness.

This lack of a sharp break bothers many people, who want consciousness to be something objectively fundamental to reality.  Some solve this dilemma with panpsychism, the view that everything in the universe has consciousness, with animals just having it to a much higher degree than plants, rocks, or protons.

Others conclude that consciousness is an illusion, a mistaken concept that needs to go the way of biological vitalism.  Best not to mention it, but instead to focus on the information processing necessary to produce certain behaviors.  Many scientists seem to take this approach in their professional papers.

But I’m interested in the differences between systems we intuitively see as conscious and those we don’t.  Concluding that they’re all conscious, or that none of them are, doesn’t seem like progress.  I think the most productive approach is to regard consciousness as a suite of information processing functions.  This does mean there’s an unavoidable aspect of interpretation as to which systems have these functions.  But that type of difficulty already exists for many other categories, such as the distinctions between life and non-life (see viruses).

While F&M weren’t interested in tackling human consciousness, they were interested in addressing the hard problem of consciousness.  Why does it feel “like something” to be certain kinds of systems?  Why is all this information processing accompanied by experience?

I think making any progress on this question requires that we be willing to ask a closely related question: what are feelings?  What exactly is experience?

The most plausible answer is that experience is the process of building, updating, and accessing these models.  If we accept that answer, then the hard problem question becomes: why does this modeling happen?  The second post in this series discussed an evolutionary answer.

This makes sense when you consider the broader way we use words like “experience” to mean having had extensive sensory access to a topic in order to achieve an expert understanding of it, in other words to build superior internal models of it.

I can’t say I’m optimistic that those troubled by the hard problem will accept this unpacking of the word “experience”.  The reason is that experience is subjectively irreducible.  We can’t experience the mechanics of how we experience, just the result, so for many, the idea that this is what experience is simply won’t ring true.

The flip side of the subjective irreducibility of experience is that an observer of a system can never directly access that system’s subjective state, can never truly know its internal experience or feelings.  We can never know what it’s like to be a bat, no matter how much we learn about its nervous system.

While F&M acknowledge that this subjective-objective divide can’t be closed, they express hope that it can be bridged.  I fear the best that can be done with it is to clarify it, but maybe that’s what they mean by “bridged”.  Those who regard the divide as a problem will likely continue to do so.  Myself, I’ve always regarded the divide as a very profound fact, but not an obstacle to an objective understanding of consciousness.

In conclusion, F&M’s broader evolutionary approach has woken me from my anthropocentric slumber, changing my views on consciousness in two major ways.  First, it’s not enough for a system to model itself for us to consider it conscious; it must also model its environment and the relation between the two, in essence build an inner world as a guide to its actions.  Second, that modeling can be orders of magnitude less sophisticated than what humans do and still trigger our intuition of a fellow conscious being.

Which seems to lower the bar for achieving minimal consciousness in a technological system.  Unless we find a compelling reason to narrow our definition of consciousness, it seems plausible to consider that some autonomous robotic systems have a primitive form of it, albeit without biological motivations.  Self driving cars are the obvious example, systems that build models of the environment as a guide to their actions.

Unless of course I’m overlooking something?

Posted in Mind and AI | 30 Comments

The neural mechanics of sensory consciousness

This is the fourth in a series of posts inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’.  The first three were:

So, at this point in the series, we’ve talked about what primary or sensory consciousness is, how it evolved, and its three types: exteroceptive consciousness of the outside world, interoceptive consciousness of internal body states, and affective consciousness of emotional reactions.  But how does it actually happen?  How is it actually implemented in an animal’s brain?

A quick note before we get into this: while there’s undoubtedly enormous value in looking at how consciousness arises in biological systems, I think we have to keep in mind that the way it is specifically implemented there may be only one of many ways to achieve it.  Indeed, F&M (Feinberg and Mallatt) note that within the realm of biology itself, there appears to be a wide range of architectures among differing species.

There’s a tendency in these types of discussions to assume that there’s a type of magic involved in the precise mechanisms.  But what I think is far more important are the capabilities, capabilities which may be achievable in other ways.  But maybe this is just my own bias toward functionalism showing.

The fundamental unit of the nervous system is the neuron, a cell that specializes in communication.  Neurons typically have thousands of branching tendrils with connections, called synapses, to the tendrils of other neurons.  Synapses get stronger with more use and weaker with non-use.  (Neurons that fire together, wire together.)  Some synapses excite the receiving neuron, and some inhibit it.  A neuron essentially sums up all the incoming signals from its inbound excitatory and inhibitory synapses and, if excited to a certain threshold, fires off a signal through its outbound synapses to other neurons, which are in turn summing their inputs and deciding whether they’ll fire.
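The summation-and-threshold behavior described above can be sketched in a few lines of code.  This is a toy illustration, not a biological model; the weights and threshold are made-up values:

```python
# Toy threshold neuron: positive weights stand in for excitatory synapses,
# negative weights for inhibitory ones. The neuron "fires" only if the
# weighted sum of its inputs reaches the threshold.

def neuron_fires(inputs, weights, threshold):
    """Sum the weighted inputs and fire if the total meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Two excitatory synapses (weights 2 and 1) and one inhibitory (weight -1):
print(neuron_fires([1, 1, 1], [2, 1, -1], 2))  # True: 2 + 1 - 1 = 2 >= 2
print(neuron_fires([1, 0, 1], [2, 1, -1], 2))  # False: 2 - 1 = 1 < 2
```

Real neurons are vastly more complicated (timing, firing rates, neurotransmitters), but the basic sum-and-threshold logic is the part this post relies on.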

Visual sensory path
Image credit: KDS444 via Wikipedia

The perceptual senses start off with a layer of neurons that fire because of some physical excitation.  For vision, these are photoreceptor neurons on the retina that are sensitive to light in various ways.  The pattern of signals they receive from incoming photons creates a neural firing pattern, which triggers signals that go up the optic nerve to the brain, where the overall current mental image of the world is constructed.

The earliest layers of neurons in visual processing are principally excited by things like lines, edges, colors, and other primal aspects.  Subsequent layers become more selective in what excites them, perhaps being triggered by certain shapes or a certain kind of movement.  As the signals propagate up the layers, the neurons become progressively more selective about what triggers them, until we get to clusters of neurons that may only become excited when a certain person’s face is seen, or maybe a certain kind of animal, or any specific object.
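One way to picture this increasing selectivity is as stacked feature detectors, where each higher layer fires only for specific combinations of activity in the layer below.  The following is a made-up toy example, not how real neurons compute:

```python
# Layer 1: detectors for primal features (here, edges in a tiny binary image).
edge_detectors = {
    "vertical_edge": lambda img: any(col.count(1) == len(img) for col in zip(*img)),
    "horizontal_edge": lambda img: any(all(px == 1 for px in row) for row in img),
}

def layer1(img):
    """Lowest layer: which primal features are present in the image?"""
    return {name for name, detect in edge_detectors.items() if detect(img)}

def layer2(features):
    """Higher layer: fires only for specific conjunctions of lower features."""
    shapes = set()
    if {"vertical_edge", "horizontal_edge"} <= features:
        shapes.add("corner_or_cross")
    return shapes

img = [[1, 1, 1],
       [0, 1, 0],
       [0, 1, 0]]   # a full top row plus a full middle column

print(layer2(layer1(img)))  # {'corner_or_cross'}
```

Each layer is triggered by patterns in the layer beneath it, so only certain inputs propagate all the way to the top, which is the sense in which higher layers are "more selective".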

F&M identify these neural hierarchies as a crucial element in biological consciousness.  Each sense has its own set of hierarchies, but there is substantial crosstalk between them, and it’s probably reasonable to surmise that there are effectively hierarchies of the hierarchies.

Each concept in the brain has its own twisting, winding hierarchy that culminates in clusters of neurons which only light up for that concept.  (In reality, anything but the simplest concepts probably has innumerable hierarchies spread over different senses.)  At the lower levels, the conceptual hierarchies overlap substantially, but the pattern of the neurons firing at those levels determines which neurons at the higher levels light up.  As we go higher up the hierarchies, they overlap less and less.

This concept is sometimes derisively referred to as the idea of the “grandmother neuron”.  But it’s important to understand that there is no image of grandmother in any one neuron (or of whatever the specific concept is).  That image exists throughout the entire hierarchy.  It’s also unlikely that the top of a hierarchy ever converges to only one neuron.  It may converge on clusters of neurons that fire in specific patterns.  It may be that the same clusters firing in different patterns is the top of a different concept hierarchy.

Neurologist Antonio Damasio calls these culmination points convergence-divergence zones.  In both his and F&M’s views, the hierarchies can be activated from the bottom up, from incoming sensory activation, or from the top down, when crosstalk from some other hierarchy activates a higher point, leading to downward propagation of the hierarchy.  This is likely part of what happens when we remember or imagine an object we’re not currently sensing.
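The two directions of activation can be caricatured with a tiny invented hierarchy (the node names and structure here are purely illustrative, not anything from Damasio's or F&M's work):

```python
# A toy concept hierarchy: a higher-level node maps to its lower-level parts.
hierarchy = {
    "face": ["eye", "mouth"],
    "eye": [],
    "mouth": [],
}

def top_down(node, activated=None):
    """Activating a high node propagates down to its parts (recall/imagination)."""
    activated = activated if activated is not None else set()
    activated.add(node)
    for child in hierarchy.get(node, []):
        top_down(child, activated)
    return activated

def bottom_up(active_low):
    """A high node activates when all its parts are active (perception)."""
    return {parent for parent, kids in hierarchy.items()
            if kids and set(kids) <= set(active_low)}

print(bottom_up({"eye", "mouth"}))  # {'face'}: sensory parts activate the concept
print(sorted(top_down("face")))     # ['eye', 'face', 'mouth']: recall fills in parts
```

The point of the sketch is only the symmetry: the same structure can be driven upward by sensory input or downward by crosstalk from another hierarchy.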

My way of understanding the overall significance of this framework is that the breadth of the sensory layers provides the resolution of what can be perceived at the lower levels, as well as the overall number of concepts that can be perceived at the higher levels.  But the depth of the hierarchies, how deep they can go, probably determines just how much abstracted meaning a system can extract from the sensory information, in other words, how much understanding it can achieve.

F&M investigate how deep these hierarchies need to be for what we’re calling sensory consciousness to exist.  They tentatively conclude that the minimum number of layers is five for vision, but may only be three for the other senses.  They end up splitting the difference and using four as the average minimum.  But they fully admit that this is not at all an area that is fully understood, and that the actual minimums may be much higher.  My own intuition here is that there’s no fact of the matter distinction, just increasing effectiveness as the layers increase.

Obviously brains with more neural substrate will have an advantage, although we always have to remember that absolute size isn’t the crucial factor.  (A substantial portion of an elephant’s or whale’s sensory breadth is taken up with processing all of the interoceptive information coming from their vast body.)  But all else being equal, broader and deeper neural hierarchies will enable broader and deeper sensory consciousness.  Human consciousness and intelligence, with its symbolic and abstract thought, likely requires very deep, integrated, and nested hierarchies of hierarchies.

One quick note about the conceptual hierarchies.  The description above may make it sound like their formation is entirely contingent on incoming sensory information.  But it’s important to remember that minds don’t start as blank slates.  They start with a genetic predisposition to recognize certain concepts.  For example, healthy human babies have a built in capacity to recognize human faces.  Many hierarchies come pre-wired or primed for action, although possibly refined by later events.

Anyone in the know will recognize that this description is woefully oversimplified.  It’s a heavily summarized version inspired by what F&M describe in their book, with insights that I picked up from Antonio Damasio’s book, ‘Self Comes to Mind’.  There is an enormous amount of important detail I’m leaving out.  The only goal here was to give you a taste of it.  If you want more, I highly recommend their respective books.

In the next and final post of this series, we’ll swing back to the hard problem of consciousness, and discuss to what extent we may have made any progress on it.

Posted in Mind and AI | 6 Comments

Types of sensory consciousness

This is the third in a series of posts inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’.  The first two were:

With this post, we’re going to get into the different types of sensory consciousness that F&M (Feinberg and Mallatt) identify in their book.  They identify three types:

  • exteroceptive
  • interoceptive
  • affective

The consciousness I discussed in the last two posts was exteroceptive consciousness, that is, consciousness about the outside world and its relation to the organism.  This type of consciousness requires distance senses such as vision, hearing, and smell, as well as more immediate ones like touch and taste, although these last two senses sit on the border between exteroceptive and interoceptive consciousness.

Exteroceptive consciousness involves building image maps of the environment and objects in it.  “Image” here doesn’t necessarily mean a visual image, although that type of image usually looms large in this type of awareness.  But it can also mean audio images, smell images, or touch images.  These images are formed from the signals coming in from their related senses.

From the initial images, image maps, or models, are constructed, eventually from information integrated across the senses.  These image maps are isomorphic with patterns in the environment, to varying degrees of success.  They are what form the inner world that conscious organisms live in.

Interoceptive consciousness is awareness of internal body states.  Similar to exteroceptive consciousness, it involves taking data from sensory input and constructing isomorphic image maps.  But in this case, it’s senses about the internal state of the body, covering things like the feeling of muscles, the stretch of tissues, the state of the stomach, etc.  The overall image constructed here is one of the body.

This resulting image map seems very similar to Antonio Damasio’s proto-self body image.  In fact, F&M cite Damasio approvingly, although noting that whereas Damasio’s approach was to study the human brain, theirs is broader, looking at the various architectures in the animal kingdom.

The last type of consciousness is affective or limbic consciousness.  The thing about the information in the other two types of awareness above is that it has no valence, no value attached to it, no assessment of whether the sensory information is good or bad for the organism.  Affective consciousness is where this valence is introduced.

There are, broadly, two types of affects: positive and negative.  For positive, think pleasurable or attractive.  For negative, think aversion, fear, pain, etc.  All emotions and inclinations are essentially variations of these two affects.  F&M are cautious in using the word “emotion” here since it comes with so much baggage, with people meaning different things by that word.  But at their most basic level, emotions are affects.

My way of understanding this is that the emotion itself is a mental reflex that, earlier in evolution, led to immediate action, but now surfaces as an affective feeling in conscious organisms, a mental state of preparation for a particular action, which may be overridden if other competing affects arise at the same time.  It is affective consciousness that provides the primal interpretation of sensory information in the exteroceptive and interoceptive models as either good or bad.

When we see something, such as a rainbow or an onrushing tiger, our knowledge of its appearance and perceived location comes from exteroceptive consciousness, but our appreciation of the beauty of the rainbow or the danger of the tiger comes from affective consciousness.  Of course, we don’t subjectively experience these as separate systems, because the information processing happens below the level of our awareness, creating a subjectively unified experience.

When did these types of consciousness evolve?  Exteroceptive is the easy one here.  Its evolution was heralded by the development of eyes during the Cambrian explosion.  As I mentioned in the first post of this series, high resolution eyes (as opposed to more primitive light sensors) imply mental imagery, which implies isomorphic modeling: models of the environment, a worldview.

There may be some debate about which distance sense evolved first, but eyes, unlike the other sense receptors, are part of the central nervous system.  In their earliest incarnations, they appear to be right next to the brain, if not actually part of it.  For this and other reasons, F&M see vision as being an early development, with possibly the neural hierarchies for vision being subsequently duplicated for the development of the other distance senses.

There are many theories about when affective consciousness developed.  Some put it at the very beginning of sensory consciousness in the Cambrian.  Others place it at later stages of development, such as the rise of amphibians, or of mammals, or even anatomically modern humans, giving a vast range of possible dates, from as early as 550 million years ago to as late as 200,000 years ago (when more or less modern humans emerged).

Since affects are a crucial aspect of sentience, figuring out when they developed and which species have them is important.  F&M’s approach is to identify behaviors that require affective consciousness, such as learned responses to punishments and rewards, behavioral trade-offs, frustration with insurmountable problems, and self-delivery of analgesics when in pain.  Having identified these behaviors, they then look at experiments testing various species to see if they exhibit them.

Of course, no one can test an extinct Cambrian species, and using modern animals as stand-ins for earlier ones in evolutionary history is risky, but the morphological similarities between the simplest modern vertebrates and pre-vertebrates and fossil remains reduce the uncertainty.

In experiments, all vertebrates (fish, amphibians, reptiles, birds, and mammals) demonstrated behaviors that require affective consciousness.  But animals resembling pre-Cambrian forms (C. elegans, flatworms, and simple chordates) generally did not.  The evidence then, according to F&M, points to affective consciousness evolving more or less concurrently with exteroceptive consciousness, in other words, during the early Cambrian, 550-520 million years ago.

Interoceptive consciousness is a more difficult matter.  F&M found fragmentary evidence that the neurological pathways for it exist in all vertebrates, but were forced to admit that the evidence is sparse and uncertain.  In particular, the relevant pathways are poorly studied outside of mammalian species.

And there are surprising gaps, such as with pain.  Pain requires input from the affective system to actually be pain, but having an affect of pain requires certain types of interoceptive signals.  Pain seems like it would be one of the most fundamental aspects of feeling, but there appears to be good evidence that fish do not feel certain types of it.  They do seem to feel sharp pain, the pain of an injury at the moment it happens, but don’t appear to feel the long burning variety, the kind that causes suffering.

F&M speculate that this might have something to do with their environment and feeding needs.  Fish often can’t afford to heed continuous agonizing pain; they have to keep moving to survive.  For land animals, however, signalling that tells them they are damaged and need to find a place to hide and heal can be a survival advantage.

So the weight of the evidence is that exteroceptive and affective consciousness are ancient, along with aspects of the interoceptive variety, and therefore widely prevalent in vertebrates.  (F&M also discuss the possibility of consciousness in cephalopods and arthropods, including many insects, although they express reservations on whether insect brains have the necessary complexity.)

The next post will discuss how the image map models may be constructed.


Posted in Mind and AI | 4 Comments

Predators and the rise of sensory consciousness

This is the second post in a series inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’.

The first post in the series was: What counts as consciousness?

Life appears to have gotten started fairly early in Earth’s history.  The oldest known fossils are now dated to be about 3.7 billion years old.  Given that the Earth itself is only around 4.5 billion years old, and that considerable evolution would have had to happen for the earliest fossils to exist when they did, life probably got started as soon as conditions were conducive to it.  (Note: “as soon as” on geological time scales.)

For billions of years life evolved and gradually became more complicated.  Around 1.5 billion years ago, the first multicellular life emerged.  Just as cells had communication mechanisms, motile (animal) multicellular life began to develop its own communication mechanisms: nervous systems.  The first nervous systems were basically nerve nets, more or less evenly spread throughout the animal.  They enabled the organism to respond to things like pressure somewhere on its outer layers, or to noxious or attractive chemicals.  Nerve nets basically just supported reflex action, direct responses to environmental stimuli.

By the end of the Ediacaran geological period, around 550 million years ago, bilateral animals (with two mirror-image halves) had appeared, with a nerve cord running down their center.  Animals of this period either didn’t have self locomotion or had only a very primitive form of it.  They generally all fed on a thick green mat of microbes that existed on the ocean floor.

Animals with the central nerve cord (chordates) did have centralized reflexes.  Some had light sensors on their head and other limited senses.  None of these senses had enough resolution to do more than give small hints on what was happening in the environment.  Which was fine, because the organisms at this point didn’t have enough processing power in their nervous system to use more information anyway.

But in a few tens of millions of years, during the Cambrian period, not long on geological scales, the animal kingdom would suddenly explode into a much wider range of body types.  The onslaught of development would be so sudden and rapid, that geologists and paleontologists call the period the Cambrian explosion.  Most significantly for this post, primary sensory consciousness would develop.

Why did evolution suddenly speed up during this time?  There are various theories, but most evolutionary biologists seem to think the most plausible reason is an arms race.  The term “arms race” is a military one, referring to two competing powers increasing their armaments in response to each other, resulting in both holding an escalating inventory of weapons far in excess of what either needs aside from countering the other.  One historically recent example was the Cold War era missile arms race between the USA and the Soviet Union.

In the context of evolution, “arms race” basically refers to selection pressures that come from competition rather than from the rest of the environment.  What was the arms race in the Cambrian?  Predation, animals eating other animals, suddenly became prevalent.  This might have happened because of overpopulation and depletion of the microbe mat mentioned above, or maybe due to some other unknown change.

A Cambrian arthropod
Image credit: Nobu Tamura via Wikipedia

Whatever caused it, it put much more selection pressure on prey species, and eventually on other predators as prey became more elusive and scarce.  Evolution seems to have responded in a variety of ways.  Some species developed hard shells, making themselves more difficult to consume.  Others burrowed deeper into the sea floor.  Still others developed hard shells but stayed mobile, gaining an advantage from the combination.  These were the arthropods, and they appear to have been the main predators of the period.

But one group went the way of speed and agility, the ability to flee or evade.  This involved developing an inner skeleton, unlike the outer exoskeleton of the arthropods.  They developed a central support structure, a backbone, and became the first vertebrates.  But their movement strategy required distance senses, such as eyesight, smell, and hearing, to collect information about the environment, and robust central coordination to turn that information into movement: to evade predators, to fight, or to escape.

Of course, arthropods, as the predators, soon needed to respond in kind, although their brains never reached the level of sophistication that vertebrate brains did.  Part of this might have been because their exoskeletons required periodic moulting, constraining their growth.  Nevertheless, both groups went on to develop complex brains and, arguably, consciousness.

Both developed high resolution eyes (relative to the light sensors that had existed in the Ediacaran), that allowed them to build image maps of the environment.  As I mentioned in the last post, these weren’t human level maps, but they were still effective models that allowed the animals to make predictions.  They had built inner worlds which now guided their behavior.  Although their inner experience was at a far lower resolution than ours, it increased their chances of survival.

There would be other major milestones in cognitive development.  Jawed vertebrates would soon have larger brains.  The first land animals would have larger ones yet.  The mammalian cerebrum would eventually come along and dramatically increase the information in the internal models.  They would continue to grow in size and sophistication, eventually leading to the rise of social species in the last 100 million years.

The biggest developments, from our point of view, would be the rise of primates, and eventually Homo sapiens.  Interestingly, it seems that human intelligence was the result of another arms race, this time among humans.  Human intelligence was likely heavily selected over the last million years for social intelligence, rather than other environmental factors, with a higher level of social intelligence, a better theory of mind for one’s peers and oneself, resulting in increased reproductive success.  Eventually it would result in modern humans, with human levels of consciousness.

But sensory consciousness started in the Cambrian with the rise of image maps, internal neural patterns isomorphic to patterns in the environment (to varying levels of success), or what I prefer to call models, inner worlds that dramatically increased the causal information, the scope of the environment that the organism could respond to.  And we owe that start to the rise of predators.

The next post will get into types of sensory or primary experience.

Posted in Mind and AI | 7 Comments

What counts as consciousness?

One of the things I get reminded of every few years is that difficult determinations often look clearer when you consider them in a wider scope.  Years ago, when I was trying to figure out whether conservative or progressive political policies were better, I discovered that widening my investigation to history helped immensely, and widening even further to the history of other developed countries helped even more.  Many of the typical conservative hangups looked parochial in that broader context.

The same thing happened when I was trying to decide how worried to be about artificial intelligence.  Many of the people who are worried about it are familiar with the technology, and their concerns carry weight with the general population.  But learning about neuroscience and evolutionary psychology put those concerns in a much broader context and, at least for me, rendered most of them moot.

Consciousness is one of those topics that people have been writing and debating about for centuries.  But I’ve found that many of the philosophical ideas often kicked around wither in the light of neurological case studies and overall neuroscience.  We’ve gained a lot of insight into consciousness by looking carefully at the human brain, particularly the cases where it gets damaged or malfunctions in some way.  But maybe a broader approach yet is to look at consciousness in animals, particularly in terms of evolution.

This is the approach used by Todd Feinberg and Jon Mallatt in their new book: ‘The Ancient Origins of Consciousness: How the Brain Created Experience’.  (This is the first of what I hope will be a series of posts inspired by their fascinating, albeit technical, book.)  The good thing about studying animal consciousness is that it gives a much broader array of systems to study.  And animals can be studied in many more ways, ways that are often ethically unacceptable for humans.  (Some of those ways I personally find unacceptable, but the knowledge gleaned from them is real.)

Of course, the biggest issue with studying animal consciousness is that we lose the primary advantage of focusing on human consciousness.  We know that we ourselves are conscious, and it is uncontroversial to assume that all mentally complete humans are also conscious.

But the farther we move away from healthy adult homo sapiens, the more tenuous this assumption becomes.  We have to be careful not to project our own experience on animals or other systems.  It’s reasonable to assume that animal experience is not human experience, particularly as we move down the intelligence chain.  This raises an interesting question, how much of human experience can we dispense with and still coherently use the label “consciousness”?

In their book, Feinberg and Mallatt make clear that they’re not attempting to explain human level consciousness, but that they are aiming for the hard problem of consciousness, the one that asks, why is there “something it is like” to be a conscious being?  They equate this with what they consider to be primary or sensory consciousness.

But it’s not clear to me that what we mean by “something it is like” is so easily divorced from higher level consciousness capabilities.  It might be that without the ability to reflect on our experience, that it is not necessarily “like anything” to be one of these creatures.  As Thomas Nagel pointed out years ago, we can never know what it’s like to be a bat.  But it’s possible that neither can an actual bat, if it doesn’t have at least some level of introspective ability.  For that and other reasons, we have to be cautious in assuming that animals have an inner experience.

Still, anyone who has ever cared for a pet knows that the intuition of animal consciousness is very powerful.  Whatever mental life animals possess, we sense in them fellow beings in a way that we don’t sense with plants or computer systems.  This isn’t true of all animals of course.  I don’t really sense any consciousness in a worm, a starfish, or an oyster, which makes sense since none of these animals have brains.

But pretty much any animal with eyes tends to trigger my intuition that there is some inner life there, something that is seeing and has some kind of intentionality, a worldview of some kind, even if it’s a limited one.  This is a common intuition, which is why it’s not unusual for movies to show an opening eye to indicate that some thinking feeling thing is present.  According to Feinberg and Mallatt, this turns out to be a reasonably good indicator.

High resolution eyes with lenses, as opposed to simple light sensors, are costly constructs in terms of complexity and energy, and evolution rarely wastes such resources.  But without mental images, eyes would in fact be a waste.  And mental imagery, another costly feature, would itself be useless without modelling of external objects and the environment, along with the animal’s body and its interactions with that environment.  And that modelling would itself be useless without being a guide to possible actions the animal might take.
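The chain above (senses update a model, and the model, not the raw stimulus, guides action) can be made concrete with a toy sketch.  This is my own illustration, not anything from Feinberg and Mallatt: a minimal agent in a one-dimensional world that keeps a belief about where it last sensed food and moves toward that belief even after the stimulus is gone.

```python
def step(position, belief, sensed=None):
    """Advance the agent one time step.

    position: the agent's current cell in a 1-D world
    belief:   the cell where its internal model places the food
    sensed:   if food is perceived this step, the model is updated
    """
    if sensed is not None:      # perception updates the internal model
        belief = sensed
    if belief > position:       # the model, not the raw stimulus, drives action
        position += 1
    elif belief < position:
        position -= 1
    return position, belief

pos, belief = 0, 0
pos, belief = step(pos, belief, sensed=3)   # glimpses food at cell 3
while pos != belief:                        # keeps moving on memory alone
    pos, belief = step(pos, belief)
# the agent arrives at cell 3 without ever sensing the food again
```

The point of the sketch is the last loop: once the model exists, behavior can be directed at things no longer present to the senses, which is exactly what makes the costly machinery of eyes and image maps pay for itself.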

None of this is to say that the modelling done by the brain of a lamprey, one of the simplest vertebrates that Feinberg and Mallatt conclude may be conscious, is anything like that done by a human brain.  Without a doubt, the lamprey’s models are far less rich, but then the lamprey has no real need of human-like models.  All that matters is whether its models are effective in allowing it to navigate its environment, and they generally appear to be so.

Lamprey
Image credit: Tiit Hunt via Wikipedia

But do these capabilities count as consciousness?  A lamprey doesn’t have a cerebrum, where human and mammal consciousness appears to reside, and sub-cortical processes in humans are below the level of consciousness.  But a mammal with its cerebrum removed or destroyed is a severely disabled creature, without the ability to navigate its world and survive on its own.  A lamprey does have that ability, indicating that the necessary modelling is still taking place somewhere in its more primitive brain.

This makes sense from an evolutionary point of view.  Primary consciousness must have some adaptive value.  It seems reasonable (although admittedly speculative) to assume that it is consciousness which allows animals to have a wide repertoire of available actions to navigate the world, find food and mates, and avoid predators.  These capabilities were likely important catalysts leading to the evolution of complex brains, and consciousness, during the Cambrian explosion.

We’re not talking here about human level consciousness, but Feinberg and Mallatt use the analogy of an airliner and an ox cart.  The experience of riding an ox cart is not the experience of riding on an airliner, but they’re both transportation.  Likewise, the experience of a lamprey is not like the experience of a human, except that they both have experience.

But again, does this really count as consciousness?  What I’ve alluded to here is called exteroceptive consciousness, one type of primary consciousness described in Feinberg and Mallatt’s book.  The other two are interoceptive consciousness and affective consciousness, all of which I’ll describe in more detail in another post.  But after some consideration, I’m inclined to accept it as part of core consciousness, although I would completely understand if someone insisted on the label “proto-consciousness”.  Ultimately, the exact labeling here is a matter of convention on how to discuss a certain point on the evolutionary spectrum.

But this raises another interesting question.  Is the Google self-driving car conscious?  It doesn’t have eyes exactly, but it does use LIDAR to model its environment, and its own interactions with that environment.  Of course, the Google car’s models are currently far less effective than a lamprey’s, at least relative to their respective environments, and the motivations of a self-driving car are very different from those of a living animal.  But as Google and other technology companies improve these systems, might we eventually reach a point where it makes sense to consider them to have a sort of primal consciousness?
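Google’s actual software is of course proprietary, but the general idea of modeling an environment from LIDAR can be sketched with a toy occupancy grid: each range reading that returns before the sensor’s maximum range marks the corresponding cell as containing an obstacle.  The function and parameters below are my own illustration, not any real system’s API.

```python
import math

def occupancy_grid(ranges, angles, size=11, cell=1.0, max_range=5.0):
    """Build a crude occupancy grid from simulated LIDAR beams.

    The sensor sits at the center of a size-by-size grid; each
    (range, angle) pair that returns before max_range marks the
    cell where the beam terminated as occupied (1).
    """
    grid = [[0] * size for _ in range(size)]
    cx = cy = size // 2
    for r, a in zip(ranges, angles):
        if r >= max_range:      # no return: nothing within sensor range
            continue
        x = cx + int(round(r * math.cos(a) / cell))
        y = cy + int(round(r * math.sin(a) / cell))
        if 0 <= x < size and 0 <= y < size:
            grid[y][x] = 1      # obstacle detected in this cell
    return grid

# One beam straight ahead (angle 0) hitting something 3 m away:
g = occupancy_grid([3.0], [0.0])
```

Even this crude map is, in the book’s terms, an internal pattern isomorphic to a pattern in the environment, which is what makes the comparison to a lamprey’s image maps more than a metaphor.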

Posted in Mind and AI | 25 Comments