The range of conscious systems and the hard problem

This is the fifth and final post in a series inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience‘.  The previous posts were:

  • What counts as consciousness?
  • Predators and the rise of sensory consciousness
  • Types of sensory consciousness
  • The neural mechanics of sensory consciousness

In the first post of this series, I noted that F&M (Feinberg and Mallatt) were not attempting to explain human level consciousness, and that bears repeating.  When talking about animal consciousness, we have to be careful not to project the full breadth of human experience onto them.

While a zebrafish has sensory consciousness with its millions of neurons, its conscious experience is not in the same league as a dog’s with 160 million neurons in the dog’s cerebrum alone, much less that of humans with our average of 21 billion cerebral neurons.

Space Invaders, c. 1978 (Source: Wikipedia)

One analogy that might illustrate the differences here is to compare 1970s era video games, running on systems with a few kilobytes of memory, to video games in the 1990s running on megabytes of memory, and then again to modern video games with gigabytes to work with.  In all cases, you experience a game, but the 1970s variety were blocky dots interacting with each other (think Pong or Space Invaders), the 1990s versions were cartoons (early versions of Mortal Kombat), and the modern version is like being immersed in a live action movie (the latest versions of Call of Duty).

Call of Duty, c. 2014 (Source: Wikipedia)

Not that the zebrafish perceives its experience as low resolution, since its very ability to perceive is low resolution.  It perceives reality only in the way it can, and isn’t aware of the detail and meaning it misses.

All that said, the zebrafish and many other species do model their environment (exteroceptive awareness), their internal body states (interoceptive awareness), and their reflexive reactions to what’s in the models (affective awareness).  These models give them an inner world, which, to the degree it’s effective, enables them to survive in their outer world.
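As a very loose illustration (and nothing more), here’s a toy sketch in Python of those three kinds of models feeding into action.  Every name, value, and threshold in it is invented for the example; real nervous systems are of course nothing this tidy:

    # A toy sketch of the three model types: exteroceptive (environment),
    # interoceptive (body), and affective (valenced reactions to both).
    # All names and values are invented for illustration.

    class InnerWorld:
        def __init__(self):
            self.environment = {}   # exteroceptive model: what's out there
            self.body = {}          # interoceptive model: internal states

        def sense(self, percepts, body_signals):
            self.environment.update(percepts)
            self.body.update(body_signals)

        def affect(self):
            """Affective reaction: a crude good/bad valence over the current
            models, surfacing as preparation for one action."""
            if self.environment.get("predator_nearby"):
                return "flee"
            if self.body.get("energy", 1.0) < 0.3 and self.environment.get("food_nearby"):
                return "approach food"
            return "explore"

    fish = InnerWorld()
    fish.sense({"food_nearby": True}, {"energy": 0.2})
    print(fish.affect())  # "approach food": the models guide the action

The point of the sketch is only that the models are useless in isolation; it’s their combination, read off by something that selects actions, that makes the inner world an effective guide to the outer one.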

A lot of human mental processing takes place sub-consciously, which might tempt us to wonder how conscious any of the zebrafish’s processing really is.  But when human consciousness is compromised by injury or disease, we become severely incapacitated, unable to navigate the world or take care of ourselves in any sustained manner.  The zebrafish and related species can do those things, indicating both that consciousness is crucial and that organisms like zebrafish and lampreys have some form of it.

Considering all this has also made me realize that what we call self awareness isn’t an either-or thing, being either fully present or absent.  Modelling the environment seems pointless if you don’t have at least a rudimentary representation of your own physical existence and its relation to that environment.  Add in awareness of internal body states and emotional reactions, and at least incipient self awareness seems like an integral aspect of consciousness, even the most primitive kind.

(When I first started this blog, I was open to the possibility that self awareness was something only a few species had, mostly due to the results of the mirror test.  But I now think the mirror test is more about intelligence than self awareness, measuring an animal’s ability to understand that it’s seeing itself in the mirror.)

All of which seems to indicate that many of the differences in consciousness between us and species such as lampreys are matters of degree rather than sharp distinctions.  Of course, the difference between the earliest conscious creatures and pre-conscious ones is also not a sharp one.  There was likely never a first conscious creature, just increasingly sophisticated senses and reflexes, gradually morphing into model driven actions, until there were creatures we’d consider to have primitive consciousness.

This lack of a sharp break bothers many people, who want consciousness to be something objectively fundamental to reality.  Some solve this dilemma with panpsychism, the view that everything in the universe has consciousness, with animals simply having it to a much greater degree than plants, rocks, or protons.

Others conclude that consciousness is an illusion, a mistaken concept that needs to go the way of biological vitalism.  Best not to mention it, but instead to focus on the information processing necessary to produce certain behaviors.  Many scientists seem to take this approach in their professional papers.

But I’m interested in the differences between systems we intuitively see as conscious and those we don’t.  Concluding that they’re all conscious, or that none of them are, doesn’t seem like progress.  I think the most productive approach is to regard consciousness as a suite of information processing functions.  This does mean there’s an unavoidable aspect of interpretation as to which systems have these functions.  But that type of difficulty already exists for many other categories, such as the distinction between life and non-life (see viruses).

While F&M weren’t interested in tackling human consciousness, they were interested in addressing the hard problem of consciousness.  Why does it feel “like something” to be certain kinds of systems?  Why is all this information processing accompanied by experience?

I think making any progress on this question requires that we be willing to ask a closely related question: what are feelings?  What exactly is experience?

The most plausible answer is that experience is the process of building, updating, and accessing these models.  If we accept that answer, then the hard problem question becomes: why does this modeling happen?  The second post in this series discussed an evolutionary answer.

This makes sense when you consider the broader way we use words like “experience” to mean having had extensive sensory access to a topic in order to achieve an expert understanding of it, in other words to build superior internal models of it.

I can’t say I’m optimistic that those troubled by the hard problem will accept this unpacking of the word “experience”.  The reason is that experience is subjectively irreducible.  We can’t experience the mechanics of how we experience, just the result, so for many, the idea that this is what experience is simply won’t ring true.

The flip side of the subjective irreducibility of experience is that an observer of a system can never directly access that system’s subjective state, can never truly know its internal experience or feelings.  We can never know what it’s like to be a bat, no matter how much we learn about its nervous system.

While F&M acknowledge that this subjective-objective divide can’t be closed, they express hope that it can be bridged.  I fear the best that can be done with it is to clarify it, but maybe that’s what they mean by “bridged”.  Those who regard the divide as a problem will likely continue to do so.  Myself, I’ve always regarded the divide as a very profound fact, but not an obstacle to an objective understanding of consciousness.

In conclusion, F&M’s broader evolutionary approach has woken me from my anthropocentric slumber, changing my views on consciousness in two major ways.  First, it’s not enough for a system to model itself for us to consider it conscious; it must also model its environment and the relation between the two, in essence build an inner world as a guide to its actions.  Second, that modeling can be orders of magnitude less sophisticated than what humans do and still trigger our intuition of a fellow conscious being.

Which seems to lower the bar for achieving minimal consciousness in a technological system.  Unless we find a compelling reason to narrow our definition of consciousness, it seems plausible to consider that some autonomous robotic systems have a primitive form of it, albeit without biological motivations.  Self driving cars are the obvious example, systems that build models of the environment as a guide to their actions.

Unless of course I’m overlooking something?


The neural mechanics of sensory consciousness

This is the fourth in a series of posts inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience‘.  The first three were:

  • What counts as consciousness?
  • Predators and the rise of sensory consciousness
  • Types of sensory consciousness

So, at this point in the series, we’ve talked about what primary or sensory consciousness is, how it evolved, and the three types: exteroceptive consciousness of the outside world, interoceptive consciousness of internal body states, and affective consciousness of emotional reactions.  But how does it actually happen?  How is it actually implemented in an animal’s brain?

A quick note before we get into this: while there’s undoubtedly enormous value in looking at how consciousness arises in biological systems, I think we have to keep in mind that the way it is implemented there may be only one of many ways to achieve it.  Indeed, F&M (Feinberg and Mallatt) note that, within the realm of biology itself, there appears to be a wide range of architectures among differing species.

There’s a tendency in these types of discussions to assume that there’s a kind of magic in the precise mechanisms.  What I think is far more important are the capabilities, capabilities which may be achievable in other ways.  But maybe that’s just my own bias toward functionalism showing.

The fundamental unit of the nervous system is the neuron, a cell that specializes in communication.  Neurons typically have thousands of branching tendrils with connections, called synapses, to the tendrils of other neurons.  Synapses get stronger with use and weaker with disuse.  (Neurons that fire together, wire together.)  Some synapses excite the receiving neuron, and some inhibit it.  A neuron essentially sums up all the incoming signals from its excitatory and inhibitory synapses and, if excited past a certain threshold, fires off a signal through its outbound synapses to other neurons, which are summing their own inputs and deciding whether they’ll fire.
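That summing-and-threshold behavior is simple enough to sketch in a few lines of Python.  To be clear, the weights and threshold below are made-up illustrative values, not anything biological:

    # A minimal sketch of the summing-and-threshold behavior described above.
    # Positive weights stand in for excitatory synapses, negative weights
    # for inhibitory ones. All values are illustrative.

    def neuron_fires(inputs, weights, threshold=1.0):
        total = sum(i * w for i, w in zip(inputs, weights))
        return total >= threshold

    # Three incoming synapses: two excitatory, one inhibitory.
    print(neuron_fires([1, 1, 1], [0.7, 0.6, -0.2]))  # True: 1.1 >= 1.0
    print(neuron_fires([1, 0, 1], [0.7, 0.6, -0.2]))  # False: 0.5 < 1.0

Real neurons are far messier (timing, neurotransmitters, dendritic computation), but this is the basic logic being described.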

Visual sensory path (Image credit: KDS444 via Wikipedia)

The perceptual senses start off with a layer of neurons that fire because of some physical excitation.  For vision, these are photoreceptor neurons on the retina that are sensitive to light in various ways.  The pattern of signals they receive from incoming photons creates a neural firing pattern, which triggers signals that travel up the optic nerve to the brain, where the overall current mental image of the world is constructed.

The earliest layers of neurons in visual processing are principally excited by things like lines, edges, colors, and other primal features.  Subsequent layers become more selective in what excites them, perhaps being triggered by certain shapes or a certain kind of movement.  As the signals propagate up the layers, the neurons become progressively more selective about what triggers them, until we reach clusters of neurons that may only become excited when a certain person’s face is seen, or a certain kind of animal, or some other specific object.

F&M identify these neural hierarchies as a crucial element in biological consciousness.  Each sense has its own set of hierarchies, but there is substantial crosstalk between them, and it’s probably reasonable to surmise that there are effectively hierarchies of the hierarchies.

Each concept in the brain has its own twisting, winding hierarchy that culminates in clusters of neurons which only light up for that concept.  (In reality, anything but the simplest concept probably has innumerable hierarchies spread across different senses.)  At the lower levels, the conceptual hierarchies overlap substantially, but the pattern of neurons firing at those levels determines which neurons at the higher levels light up.  As we go higher up the hierarchies, they overlap less and less.
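Here’s a deliberately crude Python sketch of that increasing selectivity.  Everything in it, from the “features” to the thresholds, is invented for illustration; real hierarchies involve millions of neurons per sense:

    # Toy conceptual hierarchy: lower layers are broadly tuned and shared,
    # higher layers fire only for specific combinations of lower outputs.

    def edge_layer(pixels):
        """Layer 1: fires wherever adjacent 'pixel' values differ."""
        return [a != b for a, b in zip(pixels, pixels[1:])]

    def shape_layer(edges):
        """Layer 2: fires when several edges co-occur."""
        return sum(edges) >= 3

    def concept_cluster(shape_active, motion_active):
        """Top layer: fires only for one conjunction of lower-level features.
        The 'image' isn't stored here; it lives in the whole pathway."""
        return shape_active and motion_active

    pixels = [0, 1, 0, 1, 0, 1]
    print(concept_cluster(shape_layer(edge_layer(pixels)), motion_active=True))  # True

Note that deleting the top function wouldn’t delete the concept; most of the information lives in the layers beneath it.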

This concept is sometimes derisively referred to as the idea of the “grandmother neuron”.  But it’s important to understand that there is no image of grandmother (or of whatever the specific concept is) in any one neuron.  That image exists throughout the entire hierarchy.  It’s also unlikely that the top of a hierarchy ever converges on only one neuron.  It may converge on clusters of neurons that fire in specific patterns.  And the same clusters, firing in different patterns, may form the tops of different concept hierarchies.

Neurologist Antonio Damasio calls these culmination points convergence-divergence zones.  In both his and F&M’s views, the hierarchies can be activated from the bottom up, by incoming sensory activation, or from the top down, when crosstalk from some other hierarchy activates a higher point, leading to downward propagation through the hierarchy.  This is likely part of what happens when we remember or imagine an object we’re not currently sensing.
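Continuing the toy sketch from above, top-down activation might look something like this; again, every name and feature here is invented purely for illustration:

    # Toy top-down activation: a concept at the top of a hierarchy
    # re-activates the lower-level features associated with it, without
    # any incoming sensory signal (crudely: memory or imagination).
    concept_features = {
        "grandmother": ["oval face", "gray hair", "slow movement"],
    }

    def imagine(concept):
        # Downward propagation: the top node lights up its lower layers.
        return concept_features.get(concept, [])

    print(imagine("grandmother"))  # lower layers active with no sensory input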

My way of understanding the overall significance of this framework is that the breadth of the sensory layers provides the resolution of what can be perceived at the lower levels, as well as the overall number of concepts that can be perceived at the higher levels.  The depth of the hierarchies probably determines how much abstracted meaning a system can extract from the sensory information, in other words, how much understanding it can achieve.

F&M investigate how deep these hierarchies need to be for what we’re calling sensory consciousness to exist.  They tentatively conclude that the minimum number of layers is five for vision, but may be only three for the other senses.  They end up splitting the difference and using four as the average minimum.  But they fully admit that this area is far from completely understood, and that the actual minimums may be much higher.  My own intuition is that there’s no fact-of-the-matter distinction here, just increasing effectiveness as layers are added.

Obviously brains with more neural substrate will have an advantage, although we always have to remember that absolute size isn’t the crucial factor.  (A substantial portion of an elephant’s or whale’s sensory breadth is taken up with processing the interoceptive information coming from its vast body.)  But all else being equal, broader and deeper neural hierarchies will enable broader and deeper sensory consciousness.  Human consciousness and intelligence, with their symbolic and abstract thought, likely require very deep, integrated, and nested hierarchies of hierarchies.

One quick note about the conceptual hierarchies.  The description above may make it sound like their formation is entirely contingent on incoming sensory information.  But it’s important to remember that minds don’t start as blank slates.  They start with genetic predispositions to recognize certain concepts.  For example, healthy human babies have a built-in capacity to recognize human faces.  Many hierarchies come pre-wired, or primed for action, although they may be refined by later experience.

Anyone in the know will recognize that this description is woefully oversimplified.  It’s a heavily summarized version inspired by what F&M describe in their book, with insights that I picked up from Antonio Damasio’s book, ‘Self Comes to Mind‘.  There is an enormous amount of important detail I’m leaving out.  The only goal here was to give you a taste of it.  If you want more, I highly recommend their respective books.

In the next and final post of this series, we’ll swing back to the hard problem of consciousness, and discuss to what extent we may have made any progress on it.


Types of sensory consciousness

This is the third in a series of posts inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience‘.  The first two were:

  • What counts as consciousness?
  • Predators and the rise of sensory consciousness

With this post, we’re going to get into the different types of sensory consciousness that F&M (Feinberg and Mallatt) identify in their book.  They identify three types:

  • exteroceptive
  • interoceptive
  • affective

The consciousness I discussed in the last two posts was exteroceptive consciousness, that is, consciousness about the outside world and its relation to the organism.  This type of consciousness requires distance senses such as vision, hearing, and smell, as well as more immediate ones like touch and taste, although these last two senses sit on the border between exteroceptive and interoceptive consciousness.

Exteroceptive consciousness involves building image maps of the environment and objects in it.  “Image” here doesn’t necessarily mean a visual image, although that type of image usually looms large in this type of awareness.  But it can also mean audio images, smell images, or touch images.  These images are formed from the signals coming in from their related senses.

From the initial images, image maps (or models) are constructed, eventually from information integrated across the senses.  These image maps are isomorphic, to varying degrees of success, with patterns in the environment.  They form the inner world that conscious organisms live in.

Interoceptive consciousness is awareness of internal body states.  Similar to exteroceptive consciousness, it involves taking data from sensory input and constructing isomorphic image maps.  But in this case, it’s senses about the internal state of the body, covering things like the feeling of muscles, the stretch of tissues, the state of the stomach, etc.  The overall image constructed here is one of the body.

This resulting image map seems very similar to Antonio Damasio’s proto-self body image.  In fact, F&M cite Damasio approvingly, although noting that whereas Damasio’s approach was to study the human brain, theirs is broader, looking at the various architectures in the animal kingdom.

The last type of consciousness is affective or limbic consciousness.  The information in the two types of awareness above has no valence: no value attached to it, no assessment of whether the sensory information is good or bad for the organism.  Affective consciousness is where this valence is introduced.

There are, broadly, two types of affects: positive and negative.  For positive, think pleasurable or attractive.  For negative, think aversion, fear, pain, etc.  All emotions and inclinations are essentially variations of these two affects.  F&M are cautious in using the word “emotion” here since it comes with so much baggage, with people meaning different things by that word.  But at their most basic level, emotions are affects.

My way of understanding this is that an emotion is a mental reflex, one that earlier in evolution led to immediate action, but that now surfaces in conscious organisms as an affective feeling: a mental state of preparation for a particular action, which may be overridden by competing affects arising at the same time.  It is affective consciousness that provides the primal interpretation of the sensory information in the exteroceptive and interoceptive models as either good or bad.

When we see something, such as a rainbow or an onrushing tiger, our knowledge of its appearance and perceived location comes from exteroceptive consciousness, but our appreciation of the beauty of the rainbow or the danger of the tiger comes from affective consciousness.  Of course, we don’t subjectively experience these as separate systems, because the information processing happens below the level of our awareness, creating a subjectively unified experience.

When did these types of consciousness evolve?  Exteroceptive is the easy one here.  Its evolution was heralded by the development of eyes during the Cambrian explosion.  As I mentioned in the first post of this series, high resolution eyes (as opposed to more primitive light sensors) imply mental imagery, which implies isomorphic modeling: models of the environment, a worldview.

There may be some debate about which distance sense evolved first, but eyes, unlike the other sense receptors, are part of the central nervous system.  In their earliest incarnations, they appear to have been right next to the brain, if not actually part of it.  For this and other reasons, F&M see vision as an early development, with the neural hierarchies for vision possibly being duplicated later for the development of the other distance senses.

There are many theories about when affective consciousness developed.  Some put it at the very beginning of sensory consciousness in the Cambrian.  Others place it at various later stages of development, such as the rise of amphibians, or of mammals, or even of anatomically modern humans, giving a vast range of possible dates, from as early as 550 million years ago to as late as 200,000 years ago (when more or less modern humans emerged).

Since affects are a crucial aspect of sentience, figuring out when they developed and which species have them is important.  F&M’s approach is to identify behaviors that require affective consciousness, such as learned responses to punishments and rewards, behavioral trade-offs, frustration with insurmountable problems, and self-delivery of analgesics when in pain.  Having identified these behaviors, they then look at experiments testing various species to see if they exhibit them.

Of course, no one can test an extinct Cambrian species, and using modern animals as stand-ins for earlier ones in evolutionary history is risky, but the morphological similarities of the simplest modern vertebrates and pre-vertebrates to the fossil remains reduce the uncertainty.

In experiments, all vertebrates (fish, amphibians, reptiles, birds, and mammals) demonstrated behaviors that require affective consciousness.  But animals resembling pre-Cambrian forms (C. elegans, flatworms, and invertebrate chordates) generally did not.  The evidence then, according to F&M, points to affective consciousness evolving more or less concurrently with exteroceptive consciousness, in other words, during the early Cambrian, 550-520 million years ago.

Interoceptive consciousness is a more difficult matter.  F&M found fragmentary evidence that the neurological pathways for it exist in all vertebrates, but were forced to admit that the evidence is sparse and uncertain.  In particular, the relevant pathways are poorly studied outside of mammalian species.

And there are surprising gaps, such as with pain.  Pain requires input from the affective system to actually be pain, but having an affect of pain requires certain types of interoceptive signals.  Pain seems like it would be one of the most fundamental aspects of feeling, but there appears to be good evidence that fish do not feel certain types of it.  They do seem to feel sharp pain, the pain of an injury immediately when it happens, but don’t appear to feel the long burning variety, the kind that causes suffering.

F&M speculate that this might have something to do with their environment and feeding needs.  Fish often can’t heed continuous agonizing pain, but have to keep moving to survive.  However, for land animals, signalling that tells them they are damaged and need to find a place to hide and heal can be a survival advantage.

So the weight of the evidence is that exteroceptive and affective consciousness are ancient, along with aspects of the interoceptive variety, and therefore widely prevalent in vertebrates.  (F&M also discuss the possibility of consciousness in cephalopods and arthropods, including many insects, although they express reservations on whether insect brains have the necessary complexity.)

The next post will discuss how the image map models may be constructed.


Predators and the rise of sensory consciousness

This is the second post in a series inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience‘.

The first post in the series was: What counts as consciousness?

Life appears to have gotten started fairly early in Earth’s history.  The oldest known fossils are now dated to be about 3.7 billion years old.  Given that the Earth itself is only around 4.5 billion years old, and that considerable evolution would have had to happen for the earliest fossils to exist when they did, life probably got started as soon as conditions were conducive for it.  (Note: “as soon as” on geological time scales.)

For billions of years life evolved and gradually became more complicated.  Around 1.5 billion years ago, the first multicellular life emerged.  Just as cells had communication mechanisms, motile (animal) multicellular life began to develop its own communication mechanism: nervous systems.  The first nervous systems were basically nerve nets, more or less evenly spread throughout the animal.  They enabled the organism to respond to things like pressure somewhere on its outer layers, or to noxious or attractive chemicals.  Nerve nets basically just had reflex action, direct responses to environmental stimuli.

By the end of the Ediacaran geological period, around 550 million years ago, bilateral animals (with two mirror-image halves) had appeared, with a nerve cord running down their center.  Animals of this period either didn’t have self locomotion or had only a very primitive form of it.  They generally all fed on a thick green mat of microbes that covered the ocean floor.

Animals with the central nerve cord (chordates) did have centralized reflexes.  Some had light sensors on their head and other limited senses.  None of these senses had enough resolution to do more than give small hints on what was happening in the environment.  Which was fine, because the organisms at this point didn’t have enough processing power in their nervous system to use more information anyway.

But in a few tens of millions of years, during the Cambrian period, not long on geological scales, the animal kingdom would suddenly explode into a much wider range of body types.  The onslaught of development would be so sudden and rapid that geologists and paleontologists call the period the Cambrian explosion.  Most significantly for this post, primary sensory consciousness would develop.

Why did evolution suddenly speed up during this time?  There are various theories, but most evolutionary biologists seem to think the most plausible reason is an arms race.  The term “arms race” is a military one, referring to two competing powers escalating their armaments in response to each other, resulting in both having an inventory of weapons far in excess of what either would need aside from countering the other.  One historically recent example was the Cold War era missile arms race between the USA and the Soviet Union.

In the context of evolution, “arms race” basically refers to selection pressures that come from competition rather than from the rest of the environment.  What was the arms race in the Cambrian?  Predation, animals eating other animals, suddenly became prevalent.  This might have happened because of overpopulation and depletion of the microbe mat mentioned above, or maybe due to some other unknown change.

A Cambrian arthropod (Image credit: Nobu Tamura via Wikipedia)

Whatever caused it, it put much more selection pressure on prey species, and eventually on other predators as prey became more elusive and scarce.  Evolution seems to have responded in a variety of ways.  Some species developed hard shells, making themselves more difficult to consume.  Others burrowed deeper into the sea floor.  Still others developed hard shells but stayed mobile, gaining an advantage from the combination.  These were the arthropods, and they appear to have been the main predators of the period.

But one group went the way of speed and agility, the ability to flee or evade.  This involved developing an inner skeleton, unlike the outer exoskeleton of the arthropods.  They developed a central support structure, a backbone, and became the first vertebrates.  But their movement strategy required distance senses, such as eyesight, smell, and hearing, to collect information on the environment, and robust central coordination to turn that information into movement: movement to evade predators, to fight, or to escape.

Of course the arthropods, as the predators, soon needed to respond in kind, although their brains never rose to the level of sophistication that vertebrate brains did.  Part of this might be because their exoskeletons required periodic moulting, constraining their growth.  Nevertheless, both groups went on to develop complex brains and, arguably, consciousness.

Both developed high resolution eyes (relative to the light sensors that had existed in the Ediacaran), which allowed them to build image maps of the environment.  As I mentioned in the last post, these weren’t human level maps, but they were still effective models that allowed the animals to make predictions.  They had built inner worlds which now guided their behavior.  Although their inner experience was at a far lower resolution than ours, it increased their chances of survival.

There would be other major milestones in cognitive development.  Jawed vertebrates would soon have larger brains.  The first land animals would have larger ones yet.  The mammalian cerebrum would eventually come along and dramatically increase the information in the internal models.  Brains would continue to grow in size and sophistication, eventually leading to the rise of social species in the last 100 million years.

The biggest developments, from our point of view, would be the rise of primates, and eventually Homo sapiens.  Interestingly, it seems that human intelligence was the result of another arms race, this time among humans.  Human intelligence was likely heavily selected over the last million years for social intelligence, rather than for other environmental factors, with a higher level of social intelligence, a better theory of mind for one’s peers and oneself, resulting in increased reproductive success.  Eventually this would result in modern humans, with human levels of consciousness.

But sensory consciousness started in the Cambrian with the rise of image maps, internal neural patterns isomorphic to patterns in the environment (to varying levels of success), or what I prefer to call models, inner worlds that dramatically increased the causal information, the scope of the environment that the organism could respond to.  And we owe that start to the rise of predators.

The next post will get into types of sensory or primary experience.


What counts as consciousness?

One of the things I get reminded of every few years is that difficult determinations often look clearer when you consider them in a wider scope.  Years ago, when I was trying to figure out whether conservative or progressive political policies were better, I discovered that widening my investigation to history helped immensely, and widening it further to the history of other developed countries helped even more.  Many of the typical conservative hangups looked parochial in that broader context.

The same thing happened when I was trying to decide how worried to be about artificial intelligence.  Many of the people who are worried about it are familiar with the technology, and their concerns carry weight with the general population.  But learning about neuroscience and evolutionary psychology put those concerns in a much broader context and, at least for me, rendered most of them moot.

Consciousness is one of those topics that people have been writing and debating about for centuries.  But I’ve found that many of the philosophical ideas often kicked around wither in the light of neurological case studies and overall neuroscience.  We’ve gained a lot of insight into consciousness by looking carefully at the human brain, particularly the cases where it gets damaged or malfunctions in some way.  But an even broader approach is to look at consciousness in animals, particularly in terms of evolution.

This is the approach used by Todd Feinberg and Jon Mallatt in their new book: ‘The Ancient Origins of Consciousness: How the Brain Created Experience‘.  (This is the first of what I hope will be a series of posts inspired by their fascinating, albeit technical, book.)  The good thing about studying animal consciousness is that it gives us a much broader array of systems to study.  And animals can be studied in many more ways, ways that are often ethically unacceptable for humans.  (Some of those ways I personally find unacceptable, but the knowledge gleaned from them is real.)

Of course, the biggest issue with studying animal consciousness is that we lose the primary advantage of focusing on human consciousness.  We know that we ourselves are conscious, and it is uncontroversial to assume that all mentally complete humans are also conscious.

But the farther we move away from healthy adult Homo sapiens, the more tenuous this assumption becomes.  We have to be careful not to project our own experience onto animals or other systems.  It’s reasonable to assume that animal experience is not human experience, particularly as we move down the intelligence chain.  This raises an interesting question: how much of human experience can we dispense with and still coherently use the label “consciousness”?

In their book, Feinberg and Mallatt make clear that they’re not attempting to explain human level consciousness, but that they are aiming for the hard problem of consciousness, the one that asks, why is there “something it is like” to be a conscious being?  They equate this with what they consider to be primary or sensory consciousness.

But it’s not clear to me that what we mean by “something it is like” is so easily divorced from higher level consciousness capabilities.  It might be that, without the ability to reflect on our experience, it is not necessarily “like anything” to be one of these creatures.  As Thomas Nagel pointed out years ago, we can never know what it’s like to be a bat.  But it’s possible that neither can an actual bat, if it doesn’t have at least some level of introspective ability.  For that and other reasons, we have to be cautious in assuming that animals have an inner experience.

Still, anyone who has ever cared for a pet knows that the intuition of animal consciousness is very powerful.  Whatever mental life animals possess, we sense in them fellow beings in a way that we don’t sense with plants or computer systems.  This isn’t true of all animals of course.  I don’t really sense any consciousness in a worm, a starfish, or an oyster, which makes sense since none of these animals have brains.

But pretty much any animal with eyes tends to trigger my intuition that there is some inner life there, something that is seeing and has some kind of intentionality, a worldview of some kind, even if it’s a limited one.  This is a common intuition, which is why it’s not unusual for movies to show an opening eye to indicate that some thinking feeling thing is present.  According to Feinberg and Mallatt, this turns out to be a reasonably good indicator.

High resolution eyes with lenses, as opposed to simple light sensors, are costly constructs in terms of complexity and energy, and evolution rarely wastes such resources.  But without mental images, eyes would in fact be a waste.  And mental imagery, another costly feature, would itself be useless without modelling of external objects and the environment, along with the animal’s body and its interactions with that environment.  And that modelling would itself be useless without being a guide to possible actions the animal might take.

None of this is to say that the modelling done by the brain of a lamprey, one of the simplest vertebrates that Feinberg and Mallatt conclude may be conscious, is anything like that done by a human brain.  Without a doubt, the lamprey’s models are far less rich, but then the lamprey has no real need of human like models.  All that matters is whether its models are effective in allowing it to navigate its environment, and they generally appear to be so.

Lamprey (Image credit: Tiit Hunt via Wikipedia)

But do these capabilities count as consciousness?  A lamprey doesn’t have a cerebrum, where human and mammal consciousness appears to reside, and sub-cortical processes in humans are below the level of consciousness.  But a mammal with its cerebrum removed or destroyed is a severely disabled creature, without the ability to navigate its world and survive on its own.  A lamprey does have that ability, indicating that the necessary modelling is still taking place somewhere in its more primitive brain.

This makes sense from an evolutionary point of view.  Primary consciousness must have some adaptive value.  It seems reasonable (although admittedly speculative) to assume that it is consciousness which allows animals to have a wide repertoire of available actions to navigate the world, find food and mates, and avoid predators.  These capabilities were likely important catalysts leading to the evolution of complex brains, and consciousness, during the Cambrian explosion.

We’re not talking here about human level consciousness.  Feinberg and Mallatt use the analogy of an airliner and an ox cart: the experience of riding an ox cart is not the experience of riding on an airliner, but they’re both transportation.  Likewise, a lamprey’s experience is not like a human’s experience, but they both have experience.

But again, does this really count as consciousness?  What I’ve alluded to here is called exteroceptive consciousness, one type of primary consciousness described in Feinberg and Mallatt’s book.  The other two are interoceptive consciousness and affective consciousness, all of which I’ll describe in more detail in another post.  But after some consideration, I’m inclined to accept it as part of core consciousness, although I would completely understand if someone insisted on the label “proto-consciousness”.  Ultimately, the exact labeling here is a matter of convention on how to discuss a certain point on the evolutionary spectrum.

But this raises another interesting question.  Is the Google self driving car conscious?  It doesn’t have eyes exactly, but it does use LIDAR to model its environment, and its own interactions with that environment.  Of course, the Google car’s models are currently far less effective than a lamprey’s, at least relative to their respective environments, and the motivations of a self driving car are very different from those of a living animal.  But as Google and other technology companies improve these systems, might we eventually reach a point where it makes sense to consider them to have a sort of primal consciousness?


Why the US two party system is so entrenched

The other day, I came across this Big Think explanation by historian Sean Wilentz on why the US always seems to gravitate to a two party system.

Unfortunately, while I think Wilentz touches on the main points, his explanation doesn’t seem as clear as it could be.

To start off, he refers to the US electoral practice of first past the post voting, or plurality voting, a fancy name for elections with a single winner: the candidate with the most votes.  It’s in contrast to systems that award proportional representation to every party that manages to get at least some defined minimal proportion of the votes.

Most political systems that use some form of plurality voting tend to have two major parties.  Systems that use proportional representation tend to have several parties.  The tendency of plurality voting systems to gravitate toward two parties is known in political science as Duverger’s law.
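To make the difference concrete, here’s a toy comparison in Python with made-up vote totals (real proportional systems add thresholds and rounding rules that I’m ignoring here):

    # Made-up vote totals for illustration.
    votes = {"Party A": 45_000, "Party B": 40_000, "Party C": 15_000}

    # Plurality / first past the post: one seat, most votes wins outright.
    print(max(votes, key=votes.get))  # Party A takes the seat; C gets nothing

    # Proportional representation: 100 seats split by vote share.
    total = sum(votes.values())
    print({p: round(100 * v / total) for p, v in votes.items()})
    # {'Party A': 45, 'Party B': 40, 'Party C': 15}

Under plurality, Party C’s 15% yields nothing at all, which is exactly the pressure that pushes its voters toward one of the two big parties.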

Although a more accurate name might have been “Duverger’s trend”, because while most political systems that do plurality voting have two party systems, it’s still possible for an occasional third party (or fourth) to get significant representation in them.  The UK has a plurality system, yet a few years ago it had a viable third party, the Liberal Democrats, who had enough representation to control the balance of power in Parliament.

But in the US system, third parties virtually never get much of a foothold.  Occasionally a charismatic presidential candidate manages to get enough votes to sway the outcome of an election, but one has never actually won.

Perhaps the most successful third party presidential candidate in American history was Theodore Roosevelt in 1912.  Roosevelt was a popular ex-president who had only been out of office for four years.  (This was before the lifetime two term limit was in place.)  If there was ever a time when a third party candidate should have claimed victory, it would have been that year.  Yet, despite doing better than the Republican candidate, he only managed to split the Republican vote and throw the election to Democrat Woodrow Wilson.

With the exception of a couple of brief periods, the US has been a two party system for our entire history.  Initially the two parties were the Federalists and the Democratic-Republicans.  But this first two-party system had ended by 1820, after the collapse of the Federalists under allegations of treason during the War of 1812.

But by the 1830s, a new two party system had emerged: the Democrats and the Whigs.  The Whigs would later collapse in the 1850s, ripped apart by the slavery issue.  There were multiple parties in the 1850s and 1860s, mainly because of the convulsions the country was going through in the lead up to, and carrying out of, the American Civil War.

But within a few years after the Civil War, the two party system was back, now with Democrats and Republicans.  Those parties have remained ever since, although their stances and constituencies have varied tremendously over the decades.  In the 19th century, the Republicans tended to be the progressives and Democrats the conservatives, although the detailed issues were very different.

So, why does the US system so consistently gravitate back to two parties?  Part of it is Duverger’s law, but the utter absence of viable third parties in the US system is striking.  I think Wilentz had the right idea that it is embedded in the US constitution, although not just in the plurality voting aspects.

As all Americans learn in school, the US federal government has three separate branches: the executive (President), the legislative (Congress), and the judiciary.  The Constitution was designed to separate powers between the branches in such a way as to minimize the possibility of a tyranny developing.  This arrangement seems to have worked pretty well, so well in fact that it has generally been copied by the individual states.

However, as Richard Neustadt pointed out in his classic ‘Presidential Power and the Modern Presidents‘, executives and legislators in the US don’t so much have separate powers as shared powers.

What’s the difference between “separate” and “shared”?  In my home state of Louisiana, we have a number of statewide elected officials (treasurer, secretary of state, etc.) who operate more or less independently of the state governor.  They’re able to (mostly) stay out of each other’s way.  Their powers are separate.  But that isn’t true between Presidents and Congress, or most governors and their state legislatures.  To accomplish substantive things, they must work together.  In other words, their powers are mostly shared.

Of course, as anyone paying attention can attest, working together often doesn’t happen.  But the highest probability of it happening is when allies control the different branches.  Without allies in Congress, a President can’t do much more than fairly narrow executive actions, and without an ally in the Presidency, Congress’s ability to pass laws is severely constrained, and both branches can have their initiatives killed by an unfriendly Supreme Court.

For this to work, the alliance needs to be a broad coalition, otherwise it won’t be strong enough or enduring enough.  Our bifurcated system of government requires coordination from these alliances to function.  But any such successful coalition is going to make decisions that a lot of people don’t like.  The best chance the various opposing constituencies have of fighting the governing coalition is to form their own opposition coalition.

This is pretty clear if you look at the history of how the Democrats and Whigs developed.  President Andrew Jackson was the dominating political presence of his day.  He got things done with his allies in Congress, who eventually became the Democrats.  But a lot of people were passionately opposed to Jackson’s policies, and they eventually coalesced into the Whigs.

Our system of government rewards the largest coalitions, and it is to the advantage of each separate interest group to be in the largest coalition, or if that isn’t possible, to be in the second largest.  In other words, to be part of the two party system.

In our system, the coalitions are formed outside of government and change fairly slowly.  While this can be very stable, it can also lead to entrenched divided government, as it is right now.  It is an arrangement that, while unintentional, is a direct side effect of the way our government and constitution are structured.

Wilentz is right that the only real way to change this situation is to amend the constitution, perhaps radically, introducing proportional representation in Congress or collapsing the executive and legislative branches together.  This would take a two thirds vote of Congress and ratification by the legislatures of three quarters of the states.  In other words, don’t expect movement on this anytime soon.

But the two party system has collapsed twice in American history.  How do we know we’re not in that situation this year?  That one of the third parties isn’t perhaps ascendant?  When asking that question, consider that no party has ever won the Presidency without first having significant representation in Congress and in the state legislatures.  Ask yourself how much representation the third party you’re considering has at those levels.  If the answer is minuscule or zilch, then this probably isn’t the year that party will come into power.

One popular reason to vote for a third party candidate is to make a protest vote.  Maybe the major party closest to your views isn’t addressing one or more issues that you care deeply about, and you want to send it a message.  Protest voting can get the attention of the major parties and convince them to incorporate those views into their platforms, but usually only after they have lost an election.  In other words, protest voters should be prepared to watch the candidate on the other end of the political spectrum go into power.


Libertarian free will is incoherent, and that’s good for responsibility

For a while, I’d considered myself done debating free will, having expressed everything about it I had to say.  However, with this Crash Course video, and in light of the discussion on physicality we had earlier this summer, I realized I do have some additional thoughts on it.

Just a quick reminder: I’m a compatibilist.  I’m convinced that the mind is a system that fully exists in this universe and operates according to the laws of physics.  However, I think responsibility remains a coherent and pragmatically useful social concept.

Even if the laws of physics are fully deterministic, the knowledge that we may be held responsible is one of the many causal influences on our choices.  Holding people accountable for their decisions is productive for society.  In that sense, free will is the ability of a competent person to act on their own desires, even if those desires ultimately have external causes.

For me, free will is something that exists at a sociological, psychological, and legal level.  Like democracy, the color white, or the rules of baseball, you’ll look in vain for it in physics.  At the physics layer, nothing exists except space, elementary particles, and their interactions, and even those may be patterns of even smaller phenomena.  Insisting that anything we can’t find at this layer doesn’t exist strikes me as unproductive; being consistent about it would require dismissing the things I listed above, along with most of everyday reality.

Anyway, this post isn’t about compatibilism, but about old-fashioned libertarian free will, that is, the type of free will that many people do think exists, the one that says that even if the laws of physics are mostly or fully deterministic, there is something about the human mind that makes its actions not fully determined by those physics.  It’s an assertion that each human mind is essentially its own uncaused cause.  But is this a coherent concept?

It seems to me that there are two broad approaches to libertarianism.  One is substance dualism, the idea that there are two types of substances in reality: the physical, and the mental.  With substance dualism, free will is possible because the actions of the mind are affected by its non-physical mental components.  Therefore our decisions can’t be fully accounted for by physical causes.  We must bring in mental causation to complete that accounting.

Another approach is to posit (per the Penrose / Hameroff crowd) that quantum indeterminacy is a significant factor in mental processing.  In many ways, this seems like a modern version of the Epicurean swerve proposition, that there is something inherently random about mental processing.  This makes it impossible for us to predict decisions with physics, although in this case it should be possible, in principle, to account for them.  (Whether quantum randomness is real depends on the interpretation of quantum mechanics you prefer.)

The problem I see with both of these approaches, is that even if the mind is not part of the normal physical causal framework, it must operate according to some kind of principles.  These principles might be forever beyond our understanding, but minds have to operate in some manner.  That means, in the case of substance dualism, we haven’t so much avoided or escaped the causal framework as expanded it.  Everything is still determined, it’s just that the causal framework now includes mental substance.

You might argue that perhaps mental dynamics aren’t deterministic, that they have an inherent unpredictability, which would make it similar to the case of quantum consciousness.  In both of these scenarios, it means we’ve added a randomizing element to the causal framework.

But in all cases, the question we have to ask is, what is added to make an action praiseworthy or blameworthy if it wasn’t before?  If physics fully determined our choices before and that made our choices free of responsibility, how does adding mental causation add responsibility back?  Or how does adding randomness do it?

It seems like there is an appeal to ignorance here, to the unknowable.  Somehow, if we can’t predict a person’s actions, then they can be held accountable for them.  Of course, for all practical purposes, we can’t predict a person’s actions even if they are fully determined by physics, which seems to nullify any advantage from the unknowable aspects of the other scenarios.  Chaotic dynamics might make predicting a physical mind’s actions just as unreachable as quantum mechanics or ghostly dualism would.

Chaos theory is all about the fact that no measurement is infinitely accurate.  There is always a margin of error.  In a complex dynamic system, the margins of error quickly snowball, making the system unpredictable beyond a certain horizon, even in principle.  This is why weather may never be 100% predictable: too many factors that can’t be measured with absolute precision.  And unless synapse strength turns out to exist in discrete steps (a possibility), brains may be an excellent candidate for a complex dynamic system.
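You can see the snowballing in a few lines of Python using the logistic map, a standard toy chaotic system.  The millionth-of-a-unit “measurement error” below is an arbitrary illustrative value:

    # Sensitive dependence on initial conditions with the logistic map.
    def logistic(x, r=4.0):
        return r * x * (1 - x)

    a, b = 0.400000, 0.400001  # two "measurements" differing by one millionth
    for _ in range(25):
        a, b = logistic(a), logistic(b)

    print(abs(a - b))  # after 25 steps, the tiny error has grown to order 1

In this regime the error roughly doubles at every step, so no finite measurement precision buys more than a limited prediction horizon.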

In other words, introducing non-physical phenomena or new physics doesn’t evade the central issue: does it make sense to hold people accountable for their decisions or not?  It seems like the issue basically remains the same.

If the mind is strictly physical, people do talk about someday altering a convicted criminal’s brain so that they wouldn’t have immoral impulses, or at least would have the will to resist those impulses.  This is usually presented as a more humane approach than traditional punishment, and it might well turn out to be so.  The problem is that we’re still a long way from being able to do that, and even when it becomes possible, something tells me that people will regard having their mind, their core self, forcibly altered as just as nightmarish as many other punishments.

Personally, my own feeling on this is that the mind’s operations are substantially, if not fully, deterministic.  It’s possible that quantum indeterminacy has some role in the brain’s processing, but if so, it seems like it would be an extremely nuanced one.  The brain evolved for animals to make movement decisions, presumably to maximize access to food and reproduction while minimizing exposure to predators.  A rampantly indeterminate brain doesn’t seem like it would be very adaptive.  (One reader did point out to me how a slight indeterminism might be adaptive, although it seemed to me that the unpredictable causal factors in the environment would accomplish much the same thing.)

Myself, I certainly hope that the mind is mostly deterministic.  I know the kind of decisions I want to make, and the idea of some random element affecting those decisions is not one that I’d personally find comforting.  I want learning, practice, and deliberation, and the other things I’ve done to become the person I am, to causally determine my decisions.  It seems like randomness would actually undermine responsibility, rather than justify it.

Unless of course, I’m missing something?

By the way, Crash Course followed up the above video with one on compatibilism.  Here it is:
