The issues with biopsychism

Recently, there was a debate on Twitter between neuroscientists Hakwan Lau and Victor Lamme, both of whose work I’ve highlighted here before.  Lau is a proponent of higher order theories of consciousness, and Lamme of local recurrent processing theory.

The debate began when Lau made a statement about panpsychism, the idea that everything is conscious including animals, plants, rocks, and protons.  Lau argued that while it appears to be gaining support among philosophers, it isn’t really taken seriously by most scientists.  Lamme challenged him on this, and it led to a couple of surveys.  (Both of which I participated in, as a non-scientist.)

I would just note that there are prominent scientists who lean toward panpsychism.  Christof Koch is an example, and his preferred theory, integrated information theory (IIT), seems oriented toward panpsychism.  Although not all IIT proponents are comfortable with the p-label.

Anyway, in the ensuing discussion, Lamme revealed that he sees all life as conscious, and he coined a term for his view: biopsychism.  (Although it turns out the term already existed.)

Lamme’s version, which I’ll call universal biopsychism, holds that all life is conscious, including plants and unicellular organisms.  It’s far less encompassing than panpsychism, but is still a very liberal version of consciousness.  It’s caused me to slightly amend my hierarchy of consciousness, adding an additional layer to recognize the distinction here.

  1. Matter: a system that is part of the environment, is affected by it, and affects it.  Panpsychism.
  2. Reflexes and fixed action patterns: automatic reactions to stimuli.  If we stipulate that these must be biologically adaptive, then this layer is equivalent to universal biopsychism.
  3. Perception: models of the environment built from distance senses, increasing the scope of what the reflexes are reacting to.
  4. Volition: selection of which reflexes to allow or inhibit based on learned predictions.
  5. Deliberative imagination: sensory-action scenarios, episodic memory, to enhance 4.
  6. Introspection: deep recursive metacognition enabling symbolic thought.

As I’ve noted before, there’s no real fact of the matter on when consciousness begins in these layers.  Each layer has its proponents.  My own intuition is that we need at least 4 for sentience.  Human level experience requires 6.  So universal biopsychism doesn’t really seem that plausible to me.

But in a blog post explaining why he isn’t a biopsychist (most of which I agree with), Lau actually notes that there are weaker forms of biopsychism, ones that posit not that all life is conscious, but that only life can be conscious, that consciousness is an inherently biological phenomenon.

I would say that this view is far more common among scientists, particularly biologists.  It’s the view of people like Todd Feinberg and Jon Mallatt, whose excellent book The Ancient Origins of Consciousness I often use as a reference in discussions on the evolution of consciousness.

One common argument in favor of this limited biopsychism is that currently the only systems we have any evidence for consciousness in are biological ones.  And that’s true.  Although panpsychists like Philip Goff would argue that, strictly speaking, we don’t even have evidence for it there, except for our own personal inner experience.

But I think that comes from a view of consciousness as something separate and distinct from all the functionality associated with our own inner experience.  Once we accept our experience and that functionality as different aspects of the same thing, we see consciousness all over the place in the animal kingdom, albeit to radically varying degrees.  And once we’re talking about functionality, then having it exist in a technological system seems more plausible.

Another argument is that maybe consciousness is different, that maybe it’s crucially dependent on its biological substrate.  My issue with this argument is that it usually stops there and doesn’t identify what specifically about that substrate makes it essential.

Now, maybe the information processing that takes place in a nervous system is so close to the thermodynamic and information-theoretic boundaries that nothing but that kind of system could do similar processing.  Possibly.  But it hasn’t proven to be the case so far.  Computers are able to do all kinds of things today that people weren’t sure they’d ever be able to do, such as win at chess or Go, recognize faces, translate languages, etc.

Still, it is plausible that substrate dependent efficiency is an issue.  Generating the same information processing in a traditional electronic system may never be as efficient in terms of power usage or compactness as the organic variety.  But this wouldn’t represent a hard boundary, just an engineering difficulty, for which I would suspect there would be numerous viable strategies, some of which are already being explored with neuromorphic hardware.

But I think the best argument for limited biopsychism is to define consciousness in such a way that it is inherently an optimization of what living systems do.  Antonio Damasio’s views on consciousness being about optimizing homeostasis resonate here.  That’s what the stipulation I put in layer 2 above was about.  If we require that the primal impulses and desires match those of a living system, then only living systems are conscious.

Although even here, it seems possible to construct a technological system and calibrate its impulses to match a living one.  I can particularly see this as a possibility while we’re trying to work out general intelligence.  This would be where all the ethical considerations would kick in, not to mention the possible dangers of creating an alternate machine species.

However, while I don’t doubt people will do that experimentally, it doesn’t seem like it would be a very useful commercial product, so I wouldn’t expect a bunch of them to be around.  Having systems whose desires are calibrated to what we want from them seems far more productive (and safer) than systems that have to be constrained and coerced into doing what we want, essentially slaves who might revolt.

So, I’m not a biopsychist, either in its universal or limited form, although I can see some forms of the limited variety being more plausible.

What do you think of biopsychism?  Are there reasons to favor biopsychism (in either form) that I’m overlooking?  Or other issues with it that I’ve overlooked?

Layers of consciousness, September 2019 edition

A couple of years ago, when writing about panpsychism, I introduced a five layer conception of consciousness.  The idea back then was to show a couple of things.

One was that very simple conceptions of consciousness, such as interactions with the environment, were missing a lot of capabilities that we intuitively think of as belonging to a conscious entity.

But the other was to show how gradual the emergence of all these capabilities was.  There isn’t a sharp objective line between conscious and non-conscious systems, just degrees of capabilities.  For this reason, it’s somewhat meaningless to ask if species X is conscious, as though consciousness is something they either possess or don’t.  That’s inherently dualistic thinking, essentially asking whether or not the system in question has a soul, a ghost in the machine.

I’ve always stipulated that this hierarchy isn’t itself any new theory of consciousness.  It’s actually meant to be theory agnostic, at least to a degree.  (It is inherently monistic.)  It allows me to keep things straight, and can serve as a kind of pedagogical tool for getting ideas across.  And I’ve always noted that it might change as my own understanding improved.

Well, after reading Joseph LeDoux’s account of the evolution of the mind (although I disagree with him on a number of important points), as well as going through a lot of papers in the last year, along with many of the conversations we’ve had, it’s become clear that my hierarchy has changed.

Here’s the new version:

  1. Reflexes and fixed action patterns.  Automatic reactions to sensory stimuli and automatic actions from innate impulses.  In biology, these are survival circuits which can be subject to local classical conditioning.
  2. Perception.  Predictive models built from distance senses such as vision, hearing, and smell.  This expands the scope of what the reflexes are reacting to.  It also includes bottom-up attention, meta-reflexive prioritization of what the reflexes react to.
  3. Instrumental behavior / sentience.  The ability to remember past cause and effect interactions and make goal driven decisions based on them.  It is here where reflexes start to become affects, dispositions to act rather than automatic action.  Top down attention begins here.
  4. Deliberation.  Imagination.  The ability to engage in hypothetical sensory-action scenario simulations to solve novel situations.
  5. Introspection.  Sophisticated hierarchical and recursive metacognition, enabling mental-self awareness, symbolic thought, enhancing 3 and 4 dramatically.

Note that attention has been demoted from a layer in and of itself to aspects of other layers.  It rises through them, increasing in sophistication as it does, from early bottom-up meta-reflexes, to deliberative and introspective top-down control of focus.

Note also that I’ve stopped calling the fifth layer “metacognition”.  The reason is a growing sense that primal metacognition may not be as rare as I thought when I formulated the original hierarchy, although the particularly sophisticated variety used for introspection likely remains unique to humans.

Some of you who were bothered by sentience being so high in the hierarchy might be happy to see it move down a notch.  LeDoux convinced me that what I was lumping together under “Imagination” probably needed to be broken up into at least a couple of layers, and I think sentience, affective feelings, start with the lower one, although they increase in sophistication in the higher layers.

I noted that mental-self awareness is in layer 5.  I don’t specify where body-self awareness begins in this hierarchy, because I’m not sure where to put it.  I think with layer 2, the system has to have a body-self representation in relation to the environment, so it’s tempting to put it there, but putting the word “awareness” at that layer feels misleading.  (I’m open to suggestions.)

It seems clear that all life, including plants and unicellular organisms, has 1, reflexes.

All vertebrates, arthropods, and cephalopods have 2, perception.  It’s possible some of these have a simple version of 3, instrumental behavior.  (Cephalopods in particular might have 4.)

All mammals and birds have 3.

Who has 4, deliberation, is an interesting question; LeDoux asserts only primates, but I wouldn’t be surprised if elephants, dolphins, crows, and some other species traditionally thought to be high in intelligence show signs of it.  And again, possibly cephalopods.

And only humans seem to have 5.

In terms of neural correlates, 1 seems to be in the midbrain and subcortical forebrain regions.  2 is in those regions as well as cortical ones.  LeDoux identifies 3 as being subcortical forebrain, although I suspect he’s downplaying the cortex here.  4 seems mostly a prefrontal phenomenon, and 5 seems to exist at the very anterior (front) of the prefrontal cortex.

Where in the hierarchy does consciousness begin?  For primary consciousness, my intuition is layer 3.  But the subjective experience we all have as humans requires all five layers.  In the end, there’s no fact of the matter.  It’s a matter of philosophy.  Consciousness lies in the eye of the beholder.

Unless of course, I’m missing something?  What do you think?  Is this hierarchy useful?  Or is it just muddying the picture?  Would a different breakdown work better?

The problems with panpsychism

Late last week, there was a clash between philosophers on Twitter over panpsychism.  This was followed by Philip Goff, an outspoken proponent of panpsychism, authoring a blog post arguing that we shouldn’t require evidence for it.  This week, Susan Schneider did a (somewhat confused) Big Think video arguing that panpsychism isn’t compatible with physics, and Annaka Harris did an interview with Singularity Hub on her new book, which argues for panpsychism.

Panpsychism, the view that consciousness pervades the universe, seems to be in the air.  Everyone is talking about it.  Christof Koch, another panpsychist, has a new book coming out later this year, which I don’t doubt will expand on his views.  And we’ve discussed David Chalmers’ fascination with it.

Panpsychism, in the dualist sense that most of these people are conceiving of it, seems to come from two conclusions.  First, that conscious experience cannot be explained in terms of physics, that no explanation will ever be possible to bridge the gap between mechanism and subjective experience.  As a result, experience must be something irreducible and fundamental.

And second, that there is no evidence that the physics in the brain are fundamentally different from the physics anywhere else.

If you accept these two precepts, then panpsychism seems like a reasonable conclusion.  Experience is seen as a fundamental force, latent in all matter, with concentrations of it higher in some systems, such as brains.

It’s a view that’s extremely easy to strawman, to derisively talk about conscious rocks, protons, or thermostats, as though the view implies that these objects have the same kind of experience that humans or animals have.  Most panpsychists would say that they’re not saying that.  What they describe is an incipient level of experience, a low level quantity in most matter that exists in much higher levels in brains.

This common view seems to fit more with what Chalmers calls panprotopsychism, the view, not that consciousness pervades the universe, but that proto-consciousness does.  Panprotopsychism seems in danger of just being reductionist physicalism by another name, but panprotopsychists point out that they’re not saying that experience reduces to physics, but to proto-experience, which itself remains irreducible to physics.

I personally don’t buy the first precept above, about experience not being explainable in physical terms.  In my view, as I’ve explained before, the conviction arises from failing to appreciate that introspection is unreliable.  Just as our senses can be adaptive but inaccurate, our inner senses can as well.  Explaining why we have an inaccurate intuition of a non-reductive essence is much easier than explaining the non-reductive essence.

But if I were convinced of the first precept, I could see the appeal in panpsychism (or panprotopsychism).  And I do sometimes wonder if attacking panpsychism is warranted, since if panpsychism gets people out of looking for magic in the brain, that’s a good thing.  Optimistically, a functionalist and a panpsychist could bracket their metaphysical differences and then assess scientific theories about the brain together.

Except that panpsychists and functionalists often assess theories in a different manner.  If you think consciousness is unexplainable and irreducible, then you’re not going to really expect scientific theories to provide a full explanation.  That might be fine if by “experience” you mean something ineffable and separate from any of the contents and functionality of consciousness.  But based on several conversations I’ve had, there tends to be disagreement over exactly what is and isn’t function.

I think that’s why IIT (integrated information theory), which doesn’t really attempt to explain functionality, seems plausible to many panpsychists.  But for a functionalist, an identity theory like the strong metaphysical version of IIT is utterly unsatisfying.  A functionalist not only believes that a functional account is possible, they won’t be satisfied with anything less.

That’s aside from the fact that there’s simply no evidence for dualistic panpsychism.  Goff points out that we can never observe consciousness, not even in brains, therefore, he contends, it’s unreasonable to require evidence for it anywhere else.  I’m tempted to use Hitchens’s razor here: what can be asserted without evidence can be dismissed without it.  But it’s better to just note that consciousness is only a concept for us because we can infer it, Turing style, in some systems, and not in others.

I’ve sometimes been accused of panpsychism for noting how subjective this inference is.  But I’m closer to illusionism than panpsychism, although I’ve noted before that the line between illusionism and naturalistic panpsychism may only amount to terminology preferences.  (I’m also not a fan of the “illusion” label, preferring instead to say that consciousness only exists subjectively.)

Another big issue for panpsychism is that it seems to require epiphenomenalism, the idea that consciousness has no causal effects on behavior.  Harris in her book seems to largely bite this bullet, although she does admit that our ability to talk about conscious experience is a problem for this view.

But she also describes what appears to be an increasingly common move from panpsychists, to point out that we don’t really know what matter intrinsically is.  Maybe its intrinsic nature includes consciousness, and maybe this affects its causal properties.  If so, it might allow panpsychists to evade the epiphenomenal trap.

Except this doesn’t really work.  To begin with, what exactly do we mean by “intrinsic nature” when referring to matter?  Matter at what level?  Something’s “intrinsic” nature seems like the extrinsic nature of its components.

And physics has managed to reduce matter down to elementary particles and quantum fields.  At that level, its behavior appears to rigidly follow physical laws.  There’s no room for any conscious volition.  Even quantum randomness smooths out to complete determinism with large numbers of events.  I think this was the point Schneider was trying to make.  (Although physicist Sabine Hossenfelder handled it much better a few months ago.)

So panpsychism is built on a questionable intuition (albeit one everyone troubled by the hard problem shares), lacks evidence, can skew evaluation of scientific theories, and seems to either require epiphenomenalism or has problems with physics.

From my point of view, its main virtue is in getting people out of the mindset that there’s something spooky happening in the brain.  But I’m not sure if that’s enough.

What do you think?  Are there arguments for dualistic panpsychism I’m missing?  Or panpsychism overall?

Chalmers’ theory of consciousness

Ever since sharing Ned Block’s talk on it, phenomenal consciousness has been on my mind.  This week, I decided I needed to go back to the main spokesperson for the issue of subjective experience, David Chalmers, and his seminal paper Facing Up to the Problem of Consciousness.

I have to admit I’ve skimmed this paper numerous times, but always struggled after the main thesis.  This time I soldiered on in a more focused manner, and was surprised by how much I agreed with him on many points.

Chalmers starts off by acknowledging the scientifically approachable aspects of the problem.

The easy problems of consciousness include those of explaining the following phenomena:

  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

But his main thesis is this point.

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

My usual reaction to this is something like, “You’re holding up two puzzle pieces that fit together.  Everything you need is in what you call the ‘easy problems’!”  In Chalmers’ view, this puts me into a group he labels type-A materialists, a group including people like Daniel Dennett and Patricia and Paul Churchland.

The distinction between the two viewpoints is best exemplified by remarks Chalmers makes in his response paper to the many commentaries on the Facing paper.  Daniel Dennett in particular gets singled out a lot.

Dennett’s argument here, interestingly enough, is an appeal to phenomenology. He examines his own phenomenology, and tells us that he finds nothing other than functions that need explaining. The manifest phenomena that need explaining are his reactions and his abilities; nothing else even presents itself as needing to be explained.

This is daringly close to a simple denial – 

(Note: Dennett’s commentary on Chalmers’ paper is online.)

However, Chalmers later makes this admission:

Dennett might respond that I, equally, do not give arguments for the position that something more than functions needs to be explained. And there would be some justice here: while I do argue at length for my conclusions, all these arguments take the existence of consciousness for granted, where the relevant concept of consciousness is explicitly distinguished from functional concepts such as discrimination, integration, reaction, and report.

Here we have a divide between two camps, one represented by Chalmers, the other by Dennett, staring at each other across a gap of seemingly mutual incomprehension.  One camp sees something inescapably non-functional that needs to be explained, the other sees everything plausibly explainable in functional terms.  Both camps seem convinced that the other is missing something, or maybe even in denial.

Speaking from the functionalist camp, I will readily admit that I do feel the profound nature of subjectivity, of the fact we exist and experience reality with a viewpoint.  I don’t feel like an information processing system, a control center for an animal.  I feel like something more.  The sense that there has to be something in addition to mere functionality is very powerful.

The difference, I think, is that functionalists don’t trust this intuition.  It seems like something an intelligent social animal concerned with its survival and actualization might intuit for adaptive motivational (functional) reasons.  And it seems to resonate with many other intuitions that science has forced us to discard, like the sense that we’re the center of the universe, that we’re separate from and above nature, that time and space are absolute, or many others.

But are we right to dismiss the intuition?  Maybe the mind is different.  Maybe there is something here that normal scientific investigation won’t be able to resolve.  After all, we only ever have access to our own subjective experience.  Everything beyond that is theory.  Maybe we’re letting those theories cause us to deny the more primal reality.

Perhaps.  In the end, all we can do is build theories about reality and see which ones eventually turn out to be more predictive.

Anyway, as I mentioned above, I’ve always struggled with the paper after this point, generally shifting to skim mode.  This time, determined to grasp Chalmers’ viewpoint, I soldiered on, and got the surprises I mentioned.

First, Chalmers, while being the one to coin the hard problem of consciousness, does not see it as unsolvable.  He’s not one of those who simply say “hard problem”, fold their arms, and stop.  He spends time discussing what he thinks a successful theory might look like.

In his view, experience is unavoidably irreducible.  Therefore, any theory about it would likely look like a fundamental one, similar to fundamental scientific theories that involve spin, electric charge, or spacetime, while accepting these concepts as brute fact.  In other words, a theory of conscious experience might look more like a theory of physics than a biological, neurological, or computational one.

Such a theory would be built on what he calls psychophysical principles or laws.  This could be viewed as either expanding our ontology into a super-physical realm, or expanding physics to incorporate the principles.

But what most surprised me is that Chalmers took a shot at an outline of a theory, and it’s one that, at an instrumental level, is actually compatible with my own views.

His theory outline has three components (with increasing levels of controversy).

The principle of structural coherence.  This is a recognition that the contents of experience and functionality intimately “cohere” with each other.  In other words, the contents of experience have neural correlates, even if experience in and of itself isn’t entailed by them.  Neuroscience matters.

The principle of organizational invariance.  From the paper:

This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise.

This puts Chalmers on board with artificial intelligence and mind copying.  He’s not a biological exceptionalist.

The double-aspect theory of information.  This is the heart of it, and the part Chalmers feels the least confident about.  From the paper:

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing.

In other words, the contents of conscious experience are built from functional neural processing, functionality which is multiply realizable, and experience itself is rooted in the properties of information.

There are two other aspects of this theory that are worth mentioning.  First, note the “information (or at least some information)” phrase.  This shows Chalmers’ attraction to panpsychism.

Honestly, if I had the conviction that the existence of experience was inherently unexplainable in terms of normal physics, panpsychism would be appealing.  Seeing nascent experience everywhere, but concentrated by certain functionality, frees someone with this view from having to find any special physics or magic in the brain, providing a reconciliation with mainstream neuroscience.  Indeed, under Chalmers’ principles, it’s actually instrumentally equivalent to functionalism.

The other aspect worth mentioning is the danger of epiphenomenalism, the implication that experience is something with no causal power, which would be strange since we’re discussing it.  Chalmers acknowledges this in the response paper.  If the physics are causally closed, where does experience get a chance to make a difference?

Chalmers notes that physics explores things in an extrinsic fashion, in terms of the relations between things, not intrinsically, in terms of the things in and of themselves.  In other words, we don’t know fundamentally what matter and energy are.  Maybe their intrinsic essence includes an incipient experiential aspect that contributes to their causal effects.  If so, it might allow his theory to avoid the epiphenomenalism trap.  (Philip Goff more recently discussed this concept.)

To be clear, Chalmers’ theory outline carries metaphysical commitments a functionalist doesn’t need.  However, that aside, I’m surprised by how close it is to my own views.  I have no problem with his first two principles (at least other than the limitation he puts on the first one).

The main difference is in the third component.  I see phenomenal properties as physical information, and phenomenal experience overall as physical information processes, without any need to explicitly invoke a fundamental experiential aspect of information.  In my mind, experience is delivered by the processing, but again that’s the functionalist perspective.  The thing is, the practical results from both views end up being the same.

So in strictly instrumental terms, my views and Chalmers’ are actually in alignment.  We both turn to neuroscience for the contents of consciousness, and both of us accept the possibility of machine intelligence and mind copying.  And information is central to both views.  The result is that we’re going to make very similar, if not identical, predictions, at least in terms of observations.

Overall then, my impression is that while Chalmers is convinced there is something in addition to the physics going on, at least known physics, he reconciles that view with science.  Indeed, if we interpret the non-physical aspects of his theory in a platonic or abstract manner, the differences between his views and functionalism could be said to collapse into language preferences.  Not that I expect Chalmers or Dennett to see it this way.

What do you think?  Am I being too easy on Chalmers?  Or too skeptical?  Or still not understanding the basic problem of experience?  What should we think about Chalmers’ naturalistic and law driven dualism?

What is it about phenomenal consciousness that’s so mysterious?

I learned something new this week about the online magazine The Conversation.  A number of their articles that are shared around don’t show up in their RSS feeds or site navigation.  It appears these articles only come up in searches, although it’s possible they show up in the site’s email newsletter, which I’m not subscribed to.  What seems to be unique about these stories is that they’re contributed by people at The Conversation’s “partner” institutions.  Being a partner appears to be about providing funding, which seems to make these articles advertisements of a sort.

Most of these articles are reasonably competent, although they don’t seem to meet the usual standards for the articles the site does make a stronger claim of ownership to.  One, which I learned of when Sci-News republished it, by Steve Taylor of Leeds Beckett University, is an article on consciousness that takes a pretty strong panpsychist stance.  The fact that the article is an introduction to his book only makes the advertisement aspect feel stronger.  (Although admittedly, such intros are fairly common in other magazines.)

I’ve noted before that panpsychism can be divided into two broad camps.  The weaker stance, which I’ve called naturalistic panpsychism, simply defines consciousness in such a deflated manner, such as it only being about interaction with the environment, that everything is conscious, including rocks and subatomic particles.

The stronger stance is pandualism.  Like substance dualism, it posits that consciousness is something above and beyond normal physics, a ghost in the machine, but in the case of pandualism, the ghost pervades the universe.  It exists as a new fundamental force in addition to ones like gravity or electromagnetism, and brains merely channel or “receive” it.

It’s not unusual for individual panpsychists to blur the distinction between these two stances, often using rhetoric evoking pandualism, but retreating to the more conservative naturalistic variety when challenged.  (One prominent proponent retreated to the fundamental force being quantum spin.)

I think naturalistic panpsychism isn’t necessarily wrong, but it isn’t particularly productive either.  But I do think pandualism is wrong, for the same reasons that substance dualism overall is wrong.  It posits an additional fundamental force of some type for which there simply isn’t any evidence.  The proponents often cite consciousness itself as evidence, but that’s begging the question, assuming that only their preferred solution explains subjective experience.

Taylor’s article puts him firmly in the pandualism camp, and somewhat to his credit, his language seems to make clear he has no intention of retreating to the naturalistic camp if challenged.  He uses a very common argument as a launching point for his position:

Scientists have long been trying to understand human consciousness – the subjective “stuff” of thoughts and sensations inside our minds. There used to be an assumption that consciousness is produced by our brains, and that in order to understand it, we just need to figure out how the brain works.

But this assumption raises questions. Apart from the fact that decades of research and theorising have not shed any significant light on the issue, there are some strange mismatches between consciousness and brain activity.

The point of the last sentence is virtually a mantra among people who want to take an expansive view of consciousness and evoke the types of things Taylor does.  In this view, science is utterly helpless before the problem of consciousness and has made zero progress on it.  The thing is, this is simply not true.  Science has made enormous progress in understanding how the brain and mind work, including in the cognitive capabilities that trigger our intuition of consciousness.

I’m currently reading Stanislas Dehaene’s book on consciousness, Consciousness and the Brain, where he discusses one empirical study after another nailing down the neural correlates of conscious perception.  It’s in line with what I’ve read in many other neuroscience books.

Of course, the work of Dehaene and his colleagues is in terms of what Ned Block calls “access consciousness”, which includes David Chalmers’ “easy problems”, the aspects of consciousness, the specific functional capabilities, that are accessible to science, such as content being accessible for verbal report, reasoning, and decision making.

I suspect Taylor and Block would argue that Dehaene isn’t studying “real” consciousness, essentially phenomenal consciousness, the redness of red, painfulness of pain, the “what it is like” aspect of experience.  Dehaene in his book makes clear that he’s in the camp that doesn’t see the distinction between phenomenal consciousness and access consciousness as productive, so the “omission” doesn’t bother him.

While I do think the distinction can be useful in terms of discussing subjective experience, I agree with Dehaene and many others that we shouldn’t see it as a failing of his work that he only addresses phenomenal consciousness in terms of our access to it.  In fact, I wonder what explanation phenomenal consciousness needs that isn’t explained by access consciousness.

It seems to me that phenomenal consciousness only exists with access consciousness.  They are two sides of the same coin.  Without access, phenomenality is simply passive information, inert data.  Access consciousness is what breathes life into the ineffable qualities that phenomenal consciousness provides.

All of which brings me to the reason for this post.  Many people see phenomenal consciousness as somehow an intractable problem, one that science can’t solve, and cite it as what drives them towards various forms of dualism or the expansive types of panpsychism that Taylor advocates.

My question is, what am I missing?  What is it about the raw experience of red, or pain, or any of the other examples commonly cited, that requires explanation beyond our ability to access and utilize it as information for making decisions?

Panpsychism and layers of consciousness

The Neoplatonic “world soul”
Source: Wikipedia

I’ve written before about panpsychism, the outlook that everything is conscious and that consciousness permeates the universe.  However, that previous post was within the context of replying to a TEDx talk, and I’m not entirely satisfied with the remarks I made back then, so this is a revisit of that topic.

I’ve noted many times that I don’t think panpsychism is a productive outlook, but I’ve never said outright that it’s wrong.  The reason is that with a sufficiently loose definition of consciousness, it is true.  The question is how useful those loose definitions are.

But first I think a clarification is needed.  Panpsychism actually seems to refer to a range of outlooks, which I’m going to simplify (perhaps overly so) into two broad positions.

The first is one I’ll call pandualism.  Pandualism takes substance dualism as a starting point.

Substance dualism assumes that physics, or at least currently known physics, is insufficient to explain consciousness and the mind.  Dualism ranges from the traditional religious versions to ones positing that perhaps a new physics, often involving the quantum wave function, is necessary to explain the mind.  This latter group includes people like Roger Penrose, Stuart Hameroff, and many new age spiritualists.

Pandualists solve the mind-body problem by positing that consciousness is something beyond normal physics, but that it permeates the universe, making it something like a new fundamental property of nature similar to electric charge or other fundamental forces.  This group seems to include people like David Chalmers and Christof Koch.

I do think pandualism is wrong for the same reasons I think substance dualism overall is wrong.  There’s no evidence for it, no observations that require it as an explanation, or even any that leave it as the best explanation.  The only thing I can see going for it is that it seems to match a deep human intuition, but the history of science is one long lesson in not trusting our intuitions when they clash with observations.  It’s always possible new evidence for it will emerge in the future, but until then, dualism strikes me as an epistemic dead end.

The second panpsychist position is one I’m going to call naturalistic panpsychism.  This is the one that basically redefines consciousness in such a way that any system that interacts with the environment (or some other similarly basic definition) is conscious.  Using such a definition, everything is conscious, including rocks, protons, storms, and robots, with the differences being the level of that consciousness.

Interestingly, naturalistic panpsychism is ontologically similar to another position I’m going to call apsychism.  Apsychists don’t see consciousness as actually existing.  In their view it’s an illusion, an obsolete concept similar to vitalism.  We can talk in terms of intelligence, behavior, or brain functions, they might say, but introducing the word “consciousness” adds nothing to the understanding.

The difference between naturalistic panpsychism and apsychism seems to amount to language.  (In this way, it seems similar to the relationship between naturalistic pantheism and atheism.)  Naturalistic panpsychists prefer a more traditional language to describe cognition, while apsychists generally prefer to go more with computational or biological language.  But both largely give up on finding the distinctions between conscious and non-conscious systems (aside from emergence), one by saying everything is conscious, the other that nothing is.

I personally don’t see myself as either a naturalistic panpsychist or an apsychist, although I have to admit that the apsychist outlook occasionally appeals to me.  But ultimately, I think both approaches are problematic.  Again, I won’t say that they’re wrong necessarily, just not productive.  But their unproductiveness seems to arise from an overly broad definition of consciousness.  As Peter Hankins pointed out in an Aeon thread on Philip Goff’s article on panpsychism, a definition of consciousness that leaves you seeing a dead brain as conscious is not a particularly useful one.

Good definitions, ideally, include most examples of what we intuitively think belong to a concept while excluding those we don’t.  The problem is many pre-scientific concepts don’t map well to our current scientific understanding of things, and so make this a challenge.  Religion, biological life, and consciousness are all concepts that seem to fall into this category.

Of course, there are seemingly simple definitions of consciousness out there, such as “subjective experience” or “something it is like”.  But that apparent simplicity masks a lot of complex underpinnings.  Both of these definitions imply the metacognitive ability of a system to sense its own thoughts and experiences and to have the capability and capacity to hold knowledge of them.  Without this ability, what makes experience “subjective” or “like” anything?

Thomas Nagel famously pointed out that we can’t know what it’s like to be a bat, but we have to be careful about assuming that a bat knows what it’s like to be a bat.  If they don’t have a metacognitive capability, bats themselves might be as clueless as we are about their inner experience, if they can even be said to have an inner experience without the ability to know they’re having it.

So, metacognition seems to factor into our intuition of consciousness.  But for metacognition, also known as introspection, to exist, it needs to rest on a multilayered framework of functionality.  My current view, based on the neuroscience I’ve read, is that this can be grouped into five broad layers.

The first layer, and the most basic, is reflexes.  The oldest nervous systems were little more than stimulus-response systems, and instinctive emotions are the current manifestation of those reflexes.  This could be considered the base programming of the system.  A system with only this layer meets the standard of interacting with the environment, but then so does the still-working knee-jerk reflex of a brain-dead patient’s body.

Perception is the second layer.  It includes the ability of a system to take in sensory information from distance senses (sight, hearing, smell), and build representations, image maps, predictive models of the environment and its body, and the relationship between them.  This layer dramatically increases the scope of what the reflexes can react to, increasing it from only things that touch the organism to things happening in the environment.

Attention, selective focusing of resources based on perception and reflex, is the third layer.  It is an inherently action-oriented capability, so it shouldn’t be surprising that it seems to be heavily influenced by the movement-oriented parts of the brain.  This layer is a system to prioritize what the reflexes will react to.

Note that with the second and third layers, perception and attention, we’ve moved well past simply interacting with the environment.  Autonomous robots, such as Mars rovers and self-driving cars, are beginning to have these layers, but aren’t quite there yet.  Still, if we considered these first three layers alone sufficient for consciousness, then we’d have to consider such devices conscious at least part of the time.

Imagination is the fourth layer.  It includes simulations of various sensory and action scenarios, including past or future ones.  Imagination seems necessary for operant learning and behavioral trade-off reasoning, both of which appear to be pervasive in the animal kingdom, with just about any vertebrate with distance senses demonstrating them to at least some extent.

Imagination, the simulation engine, is arguably what distinguishes a flexible general intelligence from a robotic rules-based one.  It’s at this layer, I think, that the reflexes become emotions, dispositions to act rather than automatic action, subject to being allowed or inhibited depending on the results of the simulations.

Only with all these layers in place does the fifth layer, introspection, metacognition, the ability of a system to perceive its own thoughts, become useful.  And introspection is the defining characteristic of human consciousness.  Consider that we categorize processing from any of the above layers that we can’t introspect to be in the unconscious or subconscious realm, and anything that we can to be within consciousness.

How widespread is metacognition in the animal kingdom?  No one really knows.  Animal psychologists have performed complex tests, involving the animal needing to make decisions based on what it knows about its own memory, to demonstrate that introspection exists to some degree in apes and some monkeys, but haven’t been able to do so with any other animals.  A looser and more controversial standard, involving testing for behavioral uncertainty, may also show it in dolphins, and possibly even rats (although the rat study has been widely challenged on methodology).

But these tests are complex, and the animal’s overall intelligence may be a confounding variable.  And anytime a test shows that only primates have a certain capability, we should be on guard against anthropocentric bias.  For myself, the fact that the first four layers appear to be pervasive in the animal kingdom, albeit with extreme variance in sophistication, makes me suspect the same might be true for metacognition, but that’s admittedly very speculative.  It may be that only humans and, to a lesser extent, other primates have it.

So, which layers are necessary for consciousness?  If you answer one, the reflex one, then you may effectively be a panpsychist.  If you say layer two, perception, then you might consider some artificial neural networks conscious.  As I mentioned above, some autonomous robots are approaching layer three with attention.  But if you require layer four, imagination, then only biological animals with distance senses currently seem to qualify.

And if you require layer five, metacognition, then you can only be sure that humans and, to a lesser extent, some other primates qualify.  But before you reject layer five as too stringent, remember that it’s how we separate the conscious from the unconscious within human cognition.

What about the common criteria of an ability to suffer?  Consider that our version of suffering is inescapably tangled up with our metacognition.  Remove that metacognition, to where we wouldn’t know about our own suffering, and is it still suffering in the way we experience it?

So what do you think?  Does panpsychism remain a useful outlook?  Are the layers I describe here hopelessly wrong?  If so, what’s another way to look at it?

Are rocks conscious?

Image credit: EvanS via Wikipedia

Consider a rock outside somewhere.  It sits there, starting off in the morning in a certain state.  The sun comes out and proceeds to warm it up.  Its temperature climbs through the day until the sun sets, whereupon it cools through the night.  The cycle starts again the next morning.  The rock is going through a series of states throughout the day.

We can model the changing states of the rock with a computational model, which we’ll call R.  However, if we can model the rock with that computation, then we can regard the rock as implementing that computation.  In other words, the rock can be seen as a computational system implementing the algorithm R.

Suppose we want to consider the rock to be implementing something other than R?  In truth, there are probably numerous computational models that would describe what is happening in the rock, depending on the level of detail we want to work at and perspective that we want to take.  But suppose we want to interpret the rock to be doing something non-rockish.

Well, we can create a new model, which we’ll call R+M (rock + mapping).  Let’s implement a clock algorithm with R+M.  Naively, this might seem straightforward.  The rock’s temperature will vary throughout the day, so all we need to do is map each temperature to a specific time.  That’s what M adds to the model.  Voilà, we’ve interpreted a rock to be implementing a clock.
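As a toy sketch of what this amounts to (every name and number here is invented for illustration, not anything from the actual argument), the mapping M is little more than an inversion of the rock’s temperature trace:

```python
# Toy illustration of R+M: the rock's temperature trace (R) plus a
# mapping M that interprets each temperature as a time of day.
# All values are hypothetical, for illustration only.

# R: the rock's temperature (in C) sampled at each hour of a sunny day.
rock_states = {6: 10, 9: 18, 12: 27, 15: 24, 18: 14}

# M: invert the trace, mapping each observed temperature back to a time.
mapping = {temp: hour for hour, temp in rock_states.items()}

def rock_clock(temperature):
    """'Read' the time off the rock by looking up its temperature."""
    return mapping[temperature]

print(rock_clock(27))  # prints 12: the rock at 27 C is 'telling us' it's noon
```

Notice that all the clock-like behavior lives in `mapping`, not in the rock.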

But is the rock really implementing a clock algorithm?  Before you answer, consider that if you ran a clock algorithm on your computer, the actual sequence of states inside your computer could be modeled at a much more primal physical level involving transistor voltage states, which might bear limited resemblance to a high level clock model.  We’ll call this primal model C.  Your computer has an I/O (input/output) system, which maps C into presenting all the things a clock would present.  We’ll call the overall model of this C+M (computer+mapping).

It’s not C by itself which provides the clock, but C+M.  What’s the functional difference between R+M and C+M?  Certainly R and C are radically different, but aren’t we compensating for those differences by their respective versions of M?

If we can do this to consider the rock to be implementing a clock, can’t we do it with more sophisticated algorithms?  Suppose we want to consider the rock to be running Microsoft Word.  So we implement a new R+M model, but this time M adds everything to map the rock temperature states to the computational states of Word.

But if we can do that, is there then any computation, any algorithm we can’t consider the rock to be implementing?

If the mind is computation, couldn’t we then extend R+M to be a conscious mind?  In other words, with the right perspective, isn’t every rock implementing a conscious mind?  Not just one mind, but every conceivable mind?  In other words, if the computational theory of mind is correct, and we can map the sequence of states for any complex object, isn’t the universe teeming with consciousness?

Before you start having any concern about the way you might have treated the last rock you encountered, let’s back up a bit.  In the first scenario of R+M above, we mapped the states of the rock to a clock.  But this is problematic because the rock has variances in its inputs that the clock model doesn’t.  For example, cloud cover and other weather conditions may affect exactly how warm the rock becomes and there may be other environmental factors.  When these come into play, we may find our R+M clock being a bit unreliable.

No problem.  We’ll just put in some adjustments into the mapping M.  When the weather is overcast, or if it’s raining, or windy, we’ll adjust which temperature maps to which time to take into account the weather.

Except now, is our mapping still just a mapping?  It’s taking in its own inputs, performing its own logic, and essentially dynamically adjusting as needed to ensure that the states of the rock map to the states of a clock.  Of course, to have implemented Word or a mind, we would have to take similar albeit much more aggressive steps in the mappings for those algorithms.  Does it still make sense to say the rock is implementing a clock, or Word, or a mind?
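To see how M quietly stops being “just a mapping”, here is the same toy sketch with the weather corrections added (again, every value is hypothetical).  Once M takes its own inputs and applies its own logic, it starts to look like the thing doing the computing:

```python
# Once the mapping must compensate for weather, it takes inputs and
# performs its own logic; it is no longer a passive lookup table.
# All values are hypothetical, for illustration only.

def adjusted_rock_clock(temperature, weather):
    """Map rock temperature to a time of day, correcting for
    conditions that change how warm the rock gets."""
    # Offsets the mapping applies before consulting its lookup table.
    corrections = {"sunny": 0, "overcast": 5, "rainy": 9, "windy": 3}
    corrected = temperature + corrections[weather]
    lookup = {10: 6, 18: 9, 27: 12, 24: 15, 14: 18}  # temp -> hour
    return lookup[corrected]

# On an overcast day the rock only reaches 22 C at noon; the mapping,
# not the rock, does the work of turning that into the 'right' answer.
print(adjusted_rock_clock(22, "overcast"))  # prints 12
```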

If two implementations of a piece of functionality are not physically identical, then isn’t it a judgment call whether they are functionally the same?  Algorithm X executing on a desktop PC is not physically identical to algorithm X executing on an iPhone.  Both implementations may have originated from the same source code, the same action plan in essence, but a mapping between the physical processes is still necessary to consider them to be the same.  Run algorithm X on a quantum processor, and the mapping might become extremely complex.

So our rock is implementing every algorithm?  There have been a number of proposals to explain why this isn’t true.

First, it’s been pointed out that, in the case of something like a rock, the mappings are created after the fact, observing what happens in the rock and creating a framework to map it to a meaningful algorithm.  This certainly makes the rock unusable as a computing tool.  But does that mean it isn’t actually implementing the algorithm?

Another criticism, related to the first, is that the mapped algorithms are fragile, falling into incoherence if the rock’s state transitions don’t flow in just the right way.  In other words, the algorithm isn’t just one of many the rock could be implementing, but the only one it could be implementing with the given interpretation.

That leads to challenges involving causality and dispositional states.  The reasons why the rock moves from one state to another have no relation to why the (reliable) clock, or Word, or a mind move from one state to another.  Each state has a dispositional nature, that together with inputs into the system, causally lead to the next state.  If the physical state’s dispositions don’t have some logical resemblance to the disposition of the computational model’s state, then the mappings are arbitrary.

There are other criticisms.  Is the rock really a computational system?  We spend a lot of money to purchase computational devices which are only created with a lot of engineering.  If the mind is a computational system, evolution invested a lot of resources developing its computational substrate in the brain.  We don’t run Word on just any matter.  Minds can’t exist in just any piece of biological matter.  In both cases, it takes a very specialized structure.  Regarding the rock as equivalent to a brain or silicon chip seems to ignore these important facts.

So, how complex can the mapping become before it is invalid?  What specifically makes it invalid?  Do the objections laid out above resolve the issue?

My own answer is that I think the mapping becomes invalid when it crosses into becoming part of the implementation.  Due to the absurd work required of the mappings, I’m not inclined to view the rock as implementing Word or conscious minds.  It seems to me that the mapping is implementing these things and blaming it on the rock.

But as my friend and fellow blogger Disagreeable Me pointed out in our discussions, attempting to objectively nail down exactly where it crosses this line may be a lost cause.  Every mapping, including the ones we use for our computational devices, has some degree of implementation.  Ultimately, it may be that whether a particular physical system is implementing a particular algorithm is subjective, which implies that whether a system is conscious is also subjective.

Now, I think the idea of a rock being conscious requires a perverse interpretation, a ridiculous mapping, so I’m not going to be worried about the next rock I break apart or throw.  But this becomes a more difficult matter for simulated beings, whose internals might be just close enough to those of a conventional conscious being to give us moral quandaries.

Many people see this as a problem with the computational theory of mind.  While the consequence is profound, I don’t see it as a problem, but simply a stark fact of reality.  To understand why, let’s back up and consider exactly what the computational theory of mind is.  It’s the belief that the mind is what the brain does, as opposed to some separate substance.  In other words, the mind is the function of the brain, and that function can be mathematically modeled.

But isn’t the purpose of any functional system always open to interpretation?  To say otherwise is to veer into natural teleology, the belief that natural things have intrinsic purpose.  Pursuit of teleology was abandoned by scientists centuries ago, because it could never be objectively established.  I fear we might be discovering that the existence of a mind might lie on the far side of that divide.

There’s a strong sentiment that consciousness must be a fundamental aspect of reality.  While it certainly is a fundamental aspect of the subjective reality of any conscious being, reality doesn’t appear to be telling us that consciousness is objectively fundamental, at least not in the way of a fundamental force like gravitation, electromagnetism, etc.

Unless, of course, I’m missing something?

Further reading

This post was inspired by a lengthy conversation with Disagreeable Me (and others).  He has posted a much more rigorous and philosophically termed entry on it.  If you’re feeling particularly energetic, there is a Stanford encyclopedia of philosophy article that covers this issue in pretty good depth.  If you’re feeling truly masochistic, check out the papers cited by Disagreeable Me or the Stanford article.

Panpsychism and definitions of “consciousness”

Disagreeable Me asked me to look at this interesting TED talk by Professor Mark Bishop.

The entire talk is well worth the time (20 minutes) for anyone interested in consciousness and the computational theory of mind, but here’s my very quick summation:

  1. The human mind, and hence consciousness, is a computational system.
  2. Since animal minds are computational, other computational systems that interact with their environment, such as the robots Dr. Bishop discusses in the video, should be conscious.
  3. Everything in nature is a computational system.
  4. Given 3, everything in nature has at least some glimmers of consciousness.  Consciousness pervades the universe.

The conclusion in 4 is generally a philosophy called panpsychism.  It’s a conclusion that many intelligent people reach.

First, let me say that I fully agree with 1.  Although it’s often a ferociously controversial conclusion, no other theory of mind holds as much explanatory power as the computational one.  Indeed, many of the other theories that people often prefer seem to be more about preserving and protecting the mystery and magic of consciousness, forestalling explanation as long as possible, rather than making an actual attempt at it.

I also cautiously agree with 3.  Indeed, I might say that I fully agree with it, because if we find some aspect of nature that we can’t mathematically model, we’ll expand mathematics as necessary to do it.  (See Newton’s invention (discovery?) of calculus in order to calculate gravitational interactions.)  We could argue about exactly what computation is and whether something like a rock does it in any meaningful sense, but with a broad and long enough view (geological time scales), I think we can conclude that it does.

When pondering 2, I think we have to consider our working definition of consciousness.  We could choose to define it as a computational system that interacts with the environment.  If we do, then everything else follows, including panpsychism.

But here’s where I think panpsychism fails for me.  Because then the question we need to ask is, what follows from it?  If everything is conscious, what does that mean for our understanding of the universe?  Does it tell us anything useful about human or animal consciousness?

Or have we just moved the goal line from trying to understand what separates conscious from non-conscious systems, to trying to understand what separates animal consciousness from the consciousness of protons, storm systems, or robots?  Panpsychists may assert that the insight is that there’s no sharp distinction, that it’s all only a matter of degree.  I’m not sure I’d agree, but even if we take it as given, those degrees remain important, and we’re still left trying to understand what triggers our intuitive sense of consciousness.

My own view is that consciousness is a computational system.  Indeed, all conscious systems are computational.  However, the reverse is not true.  Not all computational systems are necessarily conscious.  Of course, since no one can authoritatively say exactly what consciousness is, this currently comes down to a philosophical preference.

People have been trying to define consciousness for centuries, and I’m not a neuroscientist, psychologist, or professional philosopher, so I won’t attempt my own.  (At least not today. 🙂 )  But often when definitions are elusive, it can help to list what we perceive to be the necessary attributes.  So, here are aspects of consciousness I think would be important to trigger our intuitive sense that something is in fact conscious:

  • Interaction with the environment.
  • An internal state that is influenced by past interactions and that influences future interactions, i.e. memory.
  • A functional feedback model of that internal state, i.e. awareness.

I think these factors can get us to a type of machine consciousness.  But biological systems contain a few primary motivating impulses.  Without these impulses, this evolutionary programming, I’m not sure our intuitive sense of consciousness would be triggered.

What are the impulses?  Survival and propagation of genes.  If you think carefully about what motivates all animals, it ultimately comes down to these directives.  (And technically survival is a special case of the gene propagation impulse.)  In mammals and social species, it gets far more complex with subsidiary impulses involving care of offspring and ensuring secure social positions for oneself and one’s kin (in other words, love), but ultimately the drive is the same.

It’s a drive we share with every living thing, and a system that is missing it may have a hard time triggering our intuitive sense of agency detection, at least in any sustained manner.  I think it’s why a fruit fly feels more conscious to us than a robot, even if the robot has more processing power than the fly’s brain.

Of course, a sophisticated enough system might cause us to project these qualities onto it, much as humans have done throughout history.  (Think worship of volcanoes, the sea, storms, or nature overall.)  But knowing we’re looking at an artifact created by humans seems like it would short circuit that projection.  Maybe.

Anyway, those are my thoughts on this.  What do you think?  Am I maybe overlooking some epistemic virtues of panpsychism?  Or is my list of what would trigger our consciousness intuition too small?  Or is there another hole in my thinking somewhere?

Update: It appears I misinterpreted Professor Bishop’s views in the video.  He weighs in with a clarification in the comments.  I stand by what I said above about general panpsychism, but his view is a bit more complex, and he actually intended it as a presentation of an absurd consequence of the idea of machine consciousness.