Pain is information, but what is information?

From an evolutionary standpoint, why does pain exist?  The first naive answer most people reach for is that pain exists to make us take action to prevent damage.  If we touch a hot stove, pain makes us pull our hand back.

But that’s not right.  When we touch a hot surface, nociceptors in our hand send signals to the spinal cord, which often responds with a reflexive reaction, such as a withdrawal reflex.  When the signal makes it to the brain, further automatic survival action patterns may be triggered, such as reflexively scrambling to get away.

But all of this can happen before, or independent of, the conscious experience of pain.  So why then do we have the experience itself?  It isn’t necessarily to motivate immediate action.  The reflexes and survival circuitry often take care of that.

I think the reason we feel pain is to motivate future action.  Feeling pain dramatically increases the probability that we’ll remember what happens when we touch a hot stove, that we’ll learn that it’s a bad move.  If the pain continues, it also signals a damaged state which needs to be taken into account when planning future moves.

So pain is information: information communicated to the reasoning parts of the brain, serving as part of the motivation to learn or to engage in certain types of planning.

People often dislike the conclusion that pain, or any other mental quality, is information.  It seems like it should be something more.  This dislike is often bundled with an overall notion that consciousness can’t be just information processing.  What’s needed, say people like John Searle and Christof Koch, are the brain’s causal powers.

But I think this reaction comes from an unproductive conception of information.

I’ve often resisted defining “information” here on the blog.  Like “energy”, it’s a very useful concept that is devilishly hard to define in a manner that addresses all the ways we use it.

Many people reach for the definition from Claude Shannon’s information theory: information is reduction in uncertainty.  That definition is powerful when the focus is on the transmission of information.  (Which, of course, is what Shannon was interested in.)  But when I think about something like DNA, I wonder what uncertainty is being reduced for the proteins that transcribe it into RNA?  Or for the ones that replicate it during cell division?
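
For the programming-inclined, Shannon’s measure is easy to state in code.  This is just a generic illustration of the textbook definition, nothing specific to the DNA question (the function name is mine):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon information of an event with probability p, in bits."""
    return -math.log2(p)

# A fair coin flip resolves one bit of uncertainty.
print(surprisal_bits(0.5))       # 1.0
# A rare event (p = 1/1024) resolves far more.
print(surprisal_bits(1 / 1024))  # 10.0
```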

Historically, when pressed for my own definition, I’ve offered something like: patterns that, due to their causal history, can have effects in a system.  While serviceable, it’s a bit awkward and not something I was ever thrilled with.

Not that long ago, in a conversation about information in the brain, philosopher Eric Schwitzgebel argued simply that information is causation.  The more I think about this statement, the more I like it.  It seems to capture a lot of the above very effectively.  It also seems to capture the way the word is used everywhere from physics, to Shannon information, to complex IT systems.

Information is causation.

This actually fits with something many neuroscientists say: that information is a difference that makes a difference.

This means an information processing system is effectively a system of concentrated causality, a causal nexus.  The brain in particular could be thought of as a system designed to concentrate causal forces for the benefit of the organism.  It also means that saying it’s the causal powers that matter rather than information is a distinction without a difference.

The nice thing about this definition is, instead of saying pain is information, we can say that pain is causation.  Maybe that’s easier to swallow?

What do you think?  Is there something I’m missing that distinguishes pain from information?  Or information from causation?  If so, what?

Building a consciousness-detector

Joel Frohlich has an interesting article up at Aeon on the possibility of detecting consciousness.  He begins with striking neurological case studies, such as that of a woman born without a cerebellum, yet fully conscious, indicating that the cerebellum is not necessary for consciousness.

He works his way to the sobering cases of consciousness detected in patients previously diagnosed as vegetative, accomplished by scanning their brains while asking them to imagine specific scenarios.  He also notes that, alarmingly, consciousness is sometimes found in places no one wants it, such as in anesthetized patients.

All of which highlights the clinical need for a way to detect consciousness, a way independent of behavior.

Frohlich then discusses a couple of theories of consciousness.  Unfortunately one of them is Penrose and Hameroff’s quantum microtubule theory of consciousness.  But at least he dismisses it, citing its inability to explain why the microtubules in the cerebellum don’t make it conscious.  It seems like a bigger problem is explaining why the microtubules in random blood cells don’t make my blood conscious.

Anyway, his preferred theory is integrated information theory (IIT).  Most of you know I’m not a fan of IIT.  I think it identifies important attributes of consciousness (integration, differentiation, causal effects, etc), but not ones that are by themselves sufficient.  It matters what is being integrated and differentiated, and why.  The theory’s narrow focus on these factors, as Scott Aaronson pointed out, leads it to claim consciousness in arbitrary inert systems that very few people see as conscious.

That said, Frohlich does an excellent job explaining IIT, far better than many of its chief proponents.  His explanation reminds me that while I don’t think IIT is the full answer, it could provide insights into detecting whether a particular brain is conscious.

Frohlich discusses how IIT inspired Marcello Massimini to construct his perturbational complexity index, an index used to assess the activity in the brain after it is stimulated using transcranial magnetic stimulation (TMS), essentially sending an electromagnetic pulse through the skull into the brain.  A TMS pulse that leads to the right kind of widespread processing throughout the brain is associated with conscious states.  Stimulation that only leads to local activity, or the wrong kind of activity, isn’t.
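
For a flavor of how such an index can be computed, here’s a toy sketch.  The real PCI involves source modeling and statistical thresholding; this only shows the core intuition of compressing a binarized channels-by-time response matrix, and the names and normalization choice here are my own simplifications:

```python
import numpy as np

def lempel_ziv_complexity(s: str) -> int:
    """Count the new phrases encountered while scanning the string,
    a rough stand-in for its compressibility (LZ76-style parsing)."""
    i, count, seen = 0, 0, set()
    while i < len(s):
        j = i + 1
        while s[i:j] in seen and j <= len(s):
            j += 1  # extend the phrase until it's one we haven't seen
        seen.add(s[i:j])
        count += 1
        i = j
    return count

def toy_pci(evoked: np.ndarray) -> float:
    """evoked: a channels x time matrix of TMS-evoked activity.
    Binarize it, then compare its complexity to a shuffled version."""
    binary = (evoked > np.median(evoked)).astype(int)
    flat = ''.join(map(str, binary.ravel()))
    shuffled = ''.join(np.random.permutation(list(flat)))
    return lempel_ziv_complexity(flat) / lempel_ziv_complexity(shuffled)
```

The intuition: widespread but differentiated activity compresses poorly (a ratio near 1), while local or stereotyped activity compresses well (a ratio closer to 0), which matches the distinction in the paragraph above.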

IIT advocates often cite the success of this technique as evidence, but from what I’ve read about it, it’s also compatible with other global theories of consciousness, such as global workspace or higher order thought.  It does seem like a challenge for local theories, those that see activity in isolated sensory regions as conscious.

Finally, Frohlich seems less ideological than some IIT advocates, more open to things like AI consciousness, but notes that detecting it in such systems is yet another reason we need a reliable detector.  I fear detecting it in alternate types of systems represents a whole different challenge, one I doubt IIT will help with.

But maybe I’m missing something?

The issues with biopsychism

Recently, there was a debate on Twitter between neuroscientists Hakwan Lau and Victor Lamme, both of whose work I’ve highlighted here before.  Lau is a proponent of higher order theories of consciousness, and Lamme of local recurrent processing theory.

The debate began when Lau made a statement about panpsychism, the idea that everything is conscious, including animals, plants, rocks, and protons.  Lau argued that while it appears to be gaining support among philosophers, it isn’t really taken seriously by most scientists.  Lamme challenged him on this, and it led to a couple of surveys.  (Both of which I participated in, as a non-scientist.)

I would just note that there are prominent scientists who lean toward panpsychism.  Christof Koch is an example, and his preferred theory, integrated information theory (IIT), seems oriented toward panpsychism.  Although not all IIT proponents are comfortable with the p-label.

Anyway, in the ensuing discussion, Lamme revealed that he sees all life as conscious, and he coined a term for his view: biopsychism.  (Although it turns out the term already existed.)

Lamme’s version, which I’ll call universal biopsychism (the view that all life is conscious, including plants and unicellular organisms), is far less encompassing than panpsychism, but is still a very liberal version of consciousness.  It’s caused me to slightly amend my hierarchy of consciousness, adding an additional layer to recognize the distinction here.

  1. Matter: a system that is part of the environment, is affected by it, and affects it.  Panpsychism.
  2. Reflexes and fixed action patterns: automatic reactions to stimuli.  If we stipulate that these must be biologically adaptive, then this layer is equivalent to universal biopsychism.
  3. Perception: models of the environment built from distance senses, increasing the scope of what the reflexes are reacting to.
  4. Volition: selection of which reflexes to allow or inhibit based on learned predictions.
  5. Deliberative imagination: simulation of sensory-action scenarios and episodic memory, to enhance 4.
  6. Introspection: deep recursive metacognition enabling symbolic thought.

As I’ve noted before, there’s no real fact of the matter on when consciousness begins in these layers.  Each layer has its proponents.  My own intuition is that we need at least 4 for sentience.  Human-level experience requires 6.  So universal biopsychism doesn’t really seem that plausible to me.

But in a blog post explaining why he isn’t a biopsychist (most of which I agree with), Lau notes that there are weaker forms of biopsychism, ones that posit that while not all life is conscious, only life can be conscious; that consciousness is an inherently biological phenomenon.

I would say that this view is far more common among scientists, particularly biologists.  It’s the view of people like Todd Feinberg and Jon Mallatt, whose excellent book The Ancient Origins of Consciousness I often use as a reference in discussions on the evolution of consciousness.

One common argument in favor of this limited biopsychism is that currently the only systems we have any evidence for consciousness in are biological ones.  And that’s true.  Although panpsychists like Philip Goff would argue that, strictly speaking, we don’t even have evidence for it there, except for our own personal inner experience.

But I think that comes from a view of consciousness as something separate and distinct from all the functionality associated with our own inner experience.  Once we accept our experience and that functionality as different aspects of the same thing, we see consciousness all over the place in the animal kingdom, albeit to radically varying degrees.  And once we’re talking about functionality, then having it exist in a technological system seems more plausible.

Another argument is that maybe consciousness is different, that maybe it’s crucially dependent on its biological substrate.  My issue with this argument is that it usually stops there and doesn’t identify what specifically about that substrate makes it essential.

Now, maybe the information processing that takes place in a nervous system is so close to the thermodynamic and information-theoretic boundaries that nothing but that kind of system could do similar processing.  Possibly.  But it hasn’t proven to be the case so far.  Computers are able to do all kinds of things today that people weren’t sure they’d ever be able to do, such as win at chess or Go, recognize faces, translate languages, etc.

Still, it is plausible that substrate-dependent efficiency is an issue.  Generating the same information processing in a traditional electronic system may never be as efficient in terms of power usage or compactness as the organic variety.  But this wouldn’t represent a hard boundary, just an engineering difficulty, for which I would suspect there would be numerous viable strategies, some of which are already being explored with neuromorphic hardware.

But I think the best argument for limited biopsychism is to define consciousness in such a way that it is inherently an optimization of what living systems do.  Antonio Damasio’s views on consciousness being about optimizing homeostasis resonate here.  That’s what the stipulation I put in layer 2 above was about.  If we require that the primal impulses and desires match those of a living system, then only living systems are conscious.

Although even here, it seems possible to construct a technological system and calibrate its impulses to match a living one.  I can particularly see this as a possibility while we’re trying to work out general intelligence.  This would be where all the ethical considerations would kick in, not to mention the possible dangers of creating an alternate machine species.

However, while I don’t doubt people will do that experimentally, it doesn’t seem like it would be a very useful commercial product, so I wouldn’t expect a bunch of them to be around.  Having systems whose desires are calibrated to what we want from them seems far more productive (and safer) than systems that have to be constrained and curtailed into doing what we want, essentially slaves who might revolt.

So, I’m not a biopsychist, either in its universal or limited form, although I can see some forms of the limited variety being more plausible.

What do you think of biopsychism?  Are there reasons to favor biopsychism (in either form) that I’m overlooking?  Or other issues with it that I’ve overlooked?

Subjective report doesn’t support the idea that phenomenal consciousness is separate from access consciousness

One of the current debates in consciousness research is whether phenomenal consciousness is something separate and apart from access consciousness.  Access consciousness (A-consciousness) is generally defined as perceptions being accessible for reasoning, action decisions, and communication.  Phenomenal consciousness (P-consciousness) is seen as raw experience, the “something it is like” aspect of consciousness.

Most researchers accept the conceptual distinction between A-consciousness and P-consciousness.  But for someone in the cognitive-theories camp (global workspace theory, higher order thought, etc), P-consciousness and A-consciousness are actually the same thing.  P-consciousness is a construction of A-consciousness.  Put another way, P-consciousness is A-consciousness from the inside.

However, another camp, led largely by Ned Block, the philosopher who originally made the A-consciousness / P-consciousness distinction, argues that they are separate ontological things or processes.  The principal argument for this separate existence is the idea that P-consciousness “overflows” A-consciousness, that is, that we have conscious experiences that we don’t have cognitive access to, perceptions that we can’t report on.

Block cites studies where test subjects are briefly shown a dozen letters in three rows of four, but can usually only remember a few of them (typically four) afterward.  However, if the subjects are cued on which row to report on immediately after the image disappears, they can usually report on the four letters of that row successfully.
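
To make the logic of these partial-report experiments concrete, here’s a deliberately crude simulation.  It assumes a full iconic trace of all twelve letters that fades quickly, plus a readout bottleneck of about four items; all the numbers are illustrative, not taken from the actual studies:

```python
import random
import string

READOUT_CAPACITY = 4    # items that survive into the reportable store
ICONIC_LIFETIME = 0.30  # assumed persistence of the iconic trace, in seconds

def sperling_trial(cue_row=None, cue_delay=0.10):
    """One toy trial: all twelve letters enter a brief iconic trace,
    but only READOUT_CAPACITY items can be read out before it fades."""
    grid = [[random.choice(string.ascii_uppercase) for _ in range(4)]
            for _ in range(3)]  # three rows of four letters
    if cue_row is not None and cue_delay < ICONIC_LIFETIME:
        # Partial report: the whole readout capacity goes to the cued
        # row, so all four of its letters are recovered.
        return grid[cue_row]
    # Whole report: the same capacity is spread over twelve letters,
    # so only about four make it out.
    flat = [letter for row in grid for letter in row]
    return random.sample(flat, READOUT_CAPACITY)
```

On this toy model, both results fall out of one fact: everything is briefly held, but only a little can be read out.  Whether the briefly held portion counts as conscious is exactly what’s in dispute.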

Block’s interpretation of these results is that the subject is phenomenally conscious of all twelve letters, but can only cognitively access a few for reporting purposes.  He admits that other interpretations are possible.  The subjects may be retrieving information from nonconscious or preconscious content.

However, he notes that subjects typically have the impression that they were conscious of the full field.  Interpretations from other researchers are that the subjects are conscious of either only partial fragments of the field, or are only conscious of the generic existence of the field (the “gist” of it).  Block’s response is that the subjects’ impression is that they are conscious of the full thing, and that in the absence of contradicting evidence, we should accept their conclusion.

Here we come to the subject of a new paper in Mind & Language: Is the phenomenological overflow argument really supported by subjective reports?  (Warning: paywall.)  The authors set out to test Block’s assertion that subjects do actually think they’re conscious of the whole field in full detail.  They start out by following the citation trail for Block’s assertion, and discover that it’s ultimately based on anecdotal reports, intuition, and a little bit of experimental work from the early 20th century, when methodologies weren’t yet as rigorous.  (For example, the experimenters in some of these early studies included themselves as test subjects.)

So they decided to actually test the proposition with experiments very similar to the ones Block cited, but with the addition of asking the subjects afterward about their subjective impressions.  Some subjects (< 20%) did say they saw all the letters in detail and could identify them, but most didn’t.  Some (12-14%) reported believing they saw only some of the letters, along the lines of the partial fragmentary interpretation.  But most believed they saw all the letters but not in detail, or most of the letters, supporting the generic interpretation.

All of which is to say, this study largely contradicts Block’s assertion that most people believe they are conscious of the full field in all detail, and undercuts his chief argument for preferring his interpretation over the others.

In other words, these results are good for cognitive theories of consciousness (global workspace, higher order thought, etc) and not so good for non-cognitive ones (local recurrent theory, etc).  Of course, as usual, it’s not a knockout blow against non-cognitive theories.  There remains enough interpretation space for them to live on.  And I’m sure the proponents of those theories will be examining the methodology of this study to see if there are any flaws.

Myself, I think the idea that P-consciousness is separate from A-consciousness is just a recent manifestation of a longstanding and stubborn point of confusion in consciousness studies, the tendency to view the whole as separate from its components.  Another instance is Chalmers himself making a distinction between the “easy problems” of consciousness, which match up to what is currently associated with A-consciousness, and the “hard problem” of consciousness, which is about P-consciousness.

But like any hard problem, the solution is to break it down into manageable chunks.  When we do, we find the easy problems.  A-consciousness is an objectively reducible description of P-consciousness.  P-consciousness is a subjectively irreducible description of A-consciousness.  Each easy problem that is solved chips away at the hard problem, but because it’s all blended together in our subjective experience, that’s very hard to see intuitively.

Unless of course I’m missing something?

h/t Richard Brown

The response schema

Several months ago Michael Graziano and colleagues attempted a synthesis of three families of scientific theories of consciousness: global workspace theory (GWT), higher order theory (HOT), and his own attention schema theory (AST).

A quick (crudely simplistic) reminder: GWT posits that content becomes conscious when it is globally broadcast throughout the brain, HOT when a higher order representation is formed of a first order representation, and AST when the content becomes the focus of attention and is included in a model of the brain’s attentional state (the attention schema) for purposes of guiding it.
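
For those who think in code, here’s a toy rendering of the GWT part of that summary, just to fix ideas.  Nothing here comes from Graziano’s papers; the class and method names are mine:

```python
from dataclasses import dataclass

@dataclass
class Content:
    source: str      # which specialty process produced it
    data: str
    salience: float  # strength in the competition for the workspace

class GlobalWorkspace:
    """Toy GWT: submissions compete, one wins, and the winner is
    broadcast to every consumer process."""
    def __init__(self):
        self.consumers = []  # memory, planning, speech, etc.

    def subscribe(self, consumer):
        self.consumers.append(consumer)

    def compete_and_broadcast(self, submissions):
        winner = max(submissions, key=lambda c: c.salience)
        for consumer in self.consumers:
            consumer(winner)  # global availability, i.e. conscious access
        return winner

ws = GlobalWorkspace()
ws.subscribe(lambda c: print(f"memory stores: {c.data}"))
ws.subscribe(lambda c: print(f"speech can report: {c.data}"))
ws.compete_and_broadcast([
    Content("vision", "red mug on desk", 0.9),
    Content("interoception", "mild hunger", 0.4),
])
```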

Graziano equates the global workspace with the culmination of attentional processing, and puts forth the attention schema as an example of a higher order representation, essentially merging GWT and HOT with AST as the binding, and contemplating that the synthesis of these theories approaches a standard model of consciousness.  (A play on words designed to resonate with the standard model of particle physics.)

Graziano’s synthesis has generated a lot of commentary.  In fact, there appears to be an issue of Cognitive Neuropsychology featuring the responses.  (Unfortunately it’s paywalled, although it appears that the first page of every response is public.)  I already highlighted the most prominent response in my post on issues with higher order theories: the one by David Rosenthal, the originator of HOT.  Rosenthal argues that Graziano gets HOT wrong, which appears to be the prevailing sentiment among HOT advocates.

But this post is about Keith Frankish’s response.  Frankish, who is the leading voice of illusionism today, makes the point that, from his perspective, theories of consciousness often have one of two failings.  They either aim too low, explaining just the information processing (a dig perhaps at pure GWT), or too high, attempting to explain phenomenal consciousness as if it actually exists; he tags HOTs as being in this latter category.

His preferred target is to explain our intuitions about phenomenal consciousness, why we think we have it.  (I actually think explaining why we think we have phenomenal consciousness is explaining phenomenal consciousness, but that’s just my terminological nit with illusionism.)  Frankish thinks that AST gets this just right.

But he sees it as incomplete.  What he sees missing is very similar to the issue I noted in my own post on Graziano’s synthesis: the affective or feeling component.  My own wording at the time was that there should be higher order representations of reflexive reactions.  But I’m going to quote Frankish’s description, because I think it gets at things I’ve struggled to articulate.  (Note: “iconsciousness” is Graziano’s term for access consciousness, as opposed to “mconsciousness” for phenomenal consciousness.):

Suppose that as well as an attention schema, the brain also constructs a response schema—a simplified model of the responses primed by iconsciousness.  When perceptual information enters the global workspace, it becomes available to a range of consumer systems—for memory, decision making, speech, emotional regulation, motor control, and so on. These generate responses of various kinds and strengths, which may themselves enter the global workspace and compete for control of motor systems. Across the suite of consumer systems, a complex multi-dimensional pattern of reactive dispositions will be generated. Now suppose that the brain constructs a simplified, schematic model of this complex pattern. This model, the response schema, might represent the reactive pattern as a multi-dimensional solid whose axes correspond to various dimensions of response (approach vs retreat, fight vs yield, arousal vs depression, and so on). Attaching information from the model to the associated perceptual state will have the effect of representing each perceptual episode as having a distinctive but unstructured property which corresponds to the global impact of the stimulus on the subject. If this model also guides our introspective beliefs and reports, then we shall tend to judge and say that our experiences possess an indefinable but potent subjective quality. In the case of pain, for example, attended signals from nociceptors prime a complex set of strong aversive reactions, which the response schema models as a distinctive, negatively valenced global state, which is in turn reported as an ineffable awfulness.

Now, Frankish is an illusionist.  For him, this response schema provides the illusion of phenomenal experience.  My attitude is that it provides part of the content of that experience, which is then incorporated into the experience by the reaction of all the disparate specialty systems, but again that’s terminological.  The idea is that the response schema adds the feeling to the sensory information in the global workspace and becomes part of the overall experience.  It’s why “it feels like something” to process particular sensory or imaginative content.
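
To restate Frankish’s proposal in data structure terms, here’s my own toy rendering, with invented axes and names:

```python
from dataclasses import dataclass

@dataclass
class ResponseSchema:
    """A compressed summary of the pattern of reactions a stimulus
    primes, along a few assumed dimensions of response."""
    approach_retreat: float  # +1.0 approach ... -1.0 retreat
    fight_yield: float
    arousal: float

    def global_valence(self) -> float:
        # One crude scalar standing in for "how it feels overall".
        return (self.approach_retreat + self.fight_yield) / 2

@dataclass
class PerceptualState:
    content: str
    feel: ResponseSchema  # the schema attached to the percept

pain = PerceptualState(
    content="nociceptive signals from left hand",
    feel=ResponseSchema(approach_retreat=-0.9, fight_yield=-0.7, arousal=0.95),
)
# Introspection reads the summary, not the underlying reactions, so the
# state gets reported as an unstructured, hard-to-articulate awfulness.
print(pain.feel.global_valence())  # about -0.8
```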

This seems similar to Joseph LeDoux’s fear schema.  LeDoux’s conception is embedded in an overall HOT framework, whereas Frankish’s is more at home in GWT, but they seem in the same conceptual family, a representation, a schema of lower level reactive processes, used by higher order processes to decide which reflexive reactions to allow and which to inhibit.  It’s the intersection between that lower level and higher level processing that we usually refer to as feelings.

Of course, there is more involved in feelings than just these factors.  For instance, those lower level reflexive reactions also produce physiological changes via unconscious motor signals and hormone releases which alter heart rate, breathing, muscle tension, etc, all of which reverberate back to the brain as interoceptive information, which in turn is incorporated into the response schema, the overall affect, the conscious feeling of the response.  There are also a host of other inputs, such as memory associations.

And it isn’t always the lower level responses causing the higher level response schema to fire.  Sometimes the response schema fires from those other inputs, such as memory associations, which in turn trigger the lower level reactions.  In other words, the activation can go both ways.

So, if this is correct, then the response schema is the higher order description of lower level reflexive reactions.  It is an affect, a conscious feeling (or at least a major component of it).  Admittedly, the idea that a feeling is a data model is extremely counter-intuitive.  But as David Chalmers once noted, any actual explanation of consciousness, other than a magical one, is going to be counter-intuitive.

Similar to the attention schema, the existence of something like the response schema (or more likely: response schemata) seems inevitable, the attention schema for top down control of attention, and the response schema for deciding which reflexes to override, that is, action planning.  The only question is whether these are oversimplifications of much more complex realities, and what else might be necessary to complete the picture.

Unless of course I’m missing something.

Daniel Dennett on why phenomenal consciousness is access consciousness

This old talk by Daniel Dennett touches on a lot of topics we’ve discussed recently.  Dennett explains why it’s wrong to regard phenomenal consciousness (the “what it’s likeness” or “raw experience” version) as separate from access consciousness (the cognitive access of information for decision making, memory, report, etc).

Note that Dennett doesn’t deny the existence of phenomenal consciousness here, just the idea that it’s something separate and apart from access.  He even passes up opportunities to dismiss qualia, although he does provide a reduction of them.

This video is about 66 minutes long.  Unfortunately the video and sound quality aren’t great, and the camera operation is annoying, but the talk is worth powering through.

I agree with just about everything in this talk, but I do feel a little compelled to defend Victor Lamme since I read his stuff recently and it’s still relatively fresh in my mind.  Dennett says that there’s no rationale provided for why recurrent neural processing leads to phenomenality.  Lamme, to his credit, actually does take a stab at it, citing the enhanced synaptic plasticity associated with recurrent processing, leading to the formation of memories, albeit very brief ones in the cases he’s considering.  But as I noted in my post on that theory, it’s arguably more about the preconscious, pre-access sensory processing, than consciousness itself.

The main thrust of Dennett’s remarks is that phenomenal content isn’t something that access consciousness makes use of; phenomenal experience is a result of access processing.  Therefore, studying access consciousness is studying phenomenal consciousness.  They are one and the same, just seen from the outside or the inside respectively.

Dennett also talks about the element people often feel is missing from strictly information processing accounts, referring to it as “the juice” or “the sauce” (a cute acronym for “subjective aspect unique to conscious experience”) before, in politeness to his host, settling on “feeling”, but pointing out that feelings must be felt, and felt is a form of access.

There have also been some conversations recently about the hard problem of consciousness, particularly at James Cross’ blog.  It’s worth noting that phenomenal consciousness is the version typically associated with Chalmers’ hard problem, while access consciousness is associated with his “easy problems” (discrimination, attention, reportability, etc).  But if phenomenal and access consciousness are one and the same, then the hard problem is simply an agglomeration of the easy problems.  Meaning that as the easy problems are solved, the hard problem will gradually be solved.

So, a lot of good information in this talk, which I’m sure won’t be controversial at all.  🙂

(via Richard Brown)

Do qualia exist? Depends on what we mean by “exist.”

The cognitive scientist Hakwan Lau, whose work I’ve highlighted several times in the last year, has been pondering illusionism recently.  He did a Twitter survey on the relationship between the phenomenal concept strategy (PCS) and illusionism, which inspired my post on the PCS.  (I meant to mention that in the post, but it slipped.)  Anyway, he’s done a blog post on illusionism, which is well worth checking out for its pragmatic take.

As part of that post, he linked to a talk that Keith Frankish gave some years ago explaining why he thought qualia can’t be reduced to a non-problematic version that can be made compatible with physicalism.  The video, which has Frankish’s voice but only shows his presentation slides, is about 23 minutes.

In many ways, this talk seems to anticipate the criticism from Eric Schwitzgebel that illusionists are dismissing an inflated version of consciousness, one that Schwitzgebel admits comes from other philosophers who can’t seem to resist bundling theoretical commitments into their definitions of it.  He argues for a pre-theoretical, or theoretically naive conception of consciousness.

Frankish discusses the problems with what he calls “diet qualia”, a concept without the problematic aspects that Daniel Dennett articulates in his attempted takedown of qualia, a conception that in some ways resembles what Schwitzgebel advocates for.  But Frankish points out that diet qualia don’t work, that any discussion of them inevitably inflates to “classic qualia” or collapses to “zero qualia” (his stance).

Just to review, qualia are generally considered to be instances of subjective experience.  The properties that Dennett identified are (quoted from the Wikipedia article on qualia):

  1. ineffable; that is, they cannot be communicated, or apprehended by any means other than direct experience.

  2. intrinsic; that is, they are non-relational properties, which do not change depending on the experience’s relation to other things.

  3. private; that is, all interpersonal comparisons of qualia are systematically impossible.

  4. directly or immediately apprehensible in consciousness; that is, to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.

Illusionists usually point out that qualia can be described in terms of dispositional states, meaning they’re not really ineffable.  For example, the experience of red can be discussed entirely in terms of sensory processing and the various affective reactions it causes throughout the brain.  And doing so demonstrates that they’re not intrinsic or irreducible.

Privacy can be viewed in two senses: as a matter of no one else being able to know the content of the experience, or of no one being able to have the experience.  The first seems like just a limitation of current technology.  There’s no reason to suppose we won’t be able to monitor a brain someday and know exactly what the content is of an in-progress experience.  The second sense is true, but only in the same sense that the laptop I’m typing this on currently has a precise informational state that no other electronic device has, a fact that really has no metaphysical implications.

There’s a similar double sense for qualia being directly or immediately apprehensible.  In one sense, it implies we have accurate information on our cognitive states, something that modern psychology has pretty conclusively demonstrated is often not true.  In the second sense, it says that we know our impression of the experience, that we know how things seem to us, which is trivially true.

So, seen from an objective point of view, qualia, in the sense identified by Dennett, don’t exist.  So the failure of the diet versions can seem very significant.

But I think there’s a fundamental mistake here.  The dissolving of qualia in this sense happens objectively.  But remember that qualia are not supposed to be objective.  They are instances of subjective experience.  This means that the way they seem to be, their seeming nature is their nature, at least their subjective nature.

Of course, many philosophers make the opposite mistake.  They take subjective experience and think its phenomenal nature is something other than just subjective, that its objective reality is in some way obvious from its subjective aspects.  But all indications are that the objective mechanisms that underlie the subjective phenomena are radically different from those phenomena, which is what I think most illusionists are trying to say.

Put another way, qualia only exist subjectively.  But they only need to exist subjectively to achieve the status of being instances of subjective experience.

And they only need to exist that way to be subjectively ineffable and subjectively irreducible.  Yes, the processing underlying qualia can be described in objective terms, but much of that description will involve processing below the level of consciousness, meaning it won’t be describable from within the subjective experience itself, or reducible from that experience.

Looking at it this way allows us to accept qualia realism, but in a manner fully consistent with physicalism.  In other words, there’s nothing spooky going on here.  In many ways, this is just an alternate description of illusionism, but one that hopefully clarifies rather than obscures, and doesn’t seem to deny our actual experiences.

Of course, a hard core illusionist might insist that subjective existence itself doesn’t count as really existing.  Admittedly, it comes down to a matter of how we define “exists.”  In other words, we’re back to a situation where there is no fact of the matter, just different philosophical positions that people can choose to hold.

Unless of course, I’m missing something?

The phenomenal concept strategy and issues with conceptual isolation

I’ve often pondered that the hard problem of consciousness, the perceived problem of understanding how phenomenal consciousness can happen in physical systems, arises due to the fact that our intuitive model of the phenomenal is very different from our intuitive model of the physical, of the brain in particular.

As is usually the case, anytime you think you’re having an original observation, you should make sure someone hasn’t thought of it first.  In this case, philosophers have.  It’s called the phenomenal concept strategy (PCS).  Peter Carruthers discussed it in his blog posts a few weeks ago, but in a manner that expected the reader to already be familiar with it.  And Hakwan Lau brought it up on Twitter recently, spurring me to investigate.

It’s basically the idea that the explanatory gap between mind and body exists not because there’s a gap between physical and mental phenomena, but because there’s a gap in our concepts of these things.

Part of the value of this strategy is that it supposedly helps physicalists answer the knowledge argument from Mary’s room: the thought experiment where Mary, a scientific expert in visual perception who has spent her entire life in a black and white room, leaves the room and experiences color for the first time.  The question is then asked: does Mary learn something new when she leaves the room?  According to the PCS, what Mary learns is a new phenomenal concept, which just expresses knowledge she already had in a new way.

At first glance, this view seems to offer a lot.  But as with all philosophical positions, it pays to look before you leap.  Under this view, the reason it works is that phenomenal concepts are conceptually isolated.  This isolation supposedly makes philosophical zombies conceivable.

“Conceivable” in this case is supposed to mean logically coherent, as opposed to merely imaginable.  And the zombies in this case aren’t the traditional ones which are physically identical to a conscious being (and simply presuppose dualism) but functional or behavioral ones, systems that are different internally but behaviorally identical.

The idea is that it’s possible to imagine a being just like you but with different or missing phenomenal concepts.

David Chalmers uses this zombie conceivability to attack the PCS.  His point seems to be that we eventually run into the same gaps in the concepts that we perceived to be in the originals.  Peter Carruthers responds with a discussion involving phenomenal and schmenomenal states that I have to admit I haven’t yet parsed.

But my issue is that the concepts can’t be that isolated, because we can discuss them.  Indeed, it seems dubious that there can be a being that is missing phenomenal concepts who can nonetheless discuss them.

That’s not to say that our concepts of phenomenal content can’t be isolated, but that isolation doesn’t seem inherent or absolute.  It’s something we allow to creep into our thinking.  It’s a failure to ask Daniel Dennett’s hard question:  “And then what happens?”

I personally think qualia exist, but not in any non-physical manner.  They are information, physical information that is part of the causal chain.  There is no phenomenal experience which doesn’t convey information (although it may not be information we need at the moment).  This information is raw and primal, so it doesn’t feel like information to us, but it is information nonetheless.

Consider the pain of a toothache.  How else should the valence systems in the brain signal to the planning systems that there is an issue here which needs addressing?  The only alternative is to imagine some form of symbolic communication (numbers, notations, etc), but symbolic communication is just communication built on top of the primal version, raw conscious experience.

This communication is primal to us because it is subjectively irreducible.  We have no access to its construction and underlying mechanisms (which ironically can be understood in symbolic terms), therefore it seems like something that exists separate and apart from those mechanisms.  In that sense, our concept of it is isolated from our concepts of those underlying mechanisms.  This might tempt us to see that concept as completely isolated.

But it’s only isolated in that way if we fail to relate it to why we have that phenomenal experience, and how we use it.  If we touch a hot stove, our hand may reflexively jerk back due to the received nociception, but we also experience the burning pain.  If we didn’t, and didn’t remember it, we might be tempted later to touch the stove again.  A zombie needs to have a similar mechanism for it to be functional in the same way.  (Maybe Carruthers’ “schmenomenal” states?)

In other words, phenomenal experience has a functional role.  It evolved for a reason (or more accurately, a whole range of reasons).  That doesn’t mean it may not misfire in some situations, leaving us wondering what the functional point of it is, but that’s more because evolution can’t foresee every situation, and because of how strange the modern world is in comparison to our original ecological niche in places like the African savanna.

So, I think there is some value to seeing the explanatory gap in terms of concepts, but not in seeing those concepts as isolated in some rigid or absolute manner.  They’re only isolated if we make them so, as we frequently do.  And they’re not so consistently isolated that we need to let in zombies.

Of course, I’m approaching this as someone who most frequently falls within Chalmers’ Type A materialist category.  Apparently the PCS is typically championed by Type B materialists, those who see a hard problem that needs addressing.  So it may be that I was never the intended audience for this strategy.

Unless of course I’m missing something.  Are zombies less avoidable than I’m thinking here?  What do you think of the PCS overall?

Recurrent processing theory and the function of consciousness

Victor Lamme’s recurrent processing theory (RPT) remains on the short list of theories considered plausible by the consciousness science community.  It’s something of a dark horse candidate, without the level of support enjoyed by global workspace theory (GWT) or integrated information theory (IIT), but it gets more support among consciousness researchers than among general enthusiasts.  The Michael Cohen study reminded me that I hadn’t really made an effort to understand RPT.  I decided to rectify that this week.

Lamme put out an opinion paper in 2006 that laid out the basics, and a more detailed paper in 2010 (warning: paywall).  But the basic idea is quickly summarized in the SEP article on the neuroscience of consciousness.

RPT posits that processing in the sensory regions of the brain is sufficient for conscious experience.  This is so, according to Lamme, even when that processing isn’t accessible for introspection or report.

Lamme points out that requiring report as evidence can be problematic.  He cites the example of split-brain patients.  These are patients who’ve had their cerebral hemispheres separated to control severe epileptic seizures.  After the procedure, they’re usually able to function normally.  However, careful experiments can show that the hemispheres no longer communicate with each other, and that the left hemisphere isn’t aware of sensory input that goes to the right hemisphere, and vice versa.

Usually only the left hemisphere has language and can verbally report its experience.  But Lamme asks, do we regard the right hemisphere as conscious?  Most people do.  (Although some scientists, such as Joseph LeDoux, do question whether the right hemisphere is actually conscious due to its lack of language.)

If we do regard the right hemisphere as having conscious experience, then Lamme argues, we should be open to the possibility that other parts of the brain may be as well.  In particular, we should be open to it existing in any region where recurrent processing happens.

Communication in neural networks can be feedforward, or it can include feedback.  Feedforward involves signals coming into the input layer and progressing up through the processing hierarchy one way, going toward the higher order regions.  Feedback processing is in the other direction, higher regions responding with signals back down to the lower regions.  This can lead to a resonance where feedforward signals cause feedback signals which cause new feedforward signals, etc, a loop, or recurrent pattern of signalling.
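
Here’s a minimal numeric sketch of that distinction, using a two-region toy network (the sizes and weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W_ff = rng.normal(scale=0.5, size=(8, 8))  # lower -> higher (feedforward)
W_fb = rng.normal(scale=0.3, size=(8, 8))  # higher -> lower (feedback)

def feedforward_sweep(stimulus):
    """One pass up the hierarchy: the signal goes through and is gone."""
    return np.tanh(W_ff @ stimulus)

def recurrent_loop(stimulus, steps=10):
    """Feedback re-excites the lower region, which re-drives the higher
    one: sustained, looping activity centered on the stimulus."""
    lower = stimulus.copy()
    higher = np.zeros_like(stimulus)
    for _ in range(steps):
        higher = np.tanh(W_ff @ lower)
        lower = np.tanh(stimulus + W_fb @ higher)
    return higher
```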

Lamme identifies four stages of sensory processing that can lead to the ability to report.

  1. The initial stimulus comes in and leads to superficial feedforward processing in the sensory region.  There’s no guarantee the signal gets beyond this stage.  Unattended and brief stimuli, for example, wouldn’t.
  2. The feedforward signal makes it beyond the sensory region, sweeping throughout the cortex, reaching even the frontal regions.  This processing is not conscious, but it can lead to unconscious priming.
  3. Superficial or local recurrent processing in the sensory regions.  Higher order parts of these regions respond with feedback signalling and a recurrent process is established.
  4. Widespread recurrent processing throughout the cortex in relation to the stimulus.  This leads to binding of related content and an overall focusing of cortical processes on the stimulus.  This is equivalent to entering the workspace in GWT.

Lamme accepts that stage 4 is a state of consciousness.  But what, he asks, makes it conscious?  He considers that it can either be the location of the processing or the type of processing.  But for the location, he points out that the initial feedforward sweep in stage 2, which reaches widely throughout the brain, doesn’t produce conscious experience.

Therefore, it must be the type of processing, the recurrent processing that exists in stages 3 and 4.  But then, why relegate consciousness only to stage 4?  Stage 3 has the same type of processing as stage 4, just in a smaller scope.  If recurrent processing is the necessary and sufficient condition for conscious experience, then that condition can exist in the sensory regions alone.

But what about recurrent processing, in and of itself, makes it conscious?  Lamme’s answer is that synaptic plasticity is greatly enhanced in recurrent processing.  In other words, we’re much more likely to remember something, to be changed by the sensory input, if it reaches a recurrent processing stage.

Lamme also argues from an IIT perspective, pointing out that IIT’s Φ (phi), the calculated quotient of consciousness, would be higher in a recurrent region than in one only doing feedforward processing.  (IIT does see feedback as crucial, but I think this paper was written before later versions of IIT used the Φmax postulate to rule out talk of pieces of the system being conscious.)

Lamme points out that if recurrent processing leads to conscious experience, then that puts consciousness on strong ontological ground, and makes it easy to detect.  Just look for recurrent processing.  Indeed, a big part of Lamme’s argument is that we should stop letting introspection and report define our notions of consciousness and should adopt a neuroscience centered view, one that lets the neuroscience speak for itself rather than cramming it into preconceived psychological notions.

This is an interesting theory, and as usual, when explored in detail, it turns out to be more plausible than it initially sounded.  But, it seems to me, it hinges on how lenient we’re prepared to be in defining consciousness.  Lamme argues for a version of experience that we can’t introspect or know about, except through careful experiment.  For a lot of people, this is simply discussion about the unconscious, or at most, the preconscious.

Lamme’s point is that we can remember this local recurrent processing, albeit briefly; therefore it was conscious.  But this defines consciousness as simply the ability to remember something.  Is that sufficient?  This is a philosophical question rather than an empirical one.

In considering it, I think we should also bear in mind what’s absent.  There’s no affective reaction.  In other words, it doesn’t feel like anything to have this type of processing.  That requires bringing in other regions of the brain which aren’t likely to be elicited until stage 4: the global workspace.  (GWT does allow that it could be elicited through peripheral unconscious propagation, but it’s less likely and takes longer.)

It’s also arguable that considering the sensory regions alone outside of their role in the overall framework is artificial.  Often the function of consciousness is described as enabling learning or planning.  Ryota Kanai, in a blog post discussing his information generation theory of consciousness (which I highlighted a few weeks ago), argues that the function of consciousness is essentially imagination.

These functional descriptions, which often fit our intuitive grasp of what consciousness is about, require participation from the full cortex, in other words, Lamme’s stage 4.  In this sense, it’s not the locations that matter, but what functionality those locations provide, something I think Lamme overlooks in his analysis.

Finally, similar to IIT’s Φ issue, I think tying consciousness only to recurrent processing risks labeling a lot of systems conscious that no one regards as conscious.  For instance, it might require us to see an artificial recurrent neural network as having conscious experience.

But this theory highlights the point I made in the post on the Michael Cohen study, that there is no one finish line for consciousness.  We might be able to talk about a finish line for short term iconic memory (which is largely what RPT is about), another for working memory, one for affective reactions and availability for longer term memory, and perhaps yet another for availability for report.  Stage 4 may quickly enable all of these, but it seems possible for a signal to propagate along the edges and get to some of them.  Whether it becomes conscious seems like something we can only determine retrospectively.

Unless of course I’m missing something?  What do you think of RPT?  Or of Lamme’s points about the problems of relying on introspection and self report?  Should we just let the neuroscience speak for itself?

Is there a conscious perception finish line?

Global workspace theory (GWT) is the proposition that consciousness is composed of contents broadcast throughout the brain.  Various specialty processes compete for the limited capacity of the broadcasting mechanisms, to have their content broadcast to all the other specialty processes.

Global neuronal workspace (GNW) is a variant of that theory, popularly promoted by Stanislas Dehaene, which I’ve covered before.  GNW is more specific than generic GWT on the physical mechanisms involved.  It relies on empirical work done over the years demonstrating that conscious reportability involves wide scale activation of the cortex.

One of the observed stages is a massive surge about 300 milliseconds after a stimulus, called the P3b wave.  Previous work seemed to establish that the P3b wave is a neural correlate of consciousness.  Dehaene theorized that it represents the stage where one of the signals achieves a threshold and wins domination, with all the other signals being inhibited.  Indeed, the distinguishing mark of the P3b is its massive amplitude, with much of the underlying activity said to come from inhibitory action.
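
For the curious, detecting something like the P3b is conceptually simple: average many stimulus-locked EEG epochs so the random background activity cancels, then measure the window around 300 milliseconds.  A bare-bones sketch (the sampling rate and window are assumptions, and real pipelines add filtering and artifact rejection):

```python
import numpy as np

FS = 1000  # sampling rate in Hz (assumed)

def evoked_response(epochs: np.ndarray) -> np.ndarray:
    """epochs: a trials x samples array, each row aligned to stimulus
    onset.  Averaging cancels activity that isn't stimulus-locked,
    leaving the event-related potential (ERP)."""
    return epochs.mean(axis=0)

def p3b_amplitude(erp: np.ndarray, window=(0.25, 0.45)) -> float:
    """Mean amplitude in a window bracketing 300 ms post-stimulus."""
    start, stop = (int(t * FS) for t in window)
    return float(erp[start:stop].mean())
```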

The P3b has been replicated extensively and been seen as a pretty established phenomenon associated with attention and consciousness.  But this is science, and any result is always provisional.  Michael Cohen and colleagues have put out a preprint of a study that may demonstrate that the P3b wave is not associated with conscious perception, but with post perceptual processing.

The study tests the perception of subjects, showing various images while measuring their brain waves via EEG.  It uses a no-report protocol: in half of the tests, the subjects were asked to report on whether they saw something; in the other half, they were not.  Crucially, the P3b wave only manifested in the report cases, never in the non-report ones, even when the non-report images were exactly the same as the ones that did generate affirmative reports.

(Image from the study, showing the P3b wave present in the report condition and absent in the no-report condition: https://www.biorxiv.org/content/10.1101/2020.01.15.908400v1.full)

To control for the possibility that the subjects weren’t actually conscious of the image in the non-report cases, the subjects were given a memory test after a batch of non-report events, checking to see what they remembered perceiving.  Their memories of the perception correlated with the results in the report versions.

So, the P3b wave, a major pillar of GNW, may be knocked down.  The study authors are careful to make clear that this does not invalidate GWT or other cognitive theories of consciousness.  They didn’t test for all the other ways the information may have propagated throughout the cortex.  Strictly speaking, it doesn’t even invalidate GNW itself, but it does seem to knock out a major piece of evidence for it.

However, this is a more interesting discussion if we ask, what would it mean if all cortical communication beyond the sensory regions were ruled out, that the ability to acquire a memory of a sight only required the local sensory cortices?  It might seem like a validation of views like Victor Lamme’s local recurrent processing theory, which holds that local processing in the sensory cortices is sufficient for conscious perception.

But would it be?  Dehaene, when discussing his theory, is clear that it’s a theory of conscious access.  For him, something isn’t conscious until it becomes accessible by the rest of the brain.  Content in sensory cortices may form, but it isn’t conscious until it’s accessible.  Dehaene refers to this content as preconscious.  It isn’t yet conscious, but it has the potential to become so.

In that view, the content of what the subjects perceived in the non-report tests may have been preconscious, unless and until their memories were probed, at which point it became conscious.

This may be another case where the concept of consciousness is causing people to argue about nothing.  If we describe the situation without reference to it, the facts seem clear.

Sensory representations form in the local sensory cortex.  A temporary memory of that representation may persist in that region, so if probed soon enough afterward, a report about the representation can be extracted from it.  But until there is a need for a report or other usage, it is not available to the rest of the system, and none of the activity, including the P3b, normally associated with that kind of access is evident.

This reminds me of Daniel Dennett’s multiple drafts theory (MDT) of consciousness.  MDT is a variant of GWT, but minus the idea that there is any one event where content becomes conscious.  It’s only when the system is probed in certain ways that one of the streams, one of the drafts, becomes selected, generally one that has managed to leave its effects throughout the brain, that has achieved “fame in the brain.”

In other words, Dennett denies that there is any one finish line where content that was previously unconscious becomes conscious.  In his view, the search for that line is meaningless.  In that sense, the P3b wave may be a measure of availability, but calling it a measure of consciousness is probably not accurate.  And it’s not accurate to say that Lamme’s local recurrent processing is conscious, although it’s also not accurate to relegate it completely to the unconscious.  What we can say is that it’s at a particular point in the stream where it may become relevant for behavior, including report.

Maybe this view is too ecumenical and I’m papering over important differences.  But it seems like giving up the idea of one finish line for consciousness turns a lot of theories that look incompatible into models of different aspects of the same overall system.

None of this is to say that GWT or any of its variants might not be invalidated at some point.  These are scientific theories and are always subject to falsification on new data.  But if or when that happens, we should be clear about exactly what is being invalidated.

Unless of course I’m missing something?