Postdictive perception and the consciousness finish line

(Warning: neuroscience weeds)

Recently I noted that one of the current debates in cognitive science is between those who see phenomenal and access consciousness as separate things, and those who see them as different aspects of the same thing.  Closely related, perhaps actually identical, is the debate between local and global theories of consciousness.

[Figure: Diagram of the brain showing sensory and motor cortices. Image credit: Blausen.com staff (2014), “Medical gallery of Blausen Medical 2014”, WikiJournal of Medicine 1(2), DOI:10.15347/wjm/2014.010, ISSN 2002-4436, via Wikipedia.]

Local theories tend to see processing in sensory cortices (visual, auditory, etc.) as sufficient for conscious perception of whatever sensory impression they’re processing.  One example is micro-consciousness theory, which holds that processing equals perception.  So the formation of a neural image map in sensory cortices equals consciousness of that image.

(A lot of people seem to hold this processing equals perception idea intuitively.  It fits with ideas from people like Jaak Panksepp or Bjorn Merker.)

A more sophisticated version is Victor Lamme’s local recurrent processing theory, which adds the requirement of recurrent processing in the sensory cortex: processing that involves feedforward signalling as well as feedback signalling in loops.  I discussed local recurrent theory a while back.

Global cognitive theories require that content have large-scale effects throughout the brain to become conscious.  Similar to Lamme’s theory, they often see recurrent processing as a prerequisite, but require that it span regions throughout the thalamo-cortical system, either to have wide-scale causal effects (global workspace theories) or to reach certain regions (higher-order thought theories).

I mentioned that the local vs global debate may be identical to the phenomenal vs access one.  This is primarily because local theories only make sense if phenomenal consciousness is happening in the local sensory cortices independent of access consciousness.  That processing is inherently pre-access, before the content is available for reasoning, action, and report.

The latest shot in this debate is a paper by Matthias Michel and Adrien Doerig, which will be appearing in Mind & Language: A new empirical challenge for local theories of consciousness.

Michel and Doerig look at a type of perception they call “long-lasting postdiction”.  An example of postdiction is when subjects are shown a red disk followed in rapid succession by a green disk 20 ms (milliseconds) later, resulting in perceptual fusion, where the subject perceives a yellow disk.  This is an example of short-lasting postdiction.  In order for the fusion to occur, the red image needs to be processed, and then the green one, and then the two fused.
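To make the integration idea concrete, here’s a minimal toy sketch (my own illustration, not a model from the paper): it treats the fused percept as an average over stimuli falling within a hypothetical temporal integration window.  The window length and the RGB coding are made-up parameters for illustration only.

```python
# Toy illustration (not the authors' model): perceptual fusion treated as an
# average over stimuli whose onsets fall within a hypothetical integration window.
# The window length and RGB values are assumptions for illustration only.

INTEGRATION_WINDOW_MS = 50  # hypothetical; the real window is an empirical question

# (onset in ms, RGB color) for the two briefly flashed disks
stimuli = [
    (0, (255, 0, 0)),   # red disk
    (20, (0, 255, 0)),  # green disk, 20 ms later
]

def fused_percept(stims, window_ms=INTEGRATION_WINDOW_MS):
    """Average all stimuli whose onsets fall within one integration window."""
    within = [rgb for onset, rgb in stims if onset < window_ms]
    return tuple(sum(channel) / len(within) for channel in zip(*within))

print(fused_percept(stimuli))  # (127.5, 127.5, 0.0) -- a yellowish mix, not red then green
```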

Short-lasting postdiction could represent a problem for micro-consciousness theory, since it results in the formation of image maps for each image in rapid succession, but not in a way that leads to each being consciously perceived.  (It’s not clear to me this is necessarily true.  I can see the two types of processing maybe simply bleeding into each other.  And see below for one possible response from the micro-consciousness camp.)

Short-lasting postdictions are less of a problem for local recurrent theory, because it takes more time for the recurrent processing to spin up.  (The paper has a quick but fascinating discussion of the time it takes for information to get from the retina to the cortex and then to various regions, along with citations I may have to check out.)

It’s the long-lasting postdictions that are a problem for local recurrence.  The paper discusses images of pairs of vernier lines that are shown to test subjects.  The images are in various positions, changing every 50 ms or so across a period of 170-450 ms, resulting in a perception of the lines moving.

There are variations where one of the vernier lines is skewed slightly, but only in the first image, resulting in the subject perceiving the skew for the entire moving sequence, even though the verniers in the later images are aligned.  Another variation has two images, one early in the sequence and one toward the end, skewed in opposite directions, resulting in the two skewed images being averaged together and the averaged shape being perceived throughout the sequence.

The main takeaway is that the conscious perception of the sequence appears to be formed after the sequence.  Given the relatively lengthy sequence time of up to 450 ms, this is thought to exceed the time it takes for local recurrent processing to happen, and bleeds over into the ignition of global processing, representing a challenge for local recurrent theory.
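As a rough illustration of that takeaway, here’s a toy sketch of a postdictive account (my own, not the experimental analysis) in which the reported vernier offset is simply an average over the whole sequence, formed only once the sequence ends.  The offset values and frame counts are invented numbers, loosely matching the ~50 ms frames described above.

```python
# Toy sketch (an illustration under invented numbers, not the paper's analysis):
# the reported vernier offset is modeled as the average offset across the whole
# sequence, i.e. a percept formed only after the sequence ends.

# Variation 1: only the first frame is skewed (offsets in arbitrary units).
seq_first_skewed = [1.0, 0.0, 0.0, 0.0, 0.0]

# Variation 2: one early and one late frame skewed in opposite directions.
seq_opposite_skews = [0.0, 1.0, 0.0, 0.0, -1.0]

def postdictive_offset(offsets):
    """Percept formed after the sequence: the frame offsets are integrated (averaged)."""
    return sum(offsets) / len(offsets)

def frame_by_frame_offsets(offsets):
    """What a strictly frame-by-frame (local) account might predict: one percept per frame."""
    return offsets

print(postdictive_offset(seq_first_skewed))    # 0.2: a single (diluted) skew attributed to the whole sequence
print(postdictive_offset(seq_opposite_skews))  # 0.0: the two opposite skews are averaged together
```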

The authors note that local theorists have two possible outs.  One is to say that, for some reason, the relevant local processing actually doesn’t happen in these sequences.  This would require identifying some other mechanism that unconsciously holds the intermediate images.  The other is to say that the images are phenomenally experienced, but then subsequently replaced in the access stage.  But this would result in phenomenal experience that has no chance of ever being reportable.  (What type of consciousness are we now talking about?)  And it would make local theories extremely difficult to test.

Interestingly, the authors note that the longer time scales may need to be reconciled with the ones in cognitive theories, such as the global neuronal workspace, which identifies 300 ms as a crucial point in conscious perception.  In other words, while this is an issue for local theories, it could be one even for global theories.

All of this reminds me of the phi illusion Daniel Dennett discussed in his book Consciousness Explained.  He describes this illusion in the build-up to discussing his own multiple drafts model, a variation of global workspace theory.  Dennett’s variation is that there’s no absolute moment when a perception becomes conscious.  His interpretation of the phi illusion, which seems like another case of postdiction, is that there is no consciousness finish line.  We only recognize that a perception is conscious retroactively, when it achieves “fame in the brain” and influences memory and report.

Anyway, I personally think the main flaw with local theories is that the processing in question is too isolated, too much a fragment of what we think of as an experience, which usually includes at least the sensory perception and a corresponding affective feeling.  The affect part requires that regions far from the sensory cortices become stimulated.  Even if the localists can find an answer to the postdiction issue, I think this broader one will remain.

Unless of course I’m missing something.

34 thoughts on “Postdictive perception and the consciousness finish line”

    1. Each level represents someone’s view of consciousness. Myself, I’m not inclined to call anything without at least level 3 conscious. But consciousness is in the eye of the beholder.

      Blindsight is an interesting case, because there are sensory images involved, just not accessible ones. So it basically resembles level 2 type processing.

      Just for reference for anyone not familiar with it:

      0. A system is part of the environment and is affected by it and affects it.
      1. Reflexes and fixed action patterns.
      2. Perception: models of the environment based on distance senses, expanding the scope of what the reflexes react to.
      3. Volition: action selection, choosing which reflexes to allow or inhibit, based on valenced assessment of cause and effect prediction.
      4. Imagination: expansion of 3 into action-sensory scenario simulations.
      5. Introspection: predictions of the system about itself, enabling symbolic thought.

      The higher up in the hierarchy we go, the smaller the number of systems that exhibit it. 0 represents all matter, 1 all living things. 2-4 apply to increasingly smaller numbers of animal species, and 5 to humans.


    1. Good question. I don’t know if that’s known. We know color processing takes place in V2 and V4 of the visual cortex. But where might the fusion happen? The effect described is so rapid that I actually think it could happen in those regions, but that’s speculation.


      1. I ask because 20 ms is 50 Hertz, which is the field rate of PAL, the European TV standard. (Our rate is 60 Hertz.) Your post doesn’t mention how long the red or green disk stood alone, but if it was a 20 ms red followed by 20 ms of nothing and then 20 ms of green, I’d fully expect those to fuse for the same reason we see motion pictures as motion.

        There was even an early TV standard that used B&W along with a rotating RGB color wheel. So the viewer was really watching a succession of red, green, and blue images, not RGB pixels as we have now. But that old method worked.

        My uninformed intuition ( 😉 ) is that this persistence of vision thing seems to be a feature of the low-level processing and has nothing to do with consciousness, so I was wondering about the connection.


        1. It doesn’t sound like there was a blank gap. From the paper:

          When a red disk is followed by a green disk after 20ms, participants report perceiving a single yellow disk, and no red or green disk at all (Figure 1a; Efron, 1967; Pilz et al., 2013). This is a postdictive effect. Both the red and green disks are required to form the yellow percept. The visual system must store the representation of the first disk until the second disk appears to integrate both representations into the percept that subjects report having.

          It’s definitely a feature of low-level processing in the visual cortex. But the point is that we’re not conscious of the intermediate states in that processing. The micro-consciousness theory apparently predicts that we should be. Although that seems odd, since there’s a ton of empirical data that we’re not conscious of an image until it’s been there for at least 40-50ms.


          1. It’s hard to believe any part of our minds is truly conscious of TV frames (let alone fields). It would also be conscious of fluorescent lights (and now LEDs) going on and off.

            Seeing yellow is exactly what that old TV system did. First a flash of red, then a flash of green. Voila, you see yellow.


          2. I agree. Micro-consciousness does seem like a dubious proposition. I haven’t read Semir Zeki’s arguments yet, but I suspect they’ll be based on a particular conception of consciousness, similar to Lamme’s.


          3. Skimmed one of Zeki’s papers.

            He posits a three-tier consciousness system.
            1. Micro-consciousness: what happens in a particular region, such as color perception in V4 or motion perception in V5.
            2. Macro-consciousness, involving the attribute binding between those sensory regions.
            3. Unity-consciousness, the integrated whole.

            A global cognitivist would only be tempted to use the label “consciousness” for the third tier, particularly since most of what happens in 1 or 2 will never make it into 3.


          4. As you know, I tend towards the view that consciousness is loud and attests to itself. A healthy active consciousness can answer questions. (Or tell you to buzz off and mind your own bidness. 🙂 )


          5. Maybe I’m not understanding the local theory correctly, but I wouldn’t think the fact that there is a certain temporal element to processing has anything to do with whether consciousness is local or not. I would think that there is a required time for neurons to sync up to register something conscious, but that would apply no matter where the neurons are located or how micro or macro they are. I’m not arguing either for or against the theory. Just that the test doesn’t seem to prove anything.

            “In order for the fusion to occur, the red image needs to be processed, and then the green one, and then the two fused”.

            Doesn’t a lot in that statement depend upon what exactly we mean by “processed”? Are circuits firing processing? Or is processing what happens when a sufficient number of circuits fire in sync? If it is sufficient numbers in sync, then processing really is the fusing.


          6. I think the issue is the timeline. Per Lamme’s paper:
            Stage 1: local feedforward processing in sensory cortex: first 100 ms
            Stage 2: feedforward signals are transmitted globally throughout the cortex: 100-200 ms
            Stage 3: local recurrent processing: 100-200 ms
            Stage 4: global recurrent processing (global workspace): 300+ ms

            The idea is that these image sequences run up to 470ms. And people’s perception of the sequence seems to be formed after the sequence is complete. If they were conscious of whatever was happening in local sensory cortex by 100-200 ms, they should have noticed the differences between the early and late parts of the sequence, but they don’t. Instead they fuse them.

            As noted, the longest time frame could potentially even be an issue for global theories. Although as I noted in the post, this would be right up the alley of Dennett’s multiple drafts model.
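            To lay that timing argument out explicitly, here’s a small sketch (my own, not from the paper; the stage windows are the approximate numbers listed above, and 450 ms is the sequence duration from the post):

            ```python
            # Rough timing comparison (my own sketch; stage windows are the approximate
            # numbers quoted in this thread, and 450 ms is the longest sequence from the post).

            stages_ms = {
                "Stage 1: local feedforward (sensory cortex)": (0, 100),
                "Stage 2: global feedforward broadcast": (100, 200),
                "Stage 3: local recurrent processing": (100, 200),
                "Stage 4: global recurrent processing (workspace)": (300, None),  # 300+ ms, open-ended
            }

            SEQUENCE_MS = 450  # longest vernier sequence discussed in the post

            for stage, (start, end) in stages_ms.items():
                if end is not None and end < SEQUENCE_MS:
                    status = "window closes before the sequence ends"
                else:
                    status = "still in play when the sequence ends"
                print(f"{stage}: {status}")

            # If the percept of the sequence is only settled after ~450 ms, the stages whose
            # windows close by ~200 ms can't, on their own, be where that percept is fixed.
            ```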


          7. Why does consciousness have to happen at 100-200 ms instead of 470 ms for the local theory? It could still be local at 470 ms. It seems like you are reasoning backward from a global assumption. Just because something is happening at 100-200 ms doesn’t mean it is conscious.

            Of course, EMF theories probably wouldn’t have any problem whatsoever. All of the processing is unconscious up to the syncing of the neurons then consciousness merges all aspects of experience pretty much simultaneously when sufficient numbers of neurons fire. The experience would be across the brain globally with integrated local elements.


          8. BTW, these temporal delays point to the whole problem with explaining consciousness by neural circuits alone. How does global broadcasting work if it has to manage delays in the neural circuits to propagate itself? How do we sync neurons – quite likely never the identical set of neurons but an ever-changing set of them – when what is being synced is located at different distances and passes through a variable number of connections?


          9. A core prediction of local recurrent processing theory is that recurrent processing is sufficient for consciousness. And both Lamme and everyone else seem to agree that it starts around 100-150 ms after the stimulus. So if that prediction is true, then why can’t the subjects distinguish the differences between the earlier images in the 450 ms sequence and the later ones? Why do later variations appear to affect their perception of the beginning of the sequence? The most straightforward explanation seems to be that the conscious perception is formed after the sequence, or at least very late in it.

            I don’t really know enough about EMF theories to comment much. But I do have a question. If a central reason for positing those theories is how slow neural signalling is, why doesn’t all the evidence that our perceptions and thinking are also slow obviate that reason? It seems like EMF theorists are searching for a problem for their preferred solution.


          10. I would think the conscious experience would be a merging of all of the processing into a “best fit” based on current input and prior experience.

            Regarding your questions about EMF theories, I’m not sure I am following. EMF theories, I think, arise in part from the observation that conscious experience seems to occur when large numbers of neurons fire synchronously. When large numbers of neurons fire at the same time, the EM field is magnified and neurons on the edge of firing get pushed over the edge into actually firing, hence magnifying the numbers. I think I compared this by analogy to a vibrating tuning fork located near other tuning forks that sets up feedback and resonance with various levels of harmonics. Since the EM field propagates very quickly (light speed?), there is no issue with coordinating different neural circuits through relatively slower chemical transmission.

            To be clear, I think the electrochemical transmission is essential for the brain and consciousness because it enables parallel processing, whereas consciousness itself is single-threaded, and because, evolutionarily, the whole thing needed to develop from reflexes and other simple reactive processing. The EM field began to play a role as brains got bigger and the need arose for complex learning and coordination across brain areas.


          11. The tuning fork is an interesting example since it works on sound waves. Those travel around 343 meters per second at sea level, which is faster than the 70-120 meters per second nerve impulses travel, but still within the same order of magnitude.
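            For a rough sense of scale, here’s my own back-of-the-envelope arithmetic (the ~15 cm path length is an assumption for illustration, not a number from the thread):

            ```python
            # Back-of-the-envelope arithmetic (my own; the 15 cm path length is an assumption):
            # time to traverse a rough across-the-brain distance at the speeds mentioned above.

            distance_m = 0.15  # assumed path length for illustration

            speeds_m_per_s = {
                "sound in air (~343 m/s)": 343,
                "fast nerve impulse (~120 m/s)": 120,
                "slower nerve impulse (~70 m/s)": 70,
            }

            for label, speed in speeds_m_per_s.items():
                time_ms = distance_m / speed * 1000
                print(f"{label}: ~{time_ms:.2f} ms over {distance_m * 100:.0f} cm")

            # ~0.44 ms vs ~1.25-2.14 ms: different, but within roughly an order of magnitude.
            ```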

            When I look at a raw EEG track, although various regions do synchronize their activity, the synchronicity isn’t perfect. There appears to be room to factor in transmission delays.

            It’s worth noting that the bindings in a conscious percept are highly selective. Most of the neurons in any particular region are actually being inhibited, except for the relevant ones. The excited circuits are right by the inhibited ones; they’re actually entangled. The circuits competitively inhibit each other through lateral inhibition.

            An electromagnetic field probably does perturb those situations, but since most neurons are being inhibited, it seems like the resulting field should actually make neurons less likely to fire. The P300 wave is labeled “P” because of its positive amplitude, which is associated with inhibition. There are “N” waves where the perturbation might be the opposite, but it doesn’t seem like it would be anything precise or targeted.


          12. The tuning fork is an analogy. EM waves travel at the speed of light.

            EEGs are only crude brain surface measurements of electrical flows, not EM fields.


  1. In your previous discussion of local recurrent theory, you mentioned that you think affective reaction should be an additional requirement for consciousness. I agree with that, so I don’t favor Lamme’s theory as it stands.

    That said, I don’t see the experiment as decisive. A conscious opinion can be revised and overwritten with an incompatible conscious opinion, and verbal reports won’t necessarily show that this happened. I could mention a certain President in this connection. Of course, in his case we have the earlier verbal reports on videotape, but there was more than 500 milliseconds for the change of mind in those cases.


    1. We do forget things we’ve been conscious of, and people who may be in mental decline are prone to forget a lot.

      But ultimately the only standard we have for consciousness is when people communicate about it. We can use behavior as an indicator, but only once a similar stimulus or activity is consistently associated with language report.

      Once we start saying that people are conscious of something but in a way that they’ll never be able to report, it makes it an extremely difficult proposition to test. I won’t say impossible, because people have been ingenious in figuring out ways to test things, but it’s not clear how it could be done. It threatens to make the claim unfalsifiable.


        1. I guess the question is, what exactly would they be testing for? We’re talking about a supposed conscious experience that disappears before it can have any effect on affective, memory, reasoning, or motor systems. You have to wonder what type of consciousness this is supposed to be.

          In the case of split brain patients, all those components are there in each hemisphere.


          1. Well since I already agreed with you that affect has to be affected (nice alliteration there) for consciousness … next I would point out that affect can impact reasoning and motor systems.


          2. Two things:

            1. You suggest a problem with an experience that has no effect on affect, memory, reasoning, or motor systems. Would it be a problem if it has an effect on exactly one of these and not the others? Say, motor systems only? [which would look a lot like level 1 reflex]

            2. Mike and Paul (I think) suggest that affective response is required for consciousness. So when you are sitting in a darkish room and look at a bright window for a bit, then close your eyes, you see an afterimage of the window. Does that have an affective response?

            *


          3. First, my view is that there is no fact of the matter about when consciousness begins. Consciousness is in the eye of the beholder. We each must choose when we consider the necessary and sufficient attributes to be present for minimal consciousness.

            My personal inclination is to draw the line at affective feelings (level 3 in the hierarchy I often present): an impulse the system can try to satisfy with selected actions. Prior to that, we don’t have a system that is a subject of moral concern. After that, assuming the affects are calibrated toward self-concern and actualization, we do. (Or at least many of us are inclined to view it as so.)

            (Note that it’s very difficult to be consistent on this, and I don’t claim I always will be. For instance, it’s hard not to see a patient with akinetic mutism as conscious, even though their ability to feel affects is either mostly or completely diminished.)

            Now, you could choose to draw the line at sensory image maps (level 2). If you do, I can’t tell you that you’re wrong. Many people do draw it there, although most of them won’t then admit that the position may commit them to seeing a self-driving car as conscious. Based on past conversations, I don’t think that’ll be an issue for you. All I can say is that it doesn’t rise to my level.

            On the darkish room and bright window, I do think that experience, at least for a mentally complete human, inevitably has affective feelings. They may not be strong ones, but I think they’re definitely there. For some, the bright window may bring a feeling of relief, for others annoyance, all depending on the person’s innate nature and experiences.


          4. I accept your position on no fact of the matter, but you’ve stated your own inclination and so I started poking at that. And based on your reply it seems the driver of your choice is morality. Apparently affect is the trigger of that moral concern, and so it becomes part of your requirements, which makes consideration of having every feature of consciousness except affect problematic.

            As for the window, I was specifically referring to the afterimage. I can see gazing at the window as instigating some affect, but I’m having trouble getting affect from contemplating the afterimage. Even with squinting and looking sideways.

            *


          5. On consideration of the other features, yeah. Like I said, I can’t claim rigorous consistency. I don’t think the amorphous concept of consciousness really allows it.

            On the afterimage, consider this. What leads you to contemplate it? Why do you pay any attention to it? Even if the answer is just curiosity, that’s an affect. (Under Jaak Panksepp’s scheme, it would fall under SEEKING.)


  2. Whether a content of experience, exteroceptive or interoceptive in origin, reaches a principled threshold for “remarkability” depends not so much on any intrinsic stimulus feature of the content itself as on the pre-cognitive “set” of the subject of experience—on the background dispositional landscape of his chronic and acute affective and cognitive “constructs”/concerns. “Self-Aware Patterns” believes that there is no fact of the matter about when consciousness begins. I think that there could well be a fact of the matter about when a threshold for remarkability of a content “could” begin—always in relation to prior precipitating conditions. What it is to remark a content is merely to note it, not necessarily to report it. The valencies of the background determine the degree of influence of the noted content on memory, reasoning and action. Now then, onto the phenomenology of experience. Whoops, sorry, I have some cat-grooming to do just now.


    1. Jeff,
      It sounds like you’re talking about a threshold for attention. But attention is a multi-level process in the brain. Broadly speaking, there is bottom up and top down attention.

      Bottom up is like when you focus on the sensation of a spider crawling up your leg. Notably, our mind could be on something else, even though our reflexive or habitual systems are attending to a stimulus, such as us absent-mindedly scratching an itch, or driving to work while thinking about something else. And it can be affected by all the factors you discuss.

      Top down attention is the executive system exerting control over attention, such as you deciding to read this comment, even if you’re currently in a noisy room. It’s loading the dice on what wins the competition.

      Of course, it’s more complicated than that. There isn’t really a clean divide between bottom up and top down, but rather a spectrum between reflexes, habits, and progressively more sophisticated reasoning. How much has to be there for consciousness?

      Sounds like your attention had to switch to grooming the cat. I won’t ask if it was bottom up or top down. 😉


  3. Hey Mike.
    While I have almost no doubt that conscious experience comes from the brain, I still do think there is a hard problem.
    Do you have a blogpost where you talk about the HP extensively?
    Note that even if we cannot explain consciousness, that does not make the case for what New Agers/mystics/religious… people say.


        1. Hi Ilyass. I noticed you followed me. Like Mike I write about consciousness a fair bit (not as much as Mike does; you may have already noticed I wander around more 🙂 ). If you’re looking for posts on the hard problem, you can try The Meta-Problem.

