The unproductive search for simple solutions to consciousness

(Warning: neuroscience weeds)

Earlier this year I discussed Victor Lamme’s theory of consciousness: that phenomenal experience is recurrent neural processing, that is, neural signalling that happens in loops, from lower layers to higher layers and back, or more broadly from region to region and back.  In his papers, Lamme notes that recurrent processing is an aspect of global theories, but he sees no reason why consciousness should require global activity, and so argues that local recurrent processing in the sensory cortices should be sufficient for phenomenal conscious perception.

That post pointed out functional reasons to see the global activity as necessary, mainly related to affects, and argued that without them, what remains seems like a fragment of an experience at best.  I also saw issues with such a simple definition of consciousness, pointing out that it would require us to regard an artificial neural network with recurrent processing as conscious.  Still, it’s an interesting theory.

Just to recap: the initial processing of an incoming sensory stimulus is feedforward, first locally in the sensory region, then globally throughout the cortex.  It’s well established that this produces unconscious perception.  This may be followed by local recurrent processing (after about 100 ms) and later by global recurrent processing.  Global theories see the final stage as conscious.  Lamme sees the local recurrent one as conscious.

A new paper in Neuroscience of Consciousness seems to provide empirical evidence that contradicts the theory: Causal manipulation of feed-forward and recurrent processing differentially affects measures of consciousness.

The study used TMS (transcranial magnetic stimulation) to disrupt processing in the visual cortex at various stages of perception.  Disrupting it early, in the initial local feedforward stage, had the expected effect of disrupting unconscious perception.  But it also disrupted conscious perception.  The authors note that an advocate of local recurrence theory would actually expect this, because the initial feedforward stages are still necessary prep work.

However, disrupting the later processing, in the local recurrent stages, didn’t have the expected effect on conscious perception.  In other words, conscious perception seemed more dependent on the early processing than on the later, locally recurrent processing.

The later disruption did, however, seem to affect the subjects’ judgment of whether they had consciously perceived the stimulus, which is interesting.  The authors discuss how this might be consistent with predictive coding theories.

But the main takeaway seems to be that disruption later in the process, during local recurrent processing, didn’t have the effects on phenomenal experience that Lamme’s theory predicts.  The authors further note that Ned Block’s association of an independent phenomenal consciousness with local recurrent processing is likewise unsupported, although they acknowledge there may be interpretations Block could adopt to reconcile his view with the findings.

I don’t find this particularly surprising.  Equating phenomenality directly with recurrent processing seems too simple.  It’s a common strategy to try to equate consciousness with some aspect or effect of neural processing.  But trying to do so without an understanding of the causal role, if any, these phenomena play is like trying to understand a car motor by focusing on the timing of the spark plugs.  The timing might be an important component of how the motor works, but it’s only a fragment.  At best, in and of themselves, these phenomena might be indicators of consciousness.

These results don’t seem to challenge global theories, because the disruptions were only in the visual cortex.  It seems clear the early feedforward signalling could still have ignited the necessary global processing (recurrent and otherwise).

But it seems like another strike against local theories.  Unless of course I’m missing something.

57 thoughts on “The unproductive search for simple solutions to consciousness”

  1. One of the main roles of theories is to guide, aka suggest, experiments. So far, so good. Conclusions? Not in order, yet.

    Granted, other areas in science show people doing the equivalent of flogging a dead horse (cold fusion, steady state cosmology, etc.) long after most people have given up, but that is a natural and very human trait. The idea is that we should expend our resources in areas that will prove to be the most fruitful … we just don’t know what those are yet, so more than a little stumbling around is to be expected.

    We have addressed the issue philosophically for thousands of years and scientifically for part of a century, but effectively only for a few years. The tools for effective research just weren’t available. But now they are and it is an exciting time to be alive!

    Stay tuned! (Somehow I think you will!)

  2. I need to go read that paper before I comment further, but did you see this paper:
    https://psyarxiv.com/hjp3s/
    from Lau and Michel? Their point is that you should not conflate correlates of consciousness with markers of consciousness, and you should not conflate markers of consciousness with constituents of consciousness. A correlate, if I understand them correctly, is just something that happens at the same time, but could be just an effect, as opposed to a cause. A marker is essentially an example of consciousness, but should not be taken to define the parameters of consciousness. So, we might identify a set of neurons that performs a conscious process, but we should not then assume consciousness requires neurons. The constituent of consciousness is that which makes it consciousness, as H2O is that which makes it water.

    So, as you say, most of the neuroscience and current theories are looking at indicators, as opposed to the causes of consciousness. I think you know where I’m gonna go with this, but I’m gonna read that paper now and get back.

    *

    1. I found this paper tough to parse. Maybe you’ll have an easier time with it and come away with conclusions I missed.

      I have seen that Lau and Michel paper, and did a post on it a while back. https://selfawarepatterns.com/2019/06/24/empirical-vs-fundamental-iit-and-the-benefits-of-instrumentalism/
      My overall take on it is that until you have an a priori justification of a posteriori knowledge, that is, until you have a causal theory that explains the observations, you don’t have a complete theory yet. And as I’ve noted before, I’m pretty skeptical there will be any one theory that explains everything that triggers our intuition of consciousness. I’m expecting something like the vast collection of theories involved with microbiology.

      1. [I, too, found it difficult to figure out what was going on in this paper. See my comment under James Cross’ thread]

        So I am going to accuse you (J’accuse!) of being overly persuaded by the indicators (or the correlates and markers, as Lau and Michel would say) of consciousness without trying to identify the constituents of consciousness. As you will say, everyone has their own indicators, but they’re not looking for the constituents. I don’t think anyone thought it was intuitive that water was a combination of hydrogen and oxygen. Similarly, the constituents of consciousness are unlikely to trigger the intuitive indicators that people are using.

        I have to admit that I read the TMS paper in the context of my own understanding, checking to see if anything conflicted, but I did not find anything that conflicts with said understanding.

        *

        1. So, give me details on your accusation. Where specifically do you see me being overly persuaded by indicators / correlates / markers?

          Your comments about everyone having their own indicators seem to indicate you think there’s a fact of the matter in terms of the constituents. I take it your answer is psychules / symbolic representations / mutual information?

          1. So it seems to me that your own requirement for consciousness is access and reportability. Access because that allows imagination, arbitrary consideration, etc. Reportability because you want to confine consciousness to one per person, and the most obvious candidate is “the autobiographical self”, the system that can attach memories to concepts and concepts to words. But this system only has access to some memories, specifically those that constitute “working memory”. If there are other systems that can access other memories (memories generated by early processing but not filtered through attention mechanisms), and those systems can use them for things like avoiding obstacles, then those systems are not conscious, even though they are doing approximately similar things, because the conscious-like things they are doing are not available to the one system which “counts”, the autobiographical self.

            So yes, *all* of these things can be described in terms of psychules/symbolic representations/mutual information, including the feed-forward sweep, local recurrence, predictive processing, global broadcast, concept formation, arbitrary concept combination (needed for imagination), integration of information, higher order thoughts, etc.

            Okay, have at me.

            *

          2. Well, as I’ve often said, consciousness lies in the eye of the beholder. I can’t necessarily tell you that your conception is the wrong one. All we can do is explore the logical consequences of it.

            You say these systems are doing approximately the same thing. But is that true? Can these systems integrate sensory and affective reactions for use in reasoning? Can they then use that as a foundation for simulating alternate futures? Do they have episodic memory as opposed to just semantic and procedural memories? Can they reflect on their own experience?

            You might say that those things aren’t necessary for your version of consciousness. Okay, but can you see how our very concept of consciousness depends on them? When we use the word “consciousness”, we tend to project our own experience on whatever we’re talking about, view it as like us to at least some degree. But how much are these fragments of us really like the comprehensive us?

          3. The analogy that comes to mind is that I’m describing locomotion and you’re describing flight. Just like describing the specific features that allow a bird to fly, you describe features that are specific to human consciousness: reason, imagination, simulation, etc.

            But I suggest that “consciousness” is more akin to “locomotion” because when we talk about the most basic, fundamental kinds of consciousness, we talk about “seeing red”, “smelling smoke”, etc. We don’t necessarily talk about the memories of roses, or the planning to put out a fire. When Mary steps out of the room and sees red, the deed is done, the experience had. That’s why we (most of us, I think) say dogs, and bats, are conscious.

            And just like there are ways to go from short, sturdy bones to long, hollow bones, and from flat hard scales to long, light, curved feathers, thus generating new capabilities, there are ways to go from simple psychules to complex combinations of psychules which provide extra functionality that didn’t exist before. But it’s still locomotion via limbs, and consciousness via psychules.

            *

          4. The problem with the locomotion / flight analogy is that a lot of the things that impress us about flying in a jet won’t be present with a horse-drawn cart. I’ll also note here that you switched from talking about subsystems in human brains to talking about animals. Not really the same comparison, as animals are complete systems that have many of the capabilities I mentioned above, albeit in less sophisticated form.

            The problem with talking about seeing red, smelling smoke, etc., is that when you perceive these things, you do so as a system with memories of roses, fires, and all the rest. Maybe you don’t consciously think about all that stuff, but it affects your affective reactions. The question I often get from people is, why does it feel like something to see red? It feels like something because our experience is an integration of the sensory stimulus and affective reactions. We can’t separate that reaction from the sensory impression. It’s what gives it its richness.

            (It can be separated in people with brain injuries. Usually the people who have that dissociation are regarded as zombie like, and not of the philosophical variety.)

            In other words, our appreciation of the phenomenal impressions comes from the depth of our systems that categorize and associate the stimulus with memories and primal reactions. To think that another system without that depth of processing is going to have the same experience of red or smoke that you do is to seriously take a lot of your capabilities for granted.

          5. [several points, so, in order]
            1. I don’t see the problem with the analogy. Just because a jet “impresses” us more than the cart doesn’t mean they aren’t both transportation.

            2. I switched to animals because they are systems people are comfortable calling conscious. People are less comfortable calling sub-systems in the brain conscious, even if they have the exact same capability as the animal.

            3. The problem of seeing red as a system with memories of roses is *your* problem, and it causes you to conflate the seeing with the memories when it would be possible to have the seeing without the memories. Yes, all those things like affects and “feelings” happen, but those exact responses are not necessary for having the experience. They may define the nature of *that* experience, because every experience has some response, but there is no good reason to define the standard human response as the only correct response.

            4. I did not suggest any system has the *same* experience as any other. I’m just suggesting that the best way to describe what some systems are doing is “experiencing”, i.e., being conscious. And more than one of those systems can be in one brain. And the systems in one brain don’t have to be distinct. They can be nested. And in the nested case, it would not be appropriate to say that the experiences of the inner system are the same as the outer system. It depends on how they are described, as psychules.

            *

          6. Are there systems you don’t regard as conscious? For example, are plants conscious? What about unicellular organisms? Or starfish, jellyfish, or C. elegans worms? Or the devices we’re using right now to have this conversation? I ask because all of these systems seem like they have psychules (if I’m understanding the concept correctly). They all could be said to have sensoriums and motoriums, inputs and outputs.

            If we assume that some systems are not conscious and some are, then the question becomes how we distinguish between them. The historical standard has been self report. And once a particular stimulus or behavior has been reliably correlated with self report, we can look for it in other species or systems. What do you think needs to change with this criterion, and why?

          7. So yes, all the things you mentioned have psychules, probably, with the possible exception of certain single-celled organisms. Viruses do not, I think. A psychule is necessarily a two-step process: the first step creates a symbol (in semiotics terms, a symbolic sign vehicle) which has the purpose of being a signal and no other purpose. The second step is a response to this signal. So a psychule looks like this:
            Input->[mechanism]->signal->[mechanism]->Output
            To identify consciousness you need to identify these parts.
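            Here’s a minimal sketch of that two-step structure in Python, in case it helps. Every name in it is a hypothetical illustration of the idea, not a claim about any particular system:

            ```python
            # Toy psychule: Input -> [mechanism] -> signal -> [mechanism] -> Output.

            def first_mechanism(raw_input: str) -> str:
                """Step 1: create a symbol (sign vehicle) whose only job is to signal."""
                return "EDGE" if "|" in raw_input else "NO_EDGE"

            def second_mechanism(signal: str) -> str:
                """Step 2: respond to the signal, not to the raw input itself."""
                return "orient_toward" if signal == "EDGE" else "keep_scanning"

            def psychule(raw_input: str) -> str:
                signal = first_mechanism(raw_input)
                return second_mechanism(signal)

            print(psychule("..|.."))  # orient_toward
            print(psychule("....."))  # keep_scanning
            ```

            Note that the symbols are arbitrary: swap “EDGE” and “NO_EDGE” everywhere and the psychule still works, which is what makes them symbolic rather than physical.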

            I’m guessing without knowing that at least some plants meet this criterion. Single cell organisms might meet the criterion via internal messages. And yes, our devices meet the criterion. A digital thermostat meets the criterion, but a standard analog type does not.

            We need to make a distinction between “having” experiences, versus experiencing. Having an experience suggests being able to do something with it after the fact, like remembering, imagining, etc., which involves higher order psychules.

            Also, the simplest form of psychule is not going to explain qualia, for example, nor does it go very far toward explaining human consciousness. To get there you need to talk about pattern recognition mechanisms, which are the mechanisms for the first step of a particular kind of psychule. These are Millikan’s unitrackers. There are all kinds of these, including sensory-type, but also motor-plan types and goal-types.

            My current theory is that cortical columns are pretty much equivalent to unitrackers, and each unitracker pretty much represents a concept. Low level unitrackers feed into higher level unitrackers, but higher level unitrackers also feed back to lower levels, which is the processing Lamme is referring to. Activation of a higher level via a different path would cause the unitracker to provide “feedback” a priori, and thus count as a prediction/bias, making the “recognition” more likely to happen.

            This still doesn’t get into qualia, which requires yet another kind of mechanism (for the first step of the psychule). I refer to a single mechanism which can effectively combine the inputs from multiple unitrackers and generate an output (sign vehicle) unique to the combination. This is Eliasmith’s semantic pointer. Because different outputs from the same mechanism can “mean” different things, we can ask what the current output “means” or “represents”, or what type or category of thing it is (literally, “qualia”). It’s this kind of mechanism which will allow things like episodic memory, imagination, reports, etc.
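            In case it helps, here’s a toy sketch of the “one mechanism, one output unique to the combination” idea, loosely in the spirit of Eliasmith’s semantic pointers. This is my illustration using circular-convolution binding, not his actual implementation:

            ```python
            # Combine the outputs of several "unitrackers" (here, random unit
            # vectors) into a single vector unique to the combination, which
            # can be approximately queried afterward.
            import numpy as np

            D = 512
            rng = np.random.default_rng(0)

            def vec():
                v = rng.normal(size=D)
                return v / np.linalg.norm(v)

            def bind(a, b):
                # Circular convolution via FFT
                return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

            def unbind(c, a):
                # Bind with a's approximate inverse (its involution)
                return bind(c, np.roll(a[::-1], 1))

            red, rose = vec(), vec()
            combo = bind(red, rose)        # one output, unique to the pair
            guess = unbind(combo, red)     # ask: what was bound with "red"?
            print(np.dot(guess, rose))     # well above chance
            print(np.dot(guess, vec()))    # unrelated vector: near zero
            ```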

            *
            [got all that? 🙂 ].

          8. I’m not quite sure I’m getting the distinction you make between systems that have the symbolic vehicles and those that don’t. What provides the vehicle for a digital thermostat that isn’t present in an old school one?

            And perhaps related, the distinction between having an experience and experiencing. I’m not sure if I find it coherent to talk about experiencing without having the experience. This gets into what we mean by “experience”. It seems like a system which is unable to utilize the information that comes in hasn’t experienced it.

            For example, if a doctor hits my patellar tendon with a rubber hammer, my knee will jerk. The processing in the knee happens without any experience. However, I do experience it after the fact. Likewise when low level brain processes react automatically, that’s not the experience. It’s only an experience when it becomes part of my model of myself and the world.

            Although I could quibble with some of the details, I don’t have too much of an issue with the rest. It seems though that by the time you get to semantic pointers, you’ve reached a globalist view of what’s happening.

            Your view of Lamme’s view though, I think, gives him too much credit. He doesn’t describe it as part of an overall system. He specifically seems to reject that view. He sees the recurrent processing itself as experience. He does try to knit a causal link by talking about synaptic plasticity being enhanced, but he doesn’t talk about pattern recognition and feedback. I’d be onboard if he did, but that’s more of a globalist view of what’s happening.

            [maybe, unless my response shows how confused I am]

          9. James,

            When I read this article, “BIP” became a new concept for me. Did I grow a new cortical column to contain it? Or did I lose use of an existing cortical column that contained a different concept (no way for me to know which one, of course) so the column could be repurposed to contain “BIP”?

          10. Re: thermostats [and I’m making assumptions as to how digital ones work], a digital thermostat makes a reading of the temperature and gets, say, 65. It represents that number somewhere in memory as a set of voltages. Another part of the system reads that number, calculates that it is too low, and turns on the heater. That set of voltages that means 65 is a symbolic sign vehicle. The voltages would look slightly different if they represented 75, but they would still just be voltages. Their only function is to represent the measured temperature. If you reversed which voltages meant 65 and 75, the thermostat would still function correctly as long as you switched the interpretations appropriately. The specific voltage settings have mutual information with respect to the temperature, and that’s why a response to the voltage settings works as a response to the temperature.

            In a regular thermostat, there is a physical bar that bends depending on the temperature. The higher the temp, the more it bends. If the temperature is high enough, the bar bends enough to close a circuit, which turns off the heater (or turns on a cooler). There is mutual information between the bar and the temperature, but there is nothing interpreting that information. The bar either closes the circuit or it doesn’t.
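            A toy version of the contrast, based on my assumptions above (the names and numbers are all just illustrative): the digital thermostat interposes an arbitrary symbol between measurement and response, while the analog one couples them physically:

            ```python
            # Arbitrary bit patterns standing in for the voltage settings.
            # Swapping which pattern means 65 vs 75 changes nothing, so long
            # as the interpretation is swapped too: the symbol is arbitrary.
            CODEBOOK = {65: 0b0101, 75: 0b1010}
            DECODE = {v: k for k, v in CODEBOOK.items()}

            def digital_thermostat(measured_temp: int, setpoint: int = 70) -> str:
                symbol = CODEBOOK[measured_temp]   # step 1: write a sign vehicle
                reading = DECODE[symbol]           # step 2: interpret the symbol
                return "heater_on" if reading < setpoint else "heater_off"

            def analog_thermostat(measured_temp: float, setpoint: float = 70.0) -> str:
                bend = measured_temp - setpoint    # the bar just bends; nothing interprets it
                return "heater_off" if bend >= 0 else "heater_on"

            print(digital_thermostat(65))  # heater_on
            print(analog_thermostat(75))   # heater_off
            ```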

            [on further thought, I could see arguing a regular thermostat would count as a psychule. In semiotics terms, the bar is an indexical sign vehicle, as opposed to a symbolic sign vehicle. The shape of the bar “indicates” the temperature. You could argue that those should count for psychules, to which I would say ok.]

            *

          11. On experiencing versus having an experience.

            I’m saying the psychule, as described, is an experience. You could call that having an experience, but I’m not sure what system is having the experience. I think it better to say “having” an experience is the capability to refer to prior experiences, via new experiences. Let’s use the knee-jerk response. By my terms, the knee to spinal cord to leg muscle process is a psychule, and we say that system has a very basic kind of consciousness. It experiences, but doesn’t “have” an experience. On the other hand, the knee to spinal cord to brain plus muscle to spinal cord to brain are two separate psychules, and they get “combined” in a third psychule, and a memory of this combination gets (at least temporarily) stored. To me, this memory (or possibly a different response, such as reporting “ouch”) constitutes “having” the experience, namely, an experience about a prior experience.

            I have to admit, I haven’t spent much time working out the logic of what it means to *be* a conscious system, specifically, what’s needed to “have” consciousness, as opposed to just having internal psychules, but I’m going to go with the above until I have reason not to.

            *

          12. BTW, when you say

            “It’s only an experience when it becomes part of my model of myself and the world”,

            I just translate that as —

            It’s only [your autobiographically accessible] experience when it [modifies] part of [your set of unitrackers accessible by your autobiographical system]

            So we’re all good.

            *

          13. Re Lamme’s view.

            I am definitely agreeing with Lamme, in that the localized recurrent processing is psychules, and so is experience. Every time a unitracker is activated, that’s the first step of a psychule, the second step being the response, whether that response be activation of a higher level unitracker, or feedback to a lower level unitracker (giving recursion), or activation of motor activity, or generating release of hormones, or activation of a multi-signal mechanism (semantic pointers).

            This particular experience only becomes available to the autobiographical system via the semantic pointer pathway.

            Lamme doesn’t talk about pattern recognition because he hasn’t read Ruth Millikan, I wager. Almost nobody has, it seems. At least not the neuroscience folk.

            *

          14. Thanks for those details. As I’ve said before, consciousness is in the eye of the beholder. For me, what I see you describing are its components, but you seem to see them as various consciousnesses that interact and combine. I can’t tell you you’re wrong (at least in terms of consciousness; I do think you’re wrong on the anatomical functional details). But I can say that what you’re describing applies to a lot of relatively simple systems (which you seem good with), systems that don’t trigger the intuition of consciousness in most people.

            In the end, my response is similar to the ones I give most panpsychists (not that I’m saying you’re a panpsychist). What you’re describing is interesting, but isn’t my primary interest, which is understanding how our experience comes about, how our consciousness works, along with how systems that trigger our intuition of a fellow consciousness work. For that, the global architectures, or something like them, seem to be required.

          15. James Cross, pretty sure you didn’t grow a new cortical column, and I don’t really have a theory on how new concepts get handled, other than how they start via semantic pointer-type mechanisms. I suspect repurposing of extant columns happens in the long term, but not sure how “working memory” works. I can imagine short-term multipurpose columns being used, but I have no real clue how that might work.

            Got any ideas?

            *

          16. I don’t think there is any sort of direct mapping between cortical columns and concepts. There are too few columns in the brain. And if you propose some sort of direct connectivity between columns, that becomes problematic: the brain would fill up with connections, and the long distance connections in particular have to be thicker.

            I think a starting place would be to understand how networking is done in the brain.

            Buzsáki talks about the brain as a small world network. Most neurons have connections only to nearby local neurons, with relatively few connections to remote neurons. This provides a relatively small number of hops between any one neuron and any other neuron without filling up the brain with connections. Oscillatory processes are what keep the local and remote neurons in sync. A brain structured this way can scale up from the relatively small brain of a shrew to the large brain of a human without slowing down and without exponentially growing the relative number of connections.
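            You can see the basic property with the standard Watts–Strogatz toy model (an illustration of the general idea only, not of actual brain wiring):

            ```python
            # Mostly-local wiring plus a few long-range shortcuts keeps the
            # average hop count between any two nodes low.
            import networkx as nx

            n, k = 1000, 10  # 1000 nodes, each wired to its 10 nearest neighbors
            local_only = nx.watts_strogatz_graph(n, k, p=0.0)  # pure ring lattice
            small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)

            print(nx.average_shortest_path_length(local_only))   # ~50 hops
            print(nx.average_shortest_path_length(small_world))  # single digits
            ```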

            Cortical columns may just be the easiest way to grow brains architecturally, that is, mostly an inadvertent side effect of how neurons develop. Or they could be significant in some other way. I have noted, I think, that the columns residing parallel to each other, with the L5 pyramidal neurons’ dendrites projecting to the top, might magnify EM effects when neurons fire.

          17. From Wikipedia: “There are about 100,000,000 cortical minicolumns in the neo-cortex with up to 110 neurons each,[13] giving 1,000,000–2,000,000 cortical columns”

            How many cortical columns do you think we would need?

            Interesting that you bring up the pyramidal cells. These are, I think, the core of the column. (There may be a 1:1 correspondence between concept and pyramidal cell, but I don’t know enough to make that statement with confidence.) They reach through all the layers and both send and receive signals to and from the thalamus. The thalamus is where I conjecture the semantic pointer mechanism to be. That connection is how any given concept can be combined with others and broadcast throughout the cortex without having a direct link from the column to every other column.

            *
            [ All roads lead to the thalamus.]

          18. It depends upon how granular the concept-column relationship is.

            For example, take table as in “kitchen table”. Is there one column for the abstract concept of table and another for each type of table – metal, wood, folding, kitchen, green, red, dining room, my Grandmother’s table, etc. Or are all of those tables mashed together in one column?

            Without details of how it works, it is hard to say.

            But what you are proposing doesn’t make any sense to me from what I understand about how the brain works.

            If I am looking at a table, the color and shape I am seeing is probably in the visual cortex. There may be memories coming from elsewhere – prefrontal and/or hippocampus (this still isn’t well understood) – that result in some level of recognition involving some other location. The point is that “table” emerges from interaction of neurons all over the brain, probably with considerable redundancy, not from firings in a single column.

            The brain consists of specialized modules and consciousness comes from coordination and integration between them.

  3. Interesting post. It made me think of two major useful purposes of research into consciousness (at least): #1 being able to determine whether a subject is conscious in a medical situation, and #2 being able to create consciousness artificially.

     Another thing your post made me think of was a research paper I had read about dreaming and sleep. It said that during dreaming, signalling along the neural pathways between our sensory processing and our memory storage is reversed, so that memory content “feeds” the sensory (visual, audio, tactile, etc.) networks. The paper called this out as a special process associated with dreaming, but after reading your post about recurrent/feedback stimulation, it seemed likely to me that our consciousness of perception might involve a process similar to dreaming.

     Global processing is likely also involved, but I want to simplify so I can focus just on this feedback loop. It occurs to me that there might be multiple feedback loops (without being infinitely recursive), so that after one loop we are conscious of a perception but we are not conscious that we are conscious (we see without being able to report that we see). After the 2nd loop, we are conscious that we are conscious (reportable). After the 3rd loop, we are conscious of the quality of our consciousness (whether it is a dream or reality). So on and so forth. I’m wondering whether this might generate testable hypotheses.

    1. Thanks Mike.

      I briefly mentioned predictive coding theory in the post. What’s worth mentioning here is that it’s an inside-out understanding of perception. So it isn’t just in dreaming that memory drives the sensory processing; it happens in everyday perception too. What comes from the higher order regions are the predictions, and what comes from the early sensory regions are the error corrections.

      What’s different in imagination is that we lack the error correction from current sensory input. With dreaming, we lack both the executive control over what’s happening that we have in imagination (or at least the control is looser) and that error correction from immediate perception.

      On bringing in increasing levels of meta awareness, a big part of it is bringing in additional regions that do the monitoring and provide those metacognitive assessments. Eventually though, we do get to a recursive level of metacognitive awareness, which is used for introspection and self report.

  4. Consciousness is state to state contrast (though you have to wonder then, what is a state that is not static?). That seems to go for pre-reflective and reflective consciousness both. If you’re eliminating “local processes”, do you mean to eliminate the notion of pre-reflective consciousness as well?
    If so, then we need to go back and revise what we mean when we say “consciousness”.

    1. I don’t know that ruling out local processing for consciousness necessarily rules out pre-reflective consciousness. But establishing pre-reflective consciousness is extremely difficult, since if it isn’t reflective, subjects can’t report on it. People are attempting to get at it with no-report paradigms, but no-report paradigms never completely escape report; they either just delay the report, or look at stimuli that had previously been correlated with report.

      1. True. Over the years, I’ve developed a peculiar skill. Whenever I’m placing an ice screw, and the screw starts to drop before it bites, I’m able to catch the screw before it falls beyond my reach. Over time, this behavior has become quite reliable, and I rarely ever drop a screw anymore. All this occurs without my thinking about it. In fact, I’m not even aware of what’s happening until I have the ice screw in my hand. I’m guessing that all the neurology occurs locally, yet I’m also aware of an associated phenomenology. I’d like to think that my action is different (conscious) than a pupillary light reflex.
        But if you’ve ever had your eyes dilated, you may have noticed an associated phenomenology even with that basic and very local reflex.
        Yet a person with no cortical activity still has a pupillary light reflex, though we would like to think that it is not generating any phenomenon for that person.
        So what do we say about these local processes? Is there some nascent consciousness in these responses? Or is it only the secondary, reportable stuff that counts in regards to these local processes?

        1. [just wanted to mention my own learned preconscious skill: I’ve developed a foot catch — when I’m working at the kitchen counter and I knock something off, I will reflexively put my foot under it (to prevent breaking a glass or denting the floor), but there is just enough time for further (preconscious) processing such that if it’s a sharp knife or other danger I will pull my foot back.]

        2. Good question. I don’t think there is a fact of the matter answer. It comes down to how we want to define “consciousness”. If your definition includes things that aren’t reportable, then you might regard some of those processes as conscious.

          Myself, I think about what causes us to have a concept of consciousness in the first place, which is introspection and report. I think there’s a case to be made for what is within the scope of introspection being conscious, and what is outside of it, unconscious. We can argue that this is true even for creatures which can’t introspect, although for them there’s no marker on the boundary, just one we retcon onto them.

          But that’s a philosophical conclusion on my part. I can’t tell someone who concludes that activity outside of introspection is conscious, is necessarily wrong. All I can do is point out the logical consequences of holding such a position. (It may include things in club consciousness they don’t mean to include.)

  5. A couple of thoughts.

    If I understand correctly, they are presenting simple arrow patterns. Also, I assume the subjects know they will be presented with simple arrow patterns. The processing for expected, simple patterns could be different than the processing for an unexpected, complex pattern, such as different faces or random complex geometric patterns of various sorts. In particular, less processing would likely be required for the simple pattern.

    A little bit of reading on TMS will tell you that, unless it is calibrated almost to the individual, you may just be injecting a lot of variability into the data. It’s not clear from my reading that this was done.

    “The use of the same intensity across subjects and brain areas will likely introduce a substantial degree of variability into the data and would lead to too little or too much stimulation for most of the subjects. Therefore, when possible, intensity should be individually adjusted using functional measures, such as visual suppression/phosphene threshold, motor cortex threshold, or the threshold for disrupting the targeted process”.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6632835/

    See also limitations section.

    1. On rereading, maybe they did adjust using a phosphene method, but it isn’t clear whether the intensity was changed. It reads like they just adjusted the position, so it’s hard to say whether that would produce a great deal of individual variability.

    2. They do acknowledge that the complexity of the images might be a factor in relation to Block’s theory. And it is plausible, even in terms of global theories, that if the images had been complex, more recurrent processing would have been needed to identify them. Still, the fact that the early feedforward processing was sufficient for any kind of conscious processing seems to rule out a simple equivalence of recurrent processing with phenomenal experience.

      For TMS, in the methods section, they do discuss the calibration they perform for each subject.

      1. Maybe you can explain the Blindsight Inducing Pulse at 110 ms. From a diagram caption:

        “indicating the interventions applied at the period termed the Blindsight Inducing Pulse, BIP (110 ms), together with separate pulses on the same trial during an ‘early’ phase to target feed-forward processing (30 or 70 ms), or during a ‘late’ phase to interfere with later, putatively recurrent processing (150 or 190 ms).”

        Can we be sure the BIP at 110 ms is not affecting the late phase processing that occurs 40 or 80 ms later? Or maybe I’m not understanding the design well enough. Like you, I’m finding the paper difficult to parse.

        Funny thing. I wandered into the supplemental data and found this disclaimer:

        “Please note that this experiment was started when I started to learn how to program (quite some time ago) and I have tried to change the programs as little as possible following registration. I can now see that the quality of the coding is pretty poor and clunky, but they work. Please bear this in mind when looking at the code provided. ”

        Everybody has to start somewhere, but then again something could be apparently working but not actually giving the right results. 🙂

        1. I took the BIP as just one of their interventions, not something used on every trial, since, assuming it did cause blindsight, there would have been no conscious perception anywhere it was used.

          I don’t like the sound of that disclaimer. I’m beginning to wonder where the editor and reviewers were on this paper.

          1. I had to read the paper twice, with several hours in between, but I finally figured out what they were doing. Every single trial had the BIP TMS at 110 ms. Some trials had an early pulse before and in addition to the BIP, and some had a pulse after and in addition to the BIP. Again, every trial had a BIP. This is shown in Figure 1B.

            I did not go through all the technical stuff, but I assume that this BIP was not completely blindsight inducing, as happens for blindsight caused by actual damage, but instead generated an effect that was close to the threshold, kinda the way they do masking experiments so that the subject reports the masked stimulus about 50% of the time.

            So the authors admit that the BIP is affecting the later local recurrent stuff. Their main point seems to be that, under Lamme’s theory, a pulse after the BIP should have a greater effect than a pulse before the BIP, and that wasn’t the case. I still think there is an argument that an early pulse simply borks all subsequent processing.

            *

          2. As best I could figure, my understanding of what they were doing is the same. Still, the effect of the BIP itself could be nullifying the recurrent processing sufficiently to make the conclusions doubtful.

          3. I guess where I am going is something like this.

            If feedforward takes place 30-70 ms post stimulus and blindsight can be induced at 110 ms, then it seems that some kind of processing post feedforward is necessary for full conscious awareness. The fact of the BIP itself seems to support Lamme’s theory.

            I note that the experimental work is from 2018 so this has been a while in review.

          4. Skimming the paper again, even the BIP wasn’t actually blindsight inducing. They just used the term because they’d pre-registered it. So it seems the late disruptions just didn’t have the effect they were supposed to according to Lamme’s view and the overall feedforward / recurrent paradigm.

            That said, there are definitely issues with this paper, if not the study methodology. The editor should have made them redo this in a clearer fashion. I hope someone works to replicate the results.

          5. What was the effect it was supposed to have?

            It reads to me like they are expecting recurrent processing to compensate for interrupted feedforward processing. I doubt that is Lamme’s view.

            How in the world is BIP not BIP and how do they know it is not BIP?

          6. From the paper:

            The term BIP is used for consistency with the pre-registered protocol. However, as it transpired, the data did not indicate a complete dissociation between conscious perception and reportedly ‘unseen’ discrimination, so should not be interpreted as producing blindsight (see Results section). The BIP was experimentally necessary as previous research (Allen 2012; Allen et al. 2014; Koenig and Ro 2019) and piloting indicated that single pulse TMS, or pairs of TMS pulses in rapid succession, are largely ineffective in changing residual ‘Unseen’ discrimination. The BIP therefore enabled us to probe the primary experimental hypotheses. As the BIP interferes with later recurrent processing, and also because all TMS effects only carry forward in time, the differences between temporal intervention conditions are relative ones, in which early feed-forward processing is disrupted to a greater extent when TMS is additionally applied before the BIP, in comparison to when it is applied after the BIP.

            And later in the results section:

            However, contrary to the theory relating conscious awareness to later recurrent processing (Lamme et al. 2000), conscious detection was also suppressed to a greater extent by early TMS compared to late TMS (PrC Δ sham early (30 and 70 ms) versus late (150 and 190 ms) T(48) = 4.17, P = 1.25 × 10−4, mean = −0.06, 95% CI [−0.08, −0.03], d = 0.60, BFmain(early>late) = 0.05, BFmain(late>early) = 1.73 × 103, BFuni(late>early) = 408.90, BFjzs = 187.88, see Fig. 2B and Table 2). A direct correspondence between feed-forward/recurrent processing and unconscious/conscious perception was therefore not observed.

            My current interpretation is that they expected a relative deterioration in conscious perception from the later pulses that they didn’t get.

          7. Looking around, I can find a lot of research on blindsight-inducing TMS at 100-110 ms, so is the study claiming that research is wrong, or that they did something different from the other research?

            The problem is that the BIP/non-BIP (whatever) could have disrupted recurrent processing in all scenarios, which they acknowledge when they write:

            “As the BIP interferes with later recurrent processing, and also because all TMS effects only carry forward in time”

            So any differences that might have been expected would get washed out by the BIP/non-BIP. In other words, with the BIP/non-BIP, disruption of recurrent processing was always applied, so the following would be totally expected.

            “conscious detection was also suppressed to a greater extent by early TMS compared to late TMS”

            Yeah. This isn’t unexpected at all and I wouldn’t think it would be under Lamme’s theory.

          8. I don’t see the ugly-but-functional programming as a problem. The authors had to leave it in due to preregistration rules, I presume. If the paper was reviewed at the preregistration stage, the first reviewer might not have programming expertise – that doesn’t seem like a huge flaw to me.

            Now, allowing authors to use an acronym (like BIP) *before* defining it – that’s a problem.

          9. They did define BIP in the pre-registration: Blindsight Inducing Pulse. Apparently it was the experimental results that didn’t adhere to that definition. So the real issue might have been assuming the pulse would be what they labeled it to be.

          10. What I meant was, if you search for “BIP” in their article, the first occurrence is unexplained. The second occurrence says it stands for Blindsight Induced Pulse.

          11. Ah, I see what you’re saying. Definitely the paper has a lot of those kinds of issues. Although lamentably, it’s not unusual for scientific papers to never even define acronyms or terms they take to be common in their particular sub-field.

          12. I don’t see a problem necessarily with ugly code, but the fact that the researcher was learning to program makes me wonder if the analysis was done correctly. Novice programmers can easily write code that accidentally skips or miscounts certain cases, and the problem won’t be obvious in the final results. This would be especially true since there is no expectation of what the final results should be, so I doubt there is any easy independent way to know if the statistical analysis is correct. Ugly code would make the issue difficult for another person to detect simply by looking at the code itself. The only good way would be for somebody else to take the raw data, write another program to do the same analysis, and see if the results are the same.

  6. “ What you’re describing is interesting, but isn’t my primary interest, which is understanding how our experience comes about, how our consciousness works, along with how systems that trigger our intuition of a fellow consciousness work.”

    That’s all you had to say. 🙂

    But as for your interests, my money is on unitrackers and semantic pointers and the thalamus. And when I say my money is on, I would like to offer you a wager to make it official: I say that within the next five years a neuroscience paper will come out that references Millikan’s unitrackers in relation to cortical columns, or that identifies the thalamus as the source of the canonical “global broadcast”, or that references semantic pointers in the context of “global broadcast”. I win if any of these conditions are met. (Getting them all would be too much.) Loser buys the winner coffee.

    What say ye?

    *

    1. On the other hand, take a look at

      The cortical column: a structure without a function

      “Equally dubious is the concept that minicolumns are basic modular units of the adult cortex, rather than simply remnants of foetal development. The concept derives from the observation that minicolumns are isolated by cell-sparse zones of neuropil into semi-distinct compartments. However, one could also assert that minicolumns are united by synapse-rich zones of neuropil, linking them seamlessly in the cortical sheet. The ‘elementary unit’ hypothesized by Lorento de Nó remains an elusive quarry. No one has demonstrated a repeating, canonical cellular circuit within the cerebral cortex that has a one-to-one relationship with the minicolumn.”

      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1569491/

      1. Thanks for that reference. It gives a very good history of the development of the idea of cortical columns and minicolumns.

        That said, the gist I got from it is “no one knows what the function of a column is, so maybe there isn’t one.” And when they speak of a column, they are referring to what would be a group of minicolumns. Because they don’t always find the same group of columns with the same function in different individuals, they conclude that the columnar structure has no function. I think that’s a stretch. Given that I think the basic unit is the minicolumn, I have no problem with the idea that the inter-coordination of minicolumns would be an organic, plastic process. The same could be said of the nodes generated by AlphaZero. If you run AlphaZero twice, you won’t get the exact same organization, but it would be foolish to say the structure of the nodes has no function (even when we can’t figure out that function).

        The quote you provided specifically refers to minicolumns, but it uses the same logic. No one has shown a canonical minicolumn that has the same function in every brain, so maybe there isn’t any function to it. I expect this will be resolved when they find the “Jennifer Aniston” minicolumn in more than one brain. But it’s hardly surprising this work wasn’t done by 2005, the date of the paper. The fine structure wasn’t known then. The paper I referenced is the first I know of that addresses exactly how a minicolumn could function computationally.

        *

    2. James, I’ll buy you a cup of coffee regardless, but given the kind of stuff that shows up in the literature, no way would I take that bet. The real question is where the preponderance of the evidence currently points.

      We all start off looking hard at the thalamus and brainstem. Even Baars initially thought the thalamus-brainstem core might be the heart of the workspace. But this is science, and we all have to be willing to modify our views based on the data.

      From what I’ve read, the thalamus does have an important relay and hub role, but it’s not the end-all-be-all. Yes, every cortical region has connections with it, making it an integral part of the cortical system. (Which is why it’s often called the “thalamo-cortical” or “cortico-thalamic” system.) But many of the connections between cortical regions are direct, not going through the thalamus. And many cortical regions also have connections with other subcortical structures such as the amygdala, hippocampus, hypothalamus, basal ganglia, etc.

      That said, as far as I understand them, I don’t really have any issues with unitrackers and semantic pointers. I just think the mapping to anatomy is far more muddled and complicated than you imagine. Notably, I think they’re far more distributed than you do.

      One of the things I think we have to be leery of is any idea of a consciousness center in the brain, whether it be the thalamus, brainstem, hippocampus, frontal pole of the prefrontal cortex, or any of the other locations people like to focus on. If there is a consciousness center, does it in fact have a consciousness center within it? And does that center have one as well? Dennett made the point that if your theory of consciousness still has a bit labeled “consciousness” in it, you’re not done yet.

      Anyway, I don’t want to have a stake in where the evidence might go.

      1. [ditto on the coffee, of course]

        As for the role of the thalamus, I admit that my confidence in the role I have envisioned (home of semantic pointers) is not that high. On a scale of 1 to 10, I give it a 4. In comparison, my confidence in unitracker = cortical column is 8+. But my confidence that semantic pointers are somewhere subcortical, and play a role in global broadcast, is 6+.

        Just to be sure, even if my initial idea is completely correct, it would be wrong to say the thalamus was the home or center of autobiographical consciousness. The pertinent psychules begin with the unitrackers in the cortex, get converted/combined to sign vehicles by the semantic pointers (wherever they may be) and get responded to in multiple places, including the hippocampus and amygdala as well as various cortical unitrackers. Those responses are the outputs, the second part, of the pertinent psychules.

        *
        [so really, how’s the coffee down there?]

        1. My suspicion would be that semantic pointers are both subcortical and cortical. I tend to think of them, perhaps erroneously, as links between neuronal assemblies. Any sharp division between cortical and subcortical function might be the way an engineer would think, but I doubt evolution is tidy in that way. For example, the hippocampus seems to be about spatial navigation, but the adjacent entorhinal cortex appears to be about mapping concepts in time. These two functions seem related, with no clean division on why one might be in a subcortical region and the other in a cortical one, but it’s not hard to see why they’d be near each other.

          I don’t know much about cortical columns. From the little I do know, they probably come about due to how the genetic code that generates cortex works, a sort of repeating pattern. It seems like evolution found a way to generate a lot of neural substrate and ran with it, looping a number of times depending on species.

          It’s plausible that the denser internal connectivity of columns in relation to inter-column connectivity has functional consequences. But from what I’ve read, the cortex is not a uniform substrate throughout. The sensory cortices are reportedly organized differently from the frontal ones, with the sensory ones being more laminar and the frontal ones much more heavily interconnected. Which means, whatever the functional role of cortical columns might be, it may not be consistent across the entire cortex.

          [Can’t say I’m much of a coffee connoisseur. (I drink most of my coffee out of a Keurig.) Community is a popular brand down here. And I think there are some New Orleans blends some people swear by. But I suspect they pale compared to the best in Seattle.]

  7. I don’t think you would find semantic pointers in both cortex and subcortex. They consist of a set of hundreds of neurons which, by necessity, have a fairly specific organization, and I don’t think it fits well with cortex.

    If you want a feel for columns, I definitely suggest the paper James Cross just provided. Based on that and the paper I linked, it looks like the neurons of a single mini column are clonal, so, originating from a single cell. From my quick view, I think there is just one cell that spans the whole column, the layer 5 pyramidal cell, and a bunch of others that land in one layer or the other. I can imagine it possible that some even migrate to the next column or so.

    Not all columns need to have exactly the same structure with x many cells in layer 3, etc. I would expect variations, depending especially on where the inputs are coming from, but also on where the outputs need to go. Based on Millikan’s ideas, there will be many kinds of unitrackers, with many kinds of functions. Some will be sensory pattern recognizers, which you would find in the sensory areas, but some will be action organizers, recognizing, for example, when it’s time to raise your arm over your head, etc. Those would obviously be in the motor cortex. There would also be goal oriented unitrackers, which would recognize whether a goal is reached or not, and generate a response if not, etc. I expect these kinds are especially in the frontal, prefrontal areas. I could see each of these having somewhat different organization. I could see the prefrontal goal recognizers especially having direct long distance cortical links, mostly for suppression and mostly to thereby regulate attention.

    But in the end, I think the whole cortex is essentially columns, and unitrackers.

    *
    [Seattle coffee isn’t just about the coffee. It’s also about the experience, from tiny, hidden away neighborhood grocery stores with a couple tables, to fancy glass and metal extravaganzas with all the connoisseur extras, to old Victorian houses where they just set up the coffee counter in the living room and put tables in all the tiny rooms, to the studio of the local radio station with an LP record library combined with an espresso machine showroom, to the secret hole in the wall in an alley off the public market that has two stools and enough room to fit 10 people standing if they don’t mind being very close to each other. That last one has the best mocha in the city, IMO.

    I’m sure you can get good coffee all over the country, and I expect that the New Orleans coffee you mention is just as good as any Seattle coffee. But I’m a mocha fiend. I never drank coffee until someone told me they made some with milk and chocolate. So now I can basically tell whether the coffee is good or bad, but the chocolate makes or breaks it. All those places I mentioned have exceptional mochas, except sometimes the radio studio. They tend to experiment a lot, so sometimes it’s better than other times.

    So it’s all about atmosphere and chocolate.

    Oh, and the best mocha anywhere is in Vancouver, BC]
