The difficulty of isolating evidence for the location of consciousness

In the ongoing neuroscience debate between those who locate consciousness in the back of the brain, among the sensory processing regions, and those who locate it in the front, among the cognitive action planning regions, there are issues confounding the evidence.  Most experiments testing for conscious perception depend on self report from the test subjects, but this creates a problem: the frontal lobes are necessary for any method of self report (speech production, pressing a button, etc.), so when those lobes light up in brain scans in correlation with conscious perceptions, the possibility exists that they light up only because of the self report requirement.

So, an experimental protocol was developed: the no-report paradigm.  One group of subjects is given a stimulus and asked to report whether they consciously perceive it while their brains are being scanned.  A second group is given the same stimulus that led the first group to report conscious awareness, also while being scanned, but is not required to self report.  The scans of the two groups are compared to see whether the frontal lobes still light up in the second group.  Generally, although there is variation, they do, implicating frontal regions in conscious perception.

However, Ned Block, a philosopher who thinks it likely that phenomenal consciousness “overflows” the self report of access consciousness, sees an issue, which he describes in a paper in the journal Trends in Cognitive Sciences.  (Warning: paywall.)  Block points out that potential confounds remain, because we can’t rule out that the test subject is still thinking about reporting their perception, or cognitively processing the perception in some other manner, causing the frontal lobes to light up for reasons other than the conscious perception itself.

Block points out that the fundamental distinction here is between those who see cognition (in the frontal lobes) as necessary for consciousness and those who see perceptual processing (in the back regions) as sufficient.  Global workspace and higher order thought theories are cognitive accounts, while integrated information and local recurrent loop theories are more sensory oriented.

Block argues that the no-report paradigm needs to be replaced with a no-cognition paradigm, or, to avoid begging the question against cognitive accounts, a “no-post-perceptual cognition” paradigm.  But how can cognition be eliminated from subjects who have perceptions?  Short of selectively anesthetizing the frontal lobes (which would be invasive, risky, and unlikely to get past IRBs), is this even possible?

Block focuses on a study of binocular rivalry, by Jan Brascamp and colleagues, as a possible solution.  Binocular rivalry is the phenomenon that, when the eyes are shown very different images, conscious visual perception alternates between the two images, rather than blending them together.  (Blending can happen, but only if the images are similar.)  The goal of Brascamp’s study is to determine whether the selection between the rival images happens in the back or front of the brain.

To do this, the study constructs rival images of random dots such that, although they are different enough to lead to binocular rivalry (the dots in one image move left while those in the other move right), they are similar enough that the switching between the images doesn’t call attention to itself, so the subjects can’t report it.

For subjects who aren’t required to report what they’re seeing, brain scans show variations correlated with the image switching in the back of the brain, but not in the front.  In other words, the study shows that the selection of which image momentarily “wins” the binocular rivalry happens in the back of the brain.

Block sees the methodology here as an example of the “no-post-perceptual cognition” paradigm, and the specific results as indicating that the frontal lobes aren’t necessarily involved in conscious perception of the images.  He focuses on the fact that subjects could, if queried, identify whether the dots were moving left or right, indicating that they were conscious of the specific image at that moment.

I think there are problems with this interpretation.  By Block’s own description, the subjects didn’t notice and couldn’t self report the oscillations between the rival images, so we shouldn’t expect to see frontal lobe changes correlated with those oscillations.  The subjects may have become conscious of some details in the images when asked to report, but when they weren’t asked, it seems more likely they were only conscious of an overall “gist” of what was there, a gist that worked for both images, and so didn’t need to oscillate with them.

The Brascamp et al. study is hard-core functional neuroscience, aimed at narrowing down the location of a specific function in the brain.  It succeeds in establishing that the selection happens in the back of the brain.  But I don’t think a “frontalist” (as Block labels them) should be concerned about this.  A pre-conscious selection happening in the back of the brain doesn’t really challenge their view.

And Brascamp et al. actually seem to come to a different conclusion than Block.  From the final paragraph in their discussion section:

A parsimonious conceptualization of these results frames awareness of sensory input as intimately related to the planning of motor actions, regardless of whether those actions are, in fact, executed. In this view a perceptual change of which the observer is aware might be one that alters candidate motor plans or sensorimotor contingencies. This view also marries the present evidence against a driving role of fronto-parietal regions in perceptual switches to the notion that these regions do play a central role in visual awareness: when viewing a conflicting or ambiguous stimulus, a switch in perception may arise within the visual system, but noticing the change may rely on brain regions dedicated to behavioral responses.

So, while the study succeeded in its aims, I can’t see that the results mean what Block takes them to mean, or that the methodology accomplishes the no-post-perceptual cognition paradigm he’s looking for.  That doesn’t necessarily mean that sensory consciousness isn’t a back-of-the-brain phenomenon.  It just means getting evidence for it is very tricky.

This front vs back debate is a major issue in the neuroscience of consciousness, one I’m hoping the Templeton contest succeeds in shedding some light on.  Myself, I suspect the frontalists are right, but I wouldn’t be surprised if it’s a mix, with maybe sensory consciousness in the back, and emotional and introspective consciousness in the front, with our overall experience being a conjunction of all of them.

What do you think?  Is consciousness a cognitive phenomenon?  Or is perceptual awareness independent of cognition?  Or in a system where the components evolved to work closely together, is this even a well-posed question?

65 thoughts on “The difficulty of isolating evidence for the location of consciousness”

  1. “A parsimonious conceptualization of these results frames awareness of sensory input as intimately related to the planning of motor actions, regardless of whether those actions are, in fact, executed. ”

    That is exactly what I have been increasingly believing. Consciousness isn’t a passive reception and interpretation of sensory data, but a prelude to action. The reason for it is to take action that enhances fitness. Hoffman’s perceive-decide-act loop carries with it the notion that our perceptions – not only what we perceive but how we perceive it – probably are closely related to actions. Consciousness integrates the multiple sensory systems with somatic control.

    I would expect consciousness comes from the feedback loops from front to back and back to front, so that it can’t precisely be pinned to either front or back but arises in the interactions.


    1. That’s similar to my own view as well, but the focus on the loops in and of themselves makes me a little nervous. I see the loops as enablers, similar to the cabling in a computer network, which carries ongoing feedback loops between servers and workstations. You need all the components for a well functioning system, although the system can function in a compromised manner if some of the pieces are missing.

      Put another way, the PFC isn’t itself conscious, but its ongoing communication with the rest of the brain generates the overall system of consciousness.


      1. Me too! Me too! As in, what you’re talking about matches my framework of input—>[mechanism]—>output. But I’m wondering if in the output part you guys are considering simple things like committing to memory or making new associations (“Oh, the dog’s name is Benji”) as sufficient, as opposed to some muscle action plan or other.

        What I’m thinking is this: incoming data, whether visual or auditory or other, gets processed to a point in the back (or wherever), and then the built up concepts compete to be represented in one of possibly a few global workspaces (one GW for each modality?), which I locate for now in the sub-cortical regions (I’m guessing thalamus). The conscious event occurs when that representation (in a GW) becomes input for a process. But I could see the mechanism for that process being in different places. I could see the frontal cortex using those representations to plan actions, but I could also see other areas using those representations to generate associations or memories or action plans. In the case of action plans, I could see the frontal cortex having executive control (mostly by veto) over which plans (or none at all) get executed. How many times in conversation does some phrase pop into your head (or into the GW for speech acts) and you realize that saying it would be a bad idea, and so you don’t, unless you’re Phineas Gage, who lost a good chunk of that veto machinery, so you actually do say it?

        It seems to me this model would be compatible with the experiments Block mentions. Oh Mike, master of cortical anatomy, anything there conflict with what you know?

        *


        1. James, I appreciate your enthusiasm!

          On outputs and the I->M->O mechanism, it seems like there is a hierarchy of such mechanisms. At the lowest level we have single neurons, which receive inputs and selectively produce output signals. For the brain overall, we have sensory inputs, and motor and hormonal outputs. In between we have all the subsystems. Memory can be regarded as the output of some processes, particularly neuromodulatory ones.

          In the case of remembering Benji’s name, from what I understand, the hippocampus has to get involved. Something about the way that sensory input is processed triggers the hippocampus, causing it to nonconsciously repeat the stimulus until the relevant regions in the cortex (probably temporal) strengthen the right synapses.

          We’ve discussed your idea of the workspace being in the thalamus before. The mainstream neuroscience view is that the thalamus is mostly a network hub, but this is biology and we can’t rule out that the thalamus saves some state. If so, it wouldn’t be all of the thalamus, just certain nuclei, because a lot goes through it that we’re not conscious of. And of course, the contents would have to be something like a pointer table rather than actual representations, since it seems like full representations would require more substrate than is there.

          I do think the frontal lobes having veto authority is the right way to think about it. A lot of reflexive and habitual actions well up from subcortical regions. The role of the prefrontal cortex appears to be to orchestrate simulations (recruiting the sensory regions for help) and select which of those reflexes / habits to allow and which to inhibit. This fits with the anatomical evidence that most of the axons projecting down to the brainstem are inhibitory.

          Interestingly, the prefrontal cortex has connections, many (but not all) through the thalamus, with much of the rest of the cortex. In particular, posterior (back) parts of the PFC seem to be connected to earlier sensory integrations. The farther anterior (forward) we go, the more it’s higher level perceptual contents from the parietal regions. This makes sense since these anterior regions are more associated with abstract concepts than more posterior ones.

          I think all of this is compatible with Brascamp’s study (if that’s the one you meant).

          Flattery will get you everywhere with me 🙂 but really I’m just an amateur student myself, picking up what I can from the real experts, working neuroscientists and neurophilosophers.


          1. Could be. I’m not super familiar with semantic pointers. Most of the material seems to require a non-trivial commitment in time to parse. They sound like mini-representations. I guess, in terms of the thalamus, it comes down to just how mini.


          2. FWIW, a semantic vector is a vector with several hundred components, each component a real number. Typically they are normalized such that the square root of the sum of component squares is one. Thus, each vector defines a point on a hypersphere of hundreds of dimensions.

            They have the interesting property that any two random vectors end up being essentially orthogonal to each other. That’s what enables the combinatorial magic.
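
            To make that concrete, here’s a quick Python sketch of that near-orthogonality property (the dimension and seed are arbitrary choices for illustration, not from any particular semantic pointer model):

                import numpy as np

                rng = np.random.default_rng(0)
                d = 500  # "several hundred components"

                def random_unit_vector(dim):
                    # Draw a random vector and normalize it onto the unit hypersphere.
                    v = rng.standard_normal(dim)
                    return v / np.linalg.norm(v)

                a, b = random_unit_vector(d), random_unit_vector(d)
                print(np.dot(a, a))  # 1.0: unit length by construction
                print(np.dot(a, b))  # ~0.0 (spread ~ 1/sqrt(d)): nearly orthogonal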

            Here’s a Computerphile video that gets into them a bit:


          3. In the sense that “closeness” of terms means both vectors are “nearby” (i.e. their dot product is low), very similar. But you can do a kind of algebra with semantic vectors not possible with neural nets. They show some of that algebra at the end of the video.

            For instance {dog} + {roar} – {lion} = {bark}
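
            If you want to play with that algebra, here’s a toy Python version. The vectors here are made up on the spot (random bases, with {bark} and {roar} constructed as compounds), so the analogy holds by design; a real system would use learned vectors:

                import numpy as np

                rng = np.random.default_rng(1)
                d = 500

                def unit(v):
                    return v / np.linalg.norm(v)

                # Hypothetical toy lexicon: each base concept is a random unit vector.
                base = {w: unit(rng.standard_normal(d))
                        for w in ["dog", "lion", "animal-sound"]}

                # Compound concepts built so the analogy holds by construction.
                bark = unit(base["dog"] + base["animal-sound"])
                roar = unit(base["lion"] + base["animal-sound"])

                # {dog} + {roar} - {lion} lands nearest {bark}.
                query = unit(base["dog"] + roar - base["lion"])
                candidates = {"bark": bark, "roar": roar,
                              "dog": base["dog"], "lion": base["lion"]}
                print(max(candidates, key=lambda w: np.dot(query, candidates[w])))  # bark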


          4. My understanding is they’re a way of modeling concepts that allows a form of algebra among them — the example I described is similar to the test question: “A dog’s [blank] is like a lion’s roar.”

            The neural net in the paper comes from their attempt to provide a “biologically plausible spiking neuron model that processes semantic pointers to account for data from categorization experiments” in support of their idea that semantic vectors model how the mind handles concepts.


          5. ” Thus, each vector defines a point on a hypersphere of hundreds of dimensions.”

            Any chance that is even remotely in the capability of a human brain without quantum computing being involved?


          6. A semantic vector is a mathematical object, so in the literal sense, I don’t see how. The idea, AIUI, is that semantic vectors might model how the brain might handle mental concepts if neurons or areas of brain activity are treated as the components of the vector.

            I’d have to read the papers in detail to understand exactly how brain activity is mapped to vector components, because that’s one thing I’m not clear on. As you imply, semantic vector components use real numbers, and I don’t know how brain activity is mapped to a set of real numbers here.

            If it’s one neuron = one vector component, then it’s not the same thing, since neurons, in the large, either fire or don’t fire, and in the small it’s not clear how a real number is extracted from the firing rate or whatever analog property of a neuron.
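
            For what it’s worth, the standard rate-coding assumption would extract a real number per neuron something like the sketch below. To be clear, this is my guess at the kind of mapping involved, not anything taken from the papers:

                import numpy as np

                def firing_rate(spike_times, window=1.0):
                    # Rate code: spikes per second over a time window,
                    # yielding one real number per neuron.
                    return len(spike_times) / window

                # A population of neurons then gives one vector component per
                # neuron, normalized onto the unit hypersphere as above.
                spike_trains = [[0.1, 0.4, 0.9], [0.2], [], [0.05, 0.5, 0.55, 0.8]]
                rates = np.array([firing_rate(s) for s in spike_trains])  # [3. 1. 0. 4.]
                print(rates / np.linalg.norm(rates))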

            The impression I have is they built a simulation using semantic vectors that seems to do some things like a brain does. I look a little askance at models of physical processes (such as “black holes” with “event horizons” modeled in fluid going down a drain, let alone software models), but it’s certainly a path worth pursuing.


          7. Mike, I honestly think checking out at least one paper would be worth your while. I suggest this one: https://www.researchgate.net/publication/280629772_Concepts_as_Semantic_Pointers_A_Framework_and_Computational_Model

            The thing is, you seem to have the idea that the representation provided by a semantic pointer must be fairly limited. “They sound like mini-representations.” In fact, with just a few hundred neurons you can represent a really large number of concepts, and you can do error correction. And you can do operations on them, like combining concepts, etc.

            *


          8. Thanks James. This one is much better for someone not already versed in the material. So far, I’ve only read the first five sections and part of the sixth. I think I grasp the basic concept, but remain unclear on how exactly it happens in the brain. But I can see why you were asking about them in relation to higher order representations. They seem like a viable mechanism for such representations, although those representations are usually seen as being in the prefrontal cortex, or maybe the parietal regions.

            And as I’ve noted before, there’s some resonance with Damasio’s convergence-divergence zones, although a semantic pointer might be more a reference to a CDZ (or collection of related CDZs), particularly since the CDZ is seen as the concept whereas the paper is careful to delineate semantic pointers from the concept itself.

            I’ve had Eliasmith’s book sitting in my Kindle account for a while now. Unfortunately the formatting of the book is maddening, which has delayed my reading it. Maybe sometime soon.


          9. They seem like a viable mechanism for such representations, although those representations are usually seen as being in the prefrontal cortex, or maybe the parietal regions.

            Eliasmith, I think, might agree with you that those representations are in the prefrontal, but I haven’t seen anything that necessitates that. And it depends what you mean by “the representation”. For me a representation looks like I—>[M]—>O, where the Input is a sign vehicle, and the mechanism interprets that sign vehicle. I think some people sometimes refer to the sign vehicle as “the representation”. In the current case, I think the semantic pointer is the sign vehicle, potentially located in the thalamus, and there are multiple interpreting mechanisms in the cortex, including prefrontal.

            I think this way because a semantic pointer can support a huge variety of concepts, including concepts from various sensory modalities. ( {dog}+{bark} ). Damasio’s convergence zones would explain how places in the cortex become centers of specific concepts, but they would not explain how we can create arbitrary, temporary, ad hoc combinations of concepts, like a mouse named Jehosaphat that thinks it’s a lion and roars. The semantic pointer cannot be tied to any single concept. So a semantic pointer would need to be able to take input from all over the cortex, i.e., from all of those convergence zones. Seems to me the thalamus is the best candidate for that kind of connectivity, with inputs and outputs to every part of the cortex.

            Yes, this is all wild speculation, but I can’t help thinking it fits with just about every theory of consciousness I know about. A semantic pointer located in the thalamus is a perfect candidate for a global workspace. Mechanisms of attention control what gets into the semantic pointer workspace. The vector concept of the semantic pointer is a perfect match to the “qualia space” concept of IIT (see version 3.0), and lines up well with its axioms and postulates. An interpretation of the semantic pointer is by definition a higher order thought. This semantic pointer concept provides a specific well-defined version of philosophical representationalism, and functionalism, and structuralism(?).

            So I remain on the lookout for a better explanation.

            *


          10. The main issue I see with the thalamus is the amount of substrate it has. And from what I read, we know the lion’s share of that substrate is taken up by pass-through connections, such as the one with the optic nerve going to the LGN (in the thalamus) and then to the occipital lobe. Higher order thought, I think, is computationally expensive, and requires a substantial amount of neural real estate. The PFC, being vast, is ideal for that kind of role, and it fits with all the neurological evidence. And it’s actually just as interconnected as the thalamus, including with lots of independent connections to other subcortical regions such as the amygdala, basal ganglia, etc.

            That’s not to say that the pulvinar (in the thalamus) doesn’t have a major role in attention. But from what I understand, it’s a mistake to view it as being in control of attention. It’s more a focal point for reflexive “bottom up” attention from the brainstem and other subcortical regions, and “top down” attention from the frontal lobes.

            It’s possible there may be semantic pointers in the thalamus, but I would seriously doubt that’s the only place they would be. Indeed, is it reasonable to think there may be hierarchies and clusters of these pointers?


          11. The main issue I see with the thalamus is the amount of substrate it has. […] Higher order thought, I think, is computationally expensive, and requires a substantial amount of neural real estate.

            That’s kinda like saying a chalkboard won’t work because it is too small to hold all of the words that you could possibly write on it. I’m suggesting the semantic pointer is a chalkboard. The computationally expensive stuff is being done by the agents that can see the chalkboard and write to it. There is a competition to write on the chalkboard. Each agent wants to write its own concept on the board. The attention management is done by those agents that control who gets to write on the chalkboard.
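
            In code, the picture I have in mind is something like this minimal Python sketch (all the names and salience numbers are invented for illustration):

                class Chalkboard:
                    """Tiny substrate: holds whatever was last written, nothing more."""
                    def __init__(self):
                        self.content = None

                class Agent:
                    """Stands in for an expensive cortical process that can see
                    the board and compete to write on it."""
                    def __init__(self, name, concept, salience):
                        self.name, self.concept, self.salience = name, concept, salience

                def attention_cycle(board, agents):
                    # Attention management: the highest-salience agent wins the
                    # competition and writes its concept onto the board.
                    winner = max(agents, key=lambda a: a.salience)
                    board.content = winner.concept
                    return winner

                board = Chalkboard()
                agents = [Agent("vision", "dog approaching", 0.9),
                          Agent("memory", "the dog's name is Benji", 0.4)]
                print(attention_cycle(board, agents).name, "->", board.content)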

            It’s possible there may be semantic pointers in the thalamus, but I would seriously doubt that’s the only place they would be. Indeed, is it reasonable to think there may be hierarchies and clusters of these pointers?

            This I think is an evolution issue. Nature frequently seems to create new features by just creating more of some old feature, and then, because that new stuff isn’t *needed*, it’s free to tinker the new stuff into something useful. But the new stuff doesn’t usually end up being radically different from the old. That’s why I can see multiple, even hierarchical, semantic pointers being in one anatomical feature, but it seems unlikely to develop the same function in a completely separate feature. The neurons in a semantic pointer have a very specific interaction with each other, and I don’t see that just appearing by accident somewhere else.

            So the question is, where would a semantic pointer evolve first?

            *


          12. The chalkboard analogy sounds more like working memory than the global workspace. We know working memory exists in some form or another, and lesions to the PFC can impair or eradicate it. Of course, we can’t rule out that those particular PFC regions are just part of networks utilizing something stored somewhere else. And it’s hard to ever completely rule out foundational structures, except that lesions to the thalamus reportedly tend to knock consciousness out completely (as in coma) rather than removing specific contents.

            Where would a semantic pointer evolve first? Given that I’m still hazy on exactly how they work, at this point all I can say is I don’t know. But some thoughts on the evolution of the forebrain. The thalamus and hypothalamus are the major structures that emerge from the embryonic diencephalon. Most of the rest of the forebrain is in the telencephalon.

            Both structures, the diencephalon and telencephalon, and the relationship between them, are ancient, going back to the earliest vertebrates. If we go back to pre-vertebrate cephalochordates, we can see an early diencephalon without a telencephalon, but apparently in function it amounts to little more than a hypothalamus. By the time we get to creatures anyone is tempted to label “conscious”, we have both the telencephalon and diencephalon, although it’s not clear that full thalamic functionality is there yet. And even the early telencephalon had a pallium (the precursor to the mammalian cortex). (This is all in the vertebrate line. Invertebrates are completely different.)

            All of which is to say, I can’t see that there’s an obvious answer. If we’re looking for particularly malleable regions, I wonder if the hippocampus wouldn’t have a better claim.


          13. D’oh! Yes, thank you, you are correct; I had that reversed. Orthogonal vectors have a dot product of zero, close ones have a large one. Sorry about that!


  2. Unconscious Cognition is clearly in play here, and the necessary precursor for any telic as to Agency and Action, really an indication of Organic Intelligence as set against any Cartesian duality that separates the Mind from the Body: Embodied Consciousness, however ‘aware’ such condition registers with the self-reflective utility of consciousness per se….methinks. The function of Intuition comes to (my) mind, somehow. Great piece of interesting speculation here. Thanks!


    1. Thanks BQ! Definitely there’s a lot of nonconscious stuff going on here, and all cognition is embodied cognition. On self reflection, the interesting thing about this is most parties acknowledge that introspection is a prefrontal mechanism. (I say “most” because IIT’s postulates seem to imply that at least some incipient form of it arises from integration, presumably in the back of the brain.)


      1. Koch argues in The Feeling of Life Itself (which I bought after your review of it) that prefrontal areas are almost completely unnecessary for consciousness, since people who have had them removed due to severe lesions are almost indistinguishable from people with them intact. In other words, people without a PFC report experiences and seem mostly normal. It may have roles in metacognition, but seeing, hearing, and feeling get along fine without it.


          1. The problem, particularly with the second paper (and to a much lesser degree the first), is that we would expect the PFC, assuming it is intact, to be active during consciousness anyway. As I think I have said before, we would expect many or most parts of the brain to be active during waking consciousness. That doesn’t mean all of the active parts are required for consciousness. The parts of the brain are all there for a reason. The question is whether all of the parts that are active in a normal brain are absolutely essential for consciousness. We would expect every active part to potentially have some ability to modify consciousness or provide a capability that would not be present without it.

            The first paper is somewhat more problematic for Koch’s argument, but it too is based on single cases in its examples, and it is hard to tell for sure whether the counterexamples to Koch had damage to other parts of the brain, or whether there were other explanatory factors.

            BTW, it seems like the first paper is a companion to another paper with Koch among the authors, and there are responses back and forth between the authors. The authors of the first paper dispute the contention in the second paper that the whole PFC was not removed.

            You can see the full paper and responses at these links:

            https://www.jneurosci.org/content/37/40/9603.long

            https://www.jneurosci.org/content/37/40/9593

            I’m slightly inclined to this view from the response to the second paper, as long as we are talking about basic awareness:

            “We also do not rule out the possibility that some regions of PFC may directly contribute some content to consciousness (Koch et al., 2016b) but emphasize that, overall, PFC is neither necessary nor sufficient for being conscious, unlike posterior cortex.”

            However, the PFC must be fairly special for humans, since the human version of it is so large compared to other mammals. The unique delayed maturation of it in humans compared to other primates, and its associations with language, symbol manipulation, imagination, and possibly self-control, certainly mean it is special in some way. One implication of the delayed maturation of the PFC, and its possible association with consciousness, would be that babies and young children should possibly be considered not fully conscious.


          2. Thanks. I actually did a post a while back highlighting those papers. https://selfawarepatterns.com/2017/04/06/is-consciousness-only-in-the-back-of-the-brain/

            The bottom line, it seems to me, is that we need more data. All this anecdotal stuff underdetermines things. As I noted above, it won’t surprise me if both camps are overselling their case and the reality is more mottled. Consider if we just bracket the amorphous mess of consciousness for a moment.

            We can all agree that there is sensory integration in the posterior, and even in the absence of frontal lobes, it can result in reflexive and habitual behavior. Based on Koch’s cited cases, it seems evident that someone with frontal lobes temporarily disabled can remember past sensory integration and behavioral responses when their frontal lobes are restored. But introspection, emotional regulation, self report, and cognition overall require the frontal lobes.

            It seems clear to me that the full human experience requires all or most of it, even if we can get by with a compromised version if portions are missing.


  3. “Consciousness” is appropriately applied to all of cognition, emotion, and executive-planned action. But that doesn’t mean they’re all equal: morally, emotion (valence; good/bad experience) is the most important thing, in my book. So if the back-of-brain processes are sufficient for perception but not for emotion, that makes them a lot less interesting.


    1. I agree that without emotion, we’re missing a crucial aspect of what most of us mean by “consciousness.” The most common question I hear from people is, “Why does it feel like something to have those perceptions?” There are people who say that it’s all feeling, but then what are those people asking, if not for the affective qualities that are interlaced throughout experience?

      But affects (emotions, hunger, fear, etc) are associated with the front of the brain, which makes sense when you remember that they’re action dispositions.


      1. I thought the amygdala was heavily involved with emotions.

        “Three brain structures appear most closely linked with emotions: the amygdala, the insula or insular cortex, and a structure in the midbrain called the periaqueductal gray.”

        https://www.brainfacts.org/thinking-sensing-and-behaving/emotions-stress-and-anxiety/2018/the-anatomy-of-emotions-090618


        1. Depends on who you ask and what you mean by “emotion”. The amygdala is heavily involved in the survival circuitry that produces defensive reactions. The question is where the feeling of the emotion happens.

          Jaak Panksepp and many others argue that the feeling is there, in the amygdala and associated subcortical circuitry. Joseph LeDoux, who’s done a lot of research in this area, along with people like Lisa Feldman Barrett, argues that the feeling only happens in the cortex, notably the prefrontal cortex. In that sense, feelings are representations of lower level impulses, representations used in deliberative simulations to decide which of those impulses to allow and which to inhibit.

          When thinking about who might be right here, I think the thing to ask is, what are emotional feelings for? Why did they evolve? What caused natural selection to favor their development? In that sense, an epiphenomenal feeling associated with a reflexive survival circuit doesn’t make much sense. But representations used in cognitive deliberation do.


          1. Almost everything I read has emotions (particularly fear and anxiety) originating in the amygdala. This even includes fear arising before the reason for the fear becomes accessible to consciousness. Damage to parts of it can leave people without any sense of fear. It also seems to have roles in positive emotions, although this is not as widely studied.

            Certainly fear would have a huge role in evolution since it would serve to marshal the organism for fight or flight. The fact that damage to areas of the amygdala can make one fearless, representations notwithstanding, seems conclusive.

            When we move into the area of processing emotions it might be a different matter. Then representations would have a great role.

            It seems the frontal lobe might have a lot to do with expression of emotions. I would think the PFC with its prominence in humans might have a lot to do with regulation and/or suppression of emotions. Possibly the delay in maturation of the PFC in humans allows it to develop while under extensive social control which may be key to human self-domestication.


          2. Actually, it’s pretty much agreed by all parties, including Panksepp, that the amygdala is not necessary for fear. Even complete bilateral destruction of the amygdalae doesn’t entirely eradicate fear. It does reduce it substantially. But people without amygdalae still feel fear.

            As you noted, the amygdala can be activated before the person feels the fear. LeDoux discusses cases where threats are presented subliminally, leading to activation of the amygdala and resulting in physiological reactions such as increased heart rate, sweating palms, etc., but without the subject consciously feeling fear.

            “I would think the PFC with its prominence in humans might have a lot to do with regulation and/or suppression of emotions”

            I think that’s right. But as noted above, I think that’s the reason we feel the emotion. If all we need to do is respond reflexively or habitually, the feeling becomes redundant, a net burden likely to be selected away.


          3. https://en.wikipedia.org/wiki/S.M._(patient)

            ” If all we need to do is respond reflexively or habitually, the feeling becomes redundant, a net burden likely to be selected away.”

            I don’t think so. If you are on the savanna and a lion approaches, you will feel fear. The fear will ready your body to run, climb a tree, throw a rock. So the readying of the body isn’t redundant. The neurotransmitters are firing away preparing the body for a maximum exertion that could make the difference in life or death. The PFC might engage with the actual decision about what to do but that doesn’t make the fear redundant. The fear was probably there well before the sophisticated PFC. It isn’t there just so the PFC can reason about it.


          4. From that Wikipedia article:

            Research has revealed that S.M. is not immune to all fear, however; along with other patients with bilateral amygdala damage, she was found to experience fear and panic attacks of greater intensity than the neurologically healthy controls in response to simulation of the subjective experience of suffocation via carbon dioxide inhalation, feelings which she and the others described as completely novel to them.

            “The fear will ready your body to run, climb a tree, throw a rock. So the readying of the body isn’t redundant. ”

            That’s conflating the defensive reaction with the feeling of fear. LeDoux pointed out that he, as a fear researcher, actually contributed to this confusion years ago when he started referring to the defensive reaction as implicit fear and the conscious feeling as explicit fear. However, he now admits that was a mistake, that the distinction was lost on most science reporters and the general public, which is why he now prefers “defensive reaction” and “survival circuits” instead of “fear circuits”, except when specifically talking about the conscious feeling of fear.

            (Based on what I’ve read, I actually suspect there are four phenomena: innate defensive reactions, learned defensive reactions, nonconscious feelings of fear, and conscious feelings of fear, with the latter two only happening in the cortex (mostly, but perhaps not exclusively, in the frontal lobes).)


          5. Evidence is that those panic attacks from suffocation involve a different mechanism, one that bypasses the amygdala. The amygdala response is to an external threat.

            I’m not conflating anything. Fear is what readies the body. It is a somatic response. You’re not suggesting that the PFC needs cognitive awareness of the threat before the somatic response occurs or that the PFC is the source of fear, are you?

            Still, the point is that emotions primarily originate outside the PFC, mostly in the limbic system, but may be controlled, suppressed, modified, or acted on by the PFC.

            https://en.wikipedia.org/wiki/Limbic_system


          6. James, regarding the amygdala, I’m going to quote Jaak Panksepp here, mainly because he was also a believer in subcortical feelings and so you might regard him as more credible than LeDoux or Barrett. (Note the reference to LeDoux inadvertently creating this impression.)

            Somehow, after LeDoux’s 1996 book, it has become popular folklore to see the amygdala as the wellspring of all fear, indeed of all emotion—which is a sadly uninformed view. Individuals with totally damaged amygdalae (i.e., people with the congenital Urbach-Wiethe disease, leading to gradual calcification and destruction of the amygdala) can still experience worries, fears, and plenty of other emotions. Also, PLAY, GRIEF, CARE, and SEEKING arousals do not prominently involve the amygdala. Indeed, only one of the subnuclei of the amygdala, the central nucleus, is part of the primary-process emotional system that helps integrate the evolutionarily provided FEAR state with higher-order learning processes (yielding secondary emotions).

            Panksepp, Jaak. The Archaeology of Mind: Neuroevolutionary Origins of Human Emotions (Norton Series on Interpersonal Neurobiology) (p. 71). W. W. Norton & Company. Kindle Edition.

            “You’re not suggesting that the PFC needs cognitive awareness of the threat before the somatic response occurs or that the PFC is the source of fear, are you?”

            Again, the distinction here is between the defensive reaction and the feeling of fear. I don’t think the PFC is necessary for the defensive reactions you list. Those happen from subcortical survival circuits. But from what I’ve read, most feelings of fear happen in the cortex.

            Often people dislike this view because they think it implies that animals don’t have conscious emotional feelings. (Admittedly that is the take LeDoux and Barrett have.) But all mammals have a cortex, and even basal vertebrates have a pallium, so I don’t think that conclusion necessarily follows.


          7. The idea of feelings being felt in the PFC really makes no sense.

            Almost everybody associates fear of external threat with the amygdala. Other parts of the limbic system are associated with other emotions.

            Almost all emotions are associated with bodily feelings, and those feelings, which are core to the experience of emotion, are not processed in the PFC.

            “Contemporary experiments demonstrate the neural and mental representation of internal bodily sensations as integral for the experience of emotions; those individuals with heightened interoception tend to experience emotions with greater intensity. The anterior insula is a key brain area, processing both emotions and internal visceral signals, supporting the idea that this area is key in processing internal bodily sensations as a means to inform emotional experience.”

            https://theweek.com/articles/825005/how-body-talks-brain


  4. I’m of two minds in thinking about consciousness existing anywhere in the brain. On the positive side, surely there are parts of the brain which are more associated with a given definition for consciousness than others, so in that sense one might legitimately claim that there are more conscious parts. And especially so for me given that I suspect that there are mechanisms in the brain which create affect, or consciousness itself as I like to define the term. I consider this “agency” to exist as the most amazing stuff in the universe. So here I’m good with this project.

    But then I also must ponder this question in terms of more understood ideas. Is there a “light” part of a lightbulb, a “parent” part of a parent, a “wind” part of a fan, or even a “computation” part of a computer? Apparently things which do things do not have what they do inside them, but instead produce them. So my point is that while there must be parts of the brain which tend to be more associated with a given definition for the “consciousness” term, trying to locate consciousness as if it exists somewhere inside the brain is probably no more useful than trying to locate where “blogging” exists inside a blogger.


    1. I get where you’re coming from. And loose language here can be an issue. A more precise way to think about the question is, which regions are necessary and sufficient for consciousness? If you look at the other systems you mentioned, that becomes a coherent question. So in the case of the computer, we can dump the outer casing and still have computation. For that matter, we can dispense with the hard drive (or SSD) and a lot of other stuff.

      Of course, if we dispensed with the power supply, all computation would stop. But that’s arguably a case of a component being an enabler without necessarily being part of the necessary and sufficient components. Arguably we could dump every non-enabling component but the CPU chip, but then the computation might exist but be inaccessible since there would be no I/O.

      As you alluded, definitions matter. If we ask which parts of the computer are necessary and sufficient for Microsoft Windows, we get a very different answer than for computation.


      1. Much better Mike! I know that I may seem a bit anal-retentive for insisting upon accurate language when discussing these sorts of things (as in “What’s a useful definition for the consciousness term?”, while most people just ask what “is” consciousness, as if it exists to be discovered rather than defined). But language seems to commonly screw people up. If we lazily (or ignorantly) say things incorrectly, then this should tend to throw us off, at least in various quasi-conscious ways. (I still dislike the “unconscious” term, as you can see.)

        So now that we’re officially talking about which brain regions are necessary and sufficient for consciousness rather than the misleading “Where?” question, might an “architect” have any other useful things to say here? Perhaps.

        The lightbulb is necessary but not sufficient to produce light — an electric current must exist as well. Similarly a fan motor properly hooked up to a fan blade is necessary but not sufficient to produce wind — electricity is also needed here.

        Our computers run on the basis of electricity as well, so I’m not inclined to say that computation requires it merely as “an enabler” — a power source is just plain necessary! And properly supplied to a processor, might this be sufficient for computation? If a processor is built directly with information to process, then maybe, though to me that would mean that an informational input source existed. And should we say that processed information is sufficient for computation? Though we might, such processing wouldn’t actually do anything external. So in the end I’d rather say that in order for computation to occur, input must be processed for output function.

        Apparently most everyone in the business considers brains to function as neuron based computers. These machines accept input information, process it by means of logic gates, and so provide output function. I propose however that this kind of computer can’t be outfitted to sufficiently deal with more open environments. Thus the brain apparently produces a tiny virtual computer (consciousness) that functions on the basis of something other than neurons. So here’s where my architecture comes in. If “engineers” would like to use neuroscience to help narrow down where elements associated with consciousness occur in the brain, it should be helpful for them to grasp the nature of this second computer.

        Which areas of the brain are responsible for producing affects? They will produce primary consciousness, even if only as purely non-functional epiphenomena. It is here and only here that “there is something it is like” to exist, or the foundation for qualia itself. I consider this to constitute the motivation which drives functional examples of consciousness, just as electricity drives functional technological computers.

        Which areas of the brain are responsible for producing conscious input information, such as vision? Then which areas of the brain are responsible for producing memories of past conscious experiences? (My understanding is that neurons which have fired in the past to produce those experiences thus have a higher propensity to fire again. Not sure if there are dedicated brain areas for this though.)

        Wherever these three forms of input are produced by the brain, there is then the conscious processor which interprets them and constructs scenarios about what to do to make itself feel as good as it can. I call this “thought”, and have no idea which parts of the brain support this cognizant experiencing entity.

        (There’s also conscious muscle operation output, though in truth the non-conscious brain seems to take care of this based upon detected conscious decisions.)


        1. “— a power source is just plain necessary!”

          Maybe I should have said “uniquely necessary and sufficient”, since lots of stuff is powered but doesn’t do computation (except in a pancomputationalist sense since any dynamic physical system could be interpreted to be doing computation).

          “So in the end I’d rather say that in order for computation to occur, input must be processed for output function.”

          I think then you’d be talking about something broader than just computation. But you’re right, whatever the processor might do in isolation wouldn’t have any useful effects in the world.

          “Apparently most everyone in the business considers brains to function as neuron based computers.”

          Depends on what you mean by “in the business.” Most neuroscientists seem comfortable discussing neural computation, although many stipulate that it’s a different kind of computation than found in commercial computers. But when it comes to consciousness, there’s a distinction to be made between “the cognitive neuroscience of consciousness” vs “consciousness studies”. The former is mostly comfortable with computation. The latter includes a lot of philosophers of mind, authors, and personalities whose views of consciousness are all over the place. Much of this latter group seems to detest the very idea of brains as computers.

          “I propose however that this kind of computer can’t be outfitted to sufficiently deal with more open environments.”

          What do you make of the machine learning neural networks now common in the AI field? These neural networks remain far simpler than biological neural networks, but even as a pale imitation, they seem to bring in a lot of capabilities missing from traditional hand coded computing. Are you saying the virtual machine emerges from the biological versions?

          “My understanding is that neurons which have fired in the past to produce those experiences,”

          I think that’s the reality, which is why you rarely see me treat memory in and of itself as a functional component, even though it’s critical. Even the hippocampus, which is involved in making sure the right memories stick, does so within the context of a spatial navigation system.

          “Wherever these three forms of input are produced by the brain, there is then the conscious processor which interprets them and constructs scenarios about what to do to make itself feel as good as it can.”

          That’s the question. Where do perceptions and affects become integrated? I think the answer is the PFC, but many insist the posterior cortices, and others list various subcortical locations. Only time, and empirical data, will tell.


        2. “Apparently most everyone in the business considers brains to function as neuron based computers. These machines accept input information, process them by means of logic gates, and so provide output function.”

          How is this so if brains are both analog and digital? If there are analog aspects, there is likely something more than logic gates going on. It isn’t even completely clear that neurons are the sole basis of the processing that does occur in the brain. I know this has been a tempting view, but it hasn’t exactly been proven, and on another post I linked to a paper that suggested something is going on in the dendrites.


          1. James,
            I’m sure you’re aware that dendrites are part of the neuron. Basically the stuff happening there doesn’t change the overall picture of the neuron summing up its excitatory and inhibitory signals, and firing if the soma ends up being depolarized to around -55 millivolts.

            If multiple excitatory signals accumulate to reach the threshold, that can be viewed as a type of AND processing. If one of the excitatory signals alone is strong enough to do it, that’s a type of OR processing. And if an inhibitory signal comes in negating the excitatory stuff, that’s a NOT operation. As you note, it’s analog at this stage, so it doesn’t have the clean discrete divide of digital processing, and with thousands of inputs, the reality is more akin to the processing of thousands of logic gates. And, of course, the strength of synaptic connections is a constantly changing thing, at least for the chemical ones.

            I used to think that the logic gate comparison was a little more apt for small dendritic branches, but the general lack of inhibitory connections really just leaves those branches as tributaries to the overall summing computation of the neuron.
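
            A toy Python version of that summing-and-threshold picture, with weights and threshold in abstract units rather than millivolts:

                def neuron(inputs, weights, threshold=1.0):
                    # Fire (1) if the weighted sum of inputs reaches threshold.
                    total = sum(i * w for i, w in zip(inputs, weights))
                    return 1 if total >= threshold else 0

                # AND-like: two weak excitatory inputs must both arrive.
                assert neuron([1, 1], [0.6, 0.6]) == 1
                assert neuron([1, 0], [0.6, 0.6]) == 0

                # OR-like: either excitatory input alone is strong enough.
                assert neuron([1, 0], [1.2, 1.2]) == 1

                # NOT-like: an inhibitory (negative-weight) input vetoes the rest.
                assert neuron([1, 1], [1.2, -1.5]) == 0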


          2. Definitely. John Dowling describes the action potential as converting amplitude modulation to frequency modulation, which of course gets translated back to amplitudes in the dendrites of the downstream neurons.


          3. “A sequence, or “train,” of APs [action potentials] may contain information based on rather diverse coding schemes. In motor neurons, for example, the strength at which an innervated muscle is flexed depends solely on the “firing rate,” the average number of APs per unit time (a “rate code”). At the other end of the spectrum lie complex temporal codes based on the precise timing of single APs. They may be locked to an external stimulus such as in the auditory system or be generated intrinsically by the neural circuitry.”

            https://www.pnas.org/content/94/24/12740
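
            To illustrate the difference in Python: two spike trains with the same rate code but different temporal codes (timings invented for the example):

                # Same spike count over the window, different precise timing: a
                # pure rate code can't distinguish these trains; a temporal code can.
                train_a = [0.10, 0.20, 0.30, 0.40]  # regular
                train_b = [0.10, 0.12, 0.38, 0.40]  # bursty
                rate = lambda spikes, window=0.5: len(spikes) / window
                print(rate(train_a), rate(train_b))  # 8.0 8.0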


          4. As an aside, logic gates do not a “computer” make.

            It points to why I dislike the comparison of the brain to a computer: I think it confuses the issue. The brain is nothing like a (conventional digital) computer — other than a vague comparison to logic gates.

            Which is accurate enough, but you can’t create a computer from just logic gates. It requires other pieces, such as counters, latches, clocks, and (most importantly) micro-code.

            What you can create with pure logic is a signal processor more akin to a radio or amplifier.

            Debating whether or not neurons are like logic gates (they are) misses the point. The point is that having something like logic gates isn’t enough for a (conventional digital) computer. (But it is enough for an analog signal processing system that evaluates inputs and produces outputs.)


          5. “clocks”

            Somewhat of a mystery in all of this still is how neurons synchronize, which they do at least in groups if not the entire brain, and how information based on “complex temporal codes” exactly works. My suspicion still in all of this is that this handling of time and synchronization is intimately connected to consciousness.


      2. Mike,
        I hadn’t realized that you were talking about unique things that are both necessary and sufficient for computation. I was simply adding up all the necessary components that something seems to need in order for it to exist, and then calling that combined set sufficient for it. Obviously power itself is not sufficient for computation. But then a computer processor itself should not be either, since it requires power from which to compute. Furthermore it should also need something to process, or input information. Regarding output, however, I agree that it doesn’t technically seem mandated, though without it there would be no function beyond the entity itself (except, I guess, for externalities such as creating heat, entropy, or that sort of thing). Externally functional computation would require output mechanisms, which is technically what I was referring to above.

        Yes I do realize that some people consider the computer/ brain association anywhere from non useful, to just plain wrong. On the non-useful side, this position is hard for me to understand. If brains accept input information and process it for output function (such as for regulating heart function, which I presume we all acknowledge), then how might one argue that this is not in some sense “computation”? But then along with such an argument I’d hope for a person to provide what they consider to be a more effective analogy for us to use. It seems to me that virtually all of our understandings occur by relating stuff that we don’t feel that we grasp, to stuff that we do feel that we grasp. We seem to need these base models in order to more effectively learn about new things, and so should attempt to build productive associations, and especially to aid our long suffering soft varieties of science.

        Then as for the “just plain wrong” side, I see this as a standard failure in the subject of epistemology. Given that we do not yet have a respected group of professionals which preach that there are no true or false definitions for our terms, but rather only more and less useful ones in a given context, this sort of thing should be expected. So it would seem that the continued failure of philosophy helps our soft sciences remain soft.

        On “machine learning neural networks now common in the AI field”, I can see how there would be hope that our extremely primitive non-conscious machines would become more and more autonomous in this manner. There probably isn’t much room to run here however. Evolution surely took this road as well, but found it insufficient and so developed conscious function to augment non-conscious function. (Here I’m presuming that something as autonomous as an ant, for example, must have a bit of consciousness for guidance as well. Perhaps it doesn’t and so it would be possible for our non-conscious machines to get this far.)

        It must ultimately be physics that evolution uses to create affect, or the personal agent which you and I display. Evolution could only have done this through life based processes, merely because that’s the exclusive tool which it has at its disposal. So it’s conceivable that we could get the physics right by means of technological systems, though I think this belittles the many orders of engineering (let alone architecture) by which we should first need to improve. And there are a couple of fundamental differences between us and evolution. Firstly, evolution needn’t understand anything to do what it does. That’s a huge advantage. Secondly, it has tools from which to function at both micro and macro levels in countless ways over thousands and millions of years. We simply do not and will not ever be able to work in that way.

        On conscious memory, I think you’re right not to consider it “functional”, by which I believe you mean something that doesn’t process. I consider it one variety of input to the conscious processor (“thought”). It seems to me that memory in some capacity must have existed from the beginning for effective conscious function to evolve.

        Note that in the evolution of consciousness there would first be a non-conscious brain which naturally does things that create an epiphenomenal agent — no functionality, just something with sentient existence. In all that neuron firing, it must have been that when the thing which felt bad/good serendipitously got to choose something to do, some iteration(s) succeeded well enough to continue evolving. But I’m also saying that the propensity for neurons which create an experience to fire again, or “memory”, would have been crucial for such agency to work from the beginning. If such an entity doesn’t grasp what was felt an instant ago, choosing something to do shouldn’t be effective. But since neurons which fire to create a conscious experience have a greater propensity to fire again in at least some capacity, this “memory” should provide a ready-made form of conscious input for the experiencing entity to use.
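        As a crude illustration of that refiring propensity (a toy sketch of my own, not a model of real neurons; all the numbers are arbitrary assumptions):

```python
# A minimal sketch of "neurons that have fired become more likely to fire
# again": a crude stand-in for a ready-made "memory" input. The baseline
# probability and increment are arbitrary illustrative values.
import random

random.seed(0)
excitability = [0.1] * 10   # baseline firing probability for 10 toy neurons

for trial in range(5):
    fired = [random.random() < p for p in excitability]
    # Firing raises a neuron's future excitability: a degraded "memory" trace.
    excitability = [min(1.0, p + 0.2) if f else p
                    for p, f in zip(excitability, fired)]
    print(f"trial {trial}: {sum(fired)} of 10 fired, "
          f"max excitability {max(excitability):.1f}")
```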

        “Where” does this entity and its associated thinking occur? There might not be an actual location in the brain where this virtual computer puts it all together. If you’ve got ten computers working together to solve a problem, is there a single place in the system where they’re working? Similarly, if there are various places in the brain responsible for what we term “consciousness”, there may not be a single place where it’s all put together. Though neuroscientists in general seem extremely interested in locating such a place, this may be a waste of their time. Regardless of “where” the thinker exists, for now it should be most important for engineers to grasp the sorts of things which facilitate the interpretation of inputs and construction of scenarios in the quest to feel better, or the conscious form of function that I suspect does less than a thousandth of one percent as many calculations as the vast supercomputer which facilitates it.
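        To make the ten-computers analogy concrete, here’s a toy sketch (my own illustration, assuming nothing about actual brains): a sum is split across ten worker processes, and there is no single place in the system where “the solving” happens.

```python
# Toy illustration of distributed computation: ten workers each hold only
# a fragment of the problem; the answer exists nowhere until combined.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process sums only its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::10] for i in range(10)]  # interleave across 10 workers
    with Pool(processes=10) as pool:
        partials = pool.map(partial_sum, chunks)
    # No single worker ever "contains" the computation as a whole.
    print(sum(partials))
```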

        1. Eric,
          “Here I’m presuming that something as autonomous as an ant, for example, must have a bit of consciousness for guidance as well.”

          I’m personally not sure about ants based on what I’ve read about their nervous systems and behavior. It’s not clear to what extent their sensory information is integrated, or to what degree they have the ability to learn in a non-reflexive manner, which would imply affects.

          “Note that in the evolution of consciousness there would be a non-conscious brain which naturally does things that create an epiphenomenal agent — no functionality but rather just something with sentient existence.”

          Yeah, we disagree on this point. It seems to me that natural selection operates on behavior and energy costs. A trait that produces no adaptive behavior but costs energy seems like it’s on the negative side of the selection ledger. Note that every piece of neural substrate costs energy (neural tissue is among the most expensive in the body), so if some of it were generating this epiphenomenal agent, it would face immediate selection pressures. Unless it were a spandrel (in which case we’d need to identify what it rode in on), it would need to provide at least some incipient benefit from the beginning.

          “It seems to me that in some capacity memory must have existed from the beginning for effective conscious function to evolve.”

          The term “memory” has a lot of connotations. Fundamentally, any change in state is a memory, so habituation and sensitization, both of which can happen without a brain, are included. There’s semantic memory, which requires more complexity. And then there’s episodic memory, the most complex, and probably the kind possessed by the fewest species, although depending on who you ask, that might include all mammals and birds.

          “there may not be a single place where it’s all put together.”

          Could be. I read an article this morning reporting that movement affects processing even in early sensory regions. If true, that means the integration may be happening all over the brain.
          https://www.quantamagazine.org/noise-in-the-brains-vision-areas-encodes-body-movements-20191107/
          Although I’m learning to be cautious of these types of articles. Often it’s popular press hyping something out of context. I always feel a lot better when neuroscientists I trust weigh in on this kind of thing, and none have yet for this.

          1. I saw that article in Quanta too and was wondering about your reaction to it.

            I wouldn’t have predicted it, but I’m not surprised by it either. I’ve always felt that the brain (conscious and unconscious) exists first and foremost to control the body and manage its relationship with the world. So the fact that information about the body and its movements is scattered around isn’t surprising. What I wonder is whether this somatic information extends even into more abstract and metacognitive processes. The article focuses on sensory regions, and the studies were done on mice.

          2. My reaction is interest, but with the caution I noted to Eric. One of the reasons I read so many neuroscience books is that many of these popular science articles don’t do a good job of putting their subject into overall context.

            That said, we’ve known for a long time that most of the signalling in the sensory regions is back propagation. (I saw 90% cited somewhere.) I think the assumption used to be that all the contents of the back propagation originated locally, but it seems they can be affected by distant regions. It actually isn’t surprising that brain regions affect each other, but it is surprising that the effects reach the early sensory regions.

            As for your wondering about metacognitive processes, if it’s reaching early sensory regions like V1, it seems hard to imagine it isn’t reaching those processes as well. I suspect it’s an ongoing feedback loop, with the various regions constantly affecting each other.

            Some of this might also explain why sensory processing migrated to the forebrain with the evolution of mammals. One of the linked papers noted that V1 in the visual cortex is affected, but not the LGN in the thalamus, earlier on the visual pathway.

          3. “If true, that means the integration may be happening all over the brain.”

            It sure does seem the brain is highly interconnected in how it works. (And you know I think noise might play a role.)

            FWIW, the same writer for Quanta has a couple of other interesting recent articles that are related: “A Power Law Keeps the Brain’s Perceptions Balanced” and “To Pay Attention, the Brain Uses Filters, Not a Spotlight.”

            Both involve how holistic the brain seems to be.

      3. Mike,
        On ants, it’s an open question. But my own model of brain architecture (supported by the neuroscience of Feinberg and Mallatt, not that I consider their model as effective as mine) does at least imply that these creatures survive in environments which are too “open” to get by without a bit of agency. The form of consciousness that I mean here is quite primitive, however (“primary” consciousness, rather than the various popular conceptions which seem to require all sorts of fancy brain structures), so we shouldn’t expect to grasp the associated “learning” in these creatures even if it does happen. In the end it may feel like something to exist as an ant, something which helps guide its otherwise robotic function, regardless of any human grasp of how ant brains work.

        I think you’re being way too hasty in dismissing the potential for affect to evolve given energy concerns. The actual energy required may be quite inconsequential. Even if we were to somehow solve the hard problem of consciousness, if we take history as a guide we should expect evolution’s solution to be many orders of magnitude more efficient. I’d think that your suspicion that affect exists by means of abstract information propagation, and so can happen through any facilitating medium, would naturally place the minimum energy requirement lower than my own suspicion that specific physical mechanisms must be involved. So here we seem turned around.

        More importantly, however, you seem to be overlooking the essential means by which evolution occurs. Random mutations happen countless times over tens, hundreds, thousands, and millions of years. Thus some percentage of mutations should “provide at least some incipient benefit from the beginning”. In addition, certain amazingly complex combinations of mutations that theoretically “could” line up with each other to become adaptive should become quite probable from time to time. And yes, life should carry countless spandrels beyond the ones that are clear to us. These should become adaptive under certain environmental conditions and mutations.

        Of some interest to this discussion may be the work of Dr. Suzana Herculano-Houzel, who likes to think of evolution as “change” rather than “progress”. https://brainsciencepodcast.com/bsp/2017/133-herculano-houzel
        Around minute 36 she discusses how glial cell counts remain roughly proportional to neuron counts in all sorts of brains. Apparently no one knows why, though this prevalence suggests something fundamental that’s not to be adjusted.

        On memory, yes, of course the term is used in many ways. I was referring to just one of them, however: essentially “past conscious experiences, which are experienced later in a degraded sort of way”. Note that remembering something painful doesn’t bring back that specific pain, though it does provide conscious information. On the other hand, remembering an embarrassing situation can bring back embarrassment. Regardless, my point was that when affect first evolved, the experiencer may have naturally had a memory input, given the propensity for neurons that have fired to fire again in a degraded way.

        On the Jordana Cepelewicz article that you cited as suggesting that consciousness may not come together in one place, that provides me with some strong fuel! This journalist is talking about a change in perspective among neuroscientists that’s been happening over the past ten years, so not something new. It seems to me that either she’s mistaken about this change in perspective, or it has indeed been happening. Apparently she’s even saying that Anne Churchland, the daughter of Patricia and Paul Churchland, is involved in this. So what do you suspect? Have you not noticed this change in perspective over the past decade, in which case she must be mistaken?

        Apparently she says that ten years ago neuroscientists thought that a reasonable portion of neuron firing was useless noise. Wow! So the most energy-intensive cells in the human body were thought to be so productive that they could fire randomly quite a lot of the time? If even 1% of the 12 watts it takes to run the human brain is useless (out of 60 in total for an average human), I’d certainly find that interesting. Or could it be that neuroscientists were arrogantly presuming that they understood far more about why neurons fire than they did? You know exactly what I suspect about this, but what’s your perspective?
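        Spelling out that arithmetic (my 12 W and 60 W figures are rough ballpark numbers, not precise measurements):

$$\frac{12\ \text{W}}{60\ \text{W}} = 20\%\ \text{of bodily energy use}, \qquad 1\% \times 12\ \text{W} = 0.12\ \text{W}.$$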

        Two questions. First, why would neuroscientists suspect that neuron firing in a somewhat anesthetized animal would occur the same as it normally would? Second, why would they continue to call a part of the brain “the visual cortex” if they now realize that it facilitates far more than just vision?

        1. Eric,
          As you know, I’ve read numerous neuroscience books written within the last ten years, and none have presented the information the way that article did, which makes me suspicious. As I noted, often these types of articles don’t do a good job of fitting the information within the overall context. (The papers it references are much less expansive in their language.)

          What those books have presented is a situation where an enormous amount of backpropagation happens in the sensory cortices. Often this is interpreted as prediction or expectation about what is actually being experienced. If we add that this backpropagation is influenced by other regions of the brain, such as the motor regions, then we get roughly what the article is describing.
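          As a rough illustration of that prediction/expectation interpretation (a toy sketch of my own, not anything from those books or the article; the signal values and learning rate are arbitrary), the basic loop would look something like this: a higher region sends a prediction down, and the lower region only signals back how wrong the prediction was.

```python
# A minimal, purely illustrative predictive-coding loop. The input signal
# and learning rate are arbitrary assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(1)
prediction = 0.0        # the "higher" region's expectation of the input
learning_rate = 0.2

for step in range(20):
    sensory_input = 1.0 + rng.normal(0, 0.1)  # noisy signal from the world
    error = sensory_input - prediction        # feedforward: prediction error
    prediction += learning_rate * error       # feedback: updated expectation
    print(f"step {step:2d}: prediction={prediction:.3f}, error={error:+.3f}")
```

          The point of the sketch is that most of the traffic can be prediction flowing down, with the upward signal carrying only the mismatch.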

          That said, neuroscience is a field where an enormous amount has been learned in just the last few years and developments are ongoing. You characterize scientists before discoveries as being “arrogant.” Rather than giving credit for new discoveries, you’re taking them as signs of incompetence, but that’s not how science works. It isn’t divine revelation. Any theory is subject to revision on new observations, and most scientists are very careful to delineate the limits of current theories.

          On why they used anesthetized animals, I suspect that was the only way they could find to do the studies they were doing. Obviously new methods have been discovered since then. Often science is constrained by the practical limitations of current technologies and techniques. These also aren’t a sign of past incompetence, just an indicator that techniques are improving.

          On still calling it the visual cortex, no matter what new discoveries happen, they still need to be reconciled with all the previous observations. The fact remains that several decades of observations show that lesions in the visual areas lead to visual deficits. And numerous studies show how the visual image processing there works. The extra signalling doesn’t change any of that. It just adds to the picture. Again, that’s the danger of reading these types of articles without grounding in the overall subject matter, or when the article itself fails to properly fit its content into the overall context of what’s known.

          Put another way, if you come away from that article thinking that there is no specialization going on anywhere in the brain (which is the takeaway your question implies), you’re over-interpreting the results. But again, a lot of that is the author’s responsibility.

      4. Alright Mike, she must have sensationalized this. “Backpropagation” sounds far less like “does nothing” or “noise”, so that’s heartening. And apparently you’re saying that neuroscientists haven’t been presuming that anesthetized animals function the same as normal ones. Furthermore, I gather they only call it the visual cortex given its visual connection, not through a presumption that vision is all it does. Sounds good!

  5. With so much happening in the brain (noise or whatever some of it is), I’m not seeing exactly how, when we look at activity, we know how much of it is conscious and how much unconscious. For example, we correlate activity X in the brain with reported experience A, but do we really know whether X was directly related to consciousness of experience A, or whether it was part of the unconscious processing that was needed for experience A? This might be a distinction without a difference, but I’m not seeing a coherent explanation of how something jumps from unconsciousness to consciousness, or why it needs to.

    I find a similar problem with IIT. Assuming a significant amount of unconscious processing occurs in a brain, how much of the integrated information is unconscious and how much conscious? Where does the division occur?

    One other note. I’m surprised no one has mentioned the claustrum as a place for consciousness if we really need to find a place.

    1. That gets into the question of what makes particular content conscious vs nonconscious. Which also gets into the debate on whether phenomenal consciousness overflows access consciousness.

      It seems like there are different categories:
      1. Content that is introspected. Everyone seems to agree that’s conscious.
      2. Content that is within the scope of introspection but is not currently being introspected.
      3. Content that is not within the scope of introspection but might lay down memories that are later introspectable.
      4. Content that has the potential to enter the scope of introspection, but hasn’t done so yet: preconscious content.
      5. Content that will never be within the scope of introspection, such as autonomic processes or early sensory processing.

      Of course, talking in terms of introspection is inherently an access consciousness perspective. But if content is never accessible, in what sense can we really say it’s conscious?

      On the claustrum, the most plausible speculation I’ve seen on it posits a role as a conductor, keeping the neural oscillations of the overall cortex in synch. Of course, there’s also been speculation about the content of consciousness residing there, but Damasio points out that its limited substrate makes that seem unlikely, at least for any kind of physical understanding.

      1. I agree mostly about the categories. But how do we associate activity in [pick the brain area] with a particular category for any given experience? Or could all five categories be found existing side by side in the same selected brain area?

        Koch makes a big deal about the claustrum. I think it is the most densely connected of any area of the brain and has connections to all of the cortex. So keeping the cortex in synch makes sense, but that may in a sense be saying the same thing: it is the area of the brain where everything comes together in a coherent whole, which might be the same thing as the location of consciousness.

        1. On associating brain regions, these days it’s mostly done with brain scans, specifically by looking for correlations in changes in activity level. As you’ve noted before, the whole brain is usually active to some degree, but some regions are more active than others.

          I mentioned the no-report paradigm in the post. One simpler protocol I heard about on a podcast this week is to have the subject report on every stimulus, whether or not they’re conscious of it. That way the circuitry associated with self report stays lit up, so the thing to look for is the regions whose activity varies in correlation with the subject’s report of being conscious of something.

          Of course, there is Block’s objection. In the end, all that scientists can do is look for correlations between changes in stimuli and changes in neural activity. If this were simple, it would have been solved long ago. 🙂
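          To make that concrete, here’s a toy sketch of the basic logic (the data and region names are entirely made up for illustration): simulate a subject’s per-trial conscious reports alongside the activity of a few regions, then check which regions’ activity varies with the reports.

```python
# Toy illustration: look for regions whose activity correlates with the
# subject's per-trial report of conscious perception. Region names and
# all data here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Per-trial report: 1 = "I consciously perceived it", 0 = "I didn't".
report = rng.integers(0, 2, n_trials)

regions = {
    # Tracks the conscious report (plus noise), so it should correlate.
    "candidate_region": report + rng.normal(0, 0.5, n_trials),
    # Active on every trial (the subject always reports something),
    # so it stays "lit up" but shouldn't vary with report content.
    "report_circuitry": 1.0 + rng.normal(0, 0.5, n_trials),
    # Unrelated background activity.
    "unrelated_region": rng.normal(0, 1.0, n_trials),
}

for name, activity in regions.items():
    r = np.corrcoef(report, activity)[0, 1]
    print(f"{name}: correlation with report = {r:+.2f}")
```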
