Is the brainstem conscious?

(Warning: neuroscience weeds and references to gruesome animal research.)

[Figure: brain anatomy diagram. Credit: Blausen.com staff (2014). “Medical gallery of Blausen Medical 2014”. WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.010. ISSN 2002-4436.]

The vast majority of neuroscientists see consciousness as a cortical phenomenon.  It may be crucially dependent on sub-cortical and sub-cerebral structures, but subjective experience itself exists mainly or entirely in the neocortex.  In this view, the brainstem only produces reflex responses, with anything more sophisticated coming from higher level structures.

But there’s a small but vocal minority in neuroscience who see it differently.  Views in this camp vary somewhat.  Some are more cautious and see the brainstem as perhaps providing a more primal version of consciousness with the cortex providing higher level aspects of it, while others see the brainstem as the primary or even sole source of consciousness.

A scientist often cited for this view is Bjorn Merker, particularly for his paper: Consciousness without a cerebral cortex: A challenge for neuroscience and medicine.  (A PDF of the paper is publicly available.)

To understand what Merker is proposing, consider this diagram from the paper:

In each of the four images, the large oval on top is the cortex and overall cerebrum, while the small oval is the brainstem.  The white sections in each image are where consciousness is proposed to reside, with the grey being non-conscious processes.  The two top images reflect, more or less, mainstream neuroscience, with consciousness being entirely a cerebral phenomenon, although in the top right image, it is more crucially dependent on sub-cortical but cerebral structures such as the thalamus and basal ganglia.

The bottom images reflect the minority camp, with the bottom left one reflecting more cautious views involving consciousness spanning both the brainstem and cortex, and the bottom right one the more uncompromising version that only the brainstem is conscious, with the cerebral structures only supplying pre-conscious content.

Merker in the main paper seems to argue for the bottom right view, although in his response to the commentary that was published with the paper, he seems to back off a bit, retreating toward the bottom left view.  (Unfortunately, the commentary and response are pay-walled, but can be found here.)

So how does Merker reach this conclusion?  There’s a lot in this paper, and this post is going to necessarily be selective and highly summarized.  If you’re interested in the details, I highly recommend the paper itself.  It’s a fascinating read for anyone interested in neuroscience, albeit a very technical one.

Merker first cites the work of neurosurgeons Wilder Penfield and Herbert Jasper in the mid-20th century, who performed surgeries on patients with severe epileptic seizures.  It was often necessary for them to remove large tracts of the patient’s neocortex.  While undergoing the procedure, the patients were kept conscious with a local anesthetic so the surgeons could communicate with them and know if they were damaging their cognition.

In these procedures, Penfield and Jasper were impressed by the fact that removal of cortical sections never seemed to interrupt the consciousness of the patient.  They proposed that consciousness must be maintained in lower level structures.

Merker then discusses the Sprague effect.  When one side of a cat’s visual cortex is removed, despite functional eyes, the cat becomes unresponsive to half of its visual field.  In human patients, similar damage results in cortical blindness (and sometimes the phenomenon of blindsight).  However, when the cat’s upper brainstem is additionally damaged in a certain manner, some of the cat’s ability to respond to visual stimuli returns.

Merker also discusses the abilities of rats that have been decorticated, that is, had their neocortex removed with the rest of the brain left intact.  These rats often retain a remarkable ability to navigate and engage in customary behavior, including reproduction, although a trained observer can detect the loss of many cognitive abilities.

Finally, Merker discusses hydranencephalic children.  These are children who typically suffer a stroke in the womb that destroys much of their brain.  Generally they are born with only the brainstem and a few lower level cerebral structures.  Their cognitive ability seems to be roughly limited to that of newborns, although they never move beyond that stage.  Despite substantially missing a neocortex, they reportedly display powerful indications of a sort of primal consciousness.

There are issues with all these lines of evidence that weaken Merker’s case.  Some of them Merker admits to, but then summarily dismisses.  For example, in the case of the cat, another interpretation is that the followup damage to the upper brainstem merely destroys the cat’s ability to inhibit its reflexive reactions to visual stimuli, and decorticated rats retain a lot of cerebral structures that mainstream neuroscience sees as sufficient for habitual behavior.

But I’m going to focus on a broader issue.  As neuroscientist Anton Coenen asked in his commentary, “But what kind of consciousness is this?”  When we use the word “consciousness”, we can mean all kinds of things, but there are at least three broad meanings that often get conflated:

  1. Being awake and responsive to stimuli
  2. Awareness with phenomenal experience
  3. Self reflection

When we see behavior indicating 1, we tend to assume that all three versions are present.  In the case of a healthy developed human, it’s usually a safe assumption.  But the further we get from healthy humans, the weaker that assumption becomes.  In non-human animals, 3 may be limited to only a few primate species, and many patients in a vegetative state seem to have 1 without 2 or 3.

On inferring which level of consciousness is present based on behavior, I’m going to quote Richard Feynman on scientific observation:

The first principle is that you must not fool yourself — and you are the easiest person to fool.

(from “Cargo Cult Science”)

Nowhere is this principle more needed than when using behavior to assess mental states in non-human animals and brain injured humans.  We have to be careful about taking affect displays such as crying, facial expressions, avoidance reflexes, etc, as evidence.  As intuitively powerful as they are, affect displays do not necessarily indicate conscious affective feelings.  Human psychology studies show that many affect displays are unconscious.  This is why body language and unguarded facial expressions are often cited as better indicators of mental states than the more conscious behavior.

The consciousness hierarchy above highlights how important it is to be clear about which type of consciousness we’re discussing.  Merker, to his credit, explicitly identifies the definition he’s working with: information integration for action.  And, despite my quibbles above, I do think he makes a good case that integration for action happens in the upper brainstem.  But integration for action only meets the first level of consciousness.

Consider the phenomenon of mind wandering.  I can be driving to work, mowing the lawn, taking a shower, or doing a host of other complicated physical tasks with little if any conscious thought going into what I’m physically doing.  When driving, I can be thinking about the next blog post I’m going to write or how I’m going to handle a presentation at work.  Clearly some part of my brain is doing integration for action in order for me to drive, but it doesn’t seem to be the parts we normally label as conscious, at least until something about the driving requires that I focus on it, on what needs to happen next.

In practice, most of the habitual automatic but learned behaviors described above are controlled by my basal ganglia, sub-cortical structures above the brainstem.  But if a loud noise causes a startle reflex, that is handled by the brainstem.  The frontal lobe cortex only seems to be involved when some degree of planning is needed, even if only planning for the next few seconds, utilizing integration for planning.

Merker seems right that what happens in the brainstem is the final integration for action, and that all action goes through it.  But the brainstem itself only appears to have its reflexive reactions, reactions which can be inhibited by higher level structures.  Whether those inhibitions arrive is determined by higher level integrations.  Those structures process an enormous amount of information that never makes it down to the brainstem.

For example, about 10% of the axons from the retina project to the superior colliculus in the upper brainstem region.  Most of the remainder, including all of the axons from the color sensitive cone cells, project to the thalamus and visual cortex.  This means that the redness of red and many other conscious qualities only happen in the cerebrum.  That information is used by the cortex to decide which reflexive reactions in the brainstem to allow and which to inhibit.  The superior colliculus does have low resolution colorless images, but we appear to have no introspective access to them.
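To get a rough sense of the scale of that split, here is a back-of-envelope calculation.  (The figure of roughly one million axons per human optic nerve is a common textbook estimate, not a number from Merker’s paper.)

```python
# Back-of-envelope arithmetic for the retinal projection split described above.
# The ~1 million axons per optic nerve is a standard textbook estimate,
# not a figure taken from Merker's paper.
optic_nerve_axons = 1_000_000

# ~10% take the brainstem route to the superior colliculus.
to_superior_colliculus = optic_nerve_axons // 10

# The remainder, including all cone (color) signals, take the
# thalamus-to-visual-cortex route.
to_thalamus_and_cortex = optic_nerve_axons - to_superior_colliculus

print(to_superior_colliculus)  # 100000
print(to_thalamus_and_cortex)  # 900000
```

On these rough numbers, the cortical pathway handles about nine times the traffic of the collicular one, before even considering that the cone signals carrying color travel only the cortical route.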

None of this is to imply that the brainstem isn’t crucial for consciousness, particularly the first level.  It arouses the cerebrum, provides the underlying impulses and valences that form the core of feelings, and generally drives the overall system toward homeostasis.  Everything above it is an elaboration of those functions.  But that doesn’t mean it has phenomenal awareness.  It means that what we call phenomenal consciousness is itself an elaboration of the brainstem’s fundamental functions.

So perhaps a better way of saying this is that Merker and those of similar disposition aren’t wrong about the brainstem’s primacy.  The more cautious views aren’t even wrong that the brainstem has lower level consciousness in the sense of the first level described above.  They’re only wrong to the extent they claim that primacy includes phenomenal or self reflective consciousness.

As is often the case, much of the differences between mainstream neuroscience and the more cautious views in the minority camp amount to differences in what people are willing to call “conscious.”

Unless of course I’m missing something?

27 thoughts on “Is the brainstem conscious?”

  1. In the end I do not think we can confine consciousness to parts of the brain. I think it requires all of the brain working in concert. When a part of the brain gets damaged there seems to be some evidence that other parts can take over, but possibly not as effectively as the damaged part that did that job. So, if the visual cortex is slightly damaged and another part of the brain takes over does that part “see”? I think that investigation is a fool’s errand as the visual cortex doesn’t “see” all by itself. It needs input from the eyes (are your eyes conscious or do they “see”?) and interconnections with the rest of the body, for support purposes. Memories are stored in a distributed fashion apparently and I think that consciousness will be discovered to be also.


    1. In general I agree that what we call consciousness requires large areas of the brain working in concert. (And a recent study bolsters that sentiment.) However, I think we have to be careful not to fold our hands every time discussion turns to where particular aspects of consciousness, or more targeted cognitive capabilities, happen. Doing so risks closing us off to learning.


    2. I agree entirely Steve. Attributing consciousness to specific brain parts does not seem productive. Instead consider my own brain architecture.

      From this perspective the entire brain is a vast massively parallel non-conscious computer that accepts inputs, processes them algorithmically, and so produces output function. But one mode of its output is to produce a second computer that functions through the first, and so it outputs things like pain and vision for another computer to deal with. I consider the conscious form of computer to receive inputs of sentience, senses, and memory, to interpret them in order to build scenarios about how to make itself feel better, and with muscle operation as its only form of non-thought output. The following is a schematic diagram of it in case you or others would like to consider this model further.

      And Mike, to your concern about people folding their hands whenever discussion turns to where particular aspects of consciousness occur, I don’t think that acceptance of my model would lead to that. Here neuroscience would be crucial to explore which parts of the brain output various inputs to the conscious form of computer, as well as what facilitates conscious processing and output function in general.


        1. Great question James. Thanks for asking.

          Even though I theorize the conscious form of computer to exist entirely through the non-conscious form of computer, they function by means of entirely different kinds of stuff. Note that the non-conscious computer that you’re looking at right now functions on the basis of electricity. Voltage differentials force inputs to be algorithmically processed for output function, such as your screen image from moment to moment. Something similar occurs in the brain I think, though it’s far more elaborate. Neurons function on the basis of complex electro-chemical dynamics and so display the same essential thing. Nerve inputs are accepted, neurons algorithmically process such input in all sorts of ways, and this incites output function such as the regulation of your beating heart. So I’m saying that the vast cranial supercomputer produces a second computer that functions on the basis of something else. So what is it that drives conscious function?

          As I define it, the conscious form of computer functions on the basis of sentience (or “valence” as presented in the diagram). Rather than electricity or chemical-electrical neural dynamics, this form of computer may instead be punished and rewarded. Thus here there is incentive for it to try to feel as good as it can from moment to moment. The thought processor interprets conscious inputs (valence, senses, and memory) and constructs scenarios about what to do given its desire to feel as good as possible. This is not only me trying to figure out what to say to you, but you trying to interpret what I say for an intelligent response. If my valence were eliminated thus rendering me perfectly numb, theoretically I’d just vegetate rather than write these words, or consciously do anything at all. Here there would be no “fuel” to incite conscious function.

          I’d love to go further if you have other questions! As defined here consciousness is a tiny computer, doing less than one thousandth of one percent as many calculations as the vast supercomputer brain that creates it.


          1. Eric, you say Consciousness is a computer. Do you mean that figuratively or literally? A computer literally is made of physical stuff and you can identify the physical processes that happen, and if you know enough, you can figure out what the processes “mean”.


        2. Another good question James. The answer will have to be “figuratively” in the sense that “computer” is an analogy here, though I do consider this to be the case for all of our terms. There are no true definitions, I think, but rather only more and less useful ones. I can think of no better analogy for my own experience of existence than what’s generally meant by the “computation” term, and so I’ve adopted it for my model. I compute what I think is going to happen, and so with these predictions try to alter things to promote my valence based interests. Happiness is all that’s valuable to anything conscious I think, and nothing is valuable to anything that’s not conscious by my definitions, so that’s the end of the story there.

          Still given my own personal metaphysics I do consider this outputted computer to be physical. If it’s divine however then, well, same difference even given my wrongness about that. I’m still me regardless of how I’m created. But if you want something tangible, as in where is my vision, my pain, my memory, and so on, all that resides in the conscious entity that I presume is produced by my brain. I’d be surprised if my brain were miles away from where my conscious experience happened to be, but whatever.

          I certainly believe that meaning can be derived from conscious function, but it’s difficult given that we’re the thing that we’re trying to study. Our biases seem to get the better of us in such sciences. Effective brain architecture should help regardless of its author however, and thus our efforts.


  2. Mike, you seem amenable to the idea that there are different kinds of consciousness-type processes, given that you have provided different kinds of consciousness (reflexive, phenomenal, self-reflective). So do you think it makes more sense to ask “where is Consciousness in the brain”, or rather, for any given system (pre-frontal neocortex, brain as a whole, group of brains, brain + smart phone), to ask what kinds of consciousness-type processes it is capable of?


    1. James, I can see that. In general, the consciousness concept is so amorphous and malleable that I don’t think, aside from colloquial conversation, we can use the word by itself anymore. People already seem to know terms like “social consciousness”, “public consciousness”, or “extended consciousness”.

      I remember years ago when I first switched from driving a small car to an SUV. At first I found it very unnatural to drive the SUV, but someone at work made the comment that, “Eventually your consciousness will expand to incorporate it,” which I thought was a good way of saying it, and probably a profound insight into how we expand our personal space when driving a car.


      1. Um, maybe just to be ornery, I’m going to point out that my question was an either/or question. So I’ll make it more clear:
        Is it better to ask
        1. Where is Consciousness in the brain? , or
        2. What Consciousness-type things can any given system (part of the brain) do?

        Looking for a 1 or 2 as an answer.


        1. I don’t know about you, but I learned on the elementary school playground to be suspicious when anyone insists on a binary answer. 🙂

          1 can be productive if we know which type of consciousness we’re discussing, although the answer may well be that it’s in the interaction between numerous regions.

          2 can also be useful, but not unreservedly. I haven’t seen anyone seriously assert that the cerebellum is conscious (even though it has 80% of the neurons in the brain!), or the medulla, or the hypothalamus, at least not by themselves. To talk about these and other structures as “conscious” is probably stretching the word past its productive use.


          1. Suspicion is fine, but to make progress sometimes you have to commit to an idea and try to shake things out.

            You’re correct that 1 is fine if you have a particular kind of process in mind and want to know where that kind of process happens. (Global workspace comes to mind.) If it’s an “interaction between numerous systems”, then the combination of those systems is the system in question. It’s perfectly legit to compare the conscious-type processes of a person with those of a person + smart phone, or person + pen + paper, or person + left shoe. Some combinations are simply more useful than others.

            As for the cerebellum, I hereby seriously assert it is capable of conscious-type processes. I just don’t know which ones. Certainly reflexive, possibly phenomenal, probably not self-reflective, possibly some unique to the cerebellum.


          2. This gets to why I often wonder if the concept of “consciousness” at this point is a productive one. It’s a pre-scientific concept that doesn’t map well to an objective understanding of nervous systems. Historically it referred to all three of the layers I identified in the post as though they are indivisible. As we’ve learned that they are in fact quite divisible in the brain (often tragically so), people’s intuitions haven’t kept up. Consciousness remains in the eye of the beholder.

            That said, I think it’s worth noting that layer 1, as I expressed it in the post, is more than just reflexes. It’s basic perceptual images paired with complex biological reflex arcs. The cerebellum strikes me as not meeting that standard.

            If we fall back to just the reflexes, then the cerebellum does meet that. Although so will many technological systems, unless we stipulate that only creature type reflexes are applicable.


          3. Mike, I missed where you distinguish simple reflexes from “basic perceptual images with complex biological reflex arcs”. What’s the difference (that makes a difference)?


          4. James, I didn’t go into detail on it in the post, since it wasn’t relevant (and the post was already too long). But you’ll notice that I didn’t use my usual 5 layer hierarchy:
            1. reflexes
            2. perception
            3. attention
            4. imagination
            5. metacognition

            The reason is that the brainstem has 1, a rudimentary version of 2, and the bottom up aspects of 3. It’s missing the high resolution color version of 2, as well as 4 (imagination, which brings in top down attention), both of which I think are necessary for phenomenal consciousness, and 5, which delivers self reflection (as well as a host of other capabilities).

            The main difference between straight reflexes and what the brainstem has is the rudimentary perceptions and reflex arcs that are complex enough for bottom up attention.

            Hope that makes sense rather than obscuring further.


          5. Mike, I don’t understand how a simple reflex differs from a reflex arc (and why that makes a difference). Also, what is a rudimentary perception, and how do you know the brain stem has it, and not the cerebellum? Also, what is bottom up attention?


          6. It’s a matter of judgment whether the added complexity of the arcs makes a difference. (Consciousness is in the eye of the beholder.) But I could see an argument being made that complex arcs exist in the cerebellum, so maybe that one isn’t pertinent.

            Rudimentary perceptions are basic low resolution sensory images. We know the brainstem has them because it couldn’t react the way it does without them. And from what I’ve read, neuroscientists have been able to observe that firing patterns in the colliculi topographically map to sensory surfaces such as the retina, which are usually taken to be images. I’m not aware of anything like that having been found for the cerebellum. (If you or anyone else knows of any, I’d be very interested. The cerebellum has been implicated in some cognition, but not to the extent of having sensory images, at least that I’ve seen so far.)

            Bottom up attention is you focusing on your arm because you feel a spider on it, or a burning sensation. Top down is deciding to watch a particular movie. Put another way, bottom up is reactionary, top down is planned.


  3. This is far from my areas of expertise, but I believe many studies have discussed patients with damage and removal of (not all from one patient, of course) all sections of the brain, demonstrating that no one section could be the supreme seat of consciousness… An entire hemisphere can be removed and the individual will seem largely the same – and can still have consciousness and a normal life. I am currently reading a book proposing an external seat of consciousness (not the first time I’ve read that) in another dimension – suggesting that our soul/consciousness exists outside and independent of our body, but in a limited way is connected to our brain and body. I get the impression our greater consciousness may in some ways be analogous to the internet, with our brain as a very limited video game with a minimal connection to the internet / true consciousness, of which we only have glimpses…


    1. Hi David,
      The brain does have plasticity, but from everything I’ve read, it doesn’t have that much. A person can lose many sections of their neocortex and, provided the equivalent section on the other hemisphere is still present, adapt. But some functionality is hemisphere specific, notably, language. If someone loses their language deciphering center (Wernicke’s area) or their language production area (Broca’s area) in the hemisphere where they reside (typically the left one), the other hemisphere won’t be able to step in and fulfill those roles. (If it happens to a young child, there is a possibility the other hemisphere might be able to do it, but not for an adult.)

      And it’s worth noting that losing large sections of a particular hemisphere usually comes with substantial loss of feeling and movement on the corresponding side of the body. Losing an entire hemisphere results in substantial disabilities. Again, very young children fare better, but for an adult, my understanding is that it’s devastating.

      On consciousness in other dimensions, there’s nothing in neuroscience that currently requires anything like that. Of course, there’s no way to disprove that kind of proposition, but there’s nothing really driving us to it either.


  4. Of course, Mike, I’m strongly supportive of the “bottom right” brainstem-centric model because it’s the only model for which actual evolutionary, experimental and observational evidence exists. Where’s the paper that persuasively presents the same type of evidence for the very popular cortical consciousness hypothesis? I can’t find it, which I suppose is unsurprising since there’s no evidence whatsoever.

    As of this post, not counting your 7 replies Mike, there are 14 responses to your article about Merker’s paper, which can certainly be characterized as a scientific proposal rich in evidence. For the purposes of his paper, Merker defines consciousness as: “… the state or condition presupposed by any experience whatsoever.”

    I wonder how many respondents (and those who chose not to respond) aren’t at all interested in a scientific approach to consciousness—an approach that involves clearly stated hypotheses supported by evidence. There’s instead a clear preference for the muddle that is philosophical consciousness studies. How many respondents actually read Merker’s paper? No one wrote to dispute any of the evidence he presented. Notice that Philosopher Eric’s chart and proposal are completely evidence-free, as are several other contributions—David Montaigne wants consciousness to be understood as a “soul.”

    Also Mike, I notice that you’re lately expressing dissatisfaction with even the vocabulary of consciousness discussions, both here …

    “… the consciousness concept is so amorphous and malleable that I don’t think, aside from colloquial conversation, we can use the word by itself anymore” and “… if we know which type of consciousness we’re discussing” and “Consciousness is in the eye of the beholder.”

    Type of consciousness?

    … and in the more recent “What is real?” discussion, where I believe I read a suggestion that we prohibit several words frequently used in consciousness discussions, “feeling” among them, and then see how those discussions proceed. I find such a suggestion truly astonishing and unproductive, particularly considering that the meaning of the word “sentience” is precisely feeling.

    This is not the road to understanding.

    I suggest that everyone take a few relaxing deep breaths and focus on where all the confusion is coming from. The prime suspect in my opinion is Consciousness Philosophy itself, which appears to disapprove of—or has never heard of—an evidence-based scientific approach.

    I recommend that everyone read P. M. S. Hacker’s “The Sad and Sorry History of Consciousness,” available from:

    Click to access ConsciousnessAChallenge.pdf

    If nothing else, Hacker provides a fine history of the word consciousness which is most enlightening. His analysis of Nagel’s “something it is like” as incoherent is spot on.

    As to the responsibilities of Consciousness Philosophy, in another paper, “Philosophy: A Contribution, Not to Human Knowledge, But to Human Understanding,” Hacker writes:

    “First, if one asks a physicist or biologist, a historian or a mathematician what knowledge has been achieved in his subject, he can take one to a large library, and point out myriad books which detail the cognitive achievements of his subject. But if one asks a philosopher for even a single book that will summarize the elements of philosophical knowledge—as one might ask a chemist for a handbook of chemistry—he will have nothing to present. There is no general, agreed body of philosophical knowledge—although there are libraries full of philosophical writings from antiquity to the present day, which are in constant use.”

    Aren’t we all trying to understand?


    1. Stephen,
      On evidence, I think the disconnect here is that for most neuroscientists, the evidence for consciousness being primarily a cortical phenomenon is… most of neuroscience. Every neurological case study where an injury in the cortex knocks out a particular aspect of consciousness is seen as evidence.

      Now, I know you have a narrative where all that evidence only amounts to the content of consciousness, not the experience of it, which you see being in the brainstem. This implies that all the perceptual and cognitive information that we are consciously aware of has to be shunted down to the brainstem for us to be conscious of it. I went carefully through Merker’s paper. If there was evidence for that kind of comprehensive low level integration happening, I didn’t see it. I did see some speculation along those lines, but speculation isn’t evidence.

      And Merker in his response to all the commentary seems to concede that most of human level consciousness happens in the forebrain. He notes that if we could superimpose the brainstem’s sensory experience on top of the overall human experience, we might not even notice the brainstem’s contributions.

      And then there’s the fact that the cortex has 16 billion neurons while all the subcortical structures added together, including the brainstem, only have about a billion. It seems like an argument that the tail wags the dog to say that human level consciousness happens in only a few hundred million neurons while this other vast substrate is available. And I think you’re vastly underestimating the computational complexity involved in having human level experience of all that content.

      Finally, when you start concluding that the large numbers of intelligent people who disagree with you are all slackers about doing their homework, or are simply hidebound, I’d advise caution. It’s all too easy to fall into that line of thinking. Most of the time, it’s simply rationalizations for why those people have different opinions from what we’d like them to have, and falling into it can cut us off from learning why they hold those opinions.

      Strangely enough, from what I remember of Hacker’s views, my questioning of consciousness vocabulary and conceptions seems in line with them. But it’s been several months since I read that paper and I might be misremembering the details.

      Definitely we’re all trying to understand, and united in our interest in these topics, even if we come to different conclusions.


      1. It’s taken me over two weeks to realize that since the evidence for cortical consciousness is exactly zero and most of neuroscience is the evidence then, via the certainty of mathematics, we now know that neuroscience, with its recently discovered aspects of consciousness, doesn’t actually exist. I’m aghast at all the research and reading time I’ve wasted on this non-topic—a true personal tragedy—but, on the other hand, I’m enormously relieved to at last achieve certainty. There’s nothing more satisfyingly real than nothing.

        Along with you, Mike, I’m delighted that we’ll no longer be plagued with imagining the intricacies of the intractable Shunting Problem—shunting all of our perceptual and cognitive information from any one consciousness aspect to other, smaller consciousness aspects seems ridiculous, as you point out, and probably really, really sluggish. And the sheer obviousness of your new “Most Cells Means the Most Consciousness” principle causes me to wonder why I ever considered evidence to even be a factor, let alone one of significance. Why have you withheld this powerful principle until so late in our discussion? It’s hardly fair of you to know and yet withhold the penetrating insight that the cerebellum is more conscious than the frontal cortex and the visual cortex is probably more conscious than the temporal lobe and amygdala combined. So many problems solved! Seriously, considering all those variably-sized consciousness aspects, who needs a brainstem? From the vantage point of total certainty, it’s inconceivable why a brainstem aspect would evolve in the first place.

        I belatedly now understand that you’re correct in your view that the word “consciousness” itself is just so much overwrought claptrap. I suggest we donate it to some inferior foreign language. As you’ve helpfully pointed out, words don’t have definitions anyway, just interpretations—mere opinions, every one. And I realize it’s woefully belated but many thanks, Mike! Occam thanks you too—no more absurd consciousness theory razor burn!

        Just call me a reformed Slacker, but better late to a wildly popular dogma than otherwise, as I always say. I’ve half a mind to ask the Japanese prime minister to recommend you for a Nobel Prize, Mike—certainly the one for Smoke, and you might also be a shoo-in for the much coveted prize for Mirrors. Your previous well-reasoned and scientifically grounded repudiation of select portions of Einstein’s imaginary Relativity Physics and our equally imaginary universe is already legendary. Who could that make-believe geometry and its imaginary implications possibly benefit? We all owe you our gratitude for revealing that it’s just a bunch of stuff out there. I’m sure Professor Einstein himself wishes from the afterlife that he could return and thank you for your incisive corrections. And I guess that’s going to be another Nobel—the one for Daring Dilettantism!

        I must admit it wasn’t easy to create a post as absurd as yours, Mike—you da Champ!

        But—and I guess I just can’t help myself—I still wonder why none of those decorticated mice were unconscious: when I’m bereft of consciousness, I don’t do diddly-squat! Oh, I’m probably just being silly. I admit I never got around to reading the results of experimental mammalian de-brainstem’ing. Maybe I’ll read some blogs that discuss the findings—have you heard of Pitter-Patterns?


  5. As others have suggested, I suspect our experience of consciousness comes from the holistic operation of the full brain. (Per my laser analogy, it takes all the parts of the system to lase.) As such, I think the only answer to “where is consciousness” is “in the brain.”

    I’ve been so in the zone driving the Los Angeles freeway system that I’ve suddenly “woken up,” wondered exactly where I was along my intended journey, and had to wait for a known landmark to be sure. But I know this to be well-ingrained “muscle memory” of the same sort musicians learn.

    An experienced driver doesn’t give any more conscious thought to driving than an experienced walker does to walking! Or an experienced musician does to how to make the notes.

