Joseph LeDoux’s theories on consciousness and emotions

In the last post, I mentioned that I was reading Joseph LeDoux’s new book, The Deep History of Ourselves: The Four-Billion-Year Story of How We Got Conscious Brains.  There’s a lot of interesting stuff in this book.  As its title implies, it starts early in evolution, providing a lot of information on early life, although I didn’t find that the latter parts of the book, focused on consciousness and emotion, made much use of the information from the early chapters on evolution.  Still, it was fascinating reading and I learned a lot.

In the Nautilus piece I shared before, LeDoux expressed some skepticism about animal consciousness being like ours.  That seems to be a somewhat milder stance than the one in the book.  Here, LeDoux seems, at best, on the skeptical side of agnostic toward non-human animal consciousness.  The only evidence for consciousness he sees as unequivocal is self-report, which of course only humans can provide.

In terms of consciousness theories, LeDoux regards Higher Order Theories (HOT) and Global Workspace Theories (GWT) as the most promising, but his money is on HOT, and he provides his own proposed theoretical extensions to it.  HOT posits that consciousness doesn’t lie in the first order representations made in early sensory regions, but in later stage representations that are about these first order ones.  In essence, to be conscious of a representation requires another higher order representation.

In typical HOT, these higher order representations are thought to be in the prefrontal cortex.  LeDoux attributes a lot of functionality to the prefrontal cortex, more than most neuroscientists do.  Some of what he attributes to it I’ve more commonly seen attributed to regions like the parietal cortex.  But he presents information on the connections from various cortical and subcortical regions to the prefrontal cortex to back up his positions.

In the last post, I laid out the hierarchy I usually use to think about cognitive capabilities.  LeDoux has a similar hierarchy, which he discusses in a paper available online, although his is focused on types of behavior.  Going from simpler to more sophisticated (sketched as a small data structure after the list):

  1. Species typical innate behavior
    1. Reflexes: Relatively simple survival circuits, centered on brainstem regions
    2. Fixed Reaction Patterns: More complex survival circuits, often going through subcortical regions such as the amygdala
  2. Instrumental learned behavior
    1. Habits: Actions that persist despite lack of evidence of a good or bad consequence
    2. Action-outcome behaviors: Actions based on the remembered outcomes of past trial-and-error learning
    3. Nonconscious deliberative actions: Actions taken based on prospective predictions about future outcomes
    4. Conscious deliberative actions: Deliberative actions accompanied and informed by conscious feeling states
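
(Purely as an illustration, here’s that hierarchy sketched as a small, ordered data structure.  This is my own sketch, not anything from LeDoux’s book or paper; the names and the numeric encoding are just a paraphrase of the labels above.)

```python
from enum import IntEnum

class BehaviorType(IntEnum):
    """LeDoux-style behavioral hierarchy, ordered from simpler (low) to more
    sophisticated (high).  The encoding is purely illustrative."""
    REFLEX = 1                     # innate: simple survival circuits (brainstem)
    FIXED_REACTION_PATTERN = 2     # innate: more complex survival circuits (e.g. amygdala)
    HABIT = 3                      # instrumental: persists regardless of consequences
    ACTION_OUTCOME = 4             # instrumental: based on remembered outcomes of trial and error
    NONCONSCIOUS_DELIBERATION = 5  # instrumental: prospective predictions about future outcomes
    CONSCIOUS_DELIBERATION = 6     # instrumental: deliberation informed by conscious feeling states

def is_instrumental(behavior: BehaviorType) -> bool:
    """Instrumental (learned) behavior starts at habits; everything below it is innate."""
    return behavior >= BehaviorType.HABIT

# The ordering captures "simpler to more sophisticated":
assert BehaviorType.REFLEX < BehaviorType.CONSCIOUS_DELIBERATION
assert not is_instrumental(BehaviorType.FIXED_REACTION_PATTERN)
assert is_instrumental(BehaviorType.ACTION_OUTCOME)
```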

On first review, I was unsure about the distinction between action-outcome and deliberative action.  Action-outcome seems like simply a less sophisticated version of deliberative action, particularly since episodic memory and imagined future scenarios are reputed to use the same neural machinery.  It seemed like just different degrees of what I normally label as imaginative planning.

But on further consideration, I can see a case that simply remembering a past pattern of activity and recognizing the same sequence is not the same thing as simulating new hypothetical scenarios, specific scenarios that the animal has never experienced before.  Put another way, deliberative actions require taking multiple past scenarios and combining them in creative new ways.

Anyway, LeDoux states that there is “no convincing” evidence for instrumental behavior in pre-mammalian vertebrates, or in invertebrates.  In his view, instrumental behavior only exists in mammals and birds.

(This seems to contrast sharply with Feinberg and Mallatt in The Ancient Origins of Consciousness, who cite numerous studies showing instrumental learning in fish, amphibians, and reptiles.  One of the things I’m not wild about in LeDoux’s book is that while he has bibliographic notes, they’re not in-body citations, making it very difficult to review the sources of his conclusions.)

Deliberative action, on the other hand, LeDoux sees as only existing in primates, with humans taking it to a new level.  Apparently in this hierarchy, consciousness only comes into the picture with the most sophisticated version.  I think “consciousness” in this particular context means autonoetic consciousness, that is, introspective self-awareness with episodic memory.

(Endel Tulving, the scientist who proposed the concept of autonoesis, doesn’t see episodic memory developing until humans.  However, there is compelling behavioral evidence that it developed much earlier, and is present in at least all mammals and birds, although it’s admittedly far more developed in humans.)

On emotions, LeDoux starts by bemoaning the terminological mess that exists any time emotions are discussed.  He reserves the word “emotion” for conscious feelings, and resists its application to the lower level survival circuitry, which he sees as non-conscious.  He points out that a lot of published results which claim to show things such as fear in flies are actually just showing survival circuit functionality.  He sees survival circuits as very ancient, going back to the earliest life forms, but emotions as relatively new, only existing in humans.

In LeDoux’s view, emotions, the conscious feelings, are cognitive constructions in the prefrontal cortex, predictions based on signals from the lower level survival circuitry, reinforced by interoceptive signals from the physiological changes that the lower level circuitry initiates: changes in blood pressure, heart rate, breathing, stomach muscle clenching, etc.

LeDoux’s views are similar to Lisa Feldman Barrett’s theory of constructed emotion, and contrast with views such as Jaak Panksepp’s, who saw consciously felt emotion in the lowest level survival circuits.  Barrett also sees emotions only existing in humans, although she makes allowances for animals to have affects, simpler more primal valenced feelings such as hunger, pain, etc.  I’m not sure what LeDoux’s position is on affects.  He doesn’t mention them in this book.

My view on all this is that I think LeDoux is too skeptical of animal consciousness.  It doesn’t seem like a human without language could pass his criteria.  However, as always, this may come down to which definition of “consciousness” we’re discussing.  Human level consciousness includes introspective self-awareness and a far wider ranging imagination, enabled by symbolic thought such as language, than exists in any other species.  If we set that as the minimum, then only humans are conscious, but many will see that as too stringent.  In particular, I think a case could be made that it’s far too stringent for sentience.

On emotion, I do think LeDoux is right that the lower level survival circuitry, the reflexes and fixed reaction patterns in subcortical regions, shouldn’t be thought of as feeling states.  This means we shouldn’t take defensive behaviors in simpler animals as evidence for fear, or aggressive behavior as evidence for anger.

On the other hand, I think he’s wrong that feeling states don’t come around until sophisticated deliberative processing.  It seems like any goal-directed instrumental behavior, such as selecting an action for a particular outcome, requires that there be some preference for that outcome, some valence, fed from the lower level survival circuits to the higher level ones that decide whether to pursue a goal or avoid an outcome.

These states might be far simpler than what humans feel, perhaps only meeting Barrett’s sense of an affect rather than what Barrett and LeDoux see as the full constructed emotion, but they should be felt states nonetheless.  By LeDoux’s own criteria, that would include any animal capable of instrumental behavior, including mammals and birds.  Admittedly, there’s no guarantee these felt states are conscious ones, but again, definitions.

Comparing LeDoux’s book to Feinberg and Mallatt’s, I’m struck by how much of the disagreement actually does come down to definitions.  The real differences, such as which species are capable of operant / instrumental learning, seem like they will eventually be resolvable empirically.  The differences on consciousness may always be a matter of philosophical debate.

What do you think of LeDoux’s various stances?

Update 9-11-19: The statement above about LeDoux seeing instrumental learning only in mammals and birds isn’t right.  Please see the correction post.

119 thoughts on “Joseph LeDoux’s theories on consciousness and emotions”

  1. Our recent back-and-forth about Pluto has me thinking a similar situation occurs here. As you say, it “actually does come down to definitions.” I think, especially here, people see what they want to see.

    I do think there’s a vast gap between human consciousness and even the highest animal consciousness, but I don’t know that I believe human consciousness is something utterly different.

    As I’ve said (and of course my own bias applies here), it’s hard to live with a dog and not see someone “at home” in their eyes and their actions. They have moods and certainly seem to have emotions — even opinions.

    I think, from what little I’ve read, that people who spend a lot of time with primates feel pretty strongly about their consciousness. (Not to mention octopuses!)


    1. As someone who’s cared for many dogs, I definitely agree that the sense that someone is there is very powerful. I also get a similar feeling with most other mammals.

      I came across a raccoon the other day (more accurately I bumped into it and pissed it off). After we had separated, I looked back at it. It saw me looking back and paused, I guess worried I was thinking about coming back for a rematch. In that moment, I felt a definite connection between two conscious individuals.

      I never really feel that connection with fish, amphibians, or reptiles. I do sense some commonality between us, but it’s far less powerful than the sense from mammals. Or birds.

      Of course, caution is always called for. Our intuitions can betray us. It’s very easy to project mental states, such as complex emotions, that aren’t there. But given the reasoning at the end of my post, I think we’re rational to assume they’re not leading us that far astray for fellow mammals.


      1. “I never really feel that connection with fish, amphibians, or reptiles.”

        Likewise. My fishing buddy and I have talked about fish a lot. He’s convinced (and I agree) that they’re just algorithms — there isn’t much else going on there. He’s mentioned, for instance, catching the same fish several times in a 20-minute span.

        “…intuitions can betray us.”

        True, but they sometimes don’t. Just because it’s an intuition doesn’t make it necessarily wrong. As you say:

        “I think we’re rational to assume they’re not leading us that far astray for fellow mammals.”

        Agreed. (Looks like a duck, etc.)


        1. Feinberg and Mallatt do cite studies showing fish capable of instrumental learning (mostly teleosts), but those are probably also the studies that LeDoux considers unconvincing. I suspect if they do have instrumental behavior, it’s far more limited than what mammals and birds have. In other words, they’re mostly reflex driven, if not entirely so. Still, I think a glimmer of instrumental learning would have to come with a glimmer of feeling.


          1. Aren’t we as much saying here that ‘fish’ would not find pleasure in courtship, display and the often ritualized behavior (just as with birds) of nursery care for their young? I think we’d be mistaken if we called this “algorithmic” behavior, on the order of instinct removed from all such behavior of animals, generally speaking and almost, if not totally, applied.


          2. Which is to say (as retort), that our birds’ behavior in these respects does not significantly differ from fishes in so many instances, that it’s impossible to distinguish them, fundamentally as given. So, as with birds and mammals, so equally with fishes, yes? Some ‘affect’ that includes the anticipation of pleasure based on certain instrumental actions AND perceptions (memory/learning), vis-a-vis courtship especially and display, must be held in common, whatever the kingdom divide circa 500 million years previous to the present. We understand already that one area of the brain can adopt the functions of another, if but with reduced utility…a mere computational distinction (quanta), not one of kind (qualis).


          3. I do think there are notable behavioral differences between birds and fish. But to your question on whether fish experience pleasure, what we can reliably say depends on whether instrumental behavior can be scientifically demonstrated. As I noted above, Feinberg and Mallatt cited a number of studies that purport to show it, but LeDoux seems unconvinced. Having perused a couple of the studies, I can see where the results are open to interpretation. The relatively limited behavioral repertoire of fish doesn’t make it easy.


          4. “I do think there are notable behavioral differences between birds and fish. But to your question on whether fish experience pleasure, what we can reliably say depends on whether instrumental behavior can be scientifically demonstrated.”

            It certainly can, as Biological History may attest, if we have to invent the Discipline to explain the Process! Of course, only Courtship and Nesting behavior is basically Instinctual…but from which is inspired the quest for Pleasure in all known ‘kinds’ like our own, from mammals to birds at least…if not to fish.

            But what’s wrong with fish that they haven’t the ‘affect’ for pleasure in certain instrumental manners consistent with the memory of its kind, to “get along” in its special set of environmental conditions, happy and thriving?! I’d look to Maslow’s Hierarchy of Needs for a hint that Nature provides all conditions for thriving of every species, and that they most, if not all, achieve at last this Hierarchy, no matter their terrestrially confined space, opportunity to expand, much less retract, or intervention by mankind.

            Mankind is like a meteor heading for earth, to extinguish all that came before. That’s a mistake. That’s not a good example of Intelligence. It had to be said.


  2. If prefrontal synthesis is required for consciousness as his more stringent requirements would suggest, then possibly, according to Vyshedskiy’s theories, even humans were not conscious before 100K years ago. What’s more, even language itself with a vocabulary of several thousand words might not be sufficient unless the vocabulary could be assembled in novel ways as modern speakers do.


    1. That’s actually not far from LeDoux’s view, at least for what he calls autonoetic consciousness. His numbers were between 50k and 200k years. He also sees a crucial role for language. I’m sure Vyshedskiy’s paper broke too late for LeDoux to incorporate it, but it does resonate with his and Tulving’s view that episodic memory is a very late development.


      1. Episodic memory is different from PFS, I think, although they may be somehow related.

        Regarding PFS, I had read little or nothing about it until I saw Vyshedskiy’s paper but apparently the concept dates from the 19th century. The idea of juxtaposing independent images to create new ones seems so routine to our thought processes it is hard to think of it as a recent innovation. However, I can see that the ability to understand complex sentences requires we keep in memory the first part of a sentence along with the later parts to grasp the meaning of the sentence.

        Wikipedia:

        “There is evidence that a deficit in PFS in humans presents as language which is “impoverished and show[s] an apparent diminution of the capacity to ‘propositionize’. The length and complexity of sentences are reduced. There is a dearth of dependent clauses and, more generally, an underutilization of what Chomsky characterizes as the potential for recursiveness of language”

        And:

        “Furthermore, it is the most parsimonious way to explain the formation of new imaginary memories since the same mechanism of Hebbian learning (“neurons that fire together wire together”) that is responsible for externally-driven sensory memories of objects and scenes can be also responsible for memorizing internally-constructed novel images, such as plans and engineering designs.”

        What is a little unclear to me is how or why this ability seems to require environmental influence to develop. In other words, why isn’t it more hard-wired into the brain?


        1. “Episodic memory is different from PFS, I think, although they may be somehow related.”

          Good point. Their relation is probably similar to what I described in the post between action-outcome and deliberative-action. The latter seems like it might require PFS.

          Although LeDoux does allow that at least simple deliberation is possible in other primates. Maybe PFS is what allows the more developed version in humans. A big part of this, I think, is human introspective abilities, which seem necessary for complex hierarchical recursive language and overall symbolic thought.

          On requiring environmental influence, it might be that the ability to combine images was originally a malfunction, cross stream corruption of some sort, that just happened to lead to beneficial results, so it got selected for and fine tuned, but its development was always rooted in working with existing memories, which require an experiential history. Possibly.


          1. “On requiring environmental influence, it might be that the ability to combine images was originally a malfunction, cross stream corruption of some sort, that just happened to lead to beneficial results, so it got selected for and fine tuned, but its development was always rooted in working with existing memories, which require an experiential history. Possibly.”

            I like that thought.


  3. Thanks for reading and reporting on LeDoux’s book Mike. Now we don’t have to read it … 😉

    LeDoux’s neuroscience background is apparently Behaviorist and he did time with Gazzaniga also. I wonder if he still credits Gazzaniga’s “two minds” conclusion from the split-brain results, an interpretation that was recently invalidated. His lengthy animal experiments regarding emotions focused on fear and, as such, were designed to evoke fear in small laboratory creatures, perhaps making him, in their eyes, the Dr. Josef Mengele of the laboratory.

    His view that we are the only animals with true feelings motivates me to suggest a new word:

    anthrohubris n., the arrogance of believing that consciousness is the sole possession of genus homo. Syn., see poppycock, balderdash.

    For all his apparent evolutionary cred, he appears to be unaware that the differences between humans and other mammals are not differences of kind, they’re differences of degree. And in his view, emotions, the conscious feelings, are cognitive constructions. From an interview:

    “… conscious awareness of sensory stimuli occurs when attention directs information about a stimulus and retrieves long-term memories into the temporary mental workspace called working memory.”

    He seems to be completely unaware of fundamental physical feelings like touch, for instance, because it boggles the mind (mine at least) to think that touching a newborn activates long term memories before the touch can be felt. As you point out, he doesn’t mention affects but only believes in emotions. You mention “Barrett also sees emotions only existing in humans, although she makes allowances for animals to have affects, simpler more primal valenced feelings such as hunger, pain, etc.”

    Note that the American Psychological Association’s rather broad definition of affect is:

    n. any experience of feeling or emotion, ranging from suffering to elation, from the simplest to the most complex sensations of feeling, and from the most normal to the most pathological emotional reactions.

    I think LeDoux is lost in the trees and blind to the forest.

    I’ll repeat here my definition of consciousness, which I believe is compatible with Damasio’s and other Biological Naturalists:

    If the (as yet unknown) brain structure of an organism is physically configured as a feeling and the feeling is felt by the organism, the organism is conscious.

    Of course, that’s far too simple to support complex theories of consciousness, but it does make evolutionary sense and scales up initially to core consciousness—the feeling of being an organism centered in a world—and nicely scales further to extended consciousness, like that we experience.

    Anyone up to discussing that definition?


    1. There is a lot of fascinating information in the book. Although I found things to disagree with, for anyone interested enough to read these kinds of books, I do recommend it. But I suspect most people here won’t read it. (Admittedly, I spend a lot of time and money reading these books, but I’m a neuroscience nerd.)

      LeDoux does briefly discuss the split-brain patients, and yes he still sees the results as instructive. It’s worth noting that after reviewing their methodology, Michael Gazzaniga doesn’t think the Yair Pinto study adequately guarded against cross-cueing between the hemispheres, the use of bodily mechanisms to clue the other hemisphere in on what’s happening.
      https://academic.oup.com/brain/article/140/7/2051/3892700

      I agree that the distinction between affect and emotion seems artificial. I’m fine with saying dogs have simpler emotions than humans. They may not have our complex social emotions but I’m skeptical of the idea that they don’t have feeling states at all.

      Your definition seems plausible to me, at least for sentience, or what Feinberg and Mallatt call affect consciousness. But I don’t think any one definition is the one true one. Consciousness is in the eye of the beholder.


      1. Mike, regarding your “at least for sentience”, here’s the derivation of the meaning of the word sentient, per https://www.etymonline.com:

        sentient (adj.)

        1630s, “capable of feeling,” from Latin sentientem (nominative sentiens) “feeling,” present participle of sentire “to feel” (see sense (n.)). Meaning “conscious” (of something) is from 1815.

        If we’re all agreed that we’re speaking English as a common tongue with agreed upon meanings for words (and our conversations are pointless babblings if not), that’s over a century of sentient meaning conscious.


      2. Additionally, Mike, your statement that “Your definition seems plausible to me, at least for sentience … affect consciousness” overlooks the fact that, as I stated, my definition scales up initially to core consciousness and additionally scales further to extended (primate/human) consciousness. That means that all of the contents of consciousness are feelings, call consciousness what you will, affect or otherwise. I’ll copy here my comment from the recently very active post on “faster than light travel” … quoting myself:

        “The glaring omission in nearly all philosophical theories of consciousness is the missing embodiment. Understandably, I suppose, philosophers tend to believe that the brain’s core mission is to think but, in fact, the feeling of embodiment centered in a world is, I would guess, upwards of 99% of our experience. As regards consciousness as a feeling, although most people conceive of feelings as being physical (body associated) feelings such as pain, touch, temperature (cold/hot) and the like, all of the contents of consciousness are feelings, including sight, hearing and, indeed, thought itself. Rather than the usual conception of thought as some airy, ghostly thing (hence body-mind dualism), the embodied characteristic of thought is obvious when we consider that thinking in words is vocalization-inhibited speech—physical subvocalizations that can be detected by the way—and thinking in pictures, as autist Temple Grandin reports, is, similarly, sight-inhibited vision.”


        1. Touché Stephen! You’ve brought up some very important distinctions here. There is sentience, which is clearly grounded in the primordial architecture of feelings and sensations or the “what it is like” to be some thing. Then there is intellect, which in theory should supersede the more primitive experience of embodiment. Nevertheless, even the more sophisticated expression of intellect itself is still grounded in the more primitive embodiment of feelings and sensations. Within the hierarchy of consciousness, sensations rule.


          1. Lee, for an impressive teardown of Nagel’s “What-it’s-likeness,” see P. M. S. Hacker’s “Is There Anything It Is Like to Be a Bat?” downloadable from:

            Click to access To%20be%20a%20bat.pdf

            I always found Nagel’s “There is something it is like to be a bat” an incoherent statement, as it implies the existence of some mysterious “something” that is “like” the experience of a bat. Notice that a single word change, to “There is something it feels like to be a bat” produces a completely intelligible assertion, although that assertion is equivalent to the trivial remark that bats are sentient. Of course, the “feels-like” version doesn’t command the attention that consciousness-as-some-spooky-something does, leaving me somewhat sympathetic to Nagel’s desire for at least five minutes of fame. But that has by now stretched to years so that I, for one, would welcome the abandonment by everyone of Nagel’s supposed insight and the shorthand nounification “What-it’s-likeness.”

            Notice the extensive use of the words “feels” and “feelings” in Hacker’s paper. I recall an interview with Chalmers that reports him saying, roughly, “’What-it’s-like’ is essentially what it feels like.” Given more free time than I have now, I could Google for that interview, but I’m kinda busy fixing a desktop PC.


          2. Lee, owing to more busyness, I’ll respond later today with an overview of my theory about how it all fits together, which is largely based on Damasio’s thinking. Stay tuned …;-)


      3. The paper describing the split-brain study and its results is titled “Split Brain: Divided Perception But Undivided Consciousness” by Yaïr Pinto and seven others (!). Fetch the PDF at:

        https://www.researchgate.net/publication/312973265_Split_brain_Divided_perception_but_undivided_consciousness

        As the title indicates, the consciousness of the split-brain patients remained unified and undisturbed but the contents of their consciousness changed from before the operation. Anyone contributing here can read the paper and evaluate the conclusion for themselves.

        Most interestingly, their paper states:

        … the status of split-brain patients may have important consequences for current dominant theories of consciousness. Congruent with the canonical view of split-brain patients, both the Global Workspace theory (Baars, 1988, 2005; Dehaene and Naccache, 2001) and the Information Integration theory (Tononi, 2004, 2005; Tononi and Koch, 2015) imply that without massive inter-hemispheric communication two independent conscious systems appear. If the canonical view cannot be quantitatively replicated, and evidence for conscious unity in the split-brain syndrome is found, both theories may require substantial modifications.

        The fact that their findings invalidate Gazzaniga’s interpretation inclines me to wonder about his ability to be objective in his analysis. Also the revised view shamefully challenges Roger Sperry’s 1981 Nobel prize and is destructive of a large swath of philosophical conjecture, threatening the livelihoods of scores of GWT and IIT consciousness philosophers. Anyone reading the paper should be able to spot the purported methodological errors committed by the eight specialists and point them out for us. Thanks in advance! 😉


        1. I had doubts about Pinto, et al.’s methodology when I first read the paper. As the Neuroskeptic noted at the time, the results are discordant with too much other neuroscience. So I’m not surprised Gazzaniga found issues with it. It’s worth noting that we’re comparing conclusions from years of research with numerous contributors, ongoing fine tuning of methodology, and dozens of test subjects, to a tiny study of two subjects decades after they had the surgery.

          And I think Pinto has an ideological agenda calling his own objectivity into question. From an Aeon piece he authored commenting on the paper:

          While the previous model provided strong evidence for materialism (split the brain, split the person), the current understanding seems to only deepen the mystery of consciousness.

          https://aeon.co/ideas/when-you-split-the-brain-do-you-split-the-person
          I think we’d agree that, even if his methodology is eventually vindicated, there are lots of possible explanations that don’t involve non-physical factors.


          1. It’s not the methodologies that concern me, it’s Gazzaniga’s original “two consciousnesses” conclusion or what appears instead to be a “two persons” conclusion. How do we enumerate persons and consciousnesses? We might say that Mike is conscious, Lee is conscious and Steve is conscious, making, in all, three consciousnesses in three persons. Or is it six consciousnesses in three persons or six persons with three consciousnesses or similar silliness. What’s with that?

            By the way, anatomically the two hemispheres remain connected—it’s just the high-speed connection that’s severed. Also a cortical consciousness bias is evident in all of these researchers’ discussions and conclusions. We’ve already said about all that can be said as to the validity of that hypothesis, so I won’t add to it here. But notice that Pinto writes “… both eyes sent information to both brain hemispheres” which completely ignores, as did Sperry and Gazzaniga, the sub-cortical (brainstem) reception of visual signals, signals which have been pre-processed by the cortical-like network at the back of the eyeball, and so possess some minimal pre-conscious content. None of these researchers consider the influence of that additional connectivity or factor it into the observed results because of their glaring (no visual pun intended) cortical consciousness bias.

            In the Aeon article you cite, Pinto writes, “We’ve got to admit that split-brain patients feel and behave normally. If a split-brain patient walks into the room, you would not notice anything unusual. And they themselves claim to be completely unchanged, other than being rid of terrible epileptic seizures. If the person was really split, this wouldn’t be true.” So what’s with that too?

            Also consider neuroanatomist Jill Bolte Taylor’s very moving personal description of her own experience following a severe left hemisphere hemorrhage, after which her right hemisphere’s contributions to the content of consciousness became prominent. Nowhere does Taylor describe feeling like another person or experiencing another consciousness. It’s clear from her informed (as a neuroanatomist) description that her consciousness, in fact, remained continuous, whole and intact, while the content of her consciousness underwent dramatic change from an analytical and linguistic “left-brain” experience to a holistic “right-brain” one in which, for instance, the phone numbers she looked at were collections of shapes, not recognized as numerals. In her wonderful TED talk, she encourages us to explore the holistic beauty of right-brained experience but never implies in any way that in so doing we would become another person.

            It seems to me that the entire one-two persons/consciousnesses research and its interpretations is ill-founded and hopelessly mired in bias and, as such, I vote to completely ignore it as irrelevant to the understanding of consciousness. Show of hands? 😉


          2. That flood of italics at the end was caused by an ill-formed terminator “” typo … the italicized word was intended to be content. Mea fatfinger culpa … 😉


      4. “Consciousness is in the eye of the beholder.”

        Can you convincingly say that this statement is NOT an anthropocentric perspective, even a conceit worthy of Narcissus? I like the word: Poppycock. It just doesn’t get enough wear these days.


    2. If the (as yet unknown) brain structure of an organism is physically configured as a feeling and the feeling is felt by the organism, the organism is conscious.

      I’m having trouble with the phraseology here. How can a brain be physically configured as a feeling? Could it be possible for the brain to be configured as a feeling but the feeling not felt by the organism? I’m sympathetic to the view that “feeling” is “consciousness” (see my ongoing discussion in that other post you mentioned) but I’m wondering if you have an explanation of how “feeling” works, physically.

      *


      1. “…I’m wondering if you have an explanation of how “feeling” works, physically.”

        All you’ve done James is restate the hard problem. For what it’s worth, there is a distinction between sentience and intellect, and that distinction needs to be recognized and respected whenever one discusses consciousness in order to avoid the chaos of discourse.

        According to Wikipedia: “Sentience is the capacity to feel, perceive, or experience subjectively.” In contrast: “intellect… the ability of the mind to come to correct conclusions about what is true or real, and about how to solve problems.” Based upon these agreed upon definitions, it appears that both sentience and intellect are attributes of consciousness, and therefore not the underlying qualitative property of consciousness as such.

        According to my definition: Consciousness is the form through which power is both realized and actualized. One cannot have a coherent conversation about consciousness without first addressing the objective reality of power. As a “thing-in-itself”, power is the wild-card of both causation and consciousness. A wild-card can be any and every thing.


          1. Lee, “intellect” comprises cognitive operations, upwards of 98% of which are unconscious and not in any way available to consciousness (see Lakoff and Johnson’s Philosophy in the Flesh). We occasionally become conscious of some of the results of these unconscious operations, but only rarely—if at all—do we “come to correct conclusions” or “solve problems” on a conscious level. Identifying intellect as an attribute of sentience/consciousness ignores its unconscious nature.


          1. “Identifying intellect as an attribute of sentience/consciousness ignores its unconscious nature.”

            The short answer is this Stephen: Identifying intellect as an attribute of sentience and/or consciousness does not ignore its unconscious nature. You are either ignoring my definition of consciousness or you do not understand it. According to my definition, consciousness is a framework or an underlying architecture, a qualitative property if you will which is fully capable of accommodating all of the identifiable attributes which are intrinsic to that architecture, hence the term, wild-card…

            Let me quote Niels Bohr: “What is it that we humans depend on? We depend on our words… Our task is to communicate experience and ideas to others. We must strive continually to extend the scope of our description, but in such a way that our messages do not thereby lose their objective or unambiguous character … We are suspended in language in such a way that we cannot say what is up and what is down.”


          2. Lee, it’s probably that I don’t understand your definition of consciousness, which appears to be a completely philosophical one. My definition of consciousness as feeling, in contrast, is an operational definition of physical consciousness as implemented in a biological brain.

            My difficulty with the philosophical definitions is that, in the case of your definition for instance, I can’t imagine the evolution of a “… framework or an underlying architecture, a qualitative property if you will which is fully capable of accommodating all of the identifiable attributes which are intrinsic to that architecture …”.

            On the other hand, I can easily conceive of the biological evolution and propagation of brain functionality that promotes survival to the point of reproduction. Although the preponderance of consciousness theorizing is philosophical, like your own, I prefer to focus on the biology because I believe that understanding consciousness is ultimately a scientific enterprise. Although that may be viewed as an intellectual deficiency, it’s reflective of my preference for Biological Naturalism.


          3. I think you just asserted that cognitive operations are more instinctual than rational/intellectual. I posed such an assertion at my AI piece mentioned at first, and still contend that cognition does NOT rely upon a certain ‘set’ of disparate brain functions and regions relied upon in frontal cortex designs, where some posit their critical mass for what amounts to nothing more than the expanded version of a form of Intelligence common to Earth, on the human plane of experience, aspirations and autonomy being our most elaborated design, but not distinct from the whole.


          4. No, I simply stated that almost all cognitive operations are unconscious and not available to consciousness in any way. They’re unrelated to instinct but implement rationality in some cases, although Lakoff and Johnson contend, and I agree, that man is not a rational animal.


          5. Well, I’m surprised at that response, particularly how Instinct would be available to Reason without Thought (you end up calling Instinct: “cognitive operations” and place them OUT of consciousness)…but then refer to implement(al) rationality, a piece of human-types consciousness, unaware of opposition, apparently.


          6. Well, it’s been a long day … I haven’t given much thought to instinct and its place in the consciousness scheme of things and I’m not really sure it’s that important vis-a-vis consciousness, although it clearly figures into behavior. I’ll sleep on it, perhaps for several days … 🙂


          7. If you were sure, you’d change your mind and take the opposing view, that Instinct is the Rule of Reason, and ultimately Science on every human level. More on that later, I suspect. Ciao.


          8. Well, BeingQuest, or Durandus (if that’s not too personal), while I’m waiting for the first televised NFL game of the morning to start, I took a look at Wikipedia’s definition of instinct, which is:

            “Instinct or innate behavior is the inherent inclination of a living organism towards a particular complex behavior. The simplest example of an instinctive behavior is a fixed action pattern (FAP), in which a very short to medium length sequence of actions, without variation, are carried out in response to a corresponding clearly defined stimulus.”

            So, my initial response in a slightly fatigued state appears to hold, that “… clearly figures into behavior” business. The definition makes it clear that instinctive FAP’s have nothing to do with consciousness or any unconscious cortical elaboration. However, inasmuch as FAP’s are a sequence of actions without variation and therefore bodily movements, those are felt (sentience, again) as they occur.

            Rule of Reason? Science? I fail to see any connection between those and instinctual behaviors.

            I’m quite intrigued to note, by the way, that my lengthy comment of yesterday outlining my consciousness hypothesis—that unconscious cortical processing feeds the brainstem for conscious “display”—has received exactly zero comments. Either I exhausted everyone who tried to read it or, as seems more likely, it’s simply insufficiently philosophical, or, worse, not at all philosophical. Instead it’s a scientifically grounded wholly biological hypothesis conformant with evolutionary and experimental evidence that additionally explains several conundrums currently inexplicable by the wildly popular, but evidence-free cortical consciousness hypothesis.

            Go figure, I tell myself. It’s my fault for forgetting to chart a hierarchy … 😉

            I must say that Durandus is a most unique name, although BeingQuest von Meissen has a certain cachet as well … 😉


          9. Reason and Instinct occupy the two fundamental modes of Experience with which I am most concerned in my Philosophy. I observe them as being both Objective and Subjective roots from which are grown the Paradigms of Perception with which we, as mankind, and all sentient creatures, have to do.

            This is OUR classroom of Experience, given by Instinct and the evolutionary imperatives, refined by Reason to serve best the goals set before us all, whether sophisticated (as mankind) or simpler and immediate, immanent.

            The Learning Curve of Sentience is really only One Thing, in my book, and occurs from the first in every proximate example to man’s conceits of Awareness, without all the verbal claptrap we too much indulge, but which is common among terrestrial existence here. Frankly, it could be said that mankind’s exemplary nervous system may be the Door to its doom, and not the ‘advanced’ example of terrestrial Intelligence here, among alternatives, quick vanishing.

            We’ll be staring at ourselves in a mesmeric gaze sometime soon, as Narcissus at the well, being the source and object of our own delusions.


          10. And ” forgetting to charting a hierarchy” is my first fatfinger of the day. I much prefer “forgetting to chart a hierarchy.” … 😉


        2. Lee, I appreciate the difference between sentience and intellect. But they are not unrelated. Intellect requires sentience. A subject’s intellect is essentially the repertoire of things a subject can do with sentience.

          But consciousness is just sentience at some (observer defined) level of intellect. Some of us (me, anyway) say (i.e., define) that a bare minimum of sentience is enough. Others (Hi Mike!) require a certain higher level of intelligence.

          I also agree that one “cannot have a coherent conversation about consciousness without first addressing the objective reality of power.” That’s why I start that conversation at the bottom. Everything that happens, every change, every event, can be described in terms of Input—>[mechanism]—>Output. I put mechanism in brackets because it is “the thing-in-itself”, the noumenon, multiply realizable. And that mechanism is the causal power, specifically the efficient cause according to Aristotle, the Input and Output being the material and formal causes respectively.

          But this is not enough to explain Consciousness/sentience, because Consciousness requires something more. And it is not okay to say that Consciousness is the secret ingredient of causal power, anymore than it would be okay to say angels are the secret ingredient, or peanut butter.

          It turns out that in order to explain consciousness you do need an additional causal power, namely Aristotle’s final cause. But, ironically, it turns out that the “final cause” is a causation that happens prior to the material cause. It is the causal event which generates the mechanism which is the efficient cause. Note: this final cause is also not sufficient to provide consciousness. Consciousness (sentience) happens when the event is a representation, i.e., when the Input “represents an object/concept” and the mechanism “interprets” that Input as representing the object/concept and the Output is a “valuable” response to that object/concept. So for a conscious event, not only must the mechanism have a “final cause”, or purpose, but the Input must also have a purpose, namely to represent the concept.

          There. Clear as mud.

          *


          1. “Others (Hi Mike!) require a certain higher level of intelligence.”

            I do think that without some level of instrumental intelligence, there is no sentience. LeDoux and Barrett are right that affective/emotional feelings are higher level interpretations of what the survival circuits, the reflexes and fixed action patterns, are doing. (I just think he’s wrong that such feelings require introspective self awareness.)

            Put another way, affective feelings are high level representations of firing survival circuits (and the interoceptive feedback from their physiological effects). The job of the higher level networks is to decide which survival circuits to allow to finish, and which to inhibit.

            So, no intellect, no sentience. And without sentience, the intellect is an impotent unmotivated engine. They are two aspects of an integrated system.


          2. James,

            Your explanation is not terribly muddy. Here’s the short answer: If there were no such thing as change, there would be no need to address the notion of power.

            “I put mechanism in brackets because it is “the thing-in-itself”, the noumenon, multiply realizable. And that mechanism is the causal power, specifically the efficient cause according to Aristotle, the Input and Output being the material and formal causes respectively.”

            No real problem there James. I just want to clarify your usage of the “thing-in-itself”. According to Kant: “While we are prohibited from absolute knowledge of the thing-in-itself, we can impute to it a cause beyond ourselves as a source of the representations manifest within us.” Here’s the problem: Kant’s statement would also apply to any and all inputs and outputs, meaning… the entirety of the physical world with all of its parts. It’s at this intersection that the water gets muddy quickly. To clear up the opaque meaning, one has to firmly establish the reality/appearance distinction and respect that distinction whenever discussing these types of topics.


        3. Lee, you said:

          Here’s the problem: Kant’s statement would also apply to any and all inputs and outputs, meaning… the entirety of the physical world with all of its parts. It’s at this intersection that the water gets muddy quickly. To clear up the opaque meaning, one has to firmly establish the reality/appearance distinction and respect that distinction whenever discussing these types of topics.

          I don’t see the problem. Kant’s statement does apply to all of the physical world. And I do establish the reality/appearance distinction, except that the definition of “appearance” is not well established and may be ambiguous.

          So I see two forms of “appearance”. One is the Output of an event. This output is an affordance for subsequent interpretation. For example, the photons bouncing off an apple are an affordance of interpretation that there is an apple.

          The second form of “appearance” that I’m thinking about is the phenomenological, the feeling, the qualia. This form refers to the actual event of interpretation and more specifically refers to the purpose of the mechanism that is doing the interpreting of the above mentioned affordance. The mechanism is set up such that the input “means” apple. Any reference to the event is a reference to an interpretation of “apple”, and this meaning is inherent in the mechanism.

          *


          1. James,

            As long as the “mechanism” you are referring to within your examples is consistently the “thing-in-itself”, I can grok what you are trying to articulate.

            The reality/appearance distinction is pretty straightforward. In a nutshell, our universe is not real in the context of (R); that is Reality with a capital (R), i.e. the Ultimate Reality. Our universe is an expression of that Ultimate Reality (R), which simply means that our physical universe is the appearance of reality. The notion of the reality/appearance distinction is a highly controversial concept simply because it stands in direct contrast to the ontology of materialism and idealism, both of which are grounded in Realism, with a capital (R).

            It can be scientifically demonstrated that our universe is an expression, but there are few individuals who are even willing to entertain the notion let alone examine the axiomatic evidence. It is what it is…


        4. I would have thought that Intellect better fitted the definition of your first sentence here, in more Higher Order respects, which would separate it from sheer Instinct and autonomic brain functions 500 million years realized. That mankind claims an Intellect so much removed from Instinct, begs whether mankind thought it had comparisons of equally, or better, evolutionary strategies (NOT, says we), given the State of the World, the Earth and our collective Future today, rapidly multiplying Extinction on every corner of Life, mostly or most efficiently by mankind’s own obsessive rationality of ‘intellect’, or justifications of ‘affect’ made scientific and necessary by the same.

          As a survival ‘instinct’, the world appears to rue the day of its inception. If I live like a Barn Cat or Dog or Pigeon in the Loft…is there any hope to escape it? My affections lend hope to the prospect, if ever there was one; meanwhile ‘causation’ collapses, and I survive in retreat from a world not my own. Hibernation into Instinct. I think I’ll go there, for the winter.


      2. James, I have several comments to respond to today, so I got mixed up and posted this comment for Lee that was intended for you:

        Owing to more busyness, I’ll respond later today with an overview of my theory about how it all fits together, which is largely based on Damasio’s thinking. Stay tuned …;-)


      3. Yes, I do have a theory that’s based on Damasio’s in The Feeling of What Happens and, more specifically, in his paper “Consciousness and the brainstem” with Josef Parvizi. Download at:

        Click to access Parvizi_Damasio_ConsciousnBrainstem.pdf

        Damasio is upfront about the brainstem consciousness hypothesis being a minority opinion, although I don’t consider that a flaw—the God-created Earth-centric cosmology was all the rage for centuries but, ultimately, was found to be evidence-free, so I don’t have a problem supporting an evidence-rich hypothesis, even though it’s currently unpopular.

        As I remarked in my intellect/intelligence comment just posted, rather than producing consciousness:

        “The cortex is massively engaged in an ongoing pattern matching process, where both interoceptive and exteroceptive input stimuli are continuously encoded as a story which is continuously pattern-matched against memory, also encoded as stories. Memories, of course, are encoded as gestalts with emotional context in addition to events. Successful pattern matches result in expectations about a story’s outcome, which are incorporated into the conscious “image” of the input stimuli.”

        By the way, I think that our experiences while we’re dreaming are a consequence of this cortical story processing, unfettered from sensory input and, hence, dreamlike.

        All of this takes place in an integrated brain, of course, so the cortical processing is influenced by emotional valence (amygdala, etc.) and biochemical neurotransmitter concentrations, etc. I theorize that the cortex produces what I call “pre-conscious images” which are swept sub-cortically to the brainstem. (Let’s say “brainstem complex”, since it’s really an unknown, although the reticular formation is suggestive to many). Perhaps the transmission takes place via those rhythmic “waves” that propagate continuously from the front to the rear of the brain.

        The brainstem, as it has since early evolutionary times, then “displays”—makes conscious—the pre-conscious images from the cortex, integrating them with its own “displays” from its direct inputs from throughout the body. The brainstem, you’ll note, is perfectly connected and positioned to act as a “body map” so that the feelings produced as a consequence of a “display” can be localized to a specific body part, like a pain in your foot.

        That “displaying” is the production of a feeling, which I theorize is a Neural Tissue Configuration, or NTC. Imagine a simple, but crude, example where a small collection of neural tissue is “bent” (this “bending” is an imaginary simplification for illustration purposes) or configured in a particular way. That configuration IS a feeling—of a touch, for example, localized by the configuration’s location in the body map. The configuration does not produce the feeling of a touch—it IS the feeling itself, such as a touch on your upper arm.

        This scheme accounts for the unified presentation of consciousness, although it differs from Damasio’s conception in that he believes that the cortex directly “displays” some conscious images. In my opinion, that reintroduces the unified presentation problem currently plaguing all cortical consciousness hypotheses. I differ with him in that I view those as cortically produced pre-conscious images.

        This hypothesis also explains the evolution of successively more complex and capable cortical structures. Consciousness is a felt simulation of an organism centered in a world and a richer, more complex simulation would clearly be more effective in contributing to survival to reproductive age. The hypothesis also explains why the brainstem activates the cortex—the brainstem alone, as in cortically deficient creatures, creates a comparatively crude simulation and activating the cortex vastly enhances the content of consciousness.

        James, hopefully this wordy explanation answers, or begins to answer your question, “How can a brain be physically configured as a feeling?”. As to your question, “Could it be possible for the brain to be configured as a feeling but the feeling not felt by the organism?” … I’m unsure, except perhaps in the event of a brainstem pathology or a dreaming state.


        1. This hypothesis also explains Libet’s findings about the timing of a touch on the arm in relation to a cortical stimulation.


        2. “I’m quite intrigued to note, by that way, that my lengthy comment of yesterday outlining my consciousness hypothesis—that unconscious cortical processing feeds the brainstem for conscious “display”—has received exactly zero comments.”

          Hey Stephen,
          The reason I didn’t respond is I think you know my position. However, someone did ask me about this, and maybe people haven’t seen the earlier discussions. I’m not interested in endless debate on this, just clarifying my understanding of why most neuroscientists see consciousness as a thalamo-cortical phenomenon.

          So, in brief:

          We know from self report that some behavior requires conscious awareness and some doesn’t. For instance, reflexive or habitual behavior can take place while the subject is distracted and so not consciously aware of the activity. There’s a lot of psychological work on this (Daniel Kahneman comes to mind), but anyone who’s daydreamed while mowing the lawn should be familiar with the concept. On the other hand, solving complex problems usually only happens with conscious awareness.

          From neurological case studies, we know that reflexive and habitual behavior are associated with subcortical regions. For example, lesions to the frontal lobes may impair a person's ability to plan or adapt to circumstances, but their habitual and reflexive behavior is left intact. Animal studies have established that decerebrated animals only have reflexive behavior. What instrumental learning abilities exist in fish are extinguished when their forebrain and midbrain are separated. And the phenomenon of blindsight seems to confirm that when information only goes through subcortical pathways instead of through cortical visual streams, it isn't consciously perceived.

          Can we rule out the possibility that the midbrain contributes to consciousness? I don't think so, but we don't seem to have introspective access to the behavior that can be scientifically associated with it, such as eye saccades. And as Bjorn Merker noted, most of the connections from the cerebrum down to the midbrain are inhibitory in nature, which doesn't seem to leave much room for all the mental content being shunted down there for presentation, but is compatible with higher level circuits selectively allowing or inhibiting motor reflexes.

          We can certainly say that the reticular formation in the brainstem is crucial for turning consciousness on, for driving arousal through the basal ganglia up into the cortex. If it’s damaged too much, there can be no consciousness. But I haven’t seen any indication it’s involved in actual awareness.

          Interestingly, I did a post a few months ago on a paper which covered the history of scientific theories of consciousness. Apparently some scientists in the 19th century thought consciousness resided in the spinal cord, convinced by the reflexive reactions from the bodies of decapitated frogs, so there’s a long tradition of this kind of thought.
          https://selfawarepatterns.com/2019/07/08/consciousness-science-undetermined/

          1. While I’m contemplating and composing, Mike, I wanted to see what the WordPress parser would do with a right arrow character. It’s Unicode U+21D2 in the Arrows block and I’ll place some between square brackets. TEST: [ ⇒ ⇒ ⇒]. My word processor and text editor are fine with it, but you never know. After it posts, feel free to delete this comment if you like.

          2. Mike, the hypothesis I’ve proposed entails far more than the assertion that the brainstem produces consciousness. For starters, it’s an actual hypothesis which makes it different from the comments about consciousness that prevail on your blog—it provides clear definitions and an end-to-end explanation of the brain’s production of sentience. In keeping with tradition and for ease of reference, I’ll go ahead and name the hypothesis the

            BRAinstem Sentience Hypothesis, or BRASH

            … which I intend in the “strong, energetic, or highly spirited, (perhaps) in an irreverent way” sense of the definition of the word brash. I reserve the right to change that name should it cause difficulty of some sort but, for an early Monday morning, I find it appealing and a worthy competitor to sexy acronyms like HOT … 😉

            Regarding a discussion of BRASH, I believe it deserves to be taken seriously as a hypothesis so that, rather than being seen as merely a disagreement with the proposal that the cortex produces consciousness, it should be viewed as an attempt at an end-to-end explanation of consciousness production. That means understanding its definitions and granting its validity in a provisional sense while analyzing the hypothesis on the merits from a scientific sensibility, to determine whether the hypothesis is consistent with confirmed evidence—not an interpretation of evidence but the evidence itself. Additional evaluation should consider the resolution of known conundrums by the hypothesis, in this case, an explanation of the unified presentation of consciousness and of Libet’s timing studies involving sensory input vs. cortical stimulus, the origin of our dreams and other puzzles.

            With that as a given, I must first expand the BRASH explanation provided so far with this additional information characterizing the functionality of cortical processing as a Story Engine:

            BRASH proposes that the cortex is a Story Engine that operates unconsciously and constitutes, in fact, 98% or more of all cognitive operations (see Philosophy in the Flesh, Lakoff and Johnson, 1999). As I've mentioned before, cortical processing takes place in an integrated brain, so it's influenced by inputs from other brain structures, with some, like the amygdala, providing emotional valence. Additional influences are the overall biochemical state of the brain, neurotransmitter concentrations, etc.

            The Story Engine IS the unconscious as we understand it. We can directly glimpse the cortex at work by observing the content of our dreams that “leak” into consciousness while we sleep. Dreams are unquestionably stories, albeit bizarre stories that are unconstrained by sensory input and the need to create the conscious simulation that sustains our lives in our waking state.

            This comment is very lengthy so I’m posting PART 2 next …

          3. (Cont’d)

            The Collins Dictionary online provides this spectrum of definitions for the word story:

            1. the telling of a happening or connected series of happenings … whether true or fictitious

            2. an anecdote or joke

            5. a report or rumor

            8. the situation with regard to the subject being discussed; the aggregate of facts or circumstances involved

            BRASH extends this list of meanings to additionally refer to the non-narrative conceptions and productions of the cortex, whose fundamental mental tool is the embodied metaphor (see, once again, Philosophy in the Flesh). Embodied metaphors are stories of relationships and their implications that are rooted in our embodiment.

            To further understand the role of the Story as the fundamental unit of cognitive processing and its pervasive role in human thought, consider that Science constructs stories of how the world works, often bolstered by the use of supportive mathematical stories. Memes are stories! Artworks in all media are stories and have been from prehistoric times. In Ken Burns' documentary Jazz, trumpeter Wynton Marsalis repeatedly speaks of the stories that musicians tell—musical expressions are stories. BRASH asserts that our memories are stored and retrieved as multi-sensory gestalts that are stories and that our pattern-recognition intelligence compares those remembered stories to make predictions and implement creativity. As I've commented previously, a recent attempt to train an AI in moral decision making failed until the approach was modified to teach the AI stories with embedded moral decisions. We understand our lives as stories, and the bulk of the world's economic activity is focused on and dedicated to the telling and distribution of stories of all kinds in all media. Combinations and collisions of stories create new stories, which we promote and exchange with great energy.

            Two of your concerns, Mike, from your Consciousness Hierarchy, are Prediction and Imagination. BRASH defines both of those as outcomes of cortical story pattern matching, as follows:

            Representing stories as linked "narrative" components, the metaphor has the structure A ➡ B, where 'A', 'B' and so on are fundamental story primitives such as facts. Let's assume a simple story's structure in memory is A ➡ B ➡ C. Current input to cortical processing, sensory and otherwise, that begins A ➡ B will pattern-match with that story, resulting in the expectation, or prediction, of C, which may be realized consciously, depending on the situation. Creativity, or Imagination, results when another story retrieved from memory during the processing, say B ➡ E, yields the pattern-matching output A ➡ B ➡ E, with E possibly being promoted to consciousness in preference to C. These are simple examples to be sure, but story pattern matching scales up without constraint to provide explanations for both complex prediction/expectation and imagination.
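            To make that matching step concrete, here is a minimal illustrative sketch in Python of the idea as described in the paragraph above. The representation (stories as plain lists of primitives) and the function names are my own assumptions for illustration, not part of BRASH itself: a remembered story whose beginning matches the current input yields an expectation, and a second story that overlaps the input can splice in an alternative continuation.

            # Illustrative sketch only: stories are sequences of primitives such as facts.
            from typing import List, Optional

            Story = List[str]  # e.g., ["A", "B", "C"]

            def predict(memory: List[Story], observed: Story) -> Optional[str]:
                """Expectation: next primitive of the first remembered story that begins with `observed`."""
                for story in memory:
                    if story[:len(observed)] == observed and len(story) > len(observed):
                        return story[len(observed)]
                return None

            def imagine(memory: List[Story], observed: Story) -> List[str]:
                """Imagination: alternative continuations from stories that overlap the last observed primitive."""
                last = observed[-1]
                continuations = []
                for story in memory:
                    if last in story[:-1]:  # the remembered story contains `last` before its end
                        continuations.append(story[story.index(last) + 1])
                return continuations

            memory = [["A", "B", "C"], ["B", "E"]]  # two remembered stories
            print(predict(memory, ["A", "B"]))      # C -- the expected continuation
            print(imagine(memory, ["A", "B"]))      # ['C', 'E'] -- E competes with C for promotion

            Scaled up to long, multi-sensory stories with emotional context, that same matching step is what the paragraph above calls prediction and imagination.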

            Consequently, as should any credible scientifically grounded hypothesis explaining consciousness, BRASH provides a clear explanation for both expectation and imagination. BRASH is additionally grounded in a clear definition of the term consciousness, a definition missing from most philosophical theories of consciousness, which is an invalidating deficiency from a scientific viewpoint. BRASH, in contrast, asserts that consciousness is sentience—feelings across the spectrum from physical feelings like touch to hearing, vision and thought feelings.

            This concludes the description of BRASH, which began with my earlier comment of 9/7 responding to JamesOfSeattle. In that comment I incorrectly specified the acronym for Neural Tissue Configuration, which should be NTC, a not unusual (for me) fatfinger. I’ll next (but later) be commenting in response to your comment Mike (still the only one) wherein you replied to that initial comment of mine to James. In the meantime I’m consumed with diagnosing a problem with a Dell 3040 micro PC.

          4. Wowsers! Little white arrows on a blue field … I was expecting black on a white (or gray) background. Most interesting … 😉

          5. You never know with the WP commenting system. I get white on blue in the notification email and in the WP UI, but black on the background in the site UI.

            It’s a little better with an actual blog post, where we get to work with a rich text editor, can save drafts, etc, although I’m still using the old editor, not having found the newer ones very enticing.

          6. I tried an investigatory line with about 10 different arrow styles and WP responded:

            “Sorry, this comment could not be posted.”

          7. In response to an investigatory line with about 10 different arrow styles, WP responded:
            “Sorry, this comment could not be posted.”

          8. I think I offended the comment parser … they said the first of my last two could not be posted and then posted it 😉 How temperamental!

          9. Finally I have some comments responding to your comment of 9/8, following your “So, in brief” …

            I’m not sure why you refer to reflexive and habitual behaviors together. Instinctual and reflexive behaviors are almost alike in that they activate FAPs, although they differ in that reflexive FAPs are initiated by a physical stimulus while instinctual FAPs are initiated by a perceived situation. Quoting a previous comment, a FAP is:

            ”… a Fixed Action Pattern, in which a very short to medium length sequence of actions, without variation, are carried out in response to a corresponding clearly defined stimulus.”

            Note the "very short to medium length" restriction. As regards comparatively lengthy habitual behaviors, I suspect that those are not at all performed unconsciously, no doubt once again putting me at odds with majority opinion. As an example, consider driving home along a route that's been repeated so many times that it's well-established in memory. The predominant belief is that attentive consciousness is not involved, or is only minimally involved, in the driving, allowing for the daydreaming you mention, and I'm sure we've all arrived home without a detailed memory of making that sort of drive.

            But notice that we are concluding an absence of consciousness from an absence of memory, which I believe is not a valid inference. For that inference to be valid, we must have already established that ALL consciously performed activities are retained in memory, which I don't believe has been experimentally demonstrated. That being the case, the only valid conclusion from such an incident is that the drive home didn't make it to the memory store, which seems an efficient way for the brain to deal with habitual behaviors—since they're already firmly established stories in memory, there's nothing to gain from storing them again and reinforcing them once more during the sleep-time cortical review of the day's accumulated stories. If, indeed, credible research has shown that all consciously performed activities are retained in memory, I'd appreciate a reference, because I'm unable to find it.

            Regarding your "Animal studies have established that decerebrated animals only have reflexive behavior," you've not provided the details of those behaviors in the past, and the statement directly contradicts Merker, namely:

            ”After recovery, decorticate rats show no gross abnormalities in behavior that would allow a casual observer to identify them as impaired in an ordinary captive housing situation, although an experienced observer would be able to do so on the basis of cues in posture, movement, and appearance. They stand, rear, climb, hang from bars, and sleep with normal postures. They groom, play, swim, eat, and defend themselves in ways that differ in some details from those of intact animals, but not in outline. Either sex is capable of mating successfully when paired with normal cage mates, though some behavioral components of normal mating are missing and some are abnormally executed. Neonatally decorticated rats as adults show the essentials of maternal behavior, which, though deficient in some respects, allows them to raise pups to maturity. Some, but not all, aspects of skilled movements survive decortication, and decorticate rats perform as readily as controls on a number of learning tests. Much of what is observed in rats (including mating and maternal behavior) is also true of cats with cortical removal in infancy: they move purposefully, orient themselves to their surroundings by vision and touch (as do the rodents), and are capable of solving a visual discrimination task in a T-maze.”

            I omit the paragraphs following that one but the complete information is in Merker’s “Consciousness without a cerebral cortex” and includes references for the experiments performed, all of which I’ve mentioned previously. In my opinion, and that of the researchers, the not-very-short-to-medium-length behaviors specified could hardly be characterized as either reflexive or instinctual. I’ve also previously mentioned a surgical nick to the brainstem that apparently “cures” blindsight. I assume you’re already aware of this information, Mike, since this is a repetition of it, so I don’t understand why you keep repeating the same assertions.

            Odds and ends: a) BRASH doesn't rule out the midbrain's contribution to consciousness but, rather, notes that the brain is composed of integrated and interconnected subsystems, input from which most certainly influences cortical processing; b) Inhibitory connections from the cerebrum to the midbrain suppress physical movement, not consciousness "display"; c) I mentioned the reticular formation because it has been suggested repeatedly as the source of conscious "display" but, at present, it's completely unknown what structure(s) do that—a situation that I expect will persist for some time because, unlike with the cortex, we can't "pop off the lid" and poke around, and intrusive brainstem probing is easily fatal. Conclusive research awaits imaging technologies and, perhaps, nanotechnological probes that have yet to be developed.

          10. “you’ve not provided the details of those behaviors in the past and the statement directly contradicts Merker,”

            For details on decerebration, there’s a section in the Recognition and Alleviation of Pain in Laboratory Animals report which describes it, and cites papers that go into more detail. It also notes the difference between it and decortication. (Scroll to the section: Further Comments on the Distinction between Nociception and Pain)
            https://www.ncbi.nlm.nih.gov/books/NBK32659/#ch2.s1

          11. A most interesting and valuable resource, Mike, and many thanks for the link. The definitions section alone is very valuable. While I haven’t had the time to finish the entire paper, only most of section 1, I’ve already found some compelling remarks that are very relevant to our discussion about the cortical vs. brainstem production (“display”) of conscious images. For instance:

            ”Even the argument that certain forebrain structures are required for pain (Rose 2002) is problematic because it presupposes a complete understanding of how and where pain is generated in the human brain, when in fact this is still under study (the anterior cingulate, for instance, is activated by subliminal stimuli—i.e., stimuli of which humans are unaware—as well as by pain; Kilgore and Yurgelun-Todd 2004; Sidhu et al. 2004; Box 1-3). Such an argument also assumes that, evolutionarily, any cortical subregions involved in pain became so only after their specialization into these subregions (thus ignoring the possible functions of these regions’ evolutionary precursors).

            This type of uncertainty is one reason the phylogenetic distribution of pain is a matter of discussion and debate.

            Despite these ongoing debates, it is generally agreed that, in mammals, pain does require a cortex (though see Merker 2007 for an opposing view). Therefore, it is typically assumed that any responses in, for example, decerebrate mammals cannot be used reliably to identify which species or developmental stages feel pain (Box 1-3).” [Italics and bold are mine]

            Clearly, Mike, these remarks about the brain structures that create the feeling of pain apply to the rest of our feeling repertoire—the entirety of our sentience.

            The only other remark I have on the issue of decerebrate vs. decorticate is that the competing theories we’re discussing are cortical vs. brainstem production of conscious images. By drawing a distinction between decerebration and decortication are you suggesting that non-cortical portions of the cerebrum produce consciousness? That proposal would only add to the “Unified Presentation of Consciousness” problem endemic to cortical consciousness theories.

          12. Glad you’re finding it useful.

            I think you’re misinterpreting the passage you bolded. In the context of the overall document, its point is that the behaviors seen in decerebrated mammals can’t be used as indicators of pain, only of nociception. (Which makes its meaning consistent with the previous sentence.)

            “By drawing a distinction between decerebration and decortication are you suggesting that non-cortical portions of the cerebrum produce consciousness?”

            The thalamus and cortex seem like an integrated system; I don’t think they can be considered apart from each other, at least in healthy mammals. And there’s no doubt that a lot of the subcortical forebrain, such as the basal ganglia, amygdala, etc, participate in behavior, although a lot of that will be habitual or survival circuitry. That should be remembered when assessing the reports of behavior in decorticated animals.

    3. “For all his apparent evolutionary cred, he appears to be unaware that the differences between humans and other mammals are not differences of kind, they’re differences of degree.”

      You hit the nail on the head of my own spontaneous thought on the matter addressed. And you spoke my mind throughout your response, thanks.

  4. Mike, in a thread above you said

    So, no intellect, no sentience. And without sentience, the intellect is an impotent unmotivated engine.

    Can you give an example of intellect without sentience? A simple knee-jerk reflex does not seem like it is impotent and unmotivated, but I don’t think you consider it sentience. Is it still intellect, albeit simple?

    *

    1. James,

      “Can you give an example of intellect without sentience?”

      Not in biology, at least not naturally. Someone who’s had the connections to their ventromedial prefrontal cortex severed (lobotomy) has substantially reduced sentience, although it’s not eliminated.

      ” A simple knee-jerk reflex does not seem like it is impotent and unmotivated, but I don’t think you consider it sentience. Is it still intellect, albeit simple?”

      I wouldn’t consider the knee-jerk reflex either sentient or intellect. It’s an automatic mechanism, one that, because the crossover between sensory and motor pathway happens in the spinal cord, can’t be overridden by cerebral circuitry.

      Of course, that doesn’t mean we don’t have an experience associated with the hammer striking the patellar tendon or our leg muscles contracting. But in this case, it’s after the fact. The same reflex reportedly works in a brain dead body, or even in a recently deceased corpse.

      1. Okay, then I need a better definition of intellect/intelligence. Can you describe a physical system that demonstrates intelligence and compare/contrast with one that doesn’t have intelligence, like the knee jerk reflex? What features are necessary and sufficient for intelligence?

        *

        1. The definition of intelligence, like consciousness, is controversial. For this discussion, I think it helps to look at reflexes and fixed action patterns, and see what they’re missing: prediction. (Sorry, I know you hate the word.) Intelligence is about prediction. The knee-jerk reflex makes no predictions; it anticipates or forecasts nothing. Nor does the startle reflex in the brainstem.

          When you think about it, a visual perception, such as discriminating food from a predator, is a prediction, as is a smell, or auditory categorization. And affective feelings, emotions, are predictions about what the lower level survival circuitry will do if allowed. And introspection, the highest form of consciousness that probably only humans possess, involves predictions about the self.

          1. So here I disagree. A knee-jerk reflex is a prediction. It is a prediction that the result, flexing certain muscles, is the proper (valuable) response to the stimulus. The prediction happens at the time of creating the mechanism. It is not a prediction of the autobiographical self, and so it does not seem like a prediction to you, but that does not mean it is not a prediction.

            And this is true of those other predictions you mention, but those are predictions associated with the autobiographical self. The visual perception of an apple is a prediction that there is an apple out there, but this prediction was made at the time the mechanism was generated, which mechanism was created to recognize apples. Note: some of these predictions, i.e., the creation of mechanisms, happen during evolution, and some can happen at (nearly) real time. An example of the latter is short term memory. For example if you see an apple hanging in a tree, and you move your gaze to the tree trunk while keeping the apple in your peripheral vision, you are not getting enough visual information to identify that green blotch on the side as an apple, but using the mechanism of short-term memory you can predict that that blotch is the apple you were looking directly at a second ago.

            *

          2. “The prediction happens at the time of creating the mechanism. It is not a prediction of the autobiographical self, and so it does not seem like a prediction to you, but that does not mean it is not a prediction.”

            Well, as someone who doesn’t buy teleology, I don’t think it’s a prediction in any sense. (The closest I’d come is an apparent prediction in the teleonomic sense.) But that aside, I hope it was obvious that I was referring to a prediction made by the individual since we were discussing intelligence. If we start discussing the “intelligence” of evolution as something other than an appearance, that’s sounding a bit too close to intelligent design or guided evolution for my comfort.

          3. James, it depends on what you mean by “the smell.” If you mean the raw signal coming in to the olfactory bulb, then no prediction is happening there. It’s worth noting that we don’t have conscious access to this raw signal.

            But if by “the smell” you mean the smell of some known thing, such as food or a predator, then that’s a prediction. You could also call it a recognition, but then the question is, recognition for what purpose?

            Admittedly, this categorization seems like it can actually be broken up into two categories. Innately known associations, and learned associations. You could argue that an innate association isn’t really an individual prediction, but just an elaborate reflex (or “prediction” of evolution). A learned association, on the other hand, is a prediction made by the individual. We’re really on the boundary here between reflex and prediction.

            Smell has always been a strange sense. From the earliest vertebrates, it’s always connected directly to the forebrain, unlike most of the other senses which went through the midbrain. (Although 90% of vision axons in mammals now go directly to the forebrain.) Smell seems much more tied to memory than the other senses. It was adaptive for the earliest vertebrates to react reflexively to sight, hearing, touch, or taste, but smell seems much more entangled with memory for its adaptiveness. This has led some biologists to wonder if smell wasn’t the original sense to drive the development of instrumental learning.

      2. Wikipedia’s (and don’t we love ‘em) Lobotomy article states:

        Following the operation, spontaneity, responsiveness, self-awareness and self-control were reduced. Activity was replaced by inertia, and people were left emotionally blunted and restricted in their intellectual range.

        Nowhere does it say, or even imply, that sentience—consciousness itself—is reduced. As to the knee-jerk type physical reflex: although that reflexive bodily motion occurs without control, sans conscious intent, and is very difficult, if not impossible, to suppress, when the reflexive movement takes place in an awake individual the bodily movement is decidedly felt, as you say, and the initial hammer strike on the knee is felt as well.

        Intelligence. There are legitimate IQ tests online, and an examination of them will show that they measure pattern matching. The cortex is massively engaged in an ongoing pattern matching process, where both interoceptive and exteroceptive input stimuli are continuously encoded as a story that is continuously pattern-matched against memory, also encoded as stories. Memories, of course, are encoded as gestalts with emotional context in addition to events. Successful pattern matches result in expectations about a story's outcome, which are incorporated into the conscious "image" of the input stimuli. So a simple story of A-then-B results in an expectation, upon recognizing A, that B is to follow.

        Artificial Intelligence is massive computerized pattern matching operating on vast data sets (stories). As an illustration of the effectiveness of the technique, a recent attempt to teach an AI to make moral decisions failed completely until the programmers hit upon the technique of teaching the AI story scenarios with moral outcomes, whereupon the AI became a near perfect Christian. I kid, I kid … but the technique worked and the AI’s moral decision making was vastly improved.

        And thanks for the italics fix Mike. Perfect!

        1. “Nowhere does it say, or even imply, that sentience—consciousness itself—is reduced. ”

          Stephen, you quoted it just above this sentence: “and people were left emotionally blunted…”

          On intelligence and pattern matching: why does that pattern matching happen? What's its adaptive value? In other words, why didn't evolution just stick with reflexes?

          1. Being emotionally blunted does not reduce or eliminate sentience; rather, it is a change in the character of the feelings experienced.

            Pattern matching is the facility that ultimately improves the richness and detail of consciousness, which is the felt simulation of being an organism centered in a world. Sticking with brute reflexes will get you killed a lot sooner, I expect.

  5. Mike, I’m posting this comment back at the root level in response to your comment of 9/11 that begins “Glad you’re finding it useful,” so we could enjoy the lack of accumulated indentation.

    Yes, I believe I misinterpreted as you say. On review, however, I believe they’re saying that, since they assume that the cortex produces consciousness (“though see Merker”), without the cerebrum (and its assumed cortex-produced consciousness) in place, the “developmental stages” of conscious pain cannot be reliably identified. If, however, Merker’s opposing view were considered then one would have to assume that decerebrate mammals would be conscious of pain, raising the ethical issues the paper is primarily concerned with. Frankly, because of the paper’s entire focus on the ethical imperative to not cause pain in animals, I’m surprised they didn’t give the “though see Merker” alternative equal consideration. The absence of that consideration illustrates the possible moral hazards of the cortical consciousness bias, or any unrecognized bias for that matter.

    And, for our still-unresolved cortex-vs-brainstem consciousness discussion, here’s an important quote from the paper:

    “Even the argument that certain forebrain structures are required for pain … is problematic because it presupposes a complete understanding of how and where pain is generated in the human brain, when in fact this is still under study … Such an argument also assumes that, evolutionarily, any cortical subregions involved in pain became so only after their specialization into these subregions (thus ignoring the possible functions of these regions’ evolutionary precursors). Furthermore, it does not clarify the states of animals whose nervous systems differ greatly from that of humans but may still have analogous structures and functions (e.g., invertebrates, which lack a central nervous system, and birds or fish, which have complex forebrains but no neocortex …”

    I've italicized their two challenges to cortical consciousness theories: one, that we don't actually know how and where pain is generated; and two, the challenge from an evolutionary standpoint.

    And perhaps you misunderstood my question about whether your distinguishing between decerebration and decortication suggests a role in consciousness production—the actual "display" of conscious images—by non-cortical cerebral structures. To recap: I quoted from Merker the obviously conscious behaviors of decorticated mammals and, in response, you stated that decerebrated mammals only demonstrated reflexive behaviors. The implication is that the difference in behaviors must be due to the differences between decortication and decerebration, such that those conscious behaviors noted by Merker must be a result of non-cortical cerebral structures. In other words, those non-cortical cerebral structures must be producing consciousness; that is, you're claiming that decerebration provides evidence for non-brainstem AND non-cortical consciousness that decortication does not.

    Simple logic indicates that you must be supportive of that proposal, which would be a radical departure from any current cortical consciousness proposals.

    The BRASH proposal doesn’t at all deny that the thalamus and cortex are an integrated system that’s critical to the creation of pre-conscious images by the cortex. As I’ve written, “BRASH doesn’t rule out the midbrain’s contribution to consciousness but, rather, notes that the brain is composed of integrated and interconnected subsystems, input from which most certainly influences cortical processing.”

    Additional commentary follows …

  6. CORTICAL CONSCIOUSNESS CONUNDRUMS

    One more challenge to add to the so far unaddressed challenges to cortical consciousness theories is Merker's citing of the "purposive, goal-directed behavior" evidence that children born without a cortex are conscious. In fact, newborn consciousness itself is unexplained by cortical consciousness theories, since the cortex of newborns is undeveloped and must undergo a lengthy process of connective "pruning" before achieving normal functioning. Recall that in newborns without a cortex, no unusual newborn behavior is observed, so that the condition is often undiscovered for months. So either no newborns are conscious, or the brainstem is producing consciousness for normal newborns and for those without a cortex as well. Try to convince parents in both situations that their baby does not feel being touched, held and hugged—I suspect you'd find that a hard sell.

    I've noticed that you've avoided making any suggestions about how the cortical consciousness theories resolve the several unanswered challenges I've presented, since you haven't proposed a single explanation for any of them. Perhaps you and your blog's commenters who agree that the cortex produces the "display" of conscious content could address the several challenges to that hypothesis that I've listed, all of which, by the way, are well explained by BRASH. Here's a list of those unaddressed challenges to the cortical consciousness hypothesis.

    1. Evolution of cortical consciousness from the precursor reptilian brainstem consciousness—unless you wish to maintain that for hundreds of millions of years creatures having no cortex could not feel anything

    2. Explanation of the gradual evolutionary transfer of consciousness production from precursor brainstem to developing cortex with distributed processing while maintaining a unified single conscious experience

    3. Consciousness in newborns—unless you wish to maintain that human newborns cannot feel anything

    4. Consciousness in newborns without a cortex and its frequent non-diagnosis for several months

    5. The mechanism that produces a unified single consciousness from widely distributed cortical activity in two hemispheres

    6. Libet’s timings that show a touch on the skin being felt before a cortically stimulated touch—if the cortex produces conscious images, the stimulated touch should be felt immediately

    In addition to a complete lack of evidence, these quandaries indicate that the cortical consciousness hypothesis lacks credibility and, considering that an alternate hypothesis easily explains or dispenses with all of them, the cortical consciousness hypothesis must be viewed as indefensible. I look forward to your most interesting post addressing these puzzles.

    1. Stephen,
      Sorry for the terseness, but as I indicated above, I’m really not interested in endlessly rehashing these points.

      1. All vertebrates, including fish, reptiles, and amphibians, have a forebrain and a diencephalon (thalamus), although they only have a pallium rather than a cortex (or nidopallium in the case of birds).

      2. Regardless of why it happened, it’s scientifically established that primary sensory processing (except smell) happens in the midbrain in fish, reptiles and amphibians, and in the forebrain for mammals and birds. Instrumental learning in all vertebrates seems to require a forebrain. (A fish whose forebrain has been ablated seems to behave routinely, but loses the ability to learn from experience.)

      3. Newborns are dominated by reflexive behavior, but their cortex is not completely non-functional; it’s just inefficient.

      4. Newborns are dominated by reflexive behavior, which hydranencephalic children never move beyond. You might be outraged, but I see no convincing evidence hydranencephalics are conscious. (Parent emotion is not scientific evidence.)

      5. We’re not conscious of what we’re not conscious of, including gaps in consciousness itself, such as between fragments of it. The anterior PFC appears to produce our impression of a unified experience.

      6. I don’t see the issue. Of course we’d expect neural signalling before conscious awareness. It’d be puzzling if it was otherwise.

      1. I don’t view your replies as a rehashing, Mike, but as a perplexing series of dodges around the core issue of where in the brain conscious images are “displayed,” which I continue to believe is the subject under discussion.

        So, with great hopes of refocusing on the issue:

        1. I’m referring to the evolution of the forebrain and you responded with a statement about current anatomy.

        2. I’m asking for an explanation of how cortical consciousness evolved from its precursor, not why. Evolution doesn’t concern itself with Why’s.

        3 & 4. I’m referring to sentience, as in babies “feel[ing] being touched, held and hugged” and you responded with statements about behavior.

        Of course the cortex of newborns is “functioning”—the pruning of neural connections and learning to pattern match are both “functioning.” I had previously written that “… the cortex of newborns is undeveloped and must undergo a lengthy process of connective ‘pruning’ before achieving normal functioning,” where normal functioning, which relies on long term memory, is what we see in older children. Newborns don’t possess a store of long term memory.

        5. A unified single consciousness is the well-recognized integrated stream of consciousness. Your reference to “not [being] conscious of what we’re not conscious of” is a tautology that doesn’t invalidate William James’ observation, which is universally recognized as a factual description of conscious experience … the unified “movie-in-the brain”.

        6. Neural signaling occurs in both cases, the touch on the skin and the cortically stimulated touch alike. The issue is why the touch on the skin is felt first, because if consciousness is created by the cortex, the cortical stimulation should be felt first. And it’s not.

        So the score remains Conundrums 6, Cortical Consciousness 0. Please try again or consider that the cortical hypothesis is insupportable and your opinion stands in opposition to both the facts and the evidence. And have a nice day … 🙂

        1. 1. My point was that the evidence indicates that the earliest vertebrates had forebrains, albeit far less developed than mammalian ones.

          2. You’re assuming there was a precursor. That’s Feinberg and Mallatt’s view based on sensory processing. I’m far less certain than I used to be that pre-mammalian capabilities deserve the label “conscious”, but the point above is that key capabilities that might trigger our intuition of affect consciousness (sentience) appear to be in their forebrain.

          3-4. Behavior is the only evidence we get.

          5. I’ll let my previous reply stand.

          6. From a paper on this effect (note final sentence): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6397274/

          Our results are consistent with a previous observation in non-human primates that intracortical microstimulation of area 1 in primary somatosensory cortex results in significantly slower response times than peripheral stimulation19. This delayed response for DCS is counterintuitive at first, as one may suspect that bypassing the ascending peripheral afferents through DCS would reduce the distance traversed by the sensory volley and consequently result in faster reaction times. However, as previously suggested19, electrical stimulation may be exciting both inhibitory and excitatory connections in unnatural combinations, driving slower behavioral responses.

          That may not be the final explanation, but it’s not an existential conundrum.

        2. Mike, significant conundrum clarification seems necessary:

          1. Mammals evolved from a group of reptiles called the synapsids who might have had a smidgen of forebrain, but synapsids evolved from even earlier vertebrates with only a brainstem and the forebrain structure evolved subsequently. I cannot conceive of completely insensate zombie vertebrates roaming a hazardous world and living to reproduce, but perhaps you can.

          Perhaps you propose that over 300 million years ago some fertilized egg at some point mysteriously possessed the DNA for a complete brainstem-forebrain brain architecture that neither of its parents possessed, in which case neither parent would feel anything at all when a predator’s jaws crunched its spine … but the offspring would?

          You seem fundamentally resistant to accepting the definition that an organism that feels a single physical feeling, like touch or hunger or pain, is conscious. If that's not so, please explain what it means to feel.

          2. Every animal structure had a precursor and none of them has ever popped wholly into existence from some inexplicable major new section appearing in a creature’s DNA. DNA undergoes mutations, not wholesale complex additions. And, again, the only “pre-mammalian capability” required for consciousness is simple sentience—to feel a single physical feeling is to be conscious of that feeling.

          Your equating the phrase “affect consciousness” with sentience is suspicious. I suspect you’re using the phrase “affect consciousness” to refer to gut-level emotions rather than to fundamental physical feelings (sentience) like a touch or a toothache.

          3-4. You’re saying that, in the case of all newborns feeling a physical feeling, “Behavior is the only evidence we get.” I’m obliged to point out that behavioral evidence is always the only evidence we get for anyone and everyone’s consciousness! Recognizing consciousness in anyone but ourselves is always an inference—an inference primarily based on behavior but the strength of that inference is vastly increased by a great deal of biological similarity between ourselves and others. To deny sentience to newborns, both normal and hydranencephalic, on behavioral grounds alone is unsupportable and there’s overwhelming evidence of fundamental physical sentience in both cases.

          5. Originally I phrased this conundrum as, “The mechanism that produces a unified single consciousness from widely distributed cortical activity in two hemispheres.” Let me explain the problem in another way:

          Cortical processing is widely distributed based on functionality, i.e., the visual cortex resolves visual content, the auditory cortex resolves auditory content and Broca’s area resolves spoken language content, and so on and so on for a large number of cortical processing functionalities. Since the cortical consciousness hypothesis maintains that the cortex also produces the conscious “display” of that resolved content the conundrum is this:

          How are all of those (presumably) conscious images combined into the unified experience known as the stream of consciousness?

          This unified presentation problem is recognized as unsolved by cortical consciousness proponents, none of whom has proposed any cortical mechanism that would explain such an outcome. No such mechanism is required if a single localized structure, such as (perhaps) the brainstem’s reticular formation, is displaying integrated conscious images as a stream produced by the brainstem and supplemented by the cortex.

          6. I’ve duly noted that final sentence: “However, as previously suggested, electrical stimulation may be exciting both inhibitory and excitatory connections in unnatural combinations, driving slower behavioral responses.”

          These are not mere behavioral observations of button presses—the test subjects reported their experience (consciousness) of the touches by pressing a button. And the experimenters' "may be" declaration is a long way from "is"—this is their guess about what causes the delay, not an experimental conclusion.

          The researchers’ unrecognized bias towards cortical consciousness is glaring because a consideration of brainstem sentience (which never occurred to them) would make it obvious that the direct touch is felt first as a consequence of a shorter, more direct neural route to the brainstem, which can immediately “image” the touch, while the processing delay inherent in cortical resolution of the direct cortical stimulus, likely including memory access for pattern matching, explains the results perfectly.

          Conundrums 6, Cortical Consciousness (still) 0

          Mike, I sometimes get the impression that you and LeDoux and your ilk only wish to grant consciousness to immediate relatives and consciousness philosophers … and you aren’t so sure about your relatives. 😉

          1. 1. Forebrains in pre-mammalians:
            https://en.wikipedia.org/wiki/Brain#Vertebrates
            https://en.wikipedia.org/wiki/Cerebrum#Other_animals
            I also recommend Feinberg and Mallatt’s ‘The Ancient Origins of Consciousness’ or Gerald Schneider’s ‘Brain Structure and Its Origins’ (although the latter is pricey) for more detailed information on vertebrate brain evolution.

            2. Exteroception, interoception and affect are different things. Exteroception and interoception, in and of themselves, have no valence. That comes with affects: hunger, pain, fear, pleasure, etc.

            On the rest, Stephen, you chronically misrepresent my positions. Which is strange since they can be trivially checked by scrolling up. It makes what could be interesting discussions with you exasperating. Anyway, I’ll let what I’ve said stand for 3-6.

          2. Mike, I’m sorry that you feel like I chronically misrepresent your positions and I would much appreciate learning from you the specifics of any misrepresentations you have identified. For my part, I’ve had difficulty understanding your responses which sometimes seem to ignore the point at issue. I’ve responded to my difficulty in understanding with attempts to clarify my own meaning in hopes that you’ll do the same. That being said, let’s turn once again to the conundrums list.

            1 and 2: Upon rereading several brain evolution papers, a subject I haven’t paid attention to for over a year, I believe that the subject itself and any and all conclusions are far too speculative to contribute substantially to our discussion. I suspect that the authors of those papers would agree. I hereby withdraw these two from the conundrum list.

            3-4. I'm comfortable combining these two puzzles. I believe my example from today of a touch to the palate—a self-administered tongue touch—is to the point here. If newborns with little-to-no cortical development, and those born without a cortex, feel a purely physical non-valenced touch similar to that, then, per the BRASH definition, they're conscious. Of course, as always, we can only observe their behavior, but I suspect that if an experimenter or parent were to touch their not-sleeping newborn's skin, a behavioral response (all that we can ever observe) would indicate that the touch was felt. If not, that would support the proposal that all newborns are not conscious. It's immaterial to me if that's your own position, but I suspect you'd find yourself contradicted by almost all parents and, more likely, all of them.

            5. I further explained the “unified stream of consciousness” conundrum in today’s comment for you and James.

            6. The experiment you cited confirms, but does not in any way explain Libet’s timings. What you have taken as an explanation is a guess made by the experimenters, whom I believe can confidently be assumed to be cortical consciousness proponents. But no evidence of any kind is cited to support their guess.

            So, let’s call it Concessions 3, Conundrums 3 and Cortical Consciousness 0 … 😉

            My dog, Poquita Loca, stayed overnight at the vet, a new vet for us in whom I have great confidence. She called yesterday evening with a status and a list of further testing but I’ve yet to have an update. Poquita has been on insulin for three years (twice as much as should be used, but ordered by the previous vet) and she stopped eating anything but her beloved beef rib bone last Friday afternoon. We have high hopes. Thanks for your good wishes.

    2. Mike and Stephen, have you considered the possibility that both the neocortex and the brainstem are components of separate mechanisms that can be considered conscious? Mike would not consider the brain stem conscious because it does not have the higher level cognitive abilities he finds in the cortex-associated mechanism. Stephen recognizes that there are conscious-type capabilities associated with the brain stem that are dissociated from cortex.

      There’s nothing that says a brain can only have one conscious mechanism. When people speak of “one unified field of consciousness”, maybe they’re simply referring to exactly one of those mechanisms, the one referred to by Damasio as the autobiographical self, which is the one that has access to words, which is the one that requires the functionality of the cortex.

      *

      1. James,
        It all comes down to how you define “consciousness”. Using your input-mechanism-output definition, it is conscious, but then, it seems, so is a plant.

        Under the hierarchy I posted last week…
        1. Reflexes
        2. Perception
        3. Valence directed behavior
        4. Deliberation
        5. Introspection
        …all we see in the brainstem (as is customary, I’m including the midbrain region in “brainstem”) is layer 2, and relatively low level layer 2 at that. In terms of vision, it’s colorless low resolution imagery that we have no introspective access to.

        You can call that “consciousness”. Some do, as in the paper James Cross shared on another thread. But we should be clear what’s missing.

      2. James, I appreciate your comment and your contribution to the discussion Mike and I are having. I’ve been slow to respond because my unquestionably conscious 12 year-old mini-Schnauzer has been in need of some veterinary attention.

        Mike, I’m including my response to your recent comments here as well …

        If you’ve read my previous comments, you’re familiar with the BRASH proposal which is largely based on the information from Damasio’s The Feeling of What Happens, his paper “Consciousness and the brainstem” (link provided above) and Merker’s “Consciousness without a cerebral cortex” available here:

        https://www.semanticscholar.org/paper/Consciousness-without-a-cerebral-cortex:-a-for-and-Merker/035bc51a8b0c6bf8eba1000576e567f179ddaeb6

        James, your comment indicates that you’re familiar with Damasio’s paper. I find the two-way neural connectivity between the brainstem and the cortex that’s detailed simply astonishing and the role of the brainstem in cortical activation most suggestive.

        The “one unified field of consciousness” you’re referring to is simply the single unified consciousness we all experience, with sight, sound, proprioception, feelings, thought and so on all integrated into the unified stream of consciousness, the “movie-in-the-brain.”

        The role that BRASH assigns to the cortex is story processor, or Story Engine (TM) … (that “TM” is a joke) … whose processing is closely integrated with other brain structures, very dependent on long-term story memory and influenced by overall neurochemistry. But BRASH credits the cortex with the production of pre-conscious images rather than the final conscious images we experience, since BRASH assigns that functionality to the brainstem. The problem, the conundrum, presented by all cortical consciousness proposals is that no mechanism for integrating the conscious outputs of widely distributed cortical functionality (the processing of vision, hearing, language, emotional feelings and so on) has been either identified or proposed. That’s why it’s a well-known named conundrum. Adding brainstem consciousness production to cortical consciousness only further exacerbates the difficulty.

        Perhaps, as Mike has responded, “It all comes down to how you define ‘consciousness’” so let’s look at that. I disagree that we face an insurmountable problem with that fundamental definition as you can see from BRASH’s definition (from Damasio and others) of consciousness, namely, that consciousness is a feeling. Overall, consciousness is “the feeling of what happens.” As Mike pointed out, the input-mechanism-output definition includes plant tropisms (phototropism, geotropism, chemotropism, etc.) as well as simple cells and combinations of cells that respond to environmental conditions and organic/inorganic chemicals. Your definition also doesn’t require a central nervous system—an elaborate structure that is found in every known instance of consciousness, which is always organic, animal consciousness.

        I added to the BRASH definition of consciousness in a previous comment that "[i]f the (as yet unknown) brain structure of an organism is physically configured as a feeling and the feeling is felt by the organism, the organism is conscious." Neither the basic definition (consciousness is a feeling) nor this further specification has been commented on, except for Mike's comments that seem related to the BRASH definition but consistently add more complex emotional processing with words like "valence" and "affect consciousness." **

        In contrast, the BRASH definition refers to the most fundamental physical feelings and, to provide an example that I'm sure has no valence-y connotations—no "good"- or "bad"-ness—I propose the feeling of a tongue touch to the palate, the roof of the mouth in humans and other mammals. If that touch is felt by the organism, the organism is conscious, by definition.

        Mike, I’ve been curious to learn just what it is that your hierarchy actually is and I believe your comment should be taken to mean that you propose it as a definition of consciousness. If you can confirm that’s your intention, then I’ll eventually respond to your recent “Layers of consciousness, September 2019 edition” post where you imply instead that it’s a “conception” of consciousness. I’m not sure what your use of “conception” implies … is it a developmental, evolutionary, or functional hierarchy—or something else altogether?

        **”Valence, as used in psychology, especially in discussing emotions, means the intrinsic attractiveness/”good”-ness (positive valence) or averseness/”bad”-ness (negative valence) of an event, object, or situation. The term also characterizes and categorizes specific emotions. … Joy has positive valence.”

        1. “Mike, I’ve been curious to learn just what it is that your hierarchy actually is and I believe your comment should be taken to mean that you propose it as a definition of consciousness.”

          Stephen,
          Not quite. It’s actually meant to be somewhat theory neutral. (I say “somewhat” because it’s inherently physicalist.) It is meant to imply a number of definitions I’ve seen people put forward and show them in the hierarchy of increasing capabilities. Its original purpose was to show that some definitions (such as interaction with the environment) omit a lot of what people intuitively attribute to consciousness.

          It’s also meant to convey why I don’t think there’s actually any fact of the matter on the definition. I can see arguments that consciousness requires introspection (fitting John Locke’s old definition for it). I can see an argument that it equals having a sensorium, or that some sort of felt valence is necessary. I can’t see any argument that one of those is the one true definition.

          Much of the difficulty, I think, is that consciousness is a pre-scientific inherently dualistic notion that doesn’t have a direct correlate in the objective world. We can empirically establish the behavioral capabilities of fish, amphibians, reptiles, mammals, birds, insects, etc. But agreeing on which of those capabilities either make or fail to make them conscious? It feels like an increasingly futile effort.

          I often get responses from people that there’s no difficulty. We all just need to agree on a definition. Given the history of this subject, that feels similar to saying we just need to agree on world peace.

  7. In the first comment above, it says:

    To recap: I quoted from Merker the obviously conscious behaviors of decerebrated mammals and, in response, you stated that decerebrated mammals only demonstrated reflexive behaviors.

    And should have been:

    To recap: I quoted from Merker the obviously conscious behaviors of decorticated mammals and, in response, you stated that decerebrated mammals only demonstrated reflexive behaviors.

    Thanks for your understanding … I’ll hire a new editor … 😉

  8. Mike, I’m commenting at the root level again for the additional space it provides. Regarding your hierarchy you wrote:

    “Its original purpose was to show that some definitions (such as interaction with the environment) omit a lot of what people intuitively attribute to consciousness. It’s also meant to convey why I don’t think there’s actually any fact of the matter on the definition.”

    As a spot-on exposition of the state of those definitions, I strongly recommend P. M. S. Hacker’s paper “The Sad and Sorry History of Consciousness,” which can be found fifth on the list of available Hacker papers at:

    http://info.sjc.ox.ac.uk/scr/hacker/DownloadPapers.html

    I’ve recommended Hacker’s paper previously on Schwitzgebel’s “The Splintered Mind” so you may have already read it. If not, and for those who haven’t, here is a sample of what’s within:

    “The English word ‘conscious’ is recorded by the OED as first occurring at the beginning of the seventeenth century, when, like the Latin ‘conscius’, it signified sharing knowledge with another or being witness to something. In its early forms, it occurred in phrases such as ‘being conscious to another’ and ‘being conscious to something’. But sharing knowledge rapidly evolved into being privy to unshared knowledge, either about others or about oneself. So ‘to be conscious to’ quickly became a cousin to the much older expression ‘to be aware of’.”

    and …

    “The expression ‘conscious’ was introduced into philosophy, almost inadvertently, by Descartes. … The expression and attendant conception, caught on among Descartes’ contemporaries and successors (Gassendi, Arnauld, La Forge) and among English philosophers (Stanley, Tillotson, Cumberland and Cudworth). But it is to Locke, almost fifty years later, that we must turn to find the most influential, fully fledged, philosophical concept of consciousness that was to dominate reflection on the nature of the human mind thenceforth. The attendant conception was to come to its baroque culmination (or perhaps nadir of confusion) in the writings of Kant and the post-Kantian German idealists.”

    Of course, as Hacker points out, the confusion we’re all suffering from results from the all-over-the-map definitions provided by Consciousness Philosophy—philosophical conceptions of consciousness which, as you say, are generally lacking much in the way of “fact[s] of the matter.”

    Note that, in marked contrast, the definition of consciousness anchoring the BRASH proposal, that consciousness is a feeling, is a definition provided by António Damásio, a neuroscientist. Consciousness is precisely sentience in the strict meaning of that word. As such, the definition is clear and precise and doesn’t suffer in any way from the philosophical baggage, from seventeenth-century Western philosophy onward, that Hacker identifies.

    My qualification that an organism that feels any physical feeling is conscious simply extends Damásio’s definition. I use the example of feeling a physical touch because it clearly and unambiguously communicates what is meant by the definition in a way that everyone can understand. From that understanding, BRASH continues to explain that, as I wrote above:

    “As regards consciousness as a feeling, although most people conceive of feelings as being physical (body associated) feelings such as pain, touch, temperature (cold/hot) and the like, all of the contents of consciousness are feelings, including sight, hearing and, indeed, thought itself.”

    I think it’s obvious then that emotions are feelings as well—the valenced affect consciousness you’ve mentioned. In short, to say that “the brain produces consciousness” is the same as saying that “the brain produces feelings.”

    Without having read them all, I suspect that all of the consciousness discussions on your blog have been philosophical in nature, or at least rooted in philosophical conceptions, with considerable accurate anatomical and neuroscientific information included to make a point or bolster one or another of the philosophical positions. As such, it seems your quintuple hierarchy is meant as an aid to understanding—a light in the darkness meant to help you and the rest of us to find our way through the immense confusion.

    But the Damásio/BRASH definition requires no hierarchy because it simply says that consciousness is sentience, so the component at number three of your hierarchy becomes the only item on a one-item list. I’ll hazard a guess that, for your blog’s regular audience, the simplicity of that definition is unwelcome because it precludes or end-runs all the philosophical discussions that are so much fun. That’s just a guess though—I’d really like to learn what the objections to neuroscientist Damásio’s definition actually are from any who read this comment and wish to contribute to this discussion.

    1. Stephen,
      I did read Hacker’s paper back when you shared it on Eric’s blog. A lot of its points resonate with my own conclusions. I am interested in the history of this subject though, so I might make another pass through it.

      Your definition may be based on Damasio’s, and he is a neuroscientist, but he’s far from the only neuroscientist to produce a definition. From what I can see, they’re no better at coming up with a consistent one than the philosophers, although theirs tend to be more grounded. But the definitions of Stanislas Dehaene, Christof Koch, Michael Graziano, Michael Gazzaniga, V.S. Ramachandran, Elkhonon Goldberg, and many others, are all different.

      “Without having read them all, I suspect that all of the consciousness discussions on your blog have been philosophical in nature”

      If you look in the Mind and AI category, most of what you see there will be neuroscience related. I do the occasional philosophy post (usually knocking down some thought experiment), but I’m more focused on the science than the philosophy. That said, the audience for straight up neuroscience is limited. In our conversation above, I think part of the confusion is that I was giving neuroscience answers rather than philosophical ones. (The neuroscience is far less controversial than the philosophy. It’s much easier to establish whether fish can learn to avoid aversive stimuli than it is to interpret what that means for any internal experience.)

      On simplicity, after reading dozens of books on this subject, I now seriously doubt there will be one simple theory that answers it all. My experience is that the “simple” ones are vague and the precise ones are incomplete. Just as the search for the old élan vital of biology resulted not in one theory but in a galaxy of microbiological and organic-chemistry models, I think consciousness studies will result in a wide range of interacting theories.

  9. Mike, a cursory look-see at the neuroscientists you mention, from Dehaene to Goldberg, indicates that none of them other than Gazzaniga has provided a definition of consciousness. Remarkable but apparently true! Here’s some info I quickly gathered from Wikipedia and other sources:

    Gazzaniga writes that “Consciousness is the word we use to describe the subjective feeling of a number of instincts playing out in time in an organism” and “Whatever captures our attention at that moment is what exists in our consciousness.” and “By calling it an instinct, I’m saying whatever it is we’re talking about, it comes with us.” So he agrees with Damásio about the “feeling” part, but the claim that “instincts and/or memories” provide the content of consciousness isn’t credible—how are physical feelings like a touch on the skin always instinctive or always remembered?

    Dehaene supports GWT: “When we say that we are aware of a certain piece of information, what we mean is just this: the information has entered into a specific storage area that makes it available to the rest of the brain” and “The flexible dissemination of information, I argue, is a characteristic property of the conscious state.”

    Koch apparently believes in a modern variant of panpsychism, i.e., that some form of consciousness can be found in all things. I believe this is Tononi’s IIT, which “differs from classical panpsychism in that it only ascribes consciousness to things with some degree of irreducible cause-effect power, which does not include ‘a bunch of disconnected neurons in a dish, a heap of sand, a galaxy of stars or a black hole …’” but, I notice, it is apparently not limited to biological consciousness, the only kind of consciousness that’s ever been experienced or inferred.

    Graziano’s Attention Schema Theory (AST) “… seeks to explain how an information-processing machine could act the way people do, insisting it has consciousness, describing consciousness in the ways that we do, and claiming that it has an inner magic that transcends mere information-processing, even though it does not.” As far as I could determine, Graziano fails to define consciousness.

    Ramachandran sees the self and qualia as intertwined. Without the self, he thinks that there would be nothing that experiences the qualia, and without the experiencing of the qualia there would be nothing to identify as self. Ramachandran defends the now unfashionable view that animals, including great apes, are not conscious. His exact views on this are a bit fuzzy, as he seems prepared to concede a raw background awareness but not a self.

    Goldberg says consciousness is biological: “… one’s brain is part of one’s physical body” but then focuses on “Goldberg’s Orchestra,” wherein a “…. metaphor of ‘cerebral symphony’ is used by Goldberg as he introduces the front rows (the cortex) and the conductor (the frontal lobes). Goldberg develops a rationale for cerebral asymmetry and specialization that goes well beyond that necessitated by early observations of lateralized language skills.”

    So much for those neuroscientists (not) defining consciousness and providing consciousness hypotheses.

    Mike, you wrote that you “… seriously doubt there will be one simple theory that answers it all. My experience is that the ‘simple’ ones are vague and the precise ones are incomplete.”

    I’ll close this comment about neuroscientists’ hypotheses about consciousness with the opinion that none but Damásio appears to be proposing a genuine scientific hypothesis.

    To be continued …

  10. In marked contrast, Damásio clearly defines consciousness as a feeling, with core consciousness as the feeling of being embodied and centered in a world, and extended consciousness as the cortically elaborated consciousness of mammals and, most certainly, primates, obviously including ourselves. He further provides consciousness hypotheses, summarized here from his paper “Consciousness and the brainstem” [all italics his]:

    “The following hypothesis captures the solutions we propose to answer it: core consciousness (the simplest form of consciousness) occurs when the brain’s representation devices generate an imaged, nonverbal account of how the organism’s own state is affected by the organism’s interaction with an object, and when this process leads to the enhancement of the image of the causative object, thus placing the object saliently in a spatial and temporal context. The protagonist of core consciousness is the core self, the simplest form of self.”

    And his hypothesis about the “self”:

    “The proto-self occurs not in one brain region but in many, at a multiplicity of levels, from the brainstem and hypothalamus to the cerebral cortex, in structures that are interconnected by neural pathways. These structures are intimately involved in the processes of regulating and representing the state of the organism, two closely tied operations. In short, the proto-self is a coherent collection of neural patterns which map, moment by moment, the state of the physical structure of the organism in its many dimensions.

    It should be noted at the outset that the proto-self is not the sense of self in the traditional sense, the sort of self on which our current knowing is centered, that is, the core self (the protagonist of core consciousness), and the autobiographical self (the extended form of self which includes one’s identity and is anchored both in our past and anticipated future). The proto-self is the pre-conscious biological precedent of both core and autobiographical self.”

    *** +++ ***

    BRASH follows Damásio closely, while differing from his thinking by characterizing all cortically produced images as pre-conscious, whereas Damásio believes that some cortically produced images are in and of themselves conscious, a position BRASH rejects as incorporating the Unity of Conscious Presentation conundrum common to all cortical consciousness proposals.

    Mike, one thing you and your blog commenters have not done is register an opinion of BRASH and its several conceptual definitions and descriptions. That’s the discussion I would truly appreciate. In hopes of seeding the responses, the fundamental concepts of BRASH are:

    Consciousness: After Damásio, consciousness is a feeling composed of multiple tracks of feelings corresponding to all sensory tracks, additionally identifying thought as a feeling, both thinking in words and thinking in pictures. The embodied character of thought is obvious when we consider that thinking in words is vocalization-inhibited speech—physical subvocalizations that can, by the way, be detected—and thinking in pictures is sight-inhibited vision.

    Neural Tissue Configuration (NTC): A configuration of neural tissue that “produces” a feeling, in that a particular configuration IS a feeling. In contrast to thought feelings, physical (body-localized) feelings are localized by the configuration’s location in a body map. As an example, an NTC does not produce the feeling of a touch—it IS the feeling itself, such as a touch on your upper arm.

    Memory: Multi-sensory stored gestalts that are stories. Our pattern-recognition intelligence compares those remembered stories to make predictions and implement creativity.

    Intelligence: Pattern matching—pattern recognition. Pattern matching is immature in newborns but, after cortical “training” and the accumulation of a usable long-term memory store, pattern matching occurs during the resolution of sensory input so that, for instance, we typically see what we expect to see rather than what is actually in the world. (In this context, the name of your blog, “SelfAwarePatterns,” is most suggestive.)

    Story: See the extensive description of Story I provided in my comment beginning “The Collins Dictionary”.

    Story Engine: The role that BRASH assigns to the cortex is story processor, or Story Engine. The cortical Story Engine operates unconsciously and constitutes 98% or greater of all cognitive operations. The Story Engine IS the unconscious as we understand it.

    Autobiographical Self: After Damásio, the autobiographical self is “the extended form of self which includes one’s identity and is anchored both in our past and anticipated future.” Importantly, per BRASH, the autobiographical self is a Story, the ongoing story of our life.

    Expectation: Also Prediction, the outcome of a Story pattern match as in, for example, the incoming sensory story A ⇒ B pattern-matching the remembered story A ⇒ B ⇒ C and yielding the conscious expectation of C. (See the toy sketch after these definitions.)

    Imagination: Also Creativity, wherein the remembered story A ⇒ B ⇒ C combines with another remembered story B ⇒ E, allowing E to be unexpectedly promoted to consciousness in preference to C.

    Dreams: Cortically produced stories that “leak” into consciousness during sleep. Dreams are unconstrained by sensory input and the need to create the conscious simulation that sustains our lives in our waking state, thereby accounting for their usually “bizarre” content.
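
    To make the Expectation and Imagination definitions concrete, here is a toy sketch in Python. The tuple representation of a Story, the function names, and the matching rules are my own illustrative assumptions, not part of BRASH and not a claim about any neural mechanism:

```python
# Toy illustration only: Stories as tuples of events, memory as a list of
# Stories. Nothing here is claimed as the actual neural mechanism.

def expect(incoming, remembered):
    """Expectation/Prediction: a remembered Story that begins with the
    incoming sequence yields its next element as the expectation."""
    for story in remembered:
        if story[:len(incoming)] == incoming and len(story) > len(incoming):
            return story[len(incoming)]          # A => B matches A => B => C, expect C
    return None

def imagine(incoming, remembered):
    """Imagination/Creativity: splice a second remembered Story onto the tail
    of the incoming sequence, promoting an outcome never experienced as a whole."""
    tail = incoming[-1]
    expected = expect(incoming, remembered)
    for story in remembered:
        if len(story) > 1 and story[0] == tail and story[1] != expected:
            return story[1]                      # B => E promotes E in preference to C
    return None

memories = [("A", "B", "C"), ("B", "E")]
print(expect(("A", "B"), memories))   # -> C   (conscious expectation)
print(imagine(("A", "B"), memories))  # -> E   (creative recombination)
```

    In this toy form, Expectation is the completion of a partially matched remembered Story, while Imagination splices two remembered Stories into a sequence the organism has never experienced as a whole.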

    As regards functional neuroanatomy, BRASH proposes that the brainstem produces consciousness via the NTC mechanism and the cortex is overwhelmingly responsible for the resolution of the contents of consciousness. This assignment of functionality is the only element of BRASH that has been discussed.

    But now, Mike, how about discussing all the rest?

    1. Stephen,
      I think if you read the other neuroscientists at length, you’d find their theories harder to summarily dismiss. After learning the details of one theory that might seem plausible, it’s very easy to fall into a rut of assuming every other theory is wrong. Learning about several theories, and the reasoning that drives them, reveals the degrees of freedom that still remain in the data. (Those degrees of freedom are less than what is typically asserted by people with exotic physics or super-physical theories, but still too broad to claim certitude for any one grounded version.)

      On your theory, I think I’ve mentioned this before, but you’re putting a lot of work on the NTC concept, papering over what I see as crucial details. There is, at least:
      1. The incoming sensory signals
      2. The representation built from those signals
      3. The utilization of the representation for action, or potential action
      4. A representation built from 3
      5. The utilization of the second representation for prediction
      Your description of the NTC could be that entire stack, or just the final stage, where I think the actual experience of the feeling happens.
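
      To make those stages easier to point at, here’s a rough sketch in Python. The stage functions and the toy data are just my shorthand for the numbered list above (an illustration of why the ambiguity matters, not a model of the brain):

```python
# Rough sketch of the five-stage stack listed above. The stage names and
# data are shorthand for that list, not any theory's actual terms.

def sensory_signals(stimulus):                 # 1. incoming sensory signals
    return {"raw": stimulus}

def first_order_representation(signals):       # 2. representation built from those signals
    return {"percept": signals["raw"]}

def use_for_action(representation):            # 3. utilization of the representation for (potential) action
    return {"percept": representation["percept"],
            "action_plan": "withdraw from " + representation["percept"]}

def higher_order_representation(state):        # 4. a representation built from stage 3
    return {"about": state}

def use_for_prediction(meta):                  # 5. utilization of the second representation for prediction
    return "prediction: " + meta["about"]["action_plan"] + " will reduce the signal"

def full_stack(stimulus):
    # "NTC" could name this whole pipeline, or only the final stage below,
    # which is where I'd locate the actual experience of the feeling.
    return use_for_prediction(
        higher_order_representation(
            use_for_action(
                first_order_representation(
                    sensory_signals(stimulus)))))

print(full_stack("a sharp poke on the arm"))
```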

  11. Mike, I don’t assume every other theory is wrong, but those I’m familiar with aren’t scientifically credible. In my view the theories mentioned (and others not discussed) leave a great deal to be desired in terms of proposing a mechanism for the production, resolution and final “display” of conscious images. Take GWT, for instance, where cortical processing results in some unspecified resultant making it to some “global workspace” (presumably cortical, but otherwise undefined) and then—Shazam!—consciousness happens! Or how about IIT, where “information” of some unspecified kind becomes integrated to some mathematical degree and then—Shazam!—consciousness happens! These hypotheses, which also fail to provide a definition of consciousness before Shazam-ing it into existence, are scientifically unacceptable. If you know of a hypothesis other than Damásio’s that’s anchored in a realistic, biologically centered definition of consciousness and that proceeds to theorize a credible mechanism for the processing, resolution and “display” of conscious feelings, please advise.

    Regarding your “papered over” list, note that BRASH is an extension of Damásio’s overall view, which is itself fully consistent with the accepted findings of neuroscience about brain functionality. I’ve additionally specified the Story as the logical unit of the brain’s processing and explained that current input to cortical processing, sensory and otherwise, is accumulated in real time into a developing Story that is pattern-matched against memories, which are stored and retrieved as multi-sensory Story gestalts. The pattern-recognition process drives the unconscious development of action plans from the remembered “outcome” of the matched Story or Stories. I precisely defined expectation/prediction in Story terms and additionally accounted for novelty arising from Story pattern matching as imagination/creativity. NTC is a suggestion for the cellular-level production of a conscious feeling and is the final “output” level of BRASH—the actual “display” of conscious feelings.

    Although I haven’t said so previously, let me add that the formation of NTCs—of conscious feelings—can influence resolved action plans and result in precision physical adjustments, as in, for instance, sporting activities like the catching of a curveball. As a consequence, this feedback from consciousness means that the philosophical “zombie” is impossible: because consciousness can change the brain, the conscious person and his exact duplicate would not remain identical. I also propose that in-progress action plans can be influenced in an inhibitory way by conscious feedback.

    I trust that this overall explanation clears up the misunderstanding that the NTC does a lot of the work since, as I explained, the NTC is only the output (the feelings) level of BRASH.

    I don’t understand what you mean by “degrees of freedom” in neuroscientific data—are you referring to a potential theoretical space? If so, those degrees haven’t yet yielded any credible hypotheses other than this one, which, in recognition of its origins, I’ll generalize as the Damásio-BRASH theory.

  12. To the definition of Consciousness, i.e., a feeling composed of multiple tracks of feelings corresponding to all sensory tracks, let me add something I have insufficiently stressed: feelings are simulations of exteroceptive and interoceptive events. Simulations are not at all like the events in the world that the feelings represent; for instance, the feeling “sound” is completely unlike the vibrations—sound waves—that usually move through the air but propagate through liquids and solids as well. But I recall mentioning that the room that is throbbing with your favorite Evanescence track is actually completely silent.

    Consciousness is a simulation of being embodied and centered in a world.

    Thanks, Mike, for encouraging me to present an organized written version of BRASH and related topics and issues, and thanks for the blog space in which to do it. I’ve created a PDF of all of the relevant comments for a consciousness reference (should one be needed) in “Einstein’s Breadcrumbs” … and, by the way, BRASH is compatible with the block universe of relativity physics, which we’ve discussed previously.

    1. Thanks Stephen. I’ve mentioned it before, but I do think you should consider your own blog. You would decide the subject matter, and the content would be discoverable by search engines. But it’s not for everyone, so I totally understand if it’s just not your cup of tea.

      1. My own blog? After nearly four years, I’m still focused on the topic of Consciousness in the Block Universe, the research for which is obviously concentrated in those two areas and the creative part of which lies in trying to understand the meaning of their synthesis. Any blog I created would be similarly focused and would very likely become an unwelcome timesink in the highly unlikely event that it acquired readers. And I’m not sure that, after the second or third post, I’d still be reading a blog I wrote … 😉

        I’ve been hoping to learn from critiques of the accuracy and value of the raft of definitions I’ve provided here, as well as of the overall conceptual integrity of Damásio-BRASH, and I’d hoped that the SelfAwarePatterns commenters who have promoted their own definitions, diagrams, and schemes for consciousness would have weighed in. But that hasn’t happened, perhaps because of my off-putting scientific and biological focus as opposed to the several philosophical theories proposed here. I agree with Hacker, though, that empirical science is where we anchor our quest for knowledge and that philosophy’s role as a cognitive discipline is to contribute to our understanding:

        “… philosophy can contribute in a unique and distinctive way to understanding in the natural sciences and mathematics. It can clarify their conceptual features, and restrain their tendency to transgress the bounds of sense. It is a Tribunal of Reason, before which scientists and mathematicians may be arraigned for their transgressions. Indeed, the sciences (and to a lesser degree mathematics), in our times, are the primary source of misguided metaphysics—which it is the task of philosophy to curb, not to encourage.”

        I do have some comments to contribute about Everett’s MWI, so I’ll see you soon on your post about Sean Carroll’s Something Deeply Hidden.
