Mark Solms’ theory of consciousness

I recently finished Mark Solms’ new book, The Hidden Spring: A Journey to the Source of Consciousness. There were a few surprises in the book, and it had what I thought were strong and weak points.

My first surprise was Solms’ embrace of the theories of Sigmund Freud, including psychoanalysis. Freud’s reputation has suffered a lot over the decades; Solms thinks this is undeserved. He also complains about the behaviorist movement, which largely arose as a reaction to Freudian-type theories and an attempt to put psychology on a more empirical footing. In Solms’ view, this came at the cost of removing the first-person perspective from the field, something he doesn’t think has been adequately corrected even in the post-behaviorist era.

What wasn’t a surprise is Solms’ belief that consciousness is rooted in the brainstem, something he’s well known for. In particular, he sees it centered on three midbrain regions, which he refers to as the “decision triangle”: the PAG (periaqueductal gray), the superior colliculi, and the midbrain locomotor region. His reasoning is similar to Bjorn Merker’s idea of consciousness as the final integration for action, with this triangle serving as the final decision point and the PAG in particular at its center.

However, Solms’ views are nuanced. He sees consciousness rooted in affect, in particular the conscious feelings of emotions and drives, which in his view originate from and terminate back to these regions. But the consciousness he sees here is affect consciousness. He acknowledges that perceptual consciousness is a cortical phenomenon, albeit one that only exists when the cortex is suitably aroused by the RAS (reticular activating system), another brainstem region.

This is a minority view in neuroscience, although the differences with the mainstream are also nuanced. All neuroscientists agree that the cortex is aroused by the RAS. They also agree that the most basal drives originate in the brainstem, and even that the final integration for action takes place in the midbrain. But most see conscious experience as a cortical phenomenon, with subcortical forebrain regions like the amygdala, nucleus accumbens, and hypothalamus, as well as the insular, cingulate, and orbitofrontal cortices, playing major roles in affective feelings.

In general, I didn’t feel like Solms adequately engaged with the reasons for these mainstream views. He largely went the route of strawmanning them, saying that they only exist due to “theoretical inertia”. Anil Seth in his review of the book linked to some of the reasons. I think Solms passed up an opportunity by not engaging with the broader literature.

It’s worth noting that whether or not consciousness requires a forebrain has little bearing on animal consciousness. The evidence from ontogeny, the fossil record, and model organisms all shows that the forebrain-midbrain-hindbrain architecture arose very early in vertebrate evolution. Any fish species you might commonly think of has a forebrain, including a pallium, the covering over the forebrain that’s the precursor to the mammalian cortex. The idea that the brainstem is the most ancient structure is a common misconception, one even some scientists appear to buy into.

Anyway, Solms also sees Karl Friston’s free energy principle as a major part of his theory, and he gives one of the best descriptions of that principle I’ve seen. Unfortunately my understanding of it remains somewhat blurry, but my takeaway is that it’s about how self-organizing systems arise and work. He identifies four principles of such systems:

  1. They are ergodic, meaning they only permit themselves to be in a limited number of states.
  2. They have a Markov blanket, a boundary between themselves and their environment.
  3. They engage in active inference, that is, they make predictions about their own states and the environment, based on that environment’s effects on their Markov blanket.
  4. They are self preservative, which means minimizing their internal entropy, maintaining homeostasis, etc.

Point 3 is understood to involve active Bayesian processing, in other words, predictions. All of which leads us to the predictive theory of the brain, which I think is where Solms is at his strongest. Perception involves the more central regions making predictions, which propagate toward the peripheral (sensory) regions. The peripheral regions then compare the predictions with incoming sensory information and send back prediction error signals. This happens across numerous layers. We see what we expect to see, with the incoming information forcing corrections.
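
To make the prediction-error idea concrete, here’s a toy sketch. It’s my own illustration, not Solms’ or Friston’s actual mathematics: a single “layer” holds a prediction, compares it against the incoming signal, and uses only the error to correct itself.

```python
# Toy predictive-coding loop (illustrative sketch only). The system starts
# with a prior expectation and lets prediction errors from the "sensory
# periphery" gradually correct it.

def predictive_step(prediction, sensory_input, learning_rate=0.1):
    """One cycle: compare prediction with input, return the error signal
    and a prediction nudged toward the evidence."""
    error = sensory_input - prediction            # prediction error
    return error, prediction + learning_rate * error

prediction = 0.5    # what the system expects to see
sensory = 0.9       # what actually arrives

for _ in range(30):
    error, prediction = predictive_step(prediction, sensory)

print(round(prediction, 3))  # approaches 0.9 as the errors accumulate
```

In a real hierarchy this comparison repeats at every layer, with each layer predicting the activity of the layer below and passing its errors upward.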

Solms notes that a self-evidencing system receiving information that violates its expectations can react in one of three ways.

  1. It can act to change conditions to bring the signals more in line with its expectations.
  2. It can change which representation(s) it’s currently using to make better predictions. This is perception.
  3. It can adjust the precision of the predictions to more optimally match the incoming signal.

Solms identifies the last as consciousness. This strikes me as essentially learning, which is in the same ballpark as Simona Ginsburg and Eva Jablonka’s theory, although Ginsburg and Jablonka require a more sophisticated form of learning (unlimited associative learning) than what Solms appears to be focusing on. The main point here for Solms is that this is the origin of a conscious feeling, an affect, which again is what he considers the root of consciousness.
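
Since “adjusting the precision” is the option Solms singles out, it’s worth seeing what that means computationally. In predictive processing accounts, precision acts as a weight on the prediction error, controlling how strongly the incoming signal is allowed to revise the model. A minimal sketch, my own gloss rather than Solms’ formalism:

```python
# Precision weighting (illustrative sketch). The same prediction error
# produces a small revision when its precision is set low, and a much
# larger revision when its precision is turned up.

def precision_weighted_update(prediction, signal, precision, lr=0.1):
    error = signal - prediction
    return prediction + lr * precision * error

prediction, signal = 0.2, 0.8
print(round(precision_weighted_update(prediction, signal, precision=0.1), 3))  # 0.206
print(round(precision_weighted_update(prediction, signal, precision=2.0), 3))  # 0.32
```

On the usual reading, turning up the precision on a channel is what it means to attend to it, which is why the learning and attention interpretations of this option sit so close together.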

Toward the end of the book, Solms provides an extended critique of David Chalmers’ description of the hard problem of consciousness. His main point is that Chalmers overlooked affects in his deliberations. If he hadn’t, maybe he wouldn’t find the problem quite so daunting. Given that the hard problem is often phrased along the lines of, “Why does it feel like something to experience X?”, I think Solms has a point. Although in my experience, talking about affects usually isn’t seen as sufficient by those troubled by the hard problem.

Solms finishes up by noting we won’t know whether we’ve solved the problem of consciousness until we can build an artificial consciousness. So he’s working on a project to do just that, incorporating the free energy principle model. What he describes sounds like it will be a sort of artificial life. He emphasizes that intelligence isn’t the goal, just a self-evidencing system concerned with its own survival. The problem will be finding a way to conclusively demonstrate success despite the problem of other minds.

If he succeeds, it sounds like he will immediately turn it off and try to patent the process to prevent it from falling into commercial hands. Aside from the ethical issues, he notes the danger in building self concerned systems. I usually think fears about artificial intelligence are overblown, but in this case, I agree with him on the danger. The good news is I don’t know how useful such systems would be for most commercial purposes anyway. Do we really want self driving cars or mining robots being worried about their own survival?

There’s a lot of interesting stuff in this book. I do think Solms makes a good point that affects, conscious feelings, are often overlooked in theories of consciousness. And I agree with him on their crucial role. But that role gains its power through the reactions of the perceptual and executive systems. Without those reactions, affects are little more than automatic action programs. They only become affects, conscious feelings, in a conscious system, which means making them the foundation of consciousness is a bit circular.

All of which brings us to a point I often return to, that consciousness is a complex phenomenon, one that can’t be reduced to any one particular property. Unless of course, I’m missing something?

This brief summary of Solms’ views omits a lot. If you’re interested in knowing more, aside from reading the book, he has his own blog, well worth checking out.

85 thoughts on “Mark Solms’ theory of consciousness”

  1. Yikes. So many interesting points. Gonna start with this:

    I think the main difficulty that most people have is that they think consciousness is monolithic. They think an individual has one consciousness, and so there must be one definition of consciousness which is correct. It’s kinda like the intuitive idea of “life”. An individual has one life, and so there must be one definition of life. The fact is we are made of many “lives”. Each cell in our body is a “life”, doing living things. We even contain transient lives with completely different (although not necessarily independent) genetics. “I contain multitudes.” (A book by Ed Yong, but apparently also a Dylan song.)

    I think consciousness is similar to life. I think the brain stem is doing conscious-type things, and so has its own consciousness. I think systems combine to do more complicated conscious-type things, creating meta-consciousness. I think it is just not useful to say only the more complex things are consciousness.

    *


    1. I agree with the life comparison. But to your point, the cell is generally considered the minimum unit of life. Proteins, DNA, or lipids are usually not regarded as alive. Although viruses complicate this picture, since they actively reproduce but don’t maintain their own homeostasis.

      What would you say is the minimum unit of consciousness? Can we meaningfully talk about a neuron being conscious? Whatever that minimum unit is, what distinguishes it from its components?


      1. Actually, I’m not so sure about the cell as being the minimum unit of life. I’m pretty sure some folks talk about metabolism happening in the “soup”, and the development of the enclosing lipid bi-layer was just a step along the way to complexifying. I’ll see if I can find a reference. (Gonna start w/ Terrence Deacon.)

        I think/propose the minimum unit of consciousness, the psychule, is a two-step process: the creation of a symbolic representation (vehicle) and the interpretation of said representation (vehicle). Given this, I would not say a neuron is conscious, as it is only part of one of the two necessary processes, although it could be either of the two, and in fact, is essentially both, serially. I think the major function of neurons is to interpret symbolic representations (neurotransmitters) by creating new symbolic representations (more neurotransmitters). It’s possible that there are psychules internal to the neuron, but I don’t know of any yet.

        The question becomes: to what entity do you ascribe the consciousness associated with a psychule? I ascribe the consciousness to the system which creates/coordinates both mechanisms, the symbol generator and the interpreter. Any given representation can have multiple interpretations, but two mechanisms can be coordinated to generate an “intended” interpretation.

        I’ll point out here that there are hierarchies of psychules. So my communicating this response to you counts as a psychule (assuming you read and interpret it). That would make us, you and me, potentially conscious as a unit, but the question is what value do you get by recognizing that. It would be more useful ascribing that consciousness to our community of English speakers.

        *


        1. The soup scenario would strengthen the argument that viruses are alive. If there are entities that sustain and reproduce themselves while depending on the environment for their homeostasis (either in a soup or an invaded cell) and we call that life, it does reduce the unit of life. But it also raises the question of where to draw the line. Are viroids alive? What about prions? Maybe a cutoff is whether it evolves, which would definitely include viruses, but I don’t think prions evolve.

          What counts as a symbolic representation (vehicle)? And what counts as interpretation?

          If two black holes merge and generate a pattern of gravitational waves, do the waves count as a symbol of the merger? If a billion years later LIGO detects the waves and generates a report, does that count as an interpretation?

          Or do we have to have a more permanent or repetitive arrangement? If so, does my laptop’s utilization of a site hostname as a reference to this site count as a symbolic representation? And does its utilization for various purposes count as interpretation? Would that mean my laptop and site together form a consciousness? Or your device and my site?

          I’m deliberately trying to avoid including people in these scenarios, because I think once we include something commonly accepted as conscious, it clouds the criteria.


          1. Actually, the answers to what counts as alive and what counts as a representation are related. The key ingredient would be teleonomic purpose, what Ruth Millikan refers to as proper function.

            So I say that a symbolic representation is a physical thing generated for the purpose of carrying mutual information w/ respect to a specific pattern, with part of that “purpose” being that the symbol be interpreted w/ respect to that pattern, as opposed to being interpreted w/ respect to mutual information with some other pattern.

            An interpretation by a mechanism is an action generated by the mechanism in response to the recognition of the symbol/vehicle, thus linking the action to a pattern with which the symbol shares mutual information.

            Note that any representation shares non-zero mutual information with many patterns. My go-to example is a sign handwritten in English in a remote Japanese village saying “Come inside for great food!”. This symbol has very high mutual info w/ respect to there being a restaurant inside, but it also has (probably somewhat less) mutual info with there being a fluent English speaker inside. The sign could be interpreted either way, but [hey! brand new thought here] the former could be considered part of the psychule, as that was probably the intention of the person creating the sign.
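
            (For concreteness, the standard definition I’m leaning on, writing X for the state of the vehicle and Y for the pattern, is

            $$I(X;Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}$$

            which is zero exactly when the two are statistically independent. That’s why nearly any physical vehicle shares some mutual information with many patterns, and why the generator’s purpose has to pick out which pattern an interpretation should target.)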

            The gravitational waves generated by merging black holes would be considered indexical rather than symbolic. Interpreting the waves as meaning “black holes colliding” is akin to what neurons do. Neurons “interpret” neurotransmitters as “yep, that neuron just fired”. A symbol is an arbitrary construct which has no connection to the thing symbolized beyond the intent of the generator. So the gravitational waves would not be a symbolic representation unless you postulate some alien mechanism trying to communicate by banging black holes together.

            *
            [this is great practice for me, so, next question]


          2. I don’t really have any additional questions at this point. Although some may occur to me later.

            I think my reaction, which is similar to the reaction I have with most liberal conceptions of consciousness, is that it seems to be missing a lot of characteristics of the systems we intuitively think of as conscious, such as perception, feelings, attention, or volition. And it lets a lot of things into club consciousness we don’t intuitively think of as conscious. Is there an argument on why that shouldn’t be a concern? (Well, I guess this is a question after all.)


          3. Of course this description is missing a lot of characteristics of the systems we intuitively think of as conscious, just as a description of organic molecules is missing characteristics of living things. My point is whatever you or someone else decides is “consciousness”, it’s going to be made of psychules organized in various ways.

            For example, I’ve talked about unitrackers, which are essentially pattern recognition units. A unitracker is a prime example of a mechanism that generates a symbolic representation. Unitrackers can also be interpreters. You can have unitrackers whose outputs are actions, like “duck”. They can take input from other unitrackers, such as those for suddenly looming objects. The interpretation would be something to the effect of action:duck.
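
            A cartoon of that chain, with made-up names and a made-up threshold, just to show the shape of the idea:

            ```python
            # Cartoon of a unitracker chain (made-up example). One unitracker
            # recognizes a pattern and emits a symbol; a second mechanism
            # interprets the symbol by producing an action. Together: one psychule.

            def looming_unitracker(visual_input):
                """Emits the symbol 'LOOMING' when its pattern is recognized."""
                return "LOOMING" if visual_input.get("expansion_rate", 0) > 0.8 else None

            def interpreter(symbol):
                """Interprets a symbol by linking it to an action."""
                return {"LOOMING": "action:duck"}.get(symbol)

            signal = looming_unitracker({"expansion_rate": 0.9})
            print(interpreter(signal))  # action:duck
            ```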

            I’ve also talked about semantic pointers. These are also mechanisms which potentially take inputs from one or more unitrackers and generate symbolic representations. Semantic pointers seem to me to be a prime candidate for a global workspace. Multiple interpreters can respond to/interpret the same representation.

            The main reason that I think understanding psychules is useful is that it provides a physical basis of “aboutness”, and when you take it to the level of unitrackers you get an explanation of “qualia”.

            *


          4. In terms of the psychules, wouldn’t you say that any information processing system is going to include them?

            I do think unitrackers and semantic pointers are productive concepts, although my conception of the semantic pointers is much more distributed and decentralized than yours. In terms of aboutness, it seems like they’re explaining the details, but not getting at what these relationships fundamentally are.

            For that, the best explanation I’ve seen so far is prediction (or inference if you prefer). It’s what converts a homeostatic system to an allostatic one. And although I’m still fuzzy on the overall approach, it’s the part of the free energy principle that makes the most sense to me.


          5. In fact I would say that any information processing system includes psychules.

            As for a global workspace, I’m trying to figure out how a distributed system might work. To me it seems like the options are (to use a theater metaphor) a big screen on the wall (central) or a small screen at every seat (distributed). How is an observer going to get the information from multiple seats?

            Re free energy principle, see my comment below.

            *


        2. My side note.
          >“a symbolic representation is a physical thing.”
          Not always. For example, numbers are a symbolic representation of something, yet those numbers are not a physical thing. 2 + 2 = 4. That is knowledge. The bearer of this knowledge is the paper, a physical thing. The knowledge as written, in the form of ink dots, is physical too. However, a symbolic representation or interpretation of this knowledge depends on the interpreter and may not be a physical thing.
          A conscious child, who has not yet studied numbers and arithmetic, could interpret this knowledge very differently than we do. That is an important point by itself: a symbolic representation or interpretation depends on the prehistory of the interpreter.


          1. Hey Victor. I understand what you’re saying, but when I use the term for describing the minimum unit of a conscious-type process, I mean a physical thing. It’s what Charles S. Peirce calls a sign vehicle, specifically a symbolic sign vehicle, as opposed to an indexical or iconic sign vehicle.

            And I agree the interpretation depends on the history of the interpreter, as I explain in the further comments above.

            *


        1. Hi Dr. Michael,
          So, the question that comes to mind is, how would you describe the experience of red to someone born blind? Or a searing toothache or backache to someone who’s never experienced one? These seem like experiences where language commonly fails, but they remain experiences nonetheless. (I do think it’s possible in principle to provide such a description, but it would be an extremely long and complex one, and someone who’d already had the experience wouldn’t need it.)


          1. Are those experiences conscious ones though? I think we can experience seeing red as in:

            1. the experience of seeing light of a certain wavelength hitting the retina and then being processed by the brain

            or

            2. “I saw red and it felt like this…”

            The first one is an experience that remains below consciousness. As organisms we are aware of the light but I would not say we are conscious of it.

            In 2, we try and describe the sensation of seeing red. I think that 2 is the conscious thought – the closest we will ever come to describing what seeing red is. But 1 is the actual act of seeing red. I do not think it is possible to describe this even to ourselves let alone anyone else and certainly not a blind person. I do not think language ‘fails’ us in these circumstances. I just think that language is the only way we can be conscious.


    1. Reading that paragraph in isolation reminds me of concepts I recently read about called causal structuralism and holism, the idea that things can only be understood in relation to other things, and how they’re affected by and affect those other things.


  2. Slight tangent: whenever I meet a psychologist, I always ask what they think of Freud. The most generous response I ever got was, “Well, he got the discussion started, and he deserves some respect for that.”


    1. And that’s really bending over backward to be generous. Psychology got started well before Freud. Figures like Hermann von Helmholtz, Wilhelm Wundt, and William James all preceded him, and most cognitive scientists today find their insights more accurate. I guess we could say Freud drummed up public interest in the field, but that was a double-edged sword, and it seems to have led to the behaviorist movement as a reaction, an attempt to repair the field’s reputation.


  3. I do like Friston’s work, although I believe it to describe subconscious, not conscious processing. My understanding is that increasing precision of predictions (point 3 in your second list) corresponds to attention, and does not in itself account for consciousness, although attention is a component of a description of how consciousness works.

    In my account, consciousness does not arise until the brain turns that attention back upon itself and can selectively attend to (increase the precision of prediction of) a representation of its own subconscious processing, as though it was external sensory data.


    1. I’m undecided about Friston’s free energy principle. On the one hand, it seems true, but almost trivially so. On the other, I have this vague feeling that I’m not getting it with regard to his theory.

      You might be right on point 3. The learning aspect was my interpretation, but the language is maddeningly abstract. The relationship between attention and consciousness is a very controversial one. Many insist they’re separate. My current take is that bottom-up attention is separate from consciousness, but not top-down attention. But similar to consciousness, attention is really a vast and complex phenomenon, happening at numerous levels in the brain and at numerous stages of processing, so even pretending that there’s a clean delineation between top-down and bottom-up attention may be misleading.

      Your account seems similar to the historical view of consciousness, which basically makes it equivalent to introspection. It’s the version John Locke described. That implies that consciousness is only in a limited number of species, possibly only among humans, and maybe to a lesser degree with great apes and other primates, and possibly cetaceans.

      Usually people move past that view by focusing on the processing that we actually introspect, and note that similar types of processing appear in other animals. Of course, the further you move away from humans, the more limited that commonality becomes.


  4. Given that the hard problem is often phrased along the lines of, “Why does it feel like something to experience X?”, I think Solms has a point.

    But are you for it or against it?

    That was a joke – it’s because we’re for or against things (we have affect about them) that it feels like something to experience them. At least, that’s what I inferred Solms is saying, based on your description. Am I getting that right?


    1. You are. Solms uses more traditional language regarding affects, such as valence (seeing something as good or bad) and arousal. The arousal part he sees as crucial, since that’s what effectively powers the perceptual processing in the cortex.


  5. Regarding the Friston stuff, I’ve spent a fair amount of energy lately trying to figure out how his stuff fits with mine, and here is what I got:

    1. (Ignoring ergodicity. Not sure what this has to do with anything.)

    2. Every physical thing has a Markov blanket, so any mechanism does also. The “sensory states” are the inputs and the “action states” are the outputs. Note: this applies to what has been called the “Friston blanket” as opposed to a Markov blanket as described by Pearl. (See the sketch after this list.)

    3. Not all things w/ Markov blankets do active inference. But active inference is what unitrackers do.

    4. Homeostasis is where teleonomic purpose comes from in living systems. This does not mean consciousness requires living systems. You can get teleonomic purpose from artificial systems. All you need is a system that has a goal (moves the environment toward a specific state), can create/organize mechanisms, and can select a mechanism that moves the environment toward the goal state. It’s just that such a thing first appears in living systems.
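
    To keep Pearl’s sense of the term concrete: in a Bayesian network, a node’s Markov blanket is its parents, its children, and its children’s other parents, and conditioned on the blanket the node is independent of everything else. A toy sketch, my own illustration with made-up node names:

    ```python
    # Markov blanket in Pearl's sense (illustrative sketch). The blanket of a
    # node = its parents + its children + its children's other parents.

    def markov_blanket(node, parents):
        """parents maps each node to the set of its parent nodes in a DAG."""
        children = {n for n, ps in parents.items() if node in ps}
        coparents = {p for c in children for p in parents[c]} - {node}
        return parents[node] | children | coparents

    # Toy network: sensory input -> internal state -> action output
    dag = {"sensory": set(), "internal": {"sensory"}, "action": {"internal"}}
    print(markov_blanket("internal", dag))  # {'sensory', 'action'}
    ```

    On the Friston reading, the parent and child nodes play the roles of the sensory and action states mentioned in point 2.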

    Re Solms’ contribution: [what the heck is “self evidencing” anyway?]
    Those three options described in the OP are essentially describing how unitrackers work.


    1. My understanding of the Friston stuff remains mushy, so take these replies with that in mind.

      1. I think the main thing with being ergodic is that the system constrains itself to a limited number of states. A storm won’t do anything to stop its internal state from degrading, such as temperature or wind speed moving in a direction that would lead to its demise. That said, this seems like a detail of 4.

      2. I’m not sure about everything having a Markov blanket. That might be true of the type discussed in the Wikipedia article about them, but Solms seems to make them more sophisticated, though that might be more the Friston blanket than the original Markov blanket conception.

      3. Sounds plausible.

      4. Here’s Solms’ first mention of “self evidencing”. It’s mostly synonymous with “self organizing”.

      As we saw in Friston’s soup experiment, generative models come into being with self-organising systems. For that reason, they are sometimes called ‘self-evidencing’ systems, because they model the world in relation to their own viability and then seek evidence for their models. It is as if they say not ‘I think, therefore I am’ but ‘I am, therefore my model is viable’.

      Solms, Mark. The Hidden Spring: A Journey to the Source of Consciousness (p. 173). W. W. Norton & Company. Kindle Edition.


  6. Nice review Mike … thanks! My comments are about Solms’ brainstem/cortical consciousness ideas.

    You wrote: “The idea that the brainstem is the most ancient structure is a common misconception …”. Yet the alternative—that the entire forebrain-midbrain-hindbrain structure emerged complete at one go—is difficult to imagine. How is that possible? It seems much more likely that the brainstem evolved first, followed by the forebrain (pallium, cortex).

    But if Solms’ view is correct that there are two sources of consciousness production, then consciousness—the production of conscious feelings—evolved twice (!) and in two distinctly different brain structures. This seems most unusual in that it would be duplicating an existing functionality in two separate and unique structures. Are there any other cases in evolutionary anatomy where that took place? In my view, it’s more likely that the forebrain evolved to produce additional conscious content that, transmitted to the brainstem, was then “displayed” (made conscious) by already-existing brainstem functionality.

    One wonders too how a newly evolved forebrain consciousness could integrate seamlessly from the get-go with existing brainstem-produced consciousness in a unified stream with a shared streaming rate. I don’t believe anyone has considered this difficulty.

    Regarding the claim that the contents of brainstem consciousness were (and apparently still are) “feelings of emotions and drives,” it’s difficult to believe that sensory content wasn’t present until forebrain-produced consciousness evolved. That implies that a pre-forebrain organism could feel and be motivated by fear and other emotions but not feel bodily sensations like touch, pain, temperature and so on.

    If Solms actually uses the terms “affect consciousness” and “perceptual consciousness,” he’s postulating multiple consciousnesses as opposed to a singular consciousness with both affect and perceptual content. That’s confusing consciousness with the contents of consciousness, which is unfortunately a widespread misunderstanding.

    One more thing: if, as you wrote, “… the hard problem is often phrased along the lines of, ‘Why does it feel like something to experience X?”, the answer is simply that consciousness (experience) IS feeling. If consciousness is a simulation in feelings of an organism centered in a world, asking why consciousness is what it is makes no sense. Why is a heartbeat alternating heart muscle contraction and relaxation?


    1. Thanks Stephen. I thought you might find this one interesting.

      On the emergence of the forebrain-midbrain-hindbrain structure, I think the right way to think about it is that the first thing to evolve was the chordate nerve cord. Lancelets, which are taken as model organisms for Pikaia, an early Cambrian species, basically just have that nerve cord. Lampreys, which represent a later but very early vertebrate species, have a central swelling at the front of that cord, the early brain. Even in lampreys, the forebrain-midbrain-hindbrain structure is discernible, although it’s not clear how differentiated it is compared to most bony fish species.

      https://en.wikipedia.org/wiki/Lamprey#Lifecycle
      (Note that prosencephalon, mesencephalon, rhombencephalon are alternate terms for forebrain, midbrain, and hindbrain.)

      So, I think the evolutionary sequence might have been something like 1) nerve cord, 2) general swelling of that cord toward the head, 3) differentiation between the front, middle, and back of that swelling. Or there could have been three distinct swellings from the beginning based on selected capabilities and the required connections.

      All that said, scientists aren’t sure why brain features evolved when they did. And it’s worth noting that the forebrains in reptiles, birds, and mammals are 6-10 times (or higher) the size of forebrains in fish and amphibians (relative to the body).

      On the evolution of consciousness, I’d be cautious in regarding it as necessarily one ontological thing. In any case, there are numerous cases of features evolving several times independently. Eyesight, for example, reportedly has evolved dozens of times. And if cephalopods are conscious, then since their evolutionary history forked from ours well before vertebrate brains, consciousness would have had to evolve separately in their case.

      That said, my wording of Solms’ view probably doesn’t accurately capture it for the way you’re looking at it. Here’s the relevant snippet from the book.

      As we saw in relation to blindsight, the superior colliculi’s ‘two-dimensional screen-like map’ of the sensory-motor world, as Merker calls it, is unconscious in human beings. It contains little more than a representation of the direction of ‘target deviation’ – the target being the focus of each action cycle – producing gaze, attention and action orientation. Brian White calls it a ‘saliency’ or ‘priority’ map. Panksepp explains that this is how our ‘deviations from a resting state come to be represented as states of action readiness’. I cannot put it any better myself.

      Perceptual consciousness of the world around us becomes possible with the help of suitably aroused cortex, which (unlike affective consciousness) is what hydranencephalic children and decorticate animals lack. The superior colliculi provide condensed here-and-now mappings of potential targets and actions, but the cortex provides the detailed ‘representations’ that we use to guide each action sequence as it unfolds. In addition to these highly differentiated images, there are in the subcortical forebrain many unconscious action programmes which are called ‘procedures’ and ‘responses’ – not images. (Think, for example, of the automatised kinds of memory you rely upon to ride a bike, or to navigate the route to a familiar location.) These are encoded primarily in the subcortical basal ganglia, amygdala and cerebellum. Memories are not mere records of the past. Biologically speaking, they are about the past but they are for the future. They are, all of them, in their essence, predictions aimed at meeting our needs.

      Solms, Mark. The Hidden Spring: A Journey to the Source of Consciousness (pp. 140-141). W. W. Norton & Company. Kindle Edition.


      1. Mike, when I said that the production of consciousness evolving twice was difficult to understand, I was referring to a follow-on second appearance of consciousness in another brain structure.

        “… it would be duplicating an existing functionality in two separate and unique structures”

        … so that both must then somehow be coordinated to produce a unified consciousness.

        A parallel would be for an already sighted mammal to evolve additional eyeballs. I don’t think such a duplicate functionality anatomical evolution has ever happened, so it’s on those who support both brainstem and cortical consciousness to explain that mysterious evolutionary path.


        1. Stephen, I understood your concern. However, I didn’t explain my point very well.

          I think you’re assuming that consciousness can only exist in one location, rather than having components in multiple locations that work together. Consider sensory processing. We know sensory information goes to both the superior and inferior colliculi in the brainstem as well as to the thalamus and sensory cortices. The processing in these locations ends up handling different aspects of the overall sensory task, working together.

          Someone who supports both brainstem and cortical consciousness can similarly see them as providing different aspects of consciousness. That’s one interpretation of what Solms is describing. That also seems to match Antonio Damasio’s view. The other interpretation of Solms is that the cortex is providing “helper” functionality to the consciousness in the PAG, which I think is more in line with your view. (I’m not sure which interpretation of Solms is correct, which is why I just quoted him.)

          That said, with evolution, sometimes additional eyeballs happen.
          https://en.wikipedia.org/wiki/Jumping_spider#Vision


          1. Yes, I’m assuming that conscious feelings are produced (‘displayed’, or made conscious) by a single brain structure. I also believe that the production of conscious content is overwhelmingly cortical, a belief for which there’s abundant evidence.

            Proponents of theories that the ‘display’ of conscious feelings is distributed in any way, whether brainstem-cortical or multi-location-cortical need to address these points:

            1. Lack of evidence for the cortical production of feelings
            2. No explanation for the unified presentation of consciousness
            3. Inability to explain Libet’s ‘touch’ timing findings

            Also, a careful reading of Oliver Sacks’ Awakenings reveals that the disruptions to the subjective “rate of flow” of the stream of consciousness, including its being stopped altogether, were found via autopsy evidence to be caused by significant viral destruction of the substantia nigra which, as you know, is a region in the midbrain. That finding would lend support to the brainstem ‘display’ hypothesis, but it also means that:

            4. Proponents of distributed conscious feeling ‘display’ need to explain how substantia nigra flow regulation could operate across multiple widely distributed ‘display’ regions.

            This issue is related to the unexplained unified presentation of consciousness issue. I’ve yet to encounter any discussion whatsoever of the rate of flow issue by distributed consciousness proponents.

            (I have some crude notes on the flow of consciousness issue from Sacks’ book that I might be able to render presentable and shareable should time and circumstances permit.)


          2. I guess the question is, is there actually a “display” anywhere? I did a post recently on why I think it’s a problematic metaphor.
            https://selfawarepatterns.com/2020/11/28/the-problem-with-the-theater-of-the-mind-metaphor/
            In summary, there’s no evidence for it, and the idea seems to downplay the importance of the “audience”, when the utilization of the information is arguably the most important part of what’s going on.

            I’ve never read Sacks. I probably need to since his books are reputed to be a wealth of neurological case studies.


          3. Of course there’s no literal display Mike. As I’ve done before, I used ‘display’ in quotes to indicate it’s a metaphor and shorthand for “the production of a conscious feeling” or, perhaps “made conscious.” There’s no literal display or literal audience for any display. You might recall my Neural Tissue Configuration hypothesis in which a particular configuration of neural tissue IS a feeling … that’s where I’m coming from. Again, nothing is being literally displayed.

            Your misinterpretation apparently caused you to ignore the four significant difficulties I identified for proponents of a distributed production of conscious feelings, both cortical and cortical plus brainstem.

            I’m astonished that you’ve not read Sacks’ Awakenings and I suggest putting it at the top of your reading list. Take notes as you encounter consciousness puzzles. His documenting of pathologies of consciousness provides much food for thought on the subject. Importantly, any theory of consciousness must take these pathologies into account and explain how they could arise.

            For example, referring to consciousness of visual input, those afflicted with a ‘frozen’ consciousness (not literally 32℉ but unchanging) reported this visual scene:

            The still picture has no true or continuous perspective, but is seen as a perfectly flat dovetailing of shapes, or as a series of wafer-thin planes. Curves are differentiated into discrete, discontinuous steps: a circle is seen as a polygon. There is no sense of space, or solidity or extension, no sense of objects except as facets geometrically apposed. … The state is there, and it cannot be changed. From gross still vision, patients may proceed to an astonishing sort of microscopic vision or Lilliputian hallucination in which they may see a dust-particle on the counterpane filling their entire visual field, and presented as a mosaic of sharp-faceted faces.

            Perhaps what is being described is pure brainstem-created consciousness that’s completely lacking the rich visual content resolution provided by the visual cortex.

            Another description:

            ‘I had just started running my bath,’ she answered, ‘there was about two inches of water in the bath. The next thing – you touch me, and I see there’s this flood.’ As we talked more, the truth was borne in; that she had been ‘frozen’ at a single perceptual and ontological moment: had stayed motionless at this moment, with (for her) just two inches of water in the bath, throughout the hour or more in which a vast flood had developed. … [A]ll of these observations indicate that she was truly and completely de-activated during her standstills. But it was also apparent that her standstills had no subjective duration whatever. There was no ‘elapsing of time’ for Hester during her standstills; at such times she would be (if the logical and semantic paradox may be allowed) at once action-less, being-less, and time-less. … and this, because for her no time had elapsed.

            This also supports my idea that we externalize the stream of consciousness and think of it as flowing time in the world.

            Regarding the speed of the stream/flow of consciousness (my challenge #4 of yesterday):

            Her symptoms, at first, were paroxysmal and bizarre. She would be walking or talking with a normal pattern and flow, and then suddenly, without warning, would come to a stop—in mid-stride, mid-gesture, or the middle of a word; after a few seconds she would resume speech and movement, apparently unaware that any interruption had occurred. … In the months that followed, these standstills grew longer, and would occasionally last for several hours; she would often be discovered quite motionless, in a room, with a completely blank and vacuous face. The merest touch, at such times, served to dissipate these states, and to permit immediate resumption of movement and speech.

            And:

            Her movements were extraordinarily quick and forceful, and her speech seemed two or three times quicker than normal speech; if she had previously resembled a slow-motion film, or a persistent film-frame stuck in the projector, she now gave the impression of a speeded-up film – so much so that my colleagues, looking at a film of Mrs Y. which I took at this time, insisted the projector was running too fast. Her threshold of reaction was now almost zero, and all her actions were instantaneous, precipitate, and excessively forceful.

            A great read … highly recommended.

            And now, Mike, what do you make of the four challenges I listed yesterday? All four, by the way, do not apply to brainstem-only consciousness production.


          4. Stephen,
            I actually understood you didn’t mean the term “display” literally, which is why I also quoted it and talked in terms of the problems with the metaphor. Saying there’s a small central place in the brain that’s conscious while the rest isn’t, I think, is the issue. The problem with that kind of thinking is that you’re then forced to confront why only that region is conscious. Does it have a central place within it that’s the conscious part? And does that central place itself have its own place?

            To avoid an infinite regress, at some point we have to stop and explore the actual non-conscious mechanisms of consciousness. To test whether we’re actually doing that, I think we have to ask whether that exploration is giving us any insight into being able to create an artificial consciousness. If not, then we’re not really exploring the mechanisms.

            On your challenge points, 2 and 4 seem tied to the central conscious region. On 3, I’m not sure what in particular about Libet’s results you mean. (In general, I think his results tend to be over-interpreted.)

            On 1, I guess it depends on exactly what you mean by “production of feelings”. There is plenty of evidence that the experience of feelings is heavily affected by injury to the amygdala (Kluver-Bucy syndrome), orbitofrontal cortex (prefrontal lobotomies), anterior cingulate (akinetic mutism and changes in pain perception after lesions), insula, hypothalamus, nucleus accumbens, etc. Often the circuits involved do run from the PAG up through these regions and into the cortex. It’s hard to draw sharp functional boundaries. What we can say is that many of these regions seem crucial but not sufficient.

            Thanks for the Sacks recommendation and quotes!


          5. I’m completely puzzled by your first two paragraphs Mike. I would never say that “there’s a small central place in the brain that’s conscious while the rest isn’t.” In fact, I would not characterize any part of the brain as “being conscious,” as you seem to believe I have. My claim is that some brain structure (I favor the brainstem) produces conscious feelings. There’s no possibility of an endless regress and no need to consider “non-conscious mechanisms” of consciousness to terminate your imagined regress. There’s also no need for a theory of the biological production of conscious feelings to provide insight into being able to create artificial consciousness. Where is this illogic coming from?

            Recall my definition of consciousness:

            Consciousness, noun. A biological, streaming, embodied simulation in feelings of external and internal sensory events and neurochemical states that is produced by activities of the brain.

            Your last paragraph’s statement that the feelings we experience are heavily affected by amygdala injury and so on lists some of the “activities of the brain” my definition refers to but the activities you mention resolve the contents of those feelings. By the “production of feelings” I’m referring to the actual cellular creation of the feeling itself, which I metaphorically referred to as ‘display.’ It’s the end stage that produces the feeling itself, not the content of the feeling.

            Back to that list.

            1. There is no evidence whatsoever that the cortex produces feelings but abundant evidence that the cortex resolves the contents of feelings. Handfuls of the cortex can be removed and consciousness itself remains undisturbed. Only certain contents of consciousness disappear. For a dramatic first person description of content depletion resulting from a stroke see neuroanatomist Jill Bolte Taylor’s “My Stroke of Insight.” No one has evidence that the cortex produces feelings themselves, i.e. consciousness, or we’d all know about it.

            We’ve previously discussed the Merker et al. evidence for the brainstem’s creation of feelings, so there’s no need to repeat that evidence.

            2 and 4. No mechanism has ever been proposed to explain how distributed consciousness, created by specialized cortical regions, ends up as a unified streaming experience. If the visual cortex independently produces visual feelings and the auditory cortex independently produces conscious feelings of sound, and so on throughout the cortical repertoire, then what explains the fact that all of those feelings are experienced in a unified streaming presentation? What explains the rate of the streaming of that unified presentation? No one, including yourself, has proposed any answers to those questions.

            The end-stage creation of feelings by a single subcortical brain structure perfectly explains the unified presentation as well as allowing flow (stream of consciousness) regulation by, for instance, the nearby substantia nigra.

            3. I’ve mentioned Libet’s timing studies before and no one has provided any explanation other than the very mysterious “backward in time reference” which is nearly magical. Libet’s timing experiments have been repeatedly and successfully replicated. In short, if a touch stimulus is applied directly to the cortex, followed by an actual touch to the skin, then the actual physical touch is felt first. If the cortex produces the feeling of the touch, why isn’t it felt immediately and felt before the physical touch?

            The physical touch sensation is directly conveyed up the spinal column to the brainstem before being forwarded to the cortex so creating the actual touch feeling would be faster than a cortical stimulation. In addition, the cortical stimulation would result in the usual time consuming cortical process of evaluation and “story formation.”

            So all of 1-4 are easily explained by the brainstem’s creation of conscious feelings and none of the challenges have been explained by cortical consciousness creation theories.


          6. Moving forward, Mike, as you post about a specific theory of the creation of consciousness, perhaps you could evaluate the theory on its ability to explain the four challenges I’ve documented.


  7. I’ve been away enjoying myself and also reading this book, but still haven’t quite finished. Some general reactions to the book.

    It seems like the book has three themes or parts to it.

    The early part is somewhat autobiographical and contains most of the discussion of Freud and Solms’ evolution as a neuroscientist. I didn’t realize Solms has done some groundbreaking work on dreams and found them to originate in dopamine circuits – hence, they are in a sense based on wish fulfillment, as Freud suggested. I think generally people are too hard on Freud and forget the state of psychology and its almost non-existent tools when he began his work.

    The next part is about the brainstem and the cortex bias of neuroscience – something even Freud fell for. The cortex is a huge part of the human brain, so it is natural to think it must be doing most, maybe all, of the work of consciousness. This is driven in part by the related bias to believe less complex organisms cannot be conscious since they lack this big chunk of a brain that humans have. My current thinking on the brainstem vs. cortex question is that both are potentially conscious when their neurons are oscillating in the proper organized patterns, and that consciousness arises in the shift from digital to analog processing (temporal to spatial information) that is “felt” by the neurons themselves. These proper organized patterns are fueled by neuromodulators which generate the wake state.

    The third part, which is most of the book, is all about Friston’s free energy theories. I’m still trying to understand this part and may need to do some rereading. I’m not sure how much of it is Friston and how much Solms.

    I’ll probably have my own post on the book eventually.


    1. I noticed you hadn’t been around much lately. Good to see you back.

      I’m not sure what to make of the Freudian psychoanalysis stuff. It’s hard to separate what might simply have been early theories from notions even psychologists of the day thought were dubious. As I noted to J.S. Pailly above, there were psychologists before Freud, and a lot of what they came up with seems more scientifically grounded, but that might just reflect their better reputations today, and a tendency of current scientists to cite them for what they got right while citing Freud for what he got wrong.

      There was an Aeon piece a few years ago that gave a more sympathetic account of Freud’s theories.
      https://aeon.co/essays/from-philosophy-to-psychoanalysis-a-classic-freudian-move

      On the cortex and brainstem, I don’t know if you noticed my response to Stephen above. In general, if consciousness requires a forebrain, that doesn’t have the implications for animal consciousness many take it to have. In vertebrates, the forebrain-midbrain-hindbrain architecture arose very early (early Cambrian). As always, I think it comes down to how we define “consciousness”. Often what’s envisaged in the midbrain is an anoetic form. But if we ask, what processing is available for introspection and self report, it seems hard to include the PAG.

      I might have to make another pass on the FEP stuff myself. A lot of it seems more geared toward explaining life overall than consciousness in particular. Although the active inference stuff fits with a lot of the predictive brain theories that are currently ascendant in neuroscience.

      Looking forward to your post! This book covered a lot of material, a lot of which I omitted for space reasons. I’m sure you’ll have noticed aspects I missed or forgot about.


      1. I did see your comment to Stephen above. Still I think part of cortex bias is related to the fact that we humans have a big one.

        Regarding Freud, a lot of people criticize him but haven’t actually read him. There are some pretty remarkable observations scattered throughout his work.

        I’m with you on the FEP stuff. It may explain a lot about how the brain works but I still don’t see it bridging the gap into how consciousness comes about.


      2. BTW there is an FEP explanation video by Friston himself that only takes 15 minutes and doesn’t require any mathematics.

        Oddly, I saw the link in a Bernardo Kastrup Facebook post. He uses the FEP and Markov blankets to explain alters.

        Still, I think it explains something about life, and something about the brain, but not exactly why consciousness would be required, even if it fits into the overall FEP theory.


        1. Thanks. I’d seen this video before, but it made more sense after reading Solms’ account (and others).

          I don’t know. It seems to have value in understanding self organizing systems. But I’m struck by the cases people have cited that appear to falsify it (such as the fact that a system can avoid surprises if it just hides in a hole and starves itself to death), and the explanations of Friston and others that they just needed to take a broader view. Which makes it feel vaguely unfalsifiable in the same manner as moral consequentialism. It just feels a bit too reductive, and I’m usually onboard with reductionism.


          1. I think it captures some aspects of self organizing systems. Hiding in a hole would only lead to the “surprise” of hunger. The system needs to maintain homeostasis internally and externally. I have heard the critique that it is unfalsifiable and there might be something to that.


  8. I didn’t watch (not something I’m interested in right now), but I noticed the name:

    The RI lectures are usually very good and worth watching.

    (FWIW, I rather like Freud’s idea about the Id, Ego, and Super-ego, at least metaphorically. I’ve long been conscious of the “dialog” between them in my mind. I suspect the popular “devil on one shoulder; angel on the other shoulder” metaphor ultimately comes from that internal dialog.)


    1. Thanks. I watched bits and pieces of it and it seemed like a good summation of the book, albeit light on the free energy principle.

      Every time I hear about the id, I think of the monster from the id in Forbidden Planet.


  9. Hello. I discovered your site while doing a search on Mark Solms. I’m impressed by the quality. I just finished Solms’ book (having stopped halfway through to read Anil Seth’s, which is easier on the neurons). Not ready to weigh in with any balanced assessment, which will wait for a second reading of both. However, a quick thought on the energy-intensive nature of attention. If attention is so entropic, how would the continued heightened attention promoted as ‘present moment awareness’ (Kabat-Zinn et al.) result in the positive affect claimed for it?


    1. Thanks Chris, and welcome!

      On positive affect for in-the-moment awareness, I’m not sure. All I can do is speculate. I think of affects as quick summary assessments and reflexive responses reached by our nervous system. They can be triggered by innate dispositions toward sensory information, but also by cognitive deliberation, and by a perception of achieving a desired goal that required a lot of effort. That might be what’s at work in what you describe. I think it helps to remember that exteroception, cognition, affect, and interoception are all integrated in loops allowing the components to constantly feed back on each other.


  10. This comment is based on listening to Mark Solms’ hour long presentation of the ideas in his book.

    His presentation was very interesting, however:

    I have always thought that the essence of David Chalmers’ “Hard Problem” is how do physical phenomena produce a mental effect – not what is the function of that effect. Given any physical system, you can (in principle) calculate how it will evolve in time, but nowhere in that calculation do sensations/feelings get computed.

    Let me put it this way, if I were designing an AI object of some sort, what would I have to add to make it conscious?

    1. Thanks for commenting.

      That’s obviously an extremely complex question, and the answer will depend on which scientific theory of consciousness you think is correct, if any. But any scientific theory will be in terms of structure and functionality. If you can’t associate that with mental effects, then you may not buy that it’s conscious. Ultimately all we can measure is functionality and capabilities, including behavior such as report.

      So the real question might be: what would we have to add to AI for it to exhibit behavior that inclines us to think it’s conscious, particularly in a sustained manner? That’s not an easy question, but it’s scientifically approachable.

      1. The trouble with reformulating my AI question as you did is that it becomes, in effect, “how can I fool most people that my AI device is actually conscious?”, which may be a great marketing question, but it isn’t philosophically interesting!

        I think David Chalmers was definitely not asking that question.

        1. No, he wasn’t. But have you ever heard the phrase, “Fake it ’til you make it”? If an AI is able to “fool” us for an extended period (not just minutes, but for hours, days, even months), then how can we say it isn’t the real thing? What test can you perform to establish the difference? And consider that at some point, the resources required to “fake it” make the claim that it isn’t the real thing the more extraordinary one.

          1. I hadn’t heard that expression, but I agree that if a device behaved as if conscious for an extended period of time – say months – preferably in contact with people who think about such issues – it would be plausible to assume it really was conscious.

            Of course, if someone did make such a device, they would hopefully tell us all how they did it.

            I think the true nature of consciousness is extremely interesting, and when someone like David Chalmers manages to crystallise the essence of the problem so well, it is sad when others try to dodge around his observation rather than face it head on.

          2. I don’t know that “dodging” a metaphysically intractable question is a vice. Another way to describe it is finding a way to evaluate it scientifically. It’s basically the move Alan Turing made when formulating his classic test.

            Interestingly, Chalmers himself acknowledges that, in addition to the hard problem, there is the meta-problem, of why we think there is a hard problem. If the meta-problem can be solved without reference to the hard problem itself, then the hard problem will have been dissolved. http://consc.net/papers/metaproblem.pdf

          3. Hello David,
            It seems to me that you should be far more aligned with me than with Mike. On your advice I’ve now watched that video of Solms presented above by Wyrd Smyth. To me Solms seemed pretty sensible. Perhaps we shouldn’t interpret him to believe that the function he proposes for consciousness (that it should ultimately promote homeostasis) tells us how the brain produces mental effects. Perhaps that’s just part of his general thesis that consciousness seems basic enough to be brainstem- rather than cortex-based. (And his evidence for this seems pretty strong.)

            Observe that, in general, evolution takes traits which exist, whether through standard mutations or as extraneous products of other circumstances, and sometimes makes them functional. From here we might presume that phenomenal experience existed epiphenomenally for some amount of time in certain creatures, but then became functional, and thus selected for, given certain uses. What would you need to add to a machine to get it to phenomenally experience its existence, rather than simply fool us into such thinking, as in the case of Turing’s imitation game? I also would rather people not dodge this question.

            It could be that a phenomenal experiencer (like you or me) exists in the form of certain neuron-produced fields of electromagnetic radiation. To me this seems to explain quite a few of the associated complexities. Furthermore, there’s reasonable experimental evidence supporting this idea. Even if highly validated empirically, however, this should not ultimately solve this hard problem any more than Newton or Einstein ultimately solved the hard problem of gravity. Validation, however, should eject a horrendous number of funky notions in the field today.

  11. Thanks, SelfAwarePatterns, for that link to David Chalmers’ discussion of the meta problem. Also, of course, Alan Turing’s test was a practical way to decide if a computer can ‘think’. However, it seems to me that if he had lived, he would have updated his test several times to handle the colossal increase in computer speed and memory and the invention of the internet. The latter also makes it possible to ‘cheat’ by scouring the internet for discussions of whatever type is relevant, and using these to make the computer reply more human sounding. Perhaps an updated Turing test would exclude the internet, period. Before long, however, and copyright permitting, it might be possible to run a program containing the contents of all books ever written. Clearly that might make the meaning of the Turing test rather blurred!

    I can’t comment on the meta problem without spending some time trying to cram it into my finite brain – which may simply cause it to loop. However, I am encouraged that David Chalmers is not put off by this argument. I am also grateful to be acquainted with this line of argument.

    I’ll put my cards on the table. Although I am not religiously inclined, I think there is a lot of evidence that consciousness exists in a separate realm of reality. Phenomena like NDEs are extraordinarily persuasive. The HP also points in that direction. Whether the meta problem derails that remains to be seen, but if we live in a non-materialist world the meta problem might become hypothetical.

    1. Thanks David. It probably won’t shock you to learn that my cards go in the opposite direction. If you’re interested in alternate views, I recommend checking out Susan Blackmore’s books. (I think she has one on NDEs. I know she has one on OBEs.) She was a true believer in paranormal phenomena when she was young, but after years of investigation, turned skeptical.

      1. Susan Blackmore? (I can’t resist.) She was slightly interesting in the 1980s, but she’s never carried out any serious research on NDEs; rather, she cobbled together something called the ‘dying brain theory’, which is based on nothing but tenuous speculation. Basically, she sat down with a pencil and a blank piece of paper. There was never any experiment to support it, yet it was somehow accepted as being “scientific”, maybe because it sounded fairly reasonable.

        The popular notion that Susan was a proponent and then ‘saw the light’ is a nice tale she likes to tell. However, that’s not based on any facts, either. She reportedly had a life-changing out-of-body experience while sitting cross-legged (smoking weed) in her student flat in Oxford (with her friend Kevin). But then she realised it wasn’t actually real, because she later noticed some details about the roof that were not correct (apparently).

        But she didn’t check straight away (as anyone surely would?), and she assures us that she believed it was real… until it occurred to her to actually go and take a peek (at the roof she ‘saw’). Bingo, it didn’t quite match; therefore she realised that she had been deceived by her senses; hence all out-of-body experiences are illusions or delusions, NDEs debunked.

        It makes a nice story, but it doesn’t make any sense. Weed is not ketamine and doesn’t cause OBEs. Secondly, if she was the scientist she claims to be, she would have gone outside the very next day to check the details of the roof. She wouldn’t have broadcast it to the world and then converted (retracted) months later.

        In my opinion (of course I can’t prove it), it was all simply ‘orchestrated’ to give her the status she desired in the public eye (as the voice of common sense: “I’ve had one of those OBEs and I now know it’s all a trick of the brain, and therefore so should you!”), and secondly to appeal to her academic colleagues, for whom near-death experiences and their obvious implications were basically anathema. There you have it. Her work is now completely irrelevant, but it would seem she is still popular. Best regards!

        1. Thanks for commenting. Without sources, I can’t see any particular reason to doubt her story. But you were clear that this is your opinion. So we’ll have to just agree to disagree.

          1. Okay, but I thought you would have at least read her story, which, I can assure you, I outlined fairly accurately, at least in part. As I said, smoking a bit of weed doesn’t cause profound out-of-body experiences. I suppose she could have snorted some ‘special K’ instead, but our Susan wouldn’t have done that, as it would have put her career in jeopardy.

            That’s one very good reason why you should doubt her story. As to using her books as a “go-to source” of information on NDEs: sure, if you don’t want to know what’s actually going on, they’re ideal! Best regards!

  12. Although I am yet to plough into David Chalmers’ article about the meta problem, after a good night’s sleep I realised where this approach leads. Here is DC’s formulation of the meta problem:

    The meta-problem is the problem of explaining why we think consciousness poses a hard problem, or in other terms, the problem of explaining why we think consciousness is hard to explain.

    Surely just about any intellectual problem can be formulated this way? In addition to Pythagoras’ proof of his theorem about right-angled triangles, there would be a corresponding meta problem – explaining why, when people read his proof, they believe it must be true. Pythagoras’ Theorem would, in this way, be reduced to the messy question of why some proportion of people read his proof and come to believe it. There would also be a meta-meta hard problem to tackle.

    In other words, the process of constructing meta problems would deconstruct just about all intellectual thought. This isn’t exactly proof that this approach is wrong, but it seems to come close.

    Clearly applying philosophy to neurological issues in this way is self-referential, and such questions almost always open a can of worms!

    1. I think the answer is that the very existence of the hard problem is controversial. No one really doubts Pythagoras’ theorem exists. In other words, the non-controversial answer to the meta-problem of the theorem is the theorem itself. The equivalent response for meta-problem of consciousness isn’t so obvious.

      It’s worth noting that Chalmers is basically acknowledging something a number of thinkers who think the hard problem is illusory have talked about for years. He’s trying to bring it to more neutral ground.

      1. The problem, as I see it, is that reasoning about the HP seems to employ the same mental processes that we might use for a whole range of scientific/philosophical topics. When I think about the HP, I usually imagine a calculation of what the whole brain is doing. Ideally this should be done quantum mechanically – but of course any such calculation is totally Gedanken.

        If there is, in effect, a flaw in our reasoning that makes certain intellectual activities come up with wrong results, it seems plausible to assume this will demonstrate itself more widely. Thus the real difference between the HP and Pythagoras’ Theorem is that the latter was discovered a long time ago, before anyone proposed that some problems might be affected by neural mechanisms. So belief in that (hypothetically flawed) theorem spread unchecked! Obviously I don’t believe that is true, but for the same reason, I don’t think the HP can be superseded by the meta problem.

        Rather than starting to doubt reason itself, I think it would be better to use a more pragmatic scientific approach. It is well known that QM and General Relativity are incompatible (though I don’t have a clue about the details), so logically neither should be used until that problem is resolved. In practice, of course, both are used and science pushes forward. That is, for the time being science runs on the false assumption that GR and QM are compatible – no doubt at some point the confusion will be resolved.

        Likewise, I think if science tentatively accepted that Dualism or Idealism is true (despite the fact that Dualism can’t be exactly true, because the two realms must interact to some degree), we might make more progress than we will by chipping away at the intellectual capacities of the mind itself.

        If you think I am overstaying my welcome here, feel free to say so – I don’t want to be considered a Troll.

        1. I think the right way to hold QM and GR is that they are useful within certain domains of applicability. We already know of some of the limits of those domains. But accepting them within their respective domains has benefits throughout science and technology. The device you’re using right now makes use of QM. And if you ever use navigation apps, they depend on GPS systems that use GR.

          Accepting functionalism has benefits in biology, psychology, neuroscience, computer science, and many other fields. What would be the benefit in provisionally accepting dualism or idealism?

          You’re not overstaying your welcome David. We disagree, but as long as things stay friendly, it leads to interesting conversations. I hope you’ll consider weighing in on other posts.

          1. The immediate benefit of tentatively including dualism (leaving aside Idealism for now) in science would be that it would enable a large range of observations to be categorised and examined without having to treat them as frauds or false perceptions. For example, Dean Radin experimentally developed the concept of ‘presentiment’. He used a standard measure of mental arousal – skin conductance – to test if people develop a forewarning just before experiencing a shock of some kind. In a typical experiment, volunteers would be presented with a series of images on a computer screen. Most would be calm scenes of various sorts, but about 1 in 4 would be erotic/violent. The images were randomly selected by a computer, equipped with a true random number generator based on QM. He shows that the skin conductance curve starts to react to the forthcoming shock up to 4 seconds before the event.

            https://www.deanradin.com/publications

            https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4706048/
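
            For what it’s worth, here’s one way to see what such a claim has to stand out against: simulate the protocol under the null hypothesis (no presentiment), where the pre-stimulus reading is independent of the upcoming image type. This is purely an illustrative sketch, not Radin’s actual analysis:

            ```python
            # Null-model sketch of a presentiment-style protocol: pre-stimulus
            # skin conductance is pure noise, independent of the image type.
            import random, statistics

            def experiment(n_trials=1000):
                emo, calm = [], []
                for _ in range(n_trials):
                    reading = random.gauss(0, 1)   # pre-stimulus conductance (noise)
                    if random.random() < 0.25:     # about 1 in 4 images is emotional
                        emo.append(reading)
                    else:
                        calm.append(reading)
                return statistics.mean(emo) - statistics.mean(calm)

            random.seed(1)
            diffs = [experiment() for _ in range(200)]
            # A real anticipatory effect would have to exceed this null spread.
            print("null mean difference:", round(statistics.mean(diffs), 4))
            print("null std deviation  :", round(statistics.stdev(diffs), 4))
            ```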

            Under the standard scientific assumptions this just can’t happen – so in practice people don’t explore the idea much, and I don’t think most scientists are even aware of the phenomenon.

            Even if we assume this experiment is somehow flawed (I’d be surprised), a detailed explanation as to how this *appears* to work would presumably be relevant to many other experiments which use skin conductance to study conventional psychological phenomena.

            I think that even hardened materialists could benefit from the tentative assumption that dualism is true, because if more people explore such phenomena and they are right, the truth will come out in the end.

          2. Certainly if we just assumed they were right, it would have benefits for these people and emotional ones for those sympathetic with their claims. But it only has benefits for the rest of us if the claims are accurate in at least some scope of utility.

            I don’t think I’m familiar with this particular study, but I know that similar studies, when attempts have been made to replicate the findings, were shown to have methodological flaws. When those flaws are corrected, the effects disappear.

            I do agree that if these kinds of results were widely replicated, science would have to deal with them. But I don’t perceive we’re anywhere near that point. When scanning around, I came across this other study, which looks at the results of the one you cited, as well as others, and discusses the issues involved.
            https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7530246/

            Who knows, maybe evidence will eventually force the issue. It wouldn’t be the first time. But based on my previous forays into these areas, my Bayesian priors for it are pretty low.

  13. Thanks for that great link. I quote:

    Not so, because even if this experiment was replicated 50 times with the same results and using the best experimental conditions, there is no way to know if this effect is a consequence of (1) a precognitive effect, (2) a psi influence from the participants, (3) a psi effect from the experimenter on the computer that chooses the target, and (4) many other options!

    Yes, deciding exactly which kind of psi effect is involved is difficult, but all those options involve psi effects, except, of course, the “many other options”. But surely the most relevant question is whether psi is involved. The exact mechanism can be considered later.

    This is the supposed explanation/refutation of a huge deviation between the 25% expected outcome and the 33% actual result averaged over many experiments over many years by many experimenters. If science can brush off such a result in an off-hand way, isn’t something wrong?

    I mean, if one of those listed effects – including those not explicitly specified – can cause a deviation from chance on that scale, how much faith can we put in psychological experiments generally?
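
    To put the size of that deviation in perspective, here is a rough normal-approximation calculation (the trial count is hypothetical, purely for illustration). The dispute, of course, is over methodology and replication, not the arithmetic:

    ```python
    # Rough significance of a 33% hit rate against 25% chance, using a
    # normal approximation to the binomial. Trial count is hypothetical.
    import math

    n, p0, p_obs = 1000, 0.25, 0.33
    z = (p_obs - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided tail
    print(f"z = {z:.2f}, one-sided p = {p_value:.1e}")  # z near 5.8, p on the order of 1e-9
    ```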

    I probably should have used the Ganzfeld result as my example, but in fact a large number of scientists have left a conventional career to explore a wide range of psi topics – sometimes in their retirement. The results from all these persuade me that science needs another moment, vaguely analogous to the quantum revolution.

    Getting back to the Hard Problem, and the fact that David Chalmers’ original observation has proved so difficult for science to deal with directly, how should science proceed? Should it cast doubt on the ability of philosophers/scientists to impartially process certain problems, or widen the scope of possible acceptable explanations?

    1. I think science should make progress any way it can. If that meant trying to figure out the hard problem, then that’s where it should go. But it’s not even clear what it would mean to study the hard problem. As Chalmers noted, the “easy” problems are at least scientifically tractable, and steady progress is being made on them. If we answered all the easy problems thoroughly enough to construct a system with access consciousness, how would we establish whether there remained anything left to explain?

  14. I am a pragmatist, and I’d agree that if the easy problems were satisfactorily solved, I’d probably become a materialist.

    I’m less sure any progress in understanding consciousness is steady at all. Theories of consciousness seem to get replaced at regular intervals. At one time the blackboard theory was popular – a sort of pool of data available to every conscious component. This always reminded me of the humble COMMON block in Fortran, which can be used to make data available globally to a program. This has been heavily used, though it is now rather deprecated.
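
    For anyone unfamiliar with the pattern, a minimal blackboard sketch looks something like this (the module names are made up for illustration):

    ```python
    # Minimal blackboard-architecture sketch: every "knowledge source"
    # reads from and writes to one shared pool of data.
    blackboard = {}

    def vision_module():
        blackboard["sky"] = "blue"               # posts a percept

    def weather_module():
        if blackboard.get("sky") == "blue":      # reads another module's post
            blackboard["forecast"] = "fine"      # posts its own conclusion

    def report_module():
        return f"Forecast: {blackboard.get('forecast', 'unknown')}"

    for module in (vision_module, weather_module):
        module()                                 # each module fires in turn
    print(report_module())                       # -> Forecast: fine
    ```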

    Now we have Integrated Information Theory. Here John Horgan describes a workshop on IIT – which doesn’t sound encouraging.

    https://blogs.scientificamerican.com/cross-check/can-integrated-information-theory-explain-consciousness/

    Presumably, Mark Solms’ theory is completely at odds with IIT, because it focuses on the brainstem as the source of consciousness – not the much more complex cerebral cortex. Although is it fair in any case to reason that because the brainstem is required for consciousness, it must be the seat of consciousness? I mean, blood flow in the brain is a requisite of consciousness, but nobody (I assume) looks at blood to explain consciousness.

    Non-materialists often make an analogy between the brain and a TV set. If you didn’t know about electromagnetic radiation and how to modulate it with information, it would be reasonable to assume the TV creates the images and sounds that you see. If you explored it, rather as neuroscientists do, you would soon find encouraging correlations, but you would never understand a TV set that way because it requires a transmitter and a TV studio that you know nothing about.

    I think a better analogy now would be a planetary robot. At first sight, the locals (aliens!) might assume it was conscious – and indeed making decisions about what to do next. However, ultimately its decisions are made by someone at a desk on Earth. This analogy might well connect with Mark Solms’ observation that messing with the Reticular Formation destroys consciousness (this has, I think, been known for some time) – because maybe it acts as a transceiver to the conscious realm.

    If a theory like that is true, the question as to why someone experiences the colour red (the HP) might reduce to the fact that some information gets passed to the RF, which passes it on to the other realm, where the experience actually takes place.

    1. The blackboard theory is still around, but as global workspace theory. It’s actually the leading candidate among scientific consciousness researchers. (Although no theory commands a majority.) GWT is basically modeling attention. But it’s a framework theory with other theories being added on, such as Michael Graziano’s attention schema theory and predictive coding theories. There are also higher order thought theories.

      All of these fit into the overall cognitive neuroscience framework. The creators of the theories tend to each see their own theory as the one true answer. But it seems increasingly evident that they’re each modeling an aspect of the overall problem.

      IIT is more its own thing. It’s more popular outside the field than within it. Can’t say I’m much of a fan, although it might have some interesting insights.

      The problem with the brain as an antenna is that nothing in neuroscience really supports it. Injury to parts of the brain can knock out or alter any aspect of a person’s mind, including the deepest ethical convictions. It doesn’t seem to leave room for anything to be received. And the question with interactionist dualism is, what is the mechanism, and why don’t we see indications of it?

  15. Thanks for this continuing enjoyable conversation.

    I suppose one question I would have with the BB theory is that I can’t see how it fits with brain structure – which involves direct neuron-neuron interactions – unless the concentrations of the various neuropeptides implement the actual BB.

    When I first saw IIT, I thought I’d have a go at the math, but it seems to be rather vaguely defined – something John Horgan rather confirmed.

    Your criticism of the brain as an antenna would be decisive, but there are equally decisive observations pointing the other way. For example, sometimes people return from an NDE with new knowledge. They may meet someone who they thought was in good health, and report that they are dead – something that is later confirmed. The Ganzfeld results that we discussed earlier are also very persuasive. As I pointed out, identifying the type of psi is difficult, but the evidence that some sort of psi is happening is amazing.

    Suppose you were operating inside a VR and its interface became badly distorted – e.g. your libido was multiplied by 1000 – I guess the ‘person’ in the VR might appear to have had his ethics overturned, I don’t know.

    There is also a remarkable amount of evidence for reincarnation:

    Click to access jse_22_1_tucker.pdf

    At least one possible interaction mechanism for Dualism was identified by the physicist Henry Stapp. Given a quantum state of some kind that is subject to small perturbations, well-separated quantum measurements may well show that it has evolved to a new state. However, if measurements (i.e. observations) are made at close intervals, the chance of the system drifting into a new state can be made arbitrarily small by performing sufficiently frequent measurements:

    Click to access QID.pdf

    Stapp has written up this idea elsewhere in a more mathematical form. It is known as the Quantum Zeno Effect.
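
    Roughly sketched, the textbook version of the effect (the generic result, not Stapp’s specific formulation) is that for a state with energy spread ΔH, the short-time survival probability falls off quadratically, so splitting a total time T into n measurement intervals suppresses the drift:

    ```latex
    % Short-time survival probability, and the effect of n measurements over time T
    P(t) = 1 - \frac{(\Delta H)^2 t^2}{\hbar^2} + O(t^4)

    P_{\mathrm{survive}}(T; n) \approx
      \left[ 1 - \frac{(\Delta H)^2}{\hbar^2} \left( \frac{T}{n} \right)^{2} \right]^{n}
      \approx 1 - \frac{(\Delta H)^2 T^2}{\hbar^2 \, n}
      \longrightarrow 1 \quad (n \to \infty)
    ```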

    Clearly, if enough independent quantum states were measured, information could be transmitted. My hunch is that QM is involved in this process one way or another.

    Stapp has also speculated that the interaction might be more direct – an observation might force the wavefunction to collapse to a particular state; however, this would not be consistent with traditional QM.

    1. Thanks David. Glad you’re enjoying it!

      No one really calls it the blackboard theory anymore. Even “global workspace” can be misleading. It implies the workspace is a location somewhere, but it isn’t. It’s often described as content winning the competition to be “broadcast” throughout the brain, which works for a quick and dirty explanation, but becomes problematic when trying to understand it in terms of neural structure. The best way to think of it is that to enter the workspace is to have causal effects throughout the brain.

      Daniel Dennett calls it the “fame in the brain” analogy, which I think is probably one of the best descriptions. It also explains what makes something conscious. Consider that if you met a famous person, there would be nothing intrinsic about them that exuded fame. Fame is something they have in virtue of large numbers of people knowing who they are. In the same manner, content becomes conscious when all or most of the various systems throughout the cortex are reacting to it.

      The thing about NDEs and OBEs is that the effects you mention never seem to get captured or verified in a rigorous scientific manner. The scenarios are typically reconstructed based on after the fact interviews of patients and medical personnel. Of course, no one designs an emergency room to capture evidence for NDEs or OBEs. The goal is to save lives and health. But it means we lack the kind of rigorous evidence needed to establish that these are anything other than subjective experiences, albeit ones that are psychologically very powerful.

      The idea that QM has something to do with consciousness is a popular one. But nothing in mainstream neuroscience evidence pushes things in that direction. Even Christof Koch, who seems prepared to explore unusual biophysics, doesn’t see it as a promising direction. Of course, there could be evidence tomorrow that does push in that direction, but until then the idea is a lot more popular outside of neuroscience rather than within it.

      1. First I’d like to point out that you have skipped over a number of the points I have made – not, I think, because they were trivial, but because they were (I think) hard problems, if you will pardon the pun.

        Here in particular is a quote I took from one of your links:

        Not so, because even if this experiment was replicated 50 times with the same results and using the best experimental conditions, there is no way to know if this effect is a consequence of (1) a precognitive effect, (2) a psi influence from the participants, (3) a psi effect from the experimenter on the computer that chooses the target, and (4) many other options!

        I pointed out that this is an unreasonable excuse for ignoring the Ganzfeld effect. Clearly, it is not easy to disentangle questions of causality when it comes to psi (maybe that is a fundamental feature of the expanded worldview that psi implies), but that doesn’t alter the fact that the Ganzfeld result (33% right given four choices) is in stark contrast with the result predicted by standard materialist ideas.

        Likewise, Dean Radin’s presentiment effect (short range precognition if you like) is inconsistent with the usual assumption that the laws of physics never allow causality to be reversed, even though with the exception of thermodynamics the laws are time-symmetric. This is a much-repeated experiment and even a detailed explanation of ‘what went wrong’ would be valuable for experimenters who use similar equipment.

        Perhaps I’ll let you respond to that and then loop back to your comments above.

        Finally, for anyone who is interested to discover the scale of the challenge to the conventional understanding of the mind-body relationship, the following book takes some beating:

        It is not a scholarly book, but it provides a splendid overview.

  16. From the link you posted – [quote]This absence of consensus is related to the difficulty of drawing firm conclusions from the results of psi research. Indeed, they represent an anomaly (Rao and Palmer, 1987) because there is currently no scientific model – based on physical or biology principles – to explain such interactions even if they exist (Kuhn, 1962). [/quote]

    Isn’t this the core issue?

    I suggest that there is an information science approach that can offer a process model, putting things into a framework for research. The key is not the elusive “metaphysical consciousness”, but a practical grasp of what it means to experience an understanding – in a biological context.

    The units of measure associated with outcomes can quantify the resultant change in probabilities for activities related to a mental understanding. Organisms understand (formally) that environmental affordances activate changes, both informationally and physically.

  17. So, I’m not sure what’s happening here, but this old post has suddenly attracted a lot of attention from people who want to talk about paranormal stuff. Normally I’m not strict about post topics, but I have no interest in a lot of discussion about paranormal topics. So if someone has shared this post with you with the promise of that type of discussion, I’m sorry, but it’s not a topic I’m interested in spending a lot of time on. I’m sure there are plenty of other sites where that type of discussion would be welcome.

  18. You got my interest when you asked about a minimum unit of consciousness. I agree it’s too vague a notion to work with. But there are units of informational change, and if mind is to be quantified, they seem a good start. I would say to follow the creation of mutual information in how biological communication works.
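
    In case a concrete unit helps, mutual information between two discrete variables is straightforward to compute (a generic textbook calculation; the toy joint distribution below is made up purely for illustration):

    ```python
    # Mutual information I(X;Y) in bits for a discrete joint distribution.
    import math

    # p(x, y) for a toy "signal" X and "response" Y
    joint = {("food", "approach"): 0.4, ("food", "ignore"): 0.1,
             ("none", "approach"): 0.1, ("none", "ignore"): 0.4}

    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p   # marginal p(x)
        py[y] = py.get(y, 0.0) + p   # marginal p(y)

    mi = sum(p * math.log2(p / (px[x] * py[y]))
             for (x, y), p in joint.items() if p > 0)
    print(f"I(X;Y) = {mi:.3f} bits")  # > 0: the response carries information
    ```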

    Consciousness seems passive – biology runs on understanding affordances.

    Sorry if this is off-topic.

  19. Well, the most interesting discussions are usually to be had between people with opposite views. However, if you think the time has come to finish our discussion, I will reluctantly accept that. Alternatively, since you presumably moderate this site, you could block the other contributors but let our discussion proceed.

    1. I prefer not to block people except in egregious cases, but I’m not going to respond to any more paranormal comments. I’m fine with continuing to discuss consciousness from a scientific or analytic philosophy perspective. But I generally avoid writing about NDEs, OBEs, and other paranormal topics for a reason. I’m a skeptic, but a busy one. I just don’t have the time, energy, or interest to go down all those rabbit holes, or put up with the inevitable accusations of close-mindedness because of it. I’d rather discuss and debate the topics I am interested in.

  20. [quote]
    Even “global workspace” can be misleading. It implies the workspace is a location somewhere, but it isn’t. It’s often described as content winning the competition to be “broadcast” throughout the brain, which works for a quick and dirty explanation, but becomes problematic when trying to understand it in terms of neural structure. The best way to think of it is that to enter the workspace is to have causal effects throughout the brain.
    [/quote]

    I can’t get a clear picture from that as to what the global workspace actually is at the neural level. I mean, if one module in the brain accesses the concept “Blue sky means fine weather”, does that mean it immediately signals the fact that it has used that concept – because without such signalling the ‘fame’ couldn’t build up? That seems to involve an awful lot of brain activity. Where are the fame values stored?

    There was a time when all sorts of parallel computer architectures were discussed. They seem to have lost popularity because ultimately there would be a bookkeeping bottleneck where all the parallel speed was lost. There seems to be an interesting analogy here.

    1. The big thing to understand about the cortex is that the regions are all interconnected, with reciprocal connections between them. That means when a circuit in a particular region fires, it cascades to other regions. If the signals to the other regions “succeed” in causing circuits there to fire, it will involve sending signals back to the first region. The result is a recurrent connection between them, a binding, where, at least for a time, they stimulate each other.

      Most of the time, these dynamics only affect a few regions, so the processing never rises to the level of being anything that leaves an enduring memory trace or is reportable. But if enough regions join in the resonant binding, including the hippocampus so the activity is likely to be reinforced into long term memory, and the relevant prefrontal regions so it can be reported, then we label that activity as having entered the global workspace.

      That’s a description of attention and is probably not the full story. There are probably abstract models of the above process used by the executive systems to influence attention in a top down manner. This is what Michael Graziano’s attention schema theory is about. This attention schema may be part of what gives us the impression of phenomenality, although there are probably other factors involved as well.

      So for your “blue sky means fine weather” example, the attention schema may be used by your executive systems to decide to look up at the sky, where the concepts sky, blue, weather, and fine may then be triggered in various modules and then end up bound together in these reciprocal resonant firing patterns.
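
      To make those dynamics a bit more concrete, here’s a toy “ignition” sketch (my own illustration of the general idea, not a model from the GWT literature). Regions excite each other through reciprocal links, and content counts as being in the workspace only when enough regions end up reverberating together:

      ```python
      # Toy sketch of workspace "ignition": regions with reciprocal links
      # excite each other until activity reverberates across most of them.
      links = {"vision": ["concepts"],
               "concepts": ["vision", "memory", "prefrontal"],
               "memory": ["concepts"],
               "prefrontal": ["concepts"]}
      activity = {region: 0.0 for region in links}
      activity["vision"] = 1.0  # a stimulus drives one region

      for _ in range(10):
          new = {}
          for region in links:
              # sum of activity from regions that project to this one
              incoming = sum(activity[src] for src, outs in links.items()
                             if region in outs)
              new[region] = min(1.0, 0.5 * activity[region] + 0.4 * incoming)
          activity = new

      # "In the workspace" = sustained, widespread reverberation
      print(sorted(r for r, a in activity.items() if a > 0.5))
      ```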

      Parallel processing in computing is still very much a thing. Chances are the device you’re using right now does it to some extent. It’s just that the details are now handled for us so we don’t have to think about them ourselves.
