Mark Solms’ theory of consciousness

I recently finished Mark Solms’ new book, The Hidden Spring: A Journey to the Source of Consciousness. There were a few surprises in the book, and it had what I thought were strong and weak points.

My first surprise was Solms’ embrace of the theories of Sigmund Freud, including psychoanalysis. Freud’s reputation has suffered a lot over the decades. Solms thinks this is undeserved. He also complains about the behaviorist movement, which largely arose as a reaction to Freudian-type theories, an attempt to put psychology on a more empirical footing. In Solms’ view, this came at the cost of removing the first-person perspective from the field, something he doesn’t think has been adequately corrected even in the post-behaviorist era.

What wasn’t a surprise is Solms’ belief that consciousness is rooted in the brainstem, something he’s well known for. In particular, he sees it centered on three regions in the midbrain, which he refers to as the “decision triangle”: the PAG (periaqueductal gray), the superior colliculi, and the midbrain locomotor region. His reasoning is similar to Bjorn Merker’s idea of consciousness as the final integration for action, with this triangle serving as the final decision point. The PAG in particular is the center of the action.

However, Solms’ views are nuanced. He sees consciousness rooted in affect, in particular the conscious feelings of emotions and drives, which in his view originate from and terminate back to these regions. But the consciousness he sees here is affect consciousness. He acknowledges that perceptual consciousness is a cortical phenomenon, albeit one that only exists when the cortex is suitably aroused by the RAS (reticular activating system), another brainstem region.

This is a minority view in neuroscience, although the differences with the mainstream are also nuanced. All neuroscientists agree that the cortex is aroused by the RAS. They also agree that the most basal drives originate from the brainstem, and even that the final integration for action takes place in the midbrain. But most see conscious experience as a cortical phenomenon, with subcortical forebrain regions like the amygdala, nucleus accumbens, and hypothalamus, as well as the insular, cingulate, and orbitofrontal cortices, playing major roles in affective feelings.

In general, I didn’t feel like Solms adequately engaged with the reasons for these mainstream views. He largely went the route of strawmanning them, saying that they only exist due to “theoretical inertia”. Anil Seth in his review of the book linked to some of the reasons. I think Solms passed up an opportunity by not engaging with the broader literature.

It’s worth noting that whether or not consciousness requires a forebrain has little bearing on animal consciousness. The evidence from ontogeny, the fossil record, and model organisms all shows that the forebrain-midbrain-hindbrain architecture arose very early in vertebrate evolution. Any fish you commonly think of has a forebrain, including a pallium, the covering over the forebrain that’s the precursor to the mammalian cortex. The idea that the brainstem is the most ancient structure is a common misconception, one even some scientists appear to buy into.

Anyway, Solms also sees Karl Friston’s free energy principle as a major part of his theory. He gives one of the best descriptions of that principle that I’ve seen. Unfortunately my understanding of it remains somewhat blurry, but my takeaway is that it’s about how self-organizing systems arise and work. He identifies four principles of such systems:

  1. They are ergodic, meaning they only permit themselves to be in a limited number of states.
  2. They have a Markov blanket, a boundary between themselves and their environment.
  3. They have active inference, that is, they make predictions about their own states and the environment from that environment’s effects on their Markov blanket.
  4. They are self preservative, which means minimizing their internal entropy, maintaining homeostasis, etc.

Point 3 is understood to involve active Bayesian processing, in other words, predictions. All of which leads us to the predictive theory of the brain, which I think is where Solms is at his strongest. Perception involves the more central regions making predictions, which propagate toward the peripheral (sensory) regions. The peripheral regions compare the predictions with incoming sensory information and send back prediction-error signals. This happens across numerous layers. We see what we expect to see, with the incoming information forcing corrections.
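To make the scheme concrete, here’s a minimal toy sketch (my own illustration, not Solms’ or Friston’s formalism) of a prediction being corrected by error signals:

```python
# Toy predictive-processing loop: a central estimate serves as the
# prediction; the periphery compares it with the sensory signal and
# sends back an error, which corrects the estimate. All values are
# illustrative.

def predictive_update(estimate, sensory_input, learning_rate=0.1):
    prediction = estimate                    # top-down prediction
    error = sensory_input - prediction       # bottom-up prediction error
    return estimate + learning_rate * error  # correction by the error signal

estimate = 0.0                 # initial expectation
for signal in [1.0] * 50:      # a steady, unexpected sensory stream
    estimate = predictive_update(estimate, signal)

print(round(estimate, 3))      # converges toward the signal: 0.995
```

A full predictive hierarchy would stack many such layers, with each level’s estimate serving as the “sensory” input for the level above it.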

Solms notes that a self-evidencing system receiving information that violates its expectations can react in one of three ways.

  1. It can act to change conditions to bring the signals more in line with its expectations.
  2. It can change which representation(s) it’s currently using to make better predictions. This is perception.
  3. It can adjust the precision of the predictions to more optimally match the incoming signal.
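As a rough sketch of how those three options might be separated (the function name and thresholds here are my own invention, not Solms’):

```python
# Hypothetical dispatch between the three responses to a surprising
# signal: act on the world, revise the model (perception), or adjust
# the precision of predictions. Thresholds are arbitrary placeholders.

def respond_to_surprise(predicted, observed, precision):
    error = abs(observed - predicted)
    surprise = error * precision      # precision-weighted prediction error
    if surprise < 0.1:
        return "ignore"               # within expectations: no response
    if error > 1.0:
        return "act"                  # option 1: change the conditions
    if precision > 0.5:
        return "update model"         # option 2: switch representations
    return "adjust precision"         # option 3: what Solms calls consciousness

print(respond_to_surprise(predicted=0.0, observed=0.3, precision=0.9))
# prints "update model"
```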

Solms identifies the last as consciousness. This strikes me as essentially learning, which is in the same ballpark as Simona Ginsburg and Eva Jablonka’s theory, although Ginsburg and Jablonka require a more sophisticated form of learning (unlimited associative learning) than what Solms appears to be focusing on. The main point here for Solms is that this is the origin of a conscious feeling, an affect, which again is what he considers the root of consciousness.

Toward the end of the book, Solms provides an extended critique of David Chalmers’ description of the hard problem of consciousness. His main point is that Chalmers overlooked affects in his deliberations. If he hadn’t, maybe he wouldn’t see the problem as quite so daunting. Given that the hard problem is often phrased along the lines of, “Why does it feel like something to experience X?”, I think Solms has a point. Although in my experience, talking about affects usually isn’t seen as sufficient by those troubled by the hard problem.

Solms finishes up by noting we won’t know whether we’ve solved the problem of consciousness until we can build an artificial consciousness. So he’s working on a project to do just that, incorporating the free energy principle. What he describes sounds like it will be a sort of artificial life. He emphasizes that intelligence isn’t the goal, just a self-evidencing system concerned with its own survival. The problem will be finding a way to conclusively demonstrate success despite the problem of other minds.

If he succeeds, it sounds like he will immediately turn it off and try to patent the process to prevent it from falling into commercial hands. Aside from the ethical issues, he notes the danger in building self concerned systems. I usually think fears about artificial intelligence are overblown, but in this case, I agree with him on the danger. The good news is I don’t know how useful such systems would be for most commercial purposes anyway. Do we really want self driving cars or mining robots being worried about their own survival?

There’s a lot of interesting stuff in this book. I do think Solms makes a good point that affects, conscious feelings, are often overlooked in theories of consciousness. And I agree with him on their crucial role. But that role gains its power by the reactions of the perceptual and executive systems. Without those reactions, affects are little more than automatic action programs. They only become affects, conscious feelings, in a conscious system, which means making them the foundation of consciousness is a bit circular.

All of which brings us to a point I often return to, that consciousness is a complex phenomenon, one that can’t be reduced to any one particular property. Unless of course, I’m missing something?

This brief summary of Solms’ views omits a lot. If you’re interested in knowing more, aside from reading the book, he has his own blog, well worth checking out.

52 thoughts on “Mark Solms’ theory of consciousness”

  1. Yikes. So many interesting points. Gonna start with this:

    I think the main difficulty that most people have is that they think consciousness is monolithic. They think an individual has one consciousness, and so there must be one definition of consciousness which is correct. It’s kinda like the intuitive idea of “life”. An individual has one life, and so there must be one definition of life. The fact is we are made of many “lives”. Each cell in our body is a “life”, doing living things. We even contain transient lives with completely different (although not necessarily independent) genetics. “I contain multitudes.” (A book by Ed Yong, but apparently also a Dylan song.)

    I think consciousness is similar to life. I think the brain stem is doing conscious-type things, and so has its own consciousness. I think systems combine to do more complicated conscious-type things, creating meta-consciousness. I think it is just not useful to say only the more complex things are consciousness.



    1. I agree with the life comparison. But to your point, the cell is generally considered the minimum unit of life. Proteins, DNA, or lipids are usually not regarded as alive. Although viruses complicate this picture, since they actively reproduce but don’t maintain their own homeostasis.

      What would you say is the minimum unit of consciousness? Can we meaningfully talk about a neuron being conscious? Whatever that minimum unit is, what distinguishes it from its components?


      1. Actually, I’m not so sure about the cell being the minimum unit of life. I’m pretty sure some folks talk about metabolism happening in the “soup”, and the development of the enclosing lipid bilayer was just a step along the way to complexifying. I’ll see if I can find a reference. (Gonna start w/ Terrence Deacon.)

        I think/propose the minimum unit of consciousness, the psychule, is a two-step process: the creation of a symbolic representation (vehicle) and the interpretation of said representation (vehicle). Given this, I would not say a neuron is conscious, as it is only part of one of the two necessary processes, although it could be either of the two, and in fact, is essentially both, serially. I think the major function of neurons is to interpret symbolic representations (neurotransmitters) by creating new symbolic representations (more neurotransmitters). It’s possible that there are psychules internal to the neuron, but I don’t know of any yet.

        The question becomes: to what entity do you ascribe the consciousness associated with a psychule? I ascribe the consciousness to the system which creates/coordinates both mechanisms, the symbol generator and the interpreter. Any given representation can have multiple interpretations, but two mechanisms can be coordinated to generate an “intended” interpretation.

        I’ll point out here that there are hierarchies of psychules. So my communicating this response to you counts as a psychule (assuming you read and interpret it). That would make us, you and me, potentially conscious as a unit, but the question is what value do you get by recognizing that. It would be more useful ascribing that consciousness to our community of English speakers.



        1. The soup scenario would strengthen the argument that viruses are alive. If there are entities that sustain and reproduce themselves while depending on the environment for their homeostasis (either in a soup or an invaded cell), and we call that life, it does reduce the unit of life. But it also raises the question of where to draw the line. Are viroids alive? What about prions? Maybe a cutoff is whether it evolves, which would definitely include viruses, but I don’t think prions evolve.

          What counts as a symbolic representation (vehicle)? And what counts as interpretation?

          If two black holes merge and generate a pattern of gravitational waves, do the waves count as a symbol of the merger? If a billion years later LIGO detects the waves and generates a report, does that count as an interpretation?

          Or do we have to have a more permanent or repetitive arrangement? If so, does my laptop’s utilization of a site hostname as a reference to this site count as a symbolic representation? And does its utilization for various purposes count as interpretation? Would that mean my laptop and site together form a consciousness? Or your device and my site?

          I’m deliberately trying to avoid including people in these scenarios, because I think once we include something commonly accepted as conscious, it clouds the criteria.


          1. Actually, the answers to what counts as alive and what counts as a representation are related. The key ingredient would be teleonomic purpose, what Ruth Millikan refers to as proper function.

            So I say that a symbolic representation is a physical thing generated for the purpose of carrying mutual information w/ respect to a specific pattern, with part of that “purpose” being that the symbol be interpreted w/ respect to that pattern, as opposed to being interpreted w/ respect to mutual information with some other pattern.

            An interpretation by a mechanism is an action generated by the mechanism in response to the recognition of the symbol/vehicle, thus linking the action to a pattern with which the symbol shares mutual information.

            Note that any representation shares non-zero mutual information with many patterns. My go-to example is a sign hand-written in English in a remote Japanese village saying “Come inside for great food!”. This symbol has very high mutual info with respect to there being a restaurant inside, but it also has (probably somewhat less) mutual info with respect to there being a fluent English speaker inside. The sign could be interpreted either way, but [hey! brand new thought here] the former could be considered part of the psychule, as that was probably the intention of the person creating the sign.
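            A quick numeric sketch of that point (the function is standard discrete mutual information; the joint probabilities are invented purely for illustration): the same sign carries non-zero information about both patterns, just more about one than the other.

```python
# Toy mutual-information check: one sign, two candidate patterns.
# The joint probabilities below are invented purely for illustration.
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# (sign present, restaurant inside): the sign is a strong cue
sign_restaurant = {(1, 1): 0.40, (1, 0): 0.05, (0, 1): 0.05, (0, 0): 0.50}
# (sign present, fluent English speaker inside): a weaker but real cue
sign_speaker = {(1, 1): 0.30, (1, 0): 0.15, (0, 1): 0.15, (0, 0): 0.40}

print(mutual_information(sign_restaurant) > mutual_information(sign_speaker))
# prints True: both are non-zero, but the restaurant reading carries more
```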

            The gravitational waves generated by merging black holes would be considered indexical rather than symbolic. Interpreting the waves as meaning “black holes colliding” is akin to what neurons do. Neurons “interpret” neurotransmitters as “yep, that neuron just fired”. A symbol is an arbitrary construct which has no connection to the thing symbolized beyond the intent of the generator. So the gravitational waves would not be a symbolic representation unless you postulate some alien mechanism trying to communicate by banging black holes together.

            [this is great practice for me, so, next question]


          2. I don’t really have any additional questions at this point. Although some may occur to me later.

            I think my reaction, which is similar to the reaction I have with most liberal conceptions of consciousness, is that it seems to be missing a lot of characteristics of the systems we intuitively think of as conscious, such as perception, feelings, attention, or volition. And it lets a lot of things into club consciousness we don’t intuitively think of as conscious. Is there an argument on why that shouldn’t be a concern? (Well, I guess this is a question after all.)


          3. Of course this description is missing a lot of characteristics of the systems we intuitively think of as conscious, just as a description of organic molecules is missing characteristics of living things. My point is whatever you or someone else decides is “consciousness”, it’s going to be made of psychules organized in various ways.

            For example, I’ve talked about unitrackers, which are essentially pattern recognition units. A unitracker is a prime example of a mechanism that generates a symbolic representation. Unitrackers can also be interpreters. You can have unitrackers whose outputs are actions, like “duck”. They can take input from other unitrackers, such as those for suddenly looming objects. The interpretation would be something to the effect of action:duck.

            I’ve also talked about semantic pointers. These are also mechanisms which potentially take inputs from one or more unitrackers and generate symbolic representations. Semantic pointers seem to me to be a prime candidate for a global workspace. Multiple interpreters can respond to/interpret the same representation.

            The main reason that I think understanding psychules is useful is that it provides a physical basis of “aboutness”, and when you take it to the level of unitrackers you get an explanation of “qualia”.



          4. In terms of the psychules, wouldn’t you say that any information processing system is going to include them?

            I do think unitrackers and semantic pointers are productive concepts, although my conception of the semantic pointers is much more distributed and decentralized than yours. In terms of aboutness, it seems like they’re explaining the details, but not getting at what these relationships fundamentally are.

            For that, the best explanation I’ve seen so far is prediction (or inference if you prefer). It’s what converts a homeostatic system to an allostatic one. And although I’m still fuzzy on the overall approach, it’s the part of the free energy principle that makes the most sense to me.
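            A toy contrast of what prediction buys here (the disturbance schedule and gains are invented for illustration): a purely reactive homeostat corrects only after a disturbance hits, while an allostatic controller pre-compensates for the disturbance it predicts.

```python
# Compare reactive (homeostatic) regulation with predictive
# (allostatic) regulation of a single variable, e.g. temperature.
# anticipation=0.0 is pure reaction; 0.8 means 80% of each
# disturbance is predicted and pre-compensated. Numbers are invented.

def regulate(anticipation, disturbances, setpoint=37.0, gain=0.5):
    temp, total_error = setpoint, 0.0
    for d in disturbances:
        temp -= anticipation * d             # act before the disturbance
        temp += d                            # disturbance hits
        total_error += abs(temp - setpoint)  # accumulated deviation
        temp -= gain * (temp - setpoint)     # reactive correction
    return total_error

disturbances = [2.0, 2.0, -1.0, 0.5]
print(regulate(0.8, disturbances) < regulate(0.0, disturbances))  # prints True
```

The predictive controller accumulates far less deviation from the setpoint, which is the sense in which prediction upgrades homeostasis to allostasis.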


          5. In fact I would say that any information processing system includes psychules.

            As for a global workspace, I’m trying to figure out how a distributed system might work. To me it seems like the options are (to use a theater metaphor) a big screen on the wall (central) or a small screen at every seat (distributed). How is an observer going to get the information from multiple seats?

            Re free energy principle, see my comment below.



        2. My side note.
          >”a symbolic representation is a physical thing.”
          Not always. For example, numbers are a symbolic representation of something, yet those numbers are not a physical thing. 2 + 2 = 4. That is knowledge. The bearer of this knowledge is the paper, a physical thing. The knowledge is physical too, as it is written in the form of ink dots. However, a symbolic representation or interpretation of this knowledge depends on the interpreter and may not be a physical thing.
          A conscious child who has not yet studied numbers and arithmetic could interpret this knowledge very differently than us. That is an important point by itself. A symbolic representation or interpretation depends on the prehistory of the interpreter.


          1. Hey Victor. I understand what you’re saying, but when I use the term for describing the minimum unit of a conscious-type process, I mean a physical thing. It’s what Charles S. Peirce calls a sign vehicle, specifically a symbolic sign vehicle, as opposed to an indexical or iconic sign vehicle.

            And I agree the interpretation depends on the history of the interpreter, as I explain in the further comments above.



        1. Hi Dr. Michael,
          So, the question that comes to mind is, how would you describe the experience of red to someone born blind? Or a searing toothache or backache to someone who’s never experienced one? These seem like experiences where language commonly fails, but they remain experiences nonetheless. (I do think it’s possible in principle to provide such a description, but it would be an extremely long and complex one, and someone who had experienced these wouldn’t need it.)


          1. Are those experiences conscious ones though? I think we can experience seeing red as in:

            1. the experience of seeing light of a certain wavelength hitting the retina and then being processed by the brain


            2. “I saw red and it felt like this…”

            The first one is an experience that remains below consciousness. As organisms we are aware of the light but I would not say we are conscious of it.

            In 2, we try and describe the sensation of seeing red. I think that 2 is the conscious thought – the closest we will ever come to describing what seeing red is. But 1 is the actual act of seeing red. I do not think it is possible to describe this even to ourselves let alone anyone else and certainly not a blind person. I do not think language ‘fails’ us in these circumstances. I just think that language is the only way we can be conscious.


    1. Reading that paragraph in isolation reminds me of concepts I recently read about called causal structuralism and holism, the idea that things can only be understood in relation to other things, and how they’re affected by and affect those other things.


  2. Slight tangent: whenever I meet a psychologist, I always ask what they think of Freud. The most generous response I ever got was, “Well, he got the discussion started, and he deserves some respect for that.”


    1. And that’s really bending over backward to be generous. Psychology got started well before Freud. Figures like Hermann von Helmholtz, Wilhelm Wundt, or William James all preceded him, and most cognitive scientists today find their insights more accurate. I guess we could say Freud drummed up public interest in the field, but that was a double-edged sword, and seems like it led to the behaviorist movement as a reaction to repair the field’s reputation.


  3. I do like Friston’s work, although I believe it to describe subconscious, not conscious processing. My understanding is that increasing precision of predictions (point 3 in your second list) corresponds to attention, and does not in itself account for consciousness, although attention is a component of a description of how consciousness works.

    In my account, consciousness does not arise until the brain turns that attention back upon itself and can selectively attend to (increase the precision of prediction of) a representation of its own subconscious processing, as though it was external sensory data.


    1. I’m undecided about Friston’s free energy principle. On the one hand, it seems true, but almost trivially so. On the other, I have this vague feeling that I’m not getting it with regard to his theory.

      You might be right on point 3. The learning aspect was my interpretation, but the language is maddeningly abstract. The relationship between attention and consciousness is a very controversial one. Many insist they’re separate. My current take is that bottom-up attention is separate from consciousness, but not top-down attention. But similar to consciousness, attention is really a vast and complex phenomenon, happening at numerous levels in the brain and at numerous stages of processing, so even pretending that there’s a clean delineation between top-down and bottom-up attention may be misleading.

      Your account seems similar to the historical view of consciousness, which basically makes it equivalent to introspection. It’s the version John Locke described. That implies that consciousness is only in a limited number of species, possibly only among humans, and maybe to a lesser degree with great apes and other primates, and possibly cetaceans.

      Usually people move past that view by focusing on the processing that we actually introspect, and note that similar types of processing appear in other animals. Of course, the further you move away from humans, the more limited that commonality becomes.


  4. Given that the hard problem is often phrased along the lines of, “Why does it feel like something to experience X?”, I think Solms has a point.

    But are you for it or against it?

    That was a joke – it’s because we’re for or against things (we have affect about them) that it feels like something to experience them. At least, that’s what I inferred Solms is saying, based on your description. Am I getting that right?


    1. You are. Solms uses more traditional language regarding affects, such as valence (seeing something as good or bad) and arousal. The arousal part he sees as crucial, since that’s what effectively powers the perceptual processing in the cortex.


  5. Regarding the Friston stuff, I’ve spent a fair amount of energy lately trying to figure out how his stuff fits with mine, and here is what I got:

    1. (Ignoring ergodicity. Not sure what this has to do with anything.)

    2. Every physical thing has a Markov blanket, so any mechanism does also. The “sensory states” are the inputs and the “action states” are the outputs. Note: this applies to what has been called the “Friston blanket” as opposed to a Markov blanket as described by Pearl.

    3. Not all things w/ Markov blankets do active inference. But active inference is what unitrackers do.

    4. Homeostasis is where teleonomic purpose comes from in living systems. This does not mean consciousness requires living systems. You can get teleonomic purpose from artificial systems. All you need is a system that has a goal (moves the environment toward a specific state), can create/organize mechanisms, and can select a mechanism that moves the environment toward the goal state. It’s just that such a thing first appears in living systems.

    Re Solms’ contribution: [what the heck is “self evidencing” anyway?]
    Those three options described in the OP are essentially describing how unitrackers work.


    1. My understanding of the Friston stuff remains mushy, so take these replies with that in mind.

      1. I think the main thing with being ergodic is that the system constrains itself to a limited number of states. A storm won’t do anything to stop its internal state from degrading, such as temperature or wind speed moving in a direction that would lead to its demise. That said, this seems like a detail of 4.

      2. I’m not sure about everything having a Markov blanket. That might be true of the type discussed in the Wikipedia article about them, but Solms seems to make them more sophisticated, though that might be more the Friston blanket than the original Markov blanket conception.

      3. Sounds plausible.

      4. Here’s Solms’ first mention of “self-evidencing”. It’s mostly synonymous with “self-organizing”.

      As we saw in Friston’s soup experiment, generative models come into being with self-organising systems. For that reason, they are sometimes called ‘self-evidencing’ systems, because they model the world in relation to their own viability and then seek evidence for their models. It is as if they say not ‘I think, therefore I am’ but ‘I am, therefore my model is viable’.

      Solms, Mark. The Hidden Spring: A Journey to the Source of Consciousness (p. 173). W. W. Norton & Company. Kindle Edition.


  6. Nice review Mike … thanks! My comments are about Solms’ brainstem/cortical consciousness ideas.

    You wrote: “The idea that the brainstem is the most ancient structure is a common misconception …”. Yet the alternative—that the entire forebrain-midbrain-hindbrain structure emerged complete at one go—is difficult to imagine. How is that possible? It seems much more likely that the brainstem evolved first, followed by the forebrain (pallium, cortex).

    But if Solms’ view is correct that there are two sources of consciousness production, then consciousness—the production of conscious feelings—evolved twice (!) and in two distinctly different brain structures. This seems most unusual in that it would duplicate an existing functionality in two separate and unique structures. Are there any other cases in evolutionary anatomy where that took place? In my view, it’s more likely that the forebrain evolved to produce additional conscious content that, transmitted to the brainstem, was then “displayed” (made conscious) by already-existing brainstem functionality.

    One wonders too how a newly evolved forebrain consciousness could integrate seamlessly from the get-go with existing brainstem-produced consciousness in a unified stream with a shared streaming rate. I don’t believe anyone has considered this difficulty.

    Regarding the claim that the contents of brainstem consciousness were (and apparently still are) “feelings of emotions and drives,” it’s difficult to believe that sensory content wasn’t present until forebrain-produced consciousness evolved. That implies that a pre-forebrain organism could feel and be motivated by fear and other emotions but not feel bodily sensations like touch, pain, temperature and so on.

    If Solms actually uses the terms “affect consciousness” and “perceptual consciousness,” he’s postulating multiple consciousnesses as opposed to a singular consciousness with both affect and perceptual content. That’s confusing consciousness with the contents of consciousness, which is unfortunately a widespread misunderstanding.

    One more thing: if, as you wrote, “… the hard problem is often phrased along the lines of, ‘Why does it feel like something to experience X?’”, the answer is simply that consciousness (experience) IS feeling. If consciousness is a simulation in feelings of an organism centered in a world, asking why consciousness is what it is makes no sense. Why is a heartbeat alternating heart muscle contraction and relaxation?


    1. Thanks Stephen. I thought you might find this one interesting.

      On the emergence of the forebrain-midbrain-hindbrain structure, I think the right way to think about it is that the first thing to evolve was the chordate nerve cord. Lancelets, which are taken as model organisms for Pikaia, an early Cambrian species, basically just have that nerve cord. Lampreys, which represent a later but still very early vertebrate lineage, have a central swelling at the front of that cord, the early brain. Even in lampreys, the forebrain-midbrain-hindbrain structure is discernible, although it’s not clear how differentiated it is compared to most bony fish species.
      (Note that prosencephalon, mesencephalon, rhombencephalon are alternate terms for forebrain, midbrain, and hindbrain.)

      So, I think the evolutionary sequence might have been something like 1) nerve cord, 2) general swelling of that cord toward the head, 3) differentiation between the front, middle, and back of that swelling. Or there could have been three distinct swellings from the beginning based on selected capabilities and the required connections.

      All that said, scientists aren’t sure why brain features evolved when they did. And it’s worth noting that the forebrains in reptiles, birds, and mammals are 6-10 times (or higher) the size of forebrains in fish and amphibians (relative to the body).

      On the evolution of consciousness, I’d be cautious in regarding it as necessarily one ontological thing. In any case, there are numerous cases of features evolving several times independently. Eyesight, for example, reportedly has evolved dozens of times. And if cephalopods are conscious, then since their evolutionary history forked from ours well before vertebrate brains, consciousness would have had to evolve separately in their case.

      That said, my wording of Solms’ view probably doesn’t accurately capture it for the way you’re looking at it. Here’s the relevant snippet from the book.

      As we saw in relation to blindsight, the superior colliculi’s ‘two-dimensional screen-like map’ of the sensory-motor world, as Merker calls it, is unconscious in human beings.35 It contains little more than a representation of the direction of ‘target deviation’ – the target being the focus of each action cycle – producing gaze, attention and action orientation. Brian White calls it a ‘saliency’ or ‘priority’ map. Panksepp explains that this is how our ‘deviations from a resting state come to be represented as states of action readiness’.36 I cannot put it any better myself.

      Perceptual consciousness of the world around us becomes possible with the help of suitably aroused cortex, which (unlike affective consciousness) is what hydranencephalic children and decorticate animals lack. The superior colliculi provide condensed here-and-now mappings of potential targets and actions, but the cortex provides the detailed ‘representations’ that we use to guide each action sequence as it unfolds. In addition to these highly differentiated images, there are in the subcortical forebrain many unconscious action programmes which are called ‘procedures’ and ‘responses’ – not images. (Think, for example, of the automatised kinds of memory you rely upon to ride a bike, or to navigate the route to a familiar location.) These are encoded primarily in the subcortical basal ganglia, amygdala and cerebellum. Memories are not mere records of the past. Biologically speaking, they are about the past but they are for the future. They are, all of them, in their essence, predictions aimed at meeting our needs.

      Solms, Mark. The Hidden Spring: A Journey to the Source of Consciousness (pp. 140-141). W. W. Norton & Company. Kindle Edition.


      1. Mike, when I said that the production of consciousness evolving twice would be difficult to understand, I was referring to a follow-on second appearance of consciousness in another brain structure.

        “… it would be duplicating an existing functionality in two separate and unique structures”

        … so that both must then somehow be coordinated to produce a unified consciousness.

        A parallel would be for an already sighted mammal to evolve additional eyeballs. I don’t think such a duplication of existing anatomical functionality has ever evolved, so it’s on those who support both brainstem and cortical consciousness to explain that mysterious evolutionary path.


        1. Stephen, I understood your concern. However, I didn’t explain my point very well.

          I think you’re assuming that consciousness can only exist in one location, rather than having components in multiple locations that work together. Consider sensory processing. We know sensory information goes to both the superior and inferior colliculi in the brainstem as well as to the thalamus and sensory cortices. The processing in these locations ends up handling different aspects of the overall sensory task, with those aspects working together.

          Someone who supports both brainstem and cortical consciousness can similarly see them as providing different aspects of consciousness. That’s one interpretation of what Solms is describing. That also seems to match Antonio Damasio’s view. The other interpretation of Solms is that the cortex is providing “helper” functionality to the consciousness in the PAG, which I think is more in line with your view. (I’m not sure which interpretation of Solms is correct, which is why I just quoted him.)

          That said, with evolution, sometimes additional eyeballs happen.


          1. Yes, I’m assuming that conscious feelings are produced (‘displayed’, or made conscious) by a single brain structure. I also believe that the production of conscious content is overwhelmingly cortical, a belief for which there’s abundant evidence.

            Proponents of theories that the ‘display’ of conscious feelings is distributed in any way, whether brainstem-cortical or multi-location-cortical, need to address these points:

            1. Lack of evidence for the cortical production of feelings
            2. No explanation for the unified presentation of consciousness
            3. Inability to explain Libet’s ‘touch’ timing findings

            Also, a careful reading of Oliver Sacks’ Awakenings reveals that the disruptions to the subjective “rate of flow” of the stream of consciousness, including its being stopped altogether, were found via autopsy evidence to be caused by significant viral destruction of the substantia nigra which, as you know, is a region in the midbrain. That finding would lend support to the brainstem ‘display’ hypothesis, but also means that:

            4. Proponents of distributed conscious feeling ‘display’ need to explain how substantia nigra flow regulation could operate across multiple widely distributed ‘display’ regions.

            This issue is related to the unexplained unified presentation of consciousness issue. I’ve yet to encounter any discussion whatsoever of the rate of flow issue by distributed consciousness proponents.

            (I have some crude notes on the flow of consciousness issue from Sacks’ book that I might be able to render presentable and shareable should time and circumstances permit.)


          2. I guess the question is, is there actually a “display” anywhere? I did a post recently on why I think it’s a problematic metaphor.
            In summary, there’s no evidence for it, and the idea seems to downplay the importance of the “audience”, when the utilization of the information is arguably the most important part of what’s going on.

            I’ve never read Sacks. I probably need to since his books are reputed to be a wealth of neurological case studies.


          3. Of course there’s no literal display Mike. As I’ve done before, I used ‘display’ in quotes to indicate it’s a metaphor and shorthand for “the production of a conscious feeling” or, perhaps “made conscious.” There’s no literal display or literal audience for any display. You might recall my Neural Tissue Configuration hypothesis in which a particular configuration of neural tissue IS a feeling … that’s where I’m coming from. Again, nothing is being literally displayed.

            Your misinterpretation apparently caused you to ignore the four significant difficulties I identified for proponents of a distributed production of conscious feelings, both cortical and cortical plus brainstem.

            I’m astonished that you’ve not read Sacks’ Awakenings and I suggest putting it at the top of your reading list. Take notes as you encounter consciousness puzzles. His documenting of pathologies of consciousness provides much food for thought on the subject. Importantly, any theory of consciousness must take these pathologies into account and explain how they could arise.

            For example, referring to consciousness of visual input, those afflicted with a ‘frozen’ consciousness (not literally 32℉ but unchanging) reported this visual scene:

            The still picture has no true or continuous perspective, but is seen as a perfectly flat dovetailing of shapes, or as a series of wafer-thin planes. Curves are differentiated into discrete, discontinuous steps: a circle is seen as a polygon. There is no sense of space, or solidity or extension, no sense of objects except as facets geometrically apposed. … The state is there, and it cannot be changed. From gross still vision, patients may proceed to an astonishing sort of microscopic vision or Lilliputian hallucination in which they may see a dust-particle on the counterpane filling their entire visual field, and presented as a mosaic of sharp-faceted faces.

            Perhaps what is being described is pure brainstem-created consciousness that’s completely lacking the rich visual content resolution provided by the visual cortex.

            Another description:

            I had just started running my bath,’ she answered, ‘there was about two inches of water in the bath. The next thing – you touch me, and I see there’s this flood.’ As we talked more, the truth was borne in; that she had been ‘frozen’ at a single perceptual and ontological moment: had stayed motionless at this moment, with (for her) just two inches of water in the bath, throughout the hour or more in which a vast flood had developed. … [A]ll of these observations indicate that she was truly and completely de-activated during her standstills. But it was also apparent that her standstills had no subjective duration whatever. There was no ‘elapsing of time’ for Hester during her standstills; at such times she would be (if the logical and semantic paradox may be allowed) at once action-less, being-less, and time-less. … and this, because for her no time had elapsed.
            This also supports my idea that we externalize the stream of consciousness and think of it as flowing time in the world.

            Regarding the speed of the stream/flow of consciousness (my challenge #4 of yesterday):

            Her symptoms, at first, were paroxysmal and bizarre. She would be walking or talking with a normal pattern and flow, and then suddenly, without warning, would come to a stop—in mid-stride, mid-gesture, or the middle of a word; after a few seconds she would resume speech and movement, apparently unaware that any interruption had occurred. … In the months that followed, these standstills grew longer, and would occasionally last for several hours; she would often be discovered quite motionless, in a room, with a completely blank and vacuous face. The merest touch, at such times, served to dissipate these states, and to permit immediate resumption of movement and speech.


            Her movements were extraordinarily quick and forceful, and her speech seemed two or three times quicker than normal speech; if she had previously resembled a slow-motion film, or a persistent film-frame stuck in the projector, she now gave the impression of a speeded-up film – so much so that my colleagues, looking at a film of Mrs Y. which I took at this time, insisted the projector was running too fast. Her threshold of reaction was now almost zero, and all her actions were instantaneous, precipitate, and excessively forceful.

            A great read … highly recommended.

            And now, Mike, what do you make of the four challenges I listed yesterday? All four, by the way, do not apply to brainstem-only consciousness production.


          4. Stephen,
            I actually understood you didn’t mean the term “display” literally, which is why I also quoted it and talked in terms of the problems with the metaphor. Saying there’s a small central place in the brain that’s conscious while the rest isn’t, I think, is the issue. The problem with that kind of thinking is that you’re then forced to confront why only that region is conscious. Does it have a central place within it that’s the conscious part? And does that central place itself have its own place?

            To avoid an infinite regress, at some point we have to stop and explore the actual non-conscious mechanisms of consciousness. To test whether we’re actually doing that, I think we have to ask whether that exploration is giving us any insight into being able to create an artificial consciousness. If not, then we’re not really exploring the mechanisms.

            On your challenge points, 2 and 4 seem tied to the central conscious region. On 3, I’m not sure what in particular about Libet’s results you mean. (In general, I think his results tend to be over-interpreted.)

            On 1, I guess it depends on exactly what you mean by “production of feelings”. There is plenty of evidence that the experience of feelings is heavily affected by injury to the amygdala (Klüver-Bucy syndrome), orbitofrontal cortex (prefrontal lobotomies), anterior cingulate (akinetic mutism and changes in pain perception after lesions), insula, hypothalamus, nucleus accumbens, etc. Often the circuits involved do run from the PAG up through these regions and into the cortex. It’s hard to draw sharp functional boundaries. What we can say is that many of these regions seem crucial but not sufficient.

            Thanks for the Sacks recommendation and quotes!


          5. I’m completely puzzled by your first two paragraphs Mike. I would never say that “there’s a small central place in the brain that’s conscious while the rest isn’t.” In fact, I would not characterize any part of the brain as “being conscious,” as you seem to believe I have. My claim is that some brain structure (I favor the brainstem) produces conscious feelings. There’s no possibility of an endless regress and no need to consider “non-conscious mechanisms” of consciousness to terminate your imagined regress. There’s also no need for a theory of the biological production of conscious feelings to provide insight into being able to create artificial consciousness. Where is this illogic coming from?

            Recall my definition of consciousness:

            Consciousness, noun. A biological, streaming, embodied simulation in feelings of external and internal sensory events and neurochemical states that is produced by activities of the brain.

            Your last paragraph’s statement that the feelings we experience are heavily affected by amygdala injury and so on lists some of the “activities of the brain” my definition refers to, but the activities you mention resolve the contents of those feelings. By the “production of feelings” I’m referring to the actual cellular creation of the feeling itself, which I metaphorically referred to as ‘display.’ It’s the end stage that produces the feeling itself, not the content of the feeling.

            Back to that list.

            1. There is no evidence whatsoever that the cortex produces feelings but abundant evidence that the cortex resolves the contents of feelings. Handfuls of the cortex can be removed and consciousness itself remains undisturbed. Only certain contents of consciousness disappear. For a dramatic first person description of content depletion resulting from a stroke see neuroanatomist Jill Bolte Taylor’s “My Stroke of Insight.” No one has evidence that the cortex produces feelings themselves, i.e. consciousness, or we’d all know about it.

            We’ve previously discussed the Merker et al. evidence for the brainstem’s creation of feelings, so there’s no need to repeat that evidence.

            2 and 4. No mechanism has ever been proposed to explain how distributed consciousness, created by specialized cortical regions, ends up as a unified streaming experience. If the visual cortex independently produces visual feelings and the auditory cortex independently produces conscious feelings of sound, and so on throughout the cortical repertoire, then what explains the fact that all of those feelings are experienced in a unified streaming presentation? What explains the rate of the streaming of that unified presentation? No one, including yourself, has proposed any answers to those questions.

            The end-stage creation of feelings by a single subcortical brain structure perfectly explains the unified presentation as well as allowing flow (stream of consciousness) regulation by, for instance, the nearby substantia nigra.

            3. I’ve mentioned Libet’s timing studies before and no one has provided any explanation other than the very mysterious “backward in time reference” which is nearly magical. Libet’s timing experiments have been repeatedly and successfully replicated. In short, if a touch stimulus is applied directly to the cortex, followed by an actual touch to the skin, then the actual physical touch is felt first. If the cortex produces the feeling of the touch, why isn’t it felt immediately and felt before the physical touch?

            The physical touch sensation is conveyed directly up the spinal column to the brainstem before being forwarded to the cortex, so creating the actual touch feeling there would be faster than a cortical stimulation. In addition, the cortical stimulation would trigger the usual time-consuming cortical process of evaluation and “story formation.”

            So all of 1-4 are easily explained by the brainstem’s creation of conscious feelings and none of the challenges have been explained by cortical consciousness creation theories.


          6. Moving forward, Mike, as you post about a specific theory of the creation of consciousness, perhaps you could evaluate the theory on its ability to explain the four challenges I’ve documented.


  7. I’ve been away enjoying myself and also reading this book, but still haven’t quite finished. Some general reactions to the book.

    It seems like the book has three themes or parts to it.

    The early part is somewhat autobiographical and contains most of the discussion of Freud and Solms’ evolution as a neuroscientist. I didn’t realize Solms has done some groundbreaking work on dreams and found them to originate in dopamine circuits – hence, they are in a sense based on wish fulfillment, as Freud suggested. I think people are generally too hard on Freud and forget the state of psychology, and its almost non-existent tools, when he began his work.

    The next part is about the brainstem and the cortex bias of neuroscience – something even Freud fell for. The cortex is a huge part of the human brain, so it is natural to think it must be doing most, maybe all, of the work of consciousness. This is driven in part by the related bias of believing that less complex organisms cannot be conscious since they lack this big chunk of a brain that humans have. My current thinking on the brainstem vs. cortex is that both are potentially conscious when their neurons are oscillating in the proper organized patterns, and that consciousness arises in the shift from digital to analog processing (temporal to spatial information) that is “felt” by the neurons themselves. These proper organized patterns are fueled by neuromodulators, which generate the wake state.

    The third part, which is most of the book, is all about Friston’s free energy theories. I’m still trying to understand this part and may need to do some rereading. I’m not sure how much of it is Friston and how much Solms.

    I’ll probably have my own post on the book eventually.


    1. I noticed you hadn’t been around much lately. Good to see you back.

      I’m not sure what to make of the Freudian psychoanalysis stuff. It’s hard to separate what might simply have been early theories from notions even psychologists of the day thought were dubious. As I noted to J.S. Pailly above, there were psychologists before Freud, and a lot of what they came up with seems more scientifically grounded. But that might just reflect their better reputations today, and a tendency of current scientists to cite them for what they got right while citing Freud for what he got wrong.

      There was an Aeon piece a few years ago that gave a more sympathetic account of Freud’s theories.

      On the cortex and brainstem, I don’t know if you noticed my response to Stephen above. In general, if consciousness requires a forebrain, that doesn’t have the implications for animal consciousness many take it to have. In vertebrates, the forebrain-midbrain-hindbrain architecture arose very early (early Cambrian). As always, I think it comes down to how we define “consciousness”. Often what’s envisaged in the midbrain is an anoetic form. But if we ask, what processing is available for introspection and self report, it seems hard to include the PAG.

      I might have to make another pass on the FEP stuff myself. A lot of it seems more geared toward explaining life overall than consciousness in particular. Although the active inference stuff fits with a lot of the predictive brain theories that are currently ascendant in neuroscience.

      Looking forward to your post! This book covered a lot of material, a lot of which I omitted for space reasons. I’m sure you’ll have noticed aspects I missed or forgot about.


      1. I did see your comment to Stephen above. Still I think part of cortex bias is related to the fact that we humans have a big one.

        Regarding Freud, a lot of people criticize him but haven’t actually read him. There are some pretty remarkable observations scattered throughout his work.

        I’m with you on the FEP stuff. It may explain a lot about how the brain works but I still don’t see it bridging the gap into how consciousness comes about.


      2. BTW, there is an FEP explanation video by Friston himself that only takes 15 minutes and doesn’t require any mathematics.

        Oddly, I saw the link from a Bernardo Kastrup Facebook post. He uses the FEP and Markov blankets to explain alters.

        Still I think it explains something about life, something about the brain, but not exactly how consciousness is required even if it fits into the overall FEP theory.


        1. Thanks. I’d seen this video before, but it made more sense after reading Solms’ account (and others).

          I don’t know. It seems to have value in understanding self organizing systems. But I’m struck by the cases people have cited that appear to falsify it (such as the fact that a system can avoid surprises if it just hides in a hole and starves itself to death), and the explanations of Friston and others that they just needed to take a broader view. Which makes it feel vaguely unfalsifiable in the same manner as moral consequentialism. It just feels a bit too reductive, and I’m usually onboard with reductionism.


          1. I think it captures some aspects of self organizing systems. Hiding in a hole would only lead to the “surprise” of hunger. The system needs to maintain homeostasis internally and externally. I have heard the critique that it is unfalsifiable and there might be something to that.


  8. I didn’t watch (not something I’m interested in right now), but I noticed the name.

    The RI lectures are usually very good and worth watching.

    (FWIW, I rather like Freud’s idea about the Id, Ego, and Super-ego, at least metaphorically. I’ve long been conscious of the “dialog” between them in my mind. I suspect the popular “devil on one shoulder; angel on the other shoulder” metaphor ultimately comes from that internal dialog.)


    1. Thanks. I watched bits and pieces of it and it seemed like a good summation of the book, albeit light on the free energy principle.

      Every time I hear about the id, I think of the monster from the id in Forbidden Planet.


  9. Hello. I discovered your site while doing a search on Mark Solms. I’m impressed by the quality. I just finished Solms’ book (having stopped halfway through to read Anil Seth’s, which is easier on the neurons). I’m not ready to weigh in with any balanced assessment, which will wait for a second reading of both. However, a quick thought on the energy-intensive nature of attention. If attention is so entropic, how would the continued heightened attention promoted as ‘present moment awareness’ (Kabat-Zinn et al.) result in the positive affect claimed for it?


    1. Thanks Chris, and welcome!

      On positive affect from present-moment awareness, I’m not sure. All I can do is speculate. I think of affects as quick summary assessments and reflexive responses reached by our nervous system. They can be triggered by innate dispositions toward sensory information, but also by cognitive deliberation, and by a perception of achieving a desired goal that required a lot of effort. That might be what’s at work in what you describe. I think it helps to remember that exteroception, cognition, affect, and interoception are all integrated in loops allowing the components to constantly feed back on each other.

