The dual nature of affects

Mark Solms is coming out with a book on consciousness, which he discusses in a blog post. Solms sees the key to understanding consciousness as affects, specifically feelings, such as hunger, fear, pain, anger, etc. In his view, the failure of science to explain the hard problem of consciousness lies in its failure to focus on this aspect of mental experience. And affects, he states, are generated in the brainstem.

Solms is in a camp of cognitive scientists*, along with people like the late Jaak Panksepp, who see consciousness as centered on the brainstem, specifically the midbrain region in vertebrates. He cites as evidence hydranencephalic children, children born with most of their forebrain missing, who appear to display affective states, and animals that have been decorticated, who also show signs of affects.

One of the problems with discussing affects, like so many other concepts in the cognitive sciences, is that the word “affect” is ambiguous. For these kinds of situations, I always think there’s value in checking a quality dictionary, since dictionary definitions are based on actual usage. Along those lines, Merriam-Webster provides two definitions for the noun version of “affect”:

a: a set of observable manifestations of an experienced emotion: the facial expressions, gestures, postures, vocal intonations, etc., that typically accompany an emotion

b: the conscious emotion that occurs in reaction to a thought or experience

https://www.merriam-webster.com/dictionary/affect

I think this encapsulates the problem in a nutshell. Under normal conditions for healthy humans, a: and b: virtually always happen together. So when we see cases of a:, it’s completely natural for us to project b: onto what we’re seeing.

The problem is we know from human brain injury cases that b: happens in the forebrain. Someone with damage in the connections between their insula and anterior cingulate cortices can feel pain without being bothered by it. There can be similar injuries knocking out someone’s ability to feel fear. They might even have the physiological reactions associated with these states, but simply not have the associated feeling. If damage in the forebrain can knock out the conscious experience of an affect, then the feeling of the affect happens there, not in the midbrain.

Yet, as Solms notes, we have cases of a: in children with little or no forebrain or animals who have had their cortex removed. It’s actually widely acknowledged that reflexive reactions originate in the midbrain. (In mammals, a lot of reflexive reactions also originate in subcortical regions of the forebrain.) It’s powerfully counter-intuitive to see the “facial expressions, gestures, postures, vocal intonations” as reflexes that can happen without the experience, but that’s the reality.

Solms is right that affects are generated in the midbrain, at least the more basal ones. But the lion’s share of the evidence is that the midbrain is not where they’re felt. In summary, when discussing affects, we have the reflex, and we have the feeling caused by that reflex. When assessing evidence for affects, we should be clear which version the evidence actually supports.

But isn’t it possible there are some feelings happening down in the midbrain? The problem is, if there are, we appear to have no introspective access to them. Consider that your midbrain is the region that controls eye saccades, the constant and ongoing movement of your eyes. Yet we have no real conscious perception of this movement.

These are among the reasons most neuroscientists see consciousness as a forebrain phenomenon, specifically a thalamo-cortical one.

Still, Solms is a serious scientist, often cited in many other books I’ve read on the evolution of consciousness. I will almost certainly read his. I’m expecting some good information on animal nervous systems and research, even if I anticipate disagreeing with many of his conclusions.

What do you think? Is Solms right that focusing on affects is the key to the hard problem? Are there aspects of this I’m overlooking?

* Corrected per comment from James Cross.

53 thoughts on “The dual nature of affects”

  1. I’m a fan of Solms so I appreciate your finding this.

    One nitpick is when you state:

    “Solms is in a camp of animal researchers”

    This makes it sound like Solms is an animal researcher but I don’t think that is what you meant. Solms has been trying to integrate psychoanalysis and neuroscience so he is more of a psychoanalytic neuroscientist or something like that.

    The idea of affects being the basis of consciousness is really interesting and I am going to spend some more time thinking about it. I have been wondering recently, in the context of altered states (psychedelics, NDEs, etc.), about the role of neurotransmitters. I think we generally think of them as chemical enablers for neuron firings and that the real stuff is happening in the firings themselves. But it is possible, from a McFadden EM field standpoint, that neurotransmitters could be playing a more direct role in modifying the brain’s EM field. In that case, affects could be a sort of base EM wave from neurotransmitters that pervades consciousness, upon which cognitive activities get layered. Even if you don’t buy the EM field part, the metaphor might work for you.

    Liked by 2 people

    1. I have to admit I actually did think of Solms as a neurobiologist who specialized in animal research. Every time I’ve seen him cited, it was in the context of animal cognition. But I just read his profile page and there’s nothing there about animals. Interesting. I stand corrected. I think I’ll update the post. Thank you!

      I do think affects have a major role in our intuition of what makes a conscious system. They definitely play a big role in learning, which we’ve discussed before as an important function of consciousness. And since we’re unlikely to give a machine an evaluative system that works just like biological affects, it might be the thing that stops most people from seeing a machine that can navigate the world and learn as conscious.

      On neurotransmitters affecting the EM field, I’m assuming by “direct” you mean other than through the synapse’s effects on the action potential? How would that work?

      Liked by 1 person

      1. Yeah, I do mean direct and how it would work I’m not sure. But most molecules do possess a small EM field or dipole. Without some kind of direct effect I find it hard to understand how such small quantities of substances like LSD or DMT can have such large effects on consciousness. It could be they set up a cascade of secondary effects but I’m not sure anybody understands how it works. I guess you know that both LSD and DMT fall into the class of serotonergic psychedelics but their effects seem to far outweigh the amounts required to produce them. This is speculative, of course.

        https://en.wikipedia.org/wiki/Serotonergic_psychedelic#:~:text=Serotonergic%20psychedelics%20(also%20known%20as,tied%20to%20the%20neurotransmitter%20serotonin.

        Liked by 1 person

        1. Take LSD for example.

          “The mode of action of LSD is not well understood. It is thought to interact with the serotonin system by binding to and activating 5–hydroxytryptamine subtype 2 receptor (5-HT2), which interferes with inhibitory systems resulting in perceptual disturbances. It is amongst the most potent drugs known, being active at doses from about 20 micrograms. Typical doses are now about 20 to 80 micrograms although in the past, doses as high as 300 micrograms were common.”

          https://www.emcdda.europa.eu/publications/drug-profiles/lsd

          I suppose if it zeroes in directly on some select group of receptors that might explain the effect.

          Liked by 1 person

          1. I think that’s the key. You don’t need much to affect post-synaptic receptors, which work, I think, with just a few hundred molecules to begin with. Given the pivotal role of synapses, I would think that would be enough.

            Liked by 1 person

          2. Yeah, probably my “direct” theory doesn’t make a lot of sense.

            Still, there is something of a mystery in how so much effect can come from microgram doses. The dose has to go into the stomach, make its way into the blood, make its way to the brain, then home in on exactly the right subset of receptors. Then it disrupts things for about 5-6 hours.

            Liked by 1 person

          3. This is an interesting question. Remembering that synaptic receptors are molecular-scale mechanisms, here are some very quick, extremely dirty, and possibly defective calculations.

            Per Google, a mole of LSD is 323.4 grams. Per Avogadro, a mole contains 6.022 × 10^23 molecules. Converting that to 20 micrograms: (0.000020 / 323.4) × 6.022 × 10^23 yields 3.724 × 10^16 molecules. If we assume 10% of that makes it to the brain, and it needs to be distributed among 100 trillion synapses, that leaves us with (3.724 × 10^16) / 10 / 10^14 ≈ 37 molecules per receptor site.

            I can conceive of that being enough if it clogs the right receptors, but metabolic processes might concentrate more of it in the CNS, which could bring it up as high as 370 molecules per receptor site. If we further say that not every post synaptic site needs to be affected, then the ratio might go up accordingly.
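            The back-of-envelope arithmetic above can be checked in a few lines of Python. The 10% brain fraction and 100 trillion synapses are just the rough assumptions from the comment, not established figures:

```python
# Rough estimate of LSD molecules per synapse, using the thread's own assumptions.
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_LSD = 323.4     # grams per mole (per the comment)
dose_g = 20e-6             # a 20-microgram dose
brain_fraction = 0.10      # assumed fraction of the dose reaching the brain
synapses = 1e14            # ~100 trillion synapses

molecules = dose_g / MOLAR_MASS_LSD * AVOGADRO
per_synapse = molecules * brain_fraction / synapses

print(f"{molecules:.3e} molecules in the dose")    # ~3.724e16
print(f"{per_synapse:.0f} molecules per synapse")  # ~37
```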

            Liked by 1 person

          4. Then you have DMT, with doses almost 100 times that of LSD, usually smoked or injected (it can’t pass through the digestive tract without an MAOI), so it goes directly to the blood. And the molecule itself is very similar to serotonin, more so than LSD, which only has a piece of it similar to serotonin plus a lot of extra structure.

            Straight DMT wears off in 10-15 minutes but LSD is active for hours.

            Like

          5. Whatever its mechanism of action, the metabolic processes must have a hard time clearing it out for that little to have that long-lasting an effect. It’s interesting that there are no documented cases of anyone dying from overdosing on it, although it sounds like things can get nasty if you do. A potent molecule.

            Like

    2. How would you apply your ideas here to the process of sleep? Would it be like not getting neuron firings, those moments of “still waters”, just before we drift off to sleep? What actually is sleep?

      Liked by 1 person

      1. Which ideas are those? I explored one idea, then pretty much rejected my own idea. LOL

        If I am not mistaken, the wake cycle is controlled to a large extent by neurons in the brain stem/RAS firing and pumping neurotransmitters to the cortex. So the reduction in these transmitters, combined with slower firings, is associated with sleep. This actually seems to support part of Solms’s point about the origin of affect and consciousness in the brain stem, and my general point about the role of neurotransmitters.

        Like

  2. I think that affect is A key to the hard problem. But if anything deserves to be called THE key it would be turning to the meta-hard problem, the task of explaining why we think there’s a hard problem. After explaining that, there is a hard fact, but it’s no longer an intellectual problem: The fact that subjective experience cannot be captured in objective language.

    Liked by 1 person

  3. The brief explanation by Solms in the Psychology Today article leaves me unpersuaded. He certainly may be on to something. But his explanation is not compelling. He suggests that the beginning issue to answer the so-called hard problem of consciousness should be “How and why do feelings arise?” He argues that certain facts “…suggest that the fundamental type of consciousness is affect…” That is, we “feel our way into intrinsically unconscious processes to become aware.” He certainly may be right. And I acknowledge that I’m a dilettante in this area of philosophy, but Solms’s examples seem unpersuasive.

    Solms points out that seeing, reading, and face recognition can occur unconsciously. But feeling something is different. Solms argues “If you are objectively in danger, for example, you do not run and hide unless you become aware of the danger, and you feel scared.” I must object. I’m sure many people, like me, have been in danger and have reacted unconsciously. I was once in a near serious vehicle collision and reacted in a complex yet unconscious way to save myself. Solms also cites being hungry and then consciously eating. So, he has never mindlessly eaten that last piece of cake?

    But, he may be on to something here. He just didn’t persuade me. So, what am I not getting?

    Liked by 1 person

    1. There is nothing inconsistent in the idea that there are automatic systems that cause unconscious actions. Actually, Solms directly points to examples of vision as something that occurs unconsciously. He writes: “Visual perceptual processes are not intrinsically conscious.” In your example of the collision, you saw and acted unconsciously, and I assume without feeling fear.

      The wording of what you quote may be somewhat ambiguous but I would guess Solms is talking about a situation where you do not react automatically and unconsciously. In other words, you assess the situation, become fearful, then act. I think that must be what he means when he writes “objectively in danger”.

      Like

      1. That seems to make sense to me. Thanks, James. Yes, in my collision example the “feeling” of fear came after the fact, when I pondered what had happened—consciously. So, would consciousness come in degrees, depending on how “present” we are, to borrow a term from meditation exercises? But is the feeling the key to it, or just something that rides along? I may have to read Solms’ book.

        Like

        1. I think this points to an important insight. We often don’t feel in order to act. We have reflexive and habitual reactions for that, which arise from survival circuits. As you described, they typically happen unconsciously. We feel afterward. I think we feel not for in-the-moment actions, but for what we’ll do in the future: to learn not to touch that hot stove again, or to assess how we got into the traffic situation we had to react quickly to, etc.

          In other words, feeling is about learning and longer term actions. Which makes sense since conscious processing is far slower and more complex than reactive processing.

          Liked by 1 person

          1. Often we are not aware of what we are feeling; we take action habitually, and only later, when it all cools down a bit, do we reflect. But the feelings arise simultaneously with the thought, or the impulse toward the habitual action. Our bodies also simultaneously and correspondingly tremble through it unfailingly.

            Like

          2. I think I agree, but my way of putting it would be, we have:
            a. the reflexes and fixed action patterns
            b. the habitual reaction to those reflexes and fixed action patterns
            c. the utilization of information from both in simulations to decide whether to allow or inhibit a. or b.

            Note that a. often triggers physiological reactions throughout the body that enhance the effects on b. and c.

            Of course, I’m describing this as though it’s all clean, sequential, and modular, when in fact it’s more a tightly integrated and interactive bundle. But sometimes it helps to think in terms of a toy model to understand the more complex real one.
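            The a., b., c. layers above can even be caricatured in code. This is purely an illustrative sketch of the toy model, with made-up names and rules, not a claim about how nervous systems actually implement any of it:

```python
# Toy model of the a/b/c layers described above. All names and rules
# here are illustrative inventions, not an actual cognitive model.

def reflex(stimulus):
    """a. Fixed action pattern triggered directly by the stimulus."""
    return "withdraw" if stimulus == "hot_surface" else None

def habit(reflex_action):
    """b. Habitual reaction layered on top of the reflex."""
    return "avoid_area" if reflex_action == "withdraw" else None

def deliberate(stimulus, reflex_action, habit_action, inhibit=False):
    """c. Simulation deciding whether to allow or inhibit a. and b."""
    if inhibit:
        return (None, None)                   # override the lower layers
    return (reflex_action, habit_action)      # let them play out

stimulus = "hot_surface"
a = reflex(stimulus)
b = habit(a)
print(deliberate(stimulus, a, b))                 # ('withdraw', 'avoid_area')
print(deliberate(stimulus, a, b, inhibit=True))   # (None, None)
```

In the real, non-toy version, of course, the layers run concurrently and feed back into each other rather than executing in sequence.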

            Liked by 1 person

  4. Thanks for the reference Mike. I’ve put his forthcoming book on my amazon list.

    Solms’ thinking is very similar to Damasio’s. If you’ve read The Feeling of What Happens you’ll recognize the similarity. Damasio distinguishes physical feelings, which he calls sensations, from emotional feelings, but both are feelings. And, of course, I agree with the brainstem consciousness hypothesis and have previously explained my view that cortical processing resolves pre-conscious images of feelings which are conveyed to the brainstem for “display” as feelings.

    As for affect I prefer the ‘a’ definition which is essentially behavioral.

    Liked by 1 person

    1. I thought you might like this one Stephen. Solms is definitely in your camp, more so than Damasio I think, whose view I understand to be a lot more nuanced. Maybe Solms will have evidence I haven’t considered before. I already pre-ordered his book. But I’m hoping for evidence of b:.

      Like

  5. Were I to design a system (sensors, circuitry, software and actuators) I’d ensure that a number of subsystems have immediate feedback-loops such that, in some scenarios, self-preservation became automatic. “A depth perception plunge detected, redirect.” “Elevated surface temperature detected, avoid.”

    Such systems would communicate both their signals and reactions back to a CPU (of indeterminate size and complexity) which would then do what it might with the information. Perhaps it’s exploring a maze, for instance, and needs to build a memory of the past and pro-active set of actions to predict the future.

    As such a system evolves to greater and greater capability and general adaptability, some day, year, or century in the future we might ask it, “You almost fell off that cliff, how do you feel about that?”

    “Well, let me tell you a story.” It would say. “But first, I’m feeling a bit peckish, would you care for nourishment? You from this array of biological offerings, me from this portable fusion engine.”

    [What I haven’t quite determined, in this exploration of consciousness of our future masters, is the role of hormones and how they might be modeled.]

    Liked by 1 person

    1. That’s a good description of plausible machine affect-like mechanisms.

      A question to consider is whether you would make it so the machine couldn’t turn off a particular automatic response if it seemed unproductive in a particular situation. Animals generally can’t, although they can eventually condition their override to be habitual; but since the reflex often involves arousal, such as elevated heart rate, muscle tension, etc., it still ends up burning energy.

      Another is whether the speed of its more thorough evaluation affects what you relegate to the initial automatic responses and what you leave to the more in-depth processing. In animals, the automatic responses can happen in milliseconds, but the longer evaluation may take seconds. So from an evolutionary perspective, it’s more adaptive to have the reflex trigger the arousal in case it’s needed, even if the animal overrides it. But if the machine can do its more complex evaluation faster, can the “arousal”, and its associated energy usage, wait until the evaluation?

      I think of hormones as an information broadcasting system. In the brain, they affect synaptic processing. And of course, they have effects throughout the body.

      Like

      1. To point #2, speed of detection and response… Fair point. Now, what happens when the artificial consciousness grows to encompass a city block or a planet? Speed of communication would still matter. It might want roughly autonomous peripheral appendages to act on the behalf of regional portions of itself, which would eventually benefit the Globally Conscious Entity, if only to react relatively faster to adverse conditions.

        #1? Intentionally disabling wasteful reactions would no doubt be advantageous. However, accepting that the GCE would continue to exist in the physical world, it might accept some waste with the thought that random events would continue to impact it and predicting them might remain out of its control. No doubt it would figure this on its own. But I agree, being unable to “Spock’ify” our emotions and reactions is no doubt viewed as part of our flavor of consciousness.

        I thought more on hormones and expect that they could be represented as massive neural network overlays integrated into the general processing system thereby allowing the world and its other participants to impact the GCE’s behavior.

        Like

        1. I’m not an AI alarmist, but the idea of a Global Conscious Entity would make me nervous. Would we want such an entity directly perceiving all the physical machinery? I don’t know. Seems better to have subsidiary agents and a distributed system.

          All of this assumes that “consciousness” as we normally intuit is a reasonable description of how such an entity would process information.

          When it comes to machine consciousness, the definition of “consciousness” becomes a significant issue.

          Liked by 1 person

            1. I definitely agree that consciousness is not a human-only thing. And I do think machine consciousness is possible. But I wonder if it’s possible like jetpacks, flying cars, and laser guns are possible, in that they’re technically possible but we don’t do them on any significant scale.

            I can see robots that can navigate their world and predict their environment. But robots that feel pain, hunger, or fear the way animals do? I’m not sure how many of those we’d really want, or whether people would intuitively see systems without those kinds of things as conscious.

            Anyway, I need to check out your arguments. Just followed your blog.

            Liked by 1 person

          2. What is pain, hunger, or fear? Environmental stimuli – analog signal – chemical sensor induced trigger – electro-bio signal transfer – massive neural network analysis – systemic response. We think biology is the only way that behavior operates in the world because that’s our only way to understand it. I propose that the same (or near-identical) response circuit might be thought of in the same light.
            Life is just chemicals, patterns of assembly, and electricity. So too would be a GCE.
            (It’s all BS, I’m sure, but I’m all for destroying our anthropocentric throne.)

            Liked by 1 person

          3. Just to be clear, I don’t doubt that we will eventually be able to build a machine that feels pain, hunger, or fear. I just wonder if we’d do so on any large scale.

            We do need to be on guard against anthropocentrism. But I also think we need to be leery of anthropomorphism.

            Like

    2. Regarding “… how do you feel about that?” and “… I’m feeling a bit peckish”

      To feel something is to be conscious. Where in your assembly of “sensors, circuitry, software and actuators” is a feeling produced? “Emergent” isn’t available as an explanation by the way … it’s just another word for magic.

      Liked by 1 person

      1. I assume knowledge of other posts and arguments. In those I propose that humans’ atomic composition and electronic processing differ little from an AGI’s. And that for humans to think they own the concept of consciousness — a heightened sense of the world, of self, and our place within it — is hubris.

        Like

      2. Anonymole, I agree that consciousness is not a solely human attribute … it’s widespread among organisms. But it remains solely biological.

        And there is no reason to believe an “artificial consciousness” is possible. Certainly it’s theoretically possible to construct a device that computes itself centered in a world but incorporating a single biological feeling is impossible, simply because the device is not biological. If you disagree, to substantiate your disagreement you need provide an answer to my previous question:

        Where in your assembly of “sensors, circuitry, software and actuators” is a feeling produced?

        Like

          1. Mike, I’ll go with the usual dictionary definition of biology, something like: “the science of life and living organisms.” A biological organism is a living entity consisting of one or more cells.

            My definition of consciousness specifies that it’s a production of the brain of a living organism. I trust we’re on the same page about the definitions of ‘life’ and ‘living’ … 😉

            Like

          2. By the way, I’ve added ‘unified’ and ‘streaming’ to my matters-of-fact definition:

            Consciousness, noun. A biological, embodied, unified streaming simulation in feelings of external and internal sensory events (physical sensations) and neurochemical states (emotions) that is produced by activities of the brain.

            Like

          3. On “life”, we’ll see. 😉 It’s a more difficult question than it might first appear. (See viruses, viroids, and prions.)

            But on your definition of consciousness, what about “unified streaming simulation in feelings of external and internal sensory events (physical sensations)” requires that it be biological and involve neurochemical states produced by an organic brain? In other words, what does biology have or do that is impossible for a machine to do?

            Like

          4. The “… produced by activities of the brain” rules out the viruses, etc. you mention.

            As to “what about [consciousness] requires that it be biological?”—this is a definition—a statement of the exact meaning of the word. In this case, the exact meaning captures the facts of the matter of consciousness. Biological is specified as a fact of the matter because all reliably observed or inferred instances of consciousness are biological.

            Per Wikipedia, “A term may have many different senses and multiple meanings, and thus allow multiple definitions,” but the definition I’m proposing is the a. definition, one that is broadly acceptable for consciousness studies discussions. (Follow-on b-z definitions of some type of awareness are possible, as in ‘AI consciousness’ and ‘panpsychic consciousness’ but all of them would be explicitly understood as fictitious, imaginative and highly conjectural since none of them can be affirmed as existing).

            As I’ve mentioned before, if a feeling is determined to be a specific Neural Tissue Configuration, perhaps a sheet of neural tissue connected and deformed in a particular way, it would be obvious that a machine replication wouldn’t be possible. But until we know exactly how consciousness works, there’s no reason whatsoever to suppose that a non-biological consciousness is possible. All such suppositions are fictions.

            Historically, Mike, you tend to approach most issues rather conservatively, like the block universe and brainstem consciousness, for instance, so I would expect you to be the same on the possibility of machine consciousness. If a non-biological instance of consciousness were claimed—a non-biological feeling—it would be extremely difficult to confirm because we’re limited to strength-of-inference with regard to the consciousness of other organisms, even another human being. Any inference would seem impossible in a non-biological case.

            Like

          5. Stephen,
            You can certainly define consciousness to be biological or feelings as identical to a certain neural configuration. But then it seems like you’ve just made them impossible anywhere else by fiat.

            Interesting conclusion that I approach things conservatively. But “conservative” tends to be relative to people’s preexisting beliefs. Given all the things we can do today that once only happened in living systems, is a physicalist singling out consciousness as special really the epistemically conservative take?

            Like

          6. The whole point is to develop a definition of consciousness that includes only facts of the matter. (I’m tempted to acronymize that but I’ll restrain myself).

            The proposed definition is intended to remedy the “eye of the beholder” equivocation of the term that leads to so much nonsense and misunderstanding in consciousness discussions. Convincingly show that an element of the proposed definition is not a fact of the matter and it will be removed. Convincingly propose a definitional element that is clearly a fact of the matter and it will be added. I welcome any definitional suggestions. If non-biological feelings are ever verifiably created, then that will be a new fact of the matter and the definition would be revised to incorporate that newly discovered fact.

            The definition of a noun is not a fiat, Mike—it doesn’t dictate possibilities or impossibilities. It factually defines the term so we all can be certain what we’re talking about in discussions like those using the word ‘consciousness.’ I believe that’s an immeasurable improvement over discussions of ‘what-it’s-like-ness’ … 😉

            As for AI consciousness, panpsychism and neutral monism, we’d all know such discussions are not about consciousness definition a. but are, instead, about some proposed form of awareness that the proponents are required to clearly define, but have not.

            Like

          7. Stephen,
            As I noted at Eric’s blog, there are facts of the matter for any precise definition. So it is a fact that no AI system will ever have Wysong-consciousness. But the only way to solve the “eye of the beholder” issue is for a definition to gain a wide consensus. Given the vast multitude of definitions, that doesn’t seem likely anytime soon.

            Like

          8. I agree that a definition must be widely accepted to solve the “eye of the beholder” equivocation problem that you frequently mention. That’s why I proposed a facts of the matter definition in the first place, in hopes that a community effort might contribute to its accuracy and acceptance. But, quite frankly, I’m unaware of any “vast multitude” of definitions, Mike. In my readings, I’ve found that the usual practice is to omit a factual (or any) definition of consciousness and assume that the audience will “just understand” what the term denotes.

            It’s very likely that you and your blog audience are more widely read than I am on the various consciousness theories so that my impression that the term is left largely undefined is probably a result of my unfamiliarity with the relevant core literature. Perhaps you could point me to some of the definitions among the vast multitude you’ve encountered. How, for instance, do IIT and GWT proponents define consciousness? Or body-mind dualists, AI proponents and cortical consciousness theorists? I’ve also never encountered a proponent of either panpsychism or neutral monism that explicitly defines the term—proponents of those two simply claim that consciousness (whatever it is) is either “in everything” or is the fundamental constituent from which everything that exists is derived.

            This is a serious request for definitional references Mike. Any help in compiling a comprehensive list of that vast multitude of definitions would help refine the proposed facts of the matter definition and would be much appreciated.

            Like

          9. Stephen,
            One of the motivations for the hierarchy I occasionally discuss is to present many definitions of consciousness in a lineup.

            1. Systems in an environment: naturalistic panpsychism
            2. Reflexes and fixed action patterns; if in service of evolved goals: biopsychism
            3. Perception: innate predictions of the environment based on distance senses
            4. Action selection based on learned predictions
            5. Imaginative deliberation
            6. Introspection

            Each of these versions has its proponents. It’s worth noting that the earliest modern writings on consciousness from Descartes and Locke seemed focused on 6. A lot of the definitions from the basic emotions camp fall into 3.

            On references, here are some sources that might make for a good start.
            https://www.evphil.com/blog/consciousness-16-a-sorta-brief-history-of-its-definitions
            https://en.wikipedia.org/wiki/Consciousness#Definitions
            https://en.wikipedia.org/wiki/Consciousness#Types_of_consciousness
            https://plato.stanford.edu/entries/consciousness/#ConCon
            https://philosophyofbrains.com/2020/01/13/1-consciousness-problems.aspx

            If you really want to get deep into this, this post is about a book that explores the history of consciousness research, although be warned: light reading it isn’t.
            https://selfawarepatterns.com/2020/04/05/the-seven-attributes-of-minimal-consciousness/

            This post is on an interesting paper that looked at dimensions of consciousness, which I think has an interesting take related to this.
            https://selfawarepatterns.com/2020/08/21/dimensions-of-animal-consciousness/

            That’s what initially occurs to me. Hope it helps.

            Like

        1. Most Excellent Mike! Thanks!

          The very first reference to Evolutionary Philosophy seems exhaustive so I’ll review it first. I notice you’re credited therein for your contributions. My first thought is that a definition of consciousness need not (and should not) include particulars about the contents of consciousness. It’s a definition, not a study. My proposed definition states that the contents of consciousness are all feelings. The types and particulars of those feelings are not definitional. And, oddly enough, that ‘feelings’ generalization does away with many paragraphs in the EP article.

          If you don’t mind, I’ll report back here after a full consideration of the entire history of definitions. Making the task easier is that the very early attempts are obsolete.

          Liked by 1 person

          1. Certainly, a definition of consciousness need not specify which brain processes consciousness is supposed to be attributable to. This is where ToCs are known to diverge. Rather, it should indicate what distinguishes a conscious state from an unconscious state.

            This is anything but a simple undertaking. I have tried to put together some elements.

            A distinction is to be noted between ‘consciousness’ in the sense in which we speak of consciousness as a state of consciousness (intransitive consciousness) and consciousness of something (transitive consciousness).

            A creature lacks consciousness in this first sense when it is asleep, anaesthetized, in a coma, and so forth. Because consciousness of this sort is a property of creatures, it is convenient to refer to it as creature consciousness.

            Being in a state of consciousness involves undergoing inner, qualitative, subjective mental states. States of consciousness are subjective in the sense that they exist only when experienced by some individual.

            Transitive consciousness refers to the relation that an agent has to something. To attribute the predicate of “being conscious” to a state simply means that a mental state has the property of being conscious.

            However, it seems to me that any attempt at a definition is inherently circular, since what is a mental state? It is a conscious mental state, one experienced by an agent. But what is an unconscious mental state? Indeed, it can only be a state that is not the object of transitive consciousness, but that nevertheless affects other mental states in some way, e.g., the unconscious processing that underlies many steps in problem solving.

            Some theorists propose the notion of intentionality. According to this, mental states have either an intentional content, i.e. are directed at objects, states or other entities, or a qualitative content (when there is something it’s like for one to be in that state).

            Intransitive consciousness refers to awareness. Awareness can itself be divided into two further components: external awareness, which includes the perception of the environment as well as of one’s own body by sensory stimuli, and internal awareness, which is given by the experience of one’s own affects, feelings, desires, ideas, intentions and memories.

            A mental state is said to be access conscious if it is accessible to a wide range of other systems for further processing, which allows it to be used for deliberation and control of behavior. Access consciousness is defined by its functional role in cognitive life; it refers to the relationship between a cognitive system and a specific representation.

            This functional sense of consciousness differs from the qualitative aspect of other states of consciousness, specifically “what it is like” to undergo something, which involves subjective feeling, rendering, for example, pain intrinsically bad and pleasure good. This is so-called phenomenal consciousness, which refers to the qualitative, subjective, or phenomenological aspects of conscious experience.

            A controversial and much debated question is to what extent these two types of state consciousness are conceptually independent.

            Individuals are phenomenally conscious, in that they undergo phenomenally conscious states. Even though phenomenal consciousness is associated with a first-person perspective, it does not require being aware of one’s self or even of the environment (such as in dreams, where conscious mental states arise, even though the dreaming subject is asleep).

            Like

  6. However, in the end it comes back to the question: Which came first, consciousness or emotional feeling?

    Definition a refers to the external signs that occur when an organism recognizes and responds to significant events in the course of survival and/or maintenance of well-being – for example, reactions that occur when endangered or when hungry.

    According to Joseph LeDoux, however, the functions commonly referred to as emotion functions in humans and animals are not emotional functions at all, but functions essential to the continuance of the individual or species. These are triggered, for example, in the presence of a potential source of bodily harm, whereby sensory representations of the threat are transmitted from the sensory system to subcortical areas, causing a global arousal state in the organism in which the organism’s resources are mobilized to cope with the threat. As LeDoux notes, all organisms have the ability to detect and respond to threats, but only organisms that can be conscious of their own brain’s activities can feel fear.

    In humans, such arousal states are often associated with feelings, which is what definition b describes. LeDoux equates emotions and feelings, and considers emotions to be cognitive interpretations of situations in which psychological or physical well-being is potentially at risk. According to LeDoux, feelings require more than the presence of a motivational state. That state must be consciously experienced in order to be consciously felt.

    So it is not an emotional state that brings about consciousness, but rather the other way around.

    LeDoux argues that the brain mechanisms that control emotional responses and those that generate conscious feelings are separate. And evidence indicates that areas of the prefrontal cortex are crucial for conscious perception. Explicit (conscious) fear, for example, results from the cognitive interpretation of one’s situation by prefrontal working memory circuits. Moreover, he points out that the human prefrontal cortex has unique cellular features and is known to be involved in cognitive processes that are key to human conscious experience.

    There is much to be said for this view. But it raises again the question of how phenomenal consciousness emerges from brain activity.

    LeDoux advocates a higher-order theory, according to which a first-order state resulting from stimulus processing is by itself not sufficient to bring about the conscious experience of a stimulus. Rather, this requires a re-representation of what is happening at lower-order levels.

    But there are many other theories that have been proposed to explain the neural substrates of consciousness. Georg Northoff and Victor Lamme have undertaken a thorough comparison of some of these ToCs (downloadable from http://www.georgnorthoff.com/2020).

    Northoff himself has put forward his own theory, the Temporo-Spatial Theory of Consciousness (TTC), which considers the temporal-spatial dynamics of the brain’s spontaneous activity as the basis of consciousness (http://www.georgnorthoff.com/s/2017-How-do-the-brains-time-and-space-mediate-consciousness-and-its-different-dimensions-Temporo-spa.pdf). Whereas GNWT indexes consciousness by the accessing and rendering of the contents of consciousness, TTC emphasizes that the temporal-spatial globalization of stimulus-induced activity requires that the temporal-spatial dynamics of the pre-stimulus period be such that they enable the embedding or envelopment of the actual stimulus or input within a larger temporal-spatial framework. Thus, according to TTC, consciousness comes prior to cognition and serves as its trigger or initiator, rather than cognition being prior to consciousness.

    To me, this approach is quite promising. Applied to the initial question, it also suggests that consciousness comes prior to emotional feeling.

    Liked by 1 person

    1. Thanks for your well informed thoughts!

      There’s a long-running debate in cognitive science between the basic emotions camp, which includes people like Antonio Damasio, Jaak Panksepp, and Mark Solms, and the constructed emotions camp, which includes people like Joseph LeDoux and Lisa Feldman Barrett.

      What’s interesting is that if you read most of these people at length, they make concessions that bring their actual positions closer to each other than it might seem. For example, the basic emotions people say that basic emotions arise from lower-level circuitry, but what they’re calling an “emotion” is more like the reflex than the conscious feeling. They might call it “conscious”, but they typically mean an anoetic type of consciousness, one outside of any introspectively accessible awareness.

      And Barrett concedes that there are more primal affects on which constructed emotions are built. She admits that there are basic affective impulses, the four Fs: feeding, fighting, fleeing, and mating. That gets her halfway to Panksepp’s basic emotions of ‘PLAY’, ‘PANIC/GRIEF’, ‘FEAR’, ‘RAGE’, ‘SEEKING’, ‘LUST’ and ‘CARE’.

      LeDoux is a bit more unyielding in his terminology. He seems to eschew the word “affect” entirely, and focus on a distinction between survival circuits and conscious emotions.

      Anyway, I think whether emotions or consciousness comes first is a bit of a chicken and egg thing. I think, together with sensory processing, they’re too intertwined to speak of them as separate things. But a lot depends on how we’re defining “emotion” and “consciousness.”

      Like

  7. I only know what I have experienced. From this point of understanding, I have come to the conclusion that fear and anger are different to other feelings; acting as a container for all other feelings as they would naturally arise in an uncomfortable situation. Faced with a threat, fear and then anger arise.

    Like

    1. One way to think about it is that fear is the impulse to flee, anger the impulse to fight, which distinguishes them from the impulse to do other things. We feel these impulses in order that our reasoning systems can decide whether to allow or inhibit them. But even if they are inhibited, the impulse has fired, causing physiological reactions, which magnifies the feeling. Which is why having to constantly inhibit those impulses stresses the system, using up energy.

      Like

        And when such an event is frozen in us, as in PTSD cases, once we are ready to revisit it in order to work through it and to release the ‘inhibited’ yet already ‘fired’ impulse, as you call it, we will face the fear first. As we stay present with the fear of that event, it gets released and subsides, and the very next emotion will be anger, rage and such, to varying extents depending on the severity of the event. Sometimes we may not even realise why all of a sudden we feel so much anger, but it will always be the very next feeling after the fear is met. The PTSD event will typically then also hold a whole mosaic of further feelings that occurred, fired and were inhibited because of the fear response around it, and each of these feelings also needs to be experienced and allowed its expression.

        Liked by 1 person
