A perceptual hierarchy of consciousness

I’ve discussed many times how difficult consciousness can be to define. One of the earliest modern definitions, from John Locke, was, “the perception of what passes in a man’s own mind.” This definition makes consciousness inherently about introspection. But other definitions over the centuries have focused on knowledge in general as well as intentionality, the ability of thoughts to be about something. Philosophers in the 20th century started focusing on the phenomenal nature of consciousness, the “what it’s like” aspects.

One way I’ve responded to this range of definitions has been with a hierarchy. But that hierarchy has generally been functional in nature, discussed from a perspective outside the system. It’s a useful response when considering the evidence cited for various animal species being conscious, as well as various liberal conceptions of consciousness like panpsychism. It allows us to discuss what particular systems might have, but also what they’re missing.

But it’s also subject to criticism like the kind Mark Solms gives against modern cognitive science, that it removes the first person “I” from its deliberations. I think this happens because including this perspective often clouds the issue, and assertions about it are difficult to establish objectively. Still, it’s a perspective many people interested in consciousness feel is neglected by science.

Which makes me think of a different type of hierarchy, one built with the inside perspective. It starts with Locke’s definition, the introspective one. I think this makes sense because everything we know about our own consciousness comes from introspection. Without introspection, we wouldn’t even have a concept of our own consciousness. This will make this hierarchy very human specific, but hopefully we’ll see room for other species in it.

This new hierarchy, as I’m currently conceiving of it, has four layers.

  1. Introspective perception.
  2. The perceived.
  3. The perceivable.
  4. The unperceivable.

The first one, introspective perception, consists of the impressions we have of what is happening in our mind. For centuries, philosophers assumed that this was the whole show, that the mind was transparent to itself and its knowledge of itself was infallible. Modern psychology hasn’t left much room for this view. It’s pretty well demonstrated that introspection, while effective for day-to-day feedback on our own thoughts, isn’t a reliable source of information about the mind.

The second, the perceived, is the source of the introspective perception. This seems largely correlated with the contents of top down attention. So another way to refer to this might be “the attended.” However, attention is a multilevel phenomenon and there are aspects of it which can be dissociated from conscious awareness, so I thought it best to avoid implying they were the same.

The third, the perceivable, is anything about our own mind that might be perceived. At any one time, most of what goes on in our brain that is perceivable is not being perceived. Again, this could be thought of as what is currently outside of attention, although something can be attended to and still not perceived, such as driving home in automatic mode while thinking about the TV show you’re going to watch tonight.

The final one, the unperceivable, is anything that happens in our brain that we never have access to, such as regulation of heart rate, hormone levels, and other autonomic functions. We can often perceive the effects of these processes, and may be able to affect them indirectly, but they’re inaccessible to introspection.

The question is, where in this hierarchy is consciousness? Which answer you feel is correct may influence which scientific theories of consciousness you find more plausible. Someone who answers 3, the perceivable, may focus on local theories such as local recurrent processing. Advocates of these theories usually see consciousness, particularly phenomenal consciousness, as something that can “overflow” our ability to self report.

If you favor 2, the perceived, then you’re probably going to focus on access-consciousness theories such as the various global workspace ones. These theories tend to see 3 not as conscious, but preconscious content, content that has the potential to become conscious but won’t necessarily make it.

And if you require 1, then you’re most likely to see metacognitive theories, such as higher order thought theories, or Michael Graziano’s attention schema theory, as more plausible. These theories vary somewhat in how distinct they see 1 and 2, with some higher order theories insisting that the entire conscious experience is in 1, while many global workspace theories tend to see 1 as an add-on, an enhancement.

It’s worth noting that seeing consciousness in 2 or 3 leaves a lot more room for non-human animal consciousness. Although many advocates of higher order theories discuss various levels of metacognition, including explicit and implicit varieties, which may still allow many species into club consciousness.

What is the correct view? I’m tempted to say there’s no fact of the matter here, since all of these layers are real. But there are some distinct scientific questions between layers 1 and 2, such as how much of, say, the experience of seeing a red apple resides in each layer. We might know the answer to this in the next few years. But I’m not sure the same distinction exists for layers 2 and 3. Here the difference might well be completely philosophical, that is, definitional.

What do you think of this new hierarchy? Am I missing anything?

77 thoughts on “A perceptual hierarchy of consciousness”

    1. If we take 3 as conscious, then that only leaves 4 for anything unconscious. (Unless I’m missing a layer.) There are lots of intermediate states in cognitive processing, the raw mechanics of cognitive processes, that will never be perceivable, so that still counts as unconscious.

      But it would seem to rule out fully functional unconscious cognition, such as unconscious perception (of the world or body), memories, or thinking, since any of those might at some point be attended to and introspectively perceived.

      1. I’m thinking of things such as blindsight, where something becomes a part of visual cognition and can affect action (avoiding the table) without actually becoming conscious visually.

        There could also be an unconscious cognition of a predator in the bushes that leads one to move away from the threat without clearly understanding why.

        In both cases, we have something like a perception which never comes to complete awareness that nevertheless has an effect on consciousness. It sort of fits in the space between 3 and 4 – maybe a 3.5 or something.

        1. Blindsight is an interesting point. It’s a case where, due to injury, a certain type of visual processing no longer works, which just happens to be the type that was perceivable. In other words, it’s not like they have full visual functionality without consciousness. They typically keep detection but not discrimination: they can detect that something is in front of them or coming at them, but can’t act on what it is. What continues to work is the portion that is never perceivable in healthy humans. Which I think puts it in 4.

          In the case of unconsciously detecting a predator, I think that’s in 3. In a particular situation, you may not be consciously aware of the predator, but perhaps if the stimulus were stronger, or your attention were focused in that area or modality, you would be.

          That said, the likelihood of 3 becoming 2 might vary tremendously in particular cases. A visual stimulus that lasts 20 ms will likely never enter consciousness, but one at 50 ms or longer has an increasingly higher chance, unless a subsequent visual stimulus masks it.

          So I take back what I said above. There would be cases of complete unconscious perception. The kind that’s too brief to enter consciousness. I suppose that could also be true for thoughts.

  1. “I’m tempted to say there’s no fact of the matter here since all of these layers are real.”

    That, I suspect, is the most accurate view. Consciousness is holistic, not located at any one level or in any one system, but a property of the whole. As you say, science can pick at the parts, but my sense is that consciousness will only ever be understood as emerging from the entire operation of the brain.

    I do very much agree with the view that the “I” is fundamental and central to the picture. After all, it’s the infamous “Hard Problem” — how the heck does three pounds of intricate meat give rise to the personal movie we all experience? Why should a machine have subjective experience?

    FWIW, I think it’s possible that hierarchies of any kind are just too simple an analysis. Anyone who has done a lot of OOP has run into the insurmountable difficulties of trying to break the real world (and sometimes even the abstract world) down into a neat hierarchy. Anything interesting or complex tends to resist classification along a few axes, let alone one.

    1. I tend to agree on the holistic aspect. I think that’s the core principle behind access theories like the global workspace ones. Not that any one of those theories likely captures the full picture. At best they provide a scaffolding in which other theories can be arranged.

      On the “I” aspect, it’s interesting that despite making that criticism, Solms is not a fan of the hard problem. In his book, he spends a chapter critiquing Chalmers’ arguments. His main point is that he thinks Chalmers overlooked affect processing in his analysis. Since the hard problem is often phrased along the lines of why it “feels like something” to have an experience, I think he has a point. But you know my broader critique of the hard problem. 🙂

      The hierarchies are definitely oversimplified. I freely admit that. At best they’re simply useful epistemic crutches for quickly conveying a lot of complex dynamics. Although most of the pushback I get on them is that I’m unduly complicating the picture.

      But as I mentioned in my last functional hierarchy post, the dimensional analysis of Jonathan Birch and colleagues is another useful crutch, one which puts things like perception, affect, unity, temporality, and selfhood as separate degrees of freedom. That approach has its own limitations though, since the dimensions are not really independent of each other.

  2. First, I really appreciate how you bring original thinking to the topic. Nice.

    Second, my view is both similar and polar opposite to Wyrd’s. On the similar side: Consciousness should be applied holistically, in that, for any system you can discuss the consciousness of that system as a whole.

    On the opposite side: it is a mistake to say there is only one system in a given human being. Most people think of consciousness monolithically, and this causes much confusion. That’s why some say the prefrontal cortex is paramount, whereas others point to various other parts of the cortex, and still others point to brain stem, or other subcortical structures.

    My point is, if you break down what consciousness is “about”, i.e., what kinds of processes are involved in consciousness, you’ll find them in all of these systems.

    So yes, even level 4. What’s “perceivable” to one system may or may not be perceivable to another.

    *

    1. Thanks James!

      This particular hierarchy is definitely from the perspective of our consciousness, the ones typing these comments. It might be that the various subsystems have their own unique consciousness, but if so, it doesn’t appear to be one we have any access to.

      One theory I didn’t mention in the post is IIT (integrated information theory), mainly because it doesn’t cleanly fall into any of the levels I listed. IIT would take your perspective to some degree. It would say it’s all conscious. Although due to the exclusion postulate, it would see levels 1-3 as one unified consciousness. Level 4, not being part of the overall consciousness of the system, might have islands of isolated consciousness, but each would have much lower phi than the 1-3 system.

      But, as always, it comes down to how we define “consciousness.”

      1. Mike,
        On IIT’s exclusion principle, are you talking about what I currently perceive their club to have used to combat Eric Schwitzgebel’s nesting observations? This is to say that they changed their theory so that it didn’t imply that America was itself conscious, because currently under their theory the individuals which make it up should have higher phi than the whole of America. To which the professor astutely observed that if America had a complex enough election to have higher phi than any individual person who makes it up (or America were big enough to do so and did), then each American would be struck non-conscious in service of this greater phi entity that they were a part of. Is that your understanding of IIT’s “exclusion principle”?

        1. Eric,
          Well, the exclusion postulate is that the system with the maximum phi is the conscious one. So if a certain combination of your brain is conscious, it says that some subset of that combination, even though it will have its own phi, is not conscious, because that’s not the maximum phi it can be involved in.
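
          To make that concrete, here’s a toy sketch in Python (the subsystem names and phi values are invented; actual phi computation is vastly more involved) of the exclusion postulate as a bare selection rule:

              candidates = {
                  # hypothetical subsystems with made-up phi values
                  frozenset(["layer1", "layer2", "layer3"]): 12.0,  # the whole complex
                  frozenset(["layer2", "layer3"]): 7.5,             # a subset
                  frozenset(["layer1"]): 2.1,                       # another subset
              }

              def conscious_systems(candidates):
                  # exclusion postulate as a selection rule: keep only systems
                  # that have the maximum phi among all candidates they overlap
                  winners = []
                  for system, phi in candidates.items():
                      overlapping = [p for s, p in candidates.items() if s & system]
                      if phi == max(overlapping):
                          winners.append(sorted(system))
                  return winners

              print(conscious_systems(candidates))
              # only the full layer1-3 system survives; its subsets are excluded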

          I think an IITer would deny the US as being conscious because its phi, due to the limited connectivity between each of us, would be less than the phi of each individual person. In other words, you can’t just add the phis together. It’s more complicated than that. The gaps between us decrease our combined phi.

          Christof Koch does say that if you wired two brains together in the right manner similar to how our hemispheres are connected, their combined phi could exceed the individual phis, and then the individual consciousness would disappear and the combined system would then be conscious. I think he’s papering over a lot of brain complexity, so I’m personally dubious. I think to make it work, he’d have to change the individual brains so much their individual consciousnesses would be heavily altered even before the link.

          So under IIT, the exclusion postulate doesn’t prevent the US from being conscious. The phi calculation does that. The exclusion postulate does prevent any sub-consciousnesses from existing within your overall consciousness. IIT does say if everyone in the US was linked together Borg style with sufficient connectivity, our individual consciousnesses would disappear into an overall group consciousness.

          As you know, I’m not really a fan of IIT. Although it is interesting that it would end up spanning all the layers.

          1. Mike,
            Tononi’s exclusion principle would be the reason why a lower phi entity, presumably standard USA function when compared against human brain function, would have no group phenomenality. Given my interest I did some poking around to see what was said when.

            I began at Scott Aaronson’s 2014 blog post regarding IIT. https://www.scottaaronson.com/blog/?p=1799

            Though its reduction of IIT to some kind of mathematical assessment makes the theory seem ridiculous to me already, better still would be an existing entity which IIT suggests is conscious. For this Aaronson linked to Schwitzgebel’s March 2012 post about why Tononi should consider the USA conscious. But there a reader informed him about Tononi’s 2009 paper where apparently an exclusion principle was added to create IIT 3.0. (I guess Tononi was tired of being a panpsychist.)

            So apparently Schwitzgebel’s post was right that Tononi should have considered the USA conscious from 2004 to 2009 (and of course as a panpsychist, everything else as well). Afterward however a society of people would need to create higher phi among them than the brain does to thus all be struck non-conscious in service of this greater conscious entity, (whether through a complex enough election, people “Borged up”, or whatever). And how would non-conscious people continue functioning such that their greater society had more phi than the brain does since they’d now all essentially be comatose?

            Here I’m reminded of a Wizard of Oz line: “Pay no attention to that man behind the curtain!”

            (Perhaps I’ll get to your post itself some time as well.)

          2. Eric,
            Now that I think of it, you’re right. The exclusion postulate would apply. I just had it in my mind that it excluded subsets with phi, but it does exclude larger entities with phi also, because neither of them would have phi-max. My bad. It’s what I get for discussing a theory I’m really not invested in.

            Getting to the post would be nice. 🙂

  3. Recently this: https://getpocket.com/explore/item/what-do-animals-see-in-a-mirror

    If this sense of “self” is a developmental ability (children and chimps need time to construct the neural pathways that provide the “self” effect), what does that say of the concept as a whole? If a poodle, a parakeet, or a pachyderm spends enough time in the presence of their own reflection, will they develop the concept of “self”? Would an AGI of sufficient capacity and ability?

    Now, visual awareness of self is nothing like introspection. Or is it? Perhaps the seed of self is all a creature needs to begin questioning and analyzing their own minds and thoughts.

    1. I don’t know what identifying oneself in a mirror really means about consciousness and a sense of self — something about a higher level of consciousness, I suppose. Most dogs don’t care much about reflections, perhaps because scent and sound are missing, but I’m pretty sure dogs have a very strong sense of self.

    2. Self awareness is a complex thing and can exist in different degrees. Most animals with distance senses (sight, hearing, smell), I think, have body-self awareness. Modeling the environment seems pointless if you don’t model your body in relation to it. Of course, for many species this type of “awareness” might just be a galaxy of instincts centered around that body-self.

      But metacognitive-self awareness seems much rarer. In that sense, I think the mirror test might demonstrate body awareness, but saying it does for the metacognitive version seems like a much stronger statement.

      As the linked article notes, Gallup does think his original test demonstrates the metacognitive variety, but he thinks most of the other scientists who’ve done it don’t do it right and are fooling themselves.

      It’s worth noting that the scientist who claims to have demonstrated that cleaner wrasse passed the test emphatically says that he doesn’t think the wrasse are metacognitive-self aware. He sees his experiment as demonstrating that the test can’t establish that type of self awareness.

  4. “everything we know about our own consciousness comes from introspection. Without introspection, we wouldn’t even have a concept of our own consciousness.”

    I hope you mean that first sentence only in the limited sense that would be implied in the second one. Knowledge is a well connected web, and we can certainly learn about our consciousness from observing others, or learning cognitive science and neuroscience.

    I agree that between (2) consciousness as the perceived and (3) consciousness as the perceivable, there is just a difference of verbal habit not worth fighting over. But morally, (3) hits the important point. If a doctor proposed to give me “anaesthesia” that just distracted me from pain, or made me forget about it, rather than making the noxious aspects of it unperceivable, I’d run screaming.

    1. Good point. The first sentence’s use of “everything” is an overstatement. We can certainly learn something about consciousness by studying other systems. In particular, we can learn about the limitations of introspection. But the second sentence is on much firmer ground.

      I think you’d run screaming from that doctor because you’d know the chances of keeping the painful stimulus in 3, of not having it shoot into 2, would be very dodgy, at least if 2 was still functional, and you’d be completely rational to think that.

      Perhaps a bit unnervingly, anesthesia can actually disrupt 1 and 2, but not necessarily 3. https://www.pnas.org/content/117/21/11770
      Although before being too unnerved, it’s worth noting that such light use of general anesthesia is usually paired with a local one on the body part being operated on.

  5. Alright Mike, on to your post itself.

    One thing that I like here is your acknowledgement that people can have a reasonably accepted conception of phenomenal consciousness. You even used Nagel’s “There is something it is like”, which as I recall you used to consider too undefined to be effective. Cheers!

    On your new backwards hierarchy (lowerarchy?), the “base” is made up of the most advanced consciousness that we know of (or something that can think about its own thoughts or whatever). At #2 there is perceived input, #3 is what’s theoretically perceivable, and finally #4 is Kant’s noumena. So you end with reality itself and put what could at least conceivably be conscious input first.

    I guess the point is to make sure that there’s a subjective perspective from which to assess things. While I don’t endorse the way methodological behaviorism eliminated the notion of “consciousness” in science, to me that should have been rectified by ending the ban itself. I consider introspection to be an important position from which to theorize our nature, with empirical science remaining for verification that tends to go beyond introspection.

    As you know I think science has all sorts of problems given that it has no respected community of professionals to guide it by means of accepted principles of metaphysics, epistemology, or axiology. This should go beyond Mark Solms’ mere “We need to put the ‘I’ back in scIence”. Do I think your reverse hierarchy might help? Well I certainly consider the “I” important.

    I think one reason that introspection has been such an unreliable source of information about the mind is that today science functions without an effective basic model of our nature. This is to say that we currently lack a good lens through which to effectively interpret introspective accounts. So it’s a bit of a catch-22.

    And why do I think science has failed here so far? The social tool of morality should naturally punish scientists for demonstrating that ultimately we’re all hedonistic products of our circumstances, and so an effective general model of our nature has gone missing. Instead our moral influences encourage us to publicly assert that personal welfare comes through altruism. It’s an irony that I’m quite sure science will overcome at some point to thus harden up these primitive fields.

    Try this. I believe that science needs a generally accepted “consciousness” definition. As I see it this would be “phenomenal experience”, functional or not (as in Schwitzgebel’s “innocent” conception). If your new model is a step in that direction then I’ll gladly take it!

    1. Eric,
      I wouldn’t get too excited by my use of Nagel’s phrase. It was in the context of describing the activity of philosophers. I still see that phrase as fairly vacuous.

      The hierarchy is meant to be from the inside out, so it is meant to take the subjective perspective. But because human consciousness is the only one we know from the inside, it’s inherently human centric. Unfortunately, I don’t see any way to avoid that in this case.

      I actually think methodological behaviorism gets a bad rap. We have to remember it arose as a reaction to what was seen at the time as the freewheeling speculation of earlier decades, and during a time when there were no brain scanners, so it’s not really fair to compare it to later research which accepted subject reports which could be correlated with those scans. That said, they did overdo it. Their mistake, I think, was in not treating introspective reports, in and of themselves, as a type of behavior. If they had, they might have been able to make progress outside of the narrow confines they worked in.

      The unreliability of introspection and the holes in our mental theories are indeed related, but not in a way that I think is incapacitating. There are techniques that are allowing progress to be made. Dehaene in his book discusses how modern research makes extensive use of introspective reports, but treats them as statistical data which is correlated with other data, such as brain scans, and the stimuli supplied. An individual introspective report isn’t reliable, but treating a pool of such reports as data, and figuring out what correlates with them, is productive scientific work.
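
      As a toy sketch of that statistical approach (all numbers here are invented for illustration), imagine pooling binary “seen” reports across many trials at different stimulus durations:

          import random
          random.seed(0)

          durations_ms = [20, 30, 40, 50, 60]
          trials = 200

          for dur in durations_ms:
              # assume the chance of a "seen" report rises with duration (made up)
              p_seen = min(1.0, max(0.0, (dur - 25) / 40))
              reports = sum(random.random() < p_seen for _ in range(trials))
              print(f"{dur} ms: report rate {reports / trials:.2f}")

          # any single report is unreliable, but the pooled rates form stable
          # data points that can be correlated with scans and stimuli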

      Just to be clear, similar to my other hierarchy, this is not meant to be a new model or theory of consciousness. It’s really another attempt to relate the underlying assumptions of other models that are out there. It’s more pedagogical in nature rather than any attempt at theoretical progress. But it does show that many of the theories out there are using different conceptions of consciousness. I think it helps to show where differences are definitional vs differences in fact.

      1. Mike,
        I guess when we don’t know what to say we tend to fall back on our standard positions. I’ve got mine and you’ve got yours. To perhaps momentarily break out of that I could ask how it is that IIT exists in all four levels of your new hierarchy? But then more towards me, do you have any opinions about where my own dual computers model would fit? (And for reference, here the brain may be seen as a non-subjective computational device that creates an entirely subjective computational device, and it uses this subjective dynamic as input which helps it function under more “open” circumstances that it can’t be directly programmed for.)

        1. Eric,
          So, according to my understanding of IIT, layers 1-3 would all be part of the phi-max portion of the brain. IIT is about causal structures rather than causation in the moment, so it would include layer 3, even though most of layer 3 isn’t perceived in any particular moment. For layer 4, it would have many disparate regions with varying amounts of phi (supporting James of S’s contention of multiple consciousnesses), although the phi of each of these regions would be far less than the phi in the regions that are part of phi-max.

          On mapping your dual computer model, I’m not sure. I suspect you’d say that layers 1-3 are all in the conscious computer, but that doesn’t seem too compatible with how you usually describe its capacity. If that second computer is composed of EM fields, they’re definitely present. Although the regions in 4 also produce EM fields, so you’d have that to reconcile. You could focus on recurrent processing as Victor Lamme does, but while Lamme is simply taking that kind of processing to be conscious, you would need to find something in the resulting field. James C usually focuses on the synchronization across regions, but that seems like an indicator for layer 2 content, not layer 3. With layer 3, the processing is generally still local.

          Here’s a question for you. Is there ever only one second computer generated? Or can there be multiple? If only one, that seems more in line with layers 1 and 2, and not so much with 3. Or is the second computer’s processing as distributed as the first? If so, what gives consciousness its serial nature in your model?

          1. I think I get it Mike. IIT is essentially a limited panpsychism, given its exclusion postulate that funnels all the phi to a single experiencer. So under its 3.0 version not everything is conscious, though everything in a given phi-max system will contribute to that state, even the unperceivable.

            Skipping down to your question about whether there can be two experiencers at once in my model: no, I don’t think that’s how things work (except perhaps in rare cases). I’ve wondered why evolution hasn’t given us parallel subjectivenesses. Wouldn’t it seem helpful if I were able to write this to you with one conscious processor while another has an oral conversation with someone else? Note that “I” might reconcile these separate experiences internally. But I suspect dual conscious processors would incite too many conflicts of interest between them to be adaptive. Regardless, human consciousness does seem serial for whatever the reason.

            Then from here, yes your perceivable level 3 would merely have some potential to be perceived but would remain outside the subjective dynamic unless actually perceived, as in level 2 where my model apparently maxes out.

            On regions in 4 also producing EM fields, the theory is not that brain based EM fields in general constitute subjective experience. It’s that certain specific fields with the right parameters can exist as such. Synchronous firing is presumed necessary given the minuscule energy associated with individual firing. Just as your stream of consciousness is experienced as a single varying thing over time, theoretically there is a single EM field that some of your synchronously firing neurons create over time to give you this specific changing stream.

            Here no part of the brain is subjective experience. Instead it gets produced, somewhat like a candy machine is not candy but can produce it, or a lightbulb is not light but can produce it. My understanding is that most cognitive scientists today believe as you do that subjective experience exists when certain information is properly processed into other information. But wouldn’t that mean (as in the case of candy or light), that the brain produces subjective experience rather than exists as subjective experience? My psychology based dual computers model of brain function actually works fine given this information based premise as well. I guess I don’t understand why it’s so popular to consider the subjective dynamic as brain itself rather than as something produced by brain.

            Let’s say that you were to imagine the subjective dynamic to be produced by the brain, and even as substrate-less processed information. Thus zero percent of the evolved brain would be conscious, though it would produce, service, and monitor this separate teleological computer that I speak of. It’s from here that you could potentially gain a working level grasp of my psychology based model of brain function, or a dynamic where valence, informational senses, and memory serve as input, thought serves as a processing dynamic, and a desire to move various body parts serves as non thought output. Here such dynamics wouldn’t exist as a fabulously complex field of EM radiation, but rather as certain information that has been processed into other information. Regardless of how it exists, note that even Keith Frankish has acknowledged the existence of Schwitzgebel’s innocent conception of subjective experience.

          2. Eric,
            I think the reason human consciousness is serial comes down to the fact that most creatures can only act on one thing at a time, so for most of evolutionary history, it was adaptive to winnow things down to what we’d pay attention to and act on. Humans have complicated things with our modern societies, but we’re constrained by that evolutionary history.

            On EM fields, the question I always have is, what do they add? We already have the synchronized firing patterns and bindings with straight neural processing. So if the idea is to understand me, say, holding an Amazon delivery box in my mind (which I just happen to have looked at), then the bindings and synchronization of neural maps for regions processing the shape, size, location, color, categorization, etc, in the sense of reciprocal excitatory connections, provides a causally complete account of that, at least in principle. What does the field add?

            On the brain “producing” subjective experience, here I think the problem is the tendency we have to refer to this with a noun: “experience”. Simona Ginsburg and Eva Jablonka in their book, “The Evolution of the Sensitive Soul”, prefer the phrase “subjective experiencing”, to make clear they’re talking about a process, not a thing. At the time, I found it a bit pedantic, but with this discussion I’m now seeing why they were so persistent with it. So the brain no more “produces” subjective experience than I “produce” walking when I walk. I walk, and the brain subjectively experiences (in both cases among many other things).

            I think if you’re going to assert that the brain produces something separate and apart from it called “experience”, then you need to 1) identify what that is in terms of the physics, 2) explain why we should accept that thing as experience, and 3) specify its causal role in the brain’s operations. I know you see the answer to 1) as EM fields, but I’m not clear on the others. It seems like there are gaps here which keep getting papered over.

            All in all, it seems more parsimonious to just see the brain as experiencing rather than producing something separate called “experience”. At least until there’s data compelling us to accept that complication. I think that’s the stance of most cognitive scientists as well as functionalist philosophers.

          3. “what do they add?”

            EM fields address one of the most perplexing problems in neuroscience: how does the brain synchronize brain activity to create a unitary experience? This is what is known as the binding problem.

            To have a single experience, there needs to be some final “computation” that puts everything of concern to the organism into one unified image. EM fields could do this because they can simultaneously bind all of the activity into a unified view, dropping from consciousness the things that are unimportant, and bringing to the fore the things that are most critical.

          4. As I understand it, the interconnectivity between cortical and subcortical regions, including reciprocal connections, provides everything needed for binding between regions. If region A is firing at a certain rate, then its excitatory signals to B pulse at that rate, increasing B’s firing rate. B in turn, through reciprocal connections, excites A. By exciting each other, they end up synchronizing their firing rates. (They are also inhibiting adjacent regions through lateral inhibition, which is actually the source of the P-waves in EEG readouts.)
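
            As a toy sketch of that mutual excitation (illustrative numbers only, nothing like real cortical physiology), two reciprocally coupled regions pull each other’s firing rates together:

                coupling = 0.3          # strength of the reciprocal excitatory links
                r_a, r_b = 20.0, 40.0   # initial firing rates (Hz) of regions A and B

                for _ in range(100):
                    # each region's excitatory input nudges the other toward its rate
                    r_a += coupling * (r_b - r_a) * 0.1
                    r_b += coupling * (r_a - r_b) * 0.1

                print(round(r_a, 1), round(r_b, 1))  # both converge near 30 Hz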

            The rest seems similar to the global workspace.

            Of course, none of this is understood thoroughly enough to categorically rule out that EM fields are involved (beyond perturbations). But based on the data as it stands, it seems like an unnecessary complication. New data could change that tomorrow.

          5. Mike,
            To me what EM fields mainly add is a highly plausible output mechanism by which an experiencing entity may exist, which is to say a potentially natural initial “hard problem” solution. Observe that everything which computers are known to do exists by means of physics based output mechanisms of various varieties. In no case has it been demonstrated that information which is processed into other information will reside as a worldly output in itself. Instead there’s always some kind of physics based medium of conveyance, such as a computer monitor or even heat. If your position is ever validated however, as far as I know subjective experiencing would be a lone exception to this rule.

            How invested are you in the idea that as an experiencing entity, you may effectively be referred to as “brain” itself? I don’t consider myself that way, but rather as an experiencing entity which exists by means of a brain. Or in my thumb pain thought experiment (which you’ve agreed with so far), if the right information on paper were properly converted into other information on paper, then the machine which does that converting would thus experience what you do when your thumb gets whacked?

            The reason I ask is because if you mandate that the machine which causes experiencing must also be what does that experiencing, then it would seem that you deny the possibility for a non-experiencing computer to create an experiencing computer. In that case you shouldn’t have much potential to grasp the dual computers model of brain function which I’ve developed from that premise.

            Could you try to conceptualize a machine by which subjective experiencing exists, and yet the experiencer may effectively be referred to as something other than the machine which creates that experiencing entity? Here you might even presume that this experiencing exists exclusively as certain information that’s properly processed into other information. In that case perhaps you could gain a working level grasp of my dual computers model of brain function, and so help assess it even if I’m wrong about what information processing itself can potentially do.

            Yes I suspect that 1) certain EM fields reside as the physical substrate of subjective experiencing, 2) we should accept this as such if experimentally justified, and 3) I extensively detail its causal role in the brain’s operations through my psychology based dual computers model (which of course I’d love for you to grasp and assess).

          6. Eric,
            I’m invested in following the data. I understand you intuitively do not see yourself as a brain, but something else. There’s a word for that intuition. I won’t use it here because I know you dislike having it applied to your model. But it’s an old, perhaps innate intuition. However, science has never been particularly kind to our intuitions. Given that history, I’m very invested in not preferring my intuitions to evidence.

            As we’ve discussed before, I think I understand your model, at least to the extent you’ve fleshed it out in our conversations. Remember, I did well on the tests you’ve given me. I understand it, but just disagree with it. I think it’s reifying intuitions the data is telling us we need to question. And I think that statement applies more generally to EM theories overall. Again, maybe there will be new data in the future that changes the picture.

            On 2), I agree we should accept it if it’s experimentally verified, but what would it mean for that to happen? On 3), what is the conceptual mapping between phenomenal properties and properties of the field? If the image of a red apple forms in my visual cortex, what are the steps between that and me: deciding whether or not to reach for it, remembering it later, or reporting my perception of it? Granted, we don’t have a complete account of that at the neural level, but it’s comprehensible in principle. The only thing I can see EM fields adding to that is flattering the old intuition.

          7. Mike,
            Given how difficult it is to evade the non-natural implications of the status quo account of subjective experiencing demonstrated by my thought experiment, I can see why you’d want to characterize my model as “dualistic”. But while my accusation against your side references full non-worldly substance dualism, yours against mine merely references “two states”, such as an electrical switch that can either permit electricity to pass or not. So I wouldn’t mind you referring to my model as “dualistic” if properly qualified, but in that case you’d also lose your rhetorical insinuation! I certainly smiled at your ability to then suggest that my beliefs are merely based upon “intuitions” while your beliefs are instead based upon “evidence”. Now that’s some nifty use of rhetoric! 😉

            I dug up that test you referenced. (Apparently this was in mid January 2020, or about six weeks after James Cross interested me in EM field theory, not that I’d yet reconciled it.) Here’s the comment where I reviewed your answers to my test: https://selfawarepatterns.com/2020/01/02/the-issues-with-higher-order-theories-of-consciousness/#comment-48473

            Though you did do pretty well, note that these were essentially memory based questions, or something that an attentive student might get right by means of mere lecture recall. If asked what force happens to be the product of, from lecture a student might reply “mass and acceleration”, though without much ability to apply this theory to real world situations. To achieve that a student should need to attempt practical problems which demonstrate what F=MA both does and does not effectively mean.

            You commonly ask me working level questions about my model. Note that I gladly demonstrate these solutions, though they never give you an ability to solve questions of this sort yourself. For that you’d need a fundamentally different mindset, or that of a student earnestly trying to practically implement a given model. And of course there’s little chance of you voluntarily doing so while I effectively portray a position of yours to be both spooky and groupthink. Ah but we do seem to have good fun anyway! So here are some things that mere lectures shouldn’t help you effectively grasp, though may interest you enough anyway:

            First, conceptualizing subjective experiencing in terms of a computer in its own right, is not some kind of ontological fact. Instead it’s an epistemic crutch. Models in general exist as such. Note that it’s commonly acknowledged that psychology supervenes upon neuroscience. My dual computers model not only reflects these two layers of abstraction, but relates them to each other.

            Secondly, my model does not depend upon EM fields existing as the conceptualized second computer. That solution simply scratches lots of itches for me. I should check my records but I might have developed my dual computers model around 2007, or long before my December 2019 introduction to cemi.

            Thirdly, my model works just fine under the status quo “informationism” premise. There are no parameters in this psychological account about what creates subjective experiencing. It’s merely my naturalism which is offended by that.

            Fourth, there should be countless ways to empirically check whether or not certain synchronous EM fields exist as the stream of experience. Maybe some sort of transmitter could be implanted in the brain that’s set up to produce waves which should alter the field created by certain synchronous firings? The point is that causal proposals should have causal implications we can detect, and this particular proposal (unlike the vast majority, and certainly in informationism) is eminently falsifiable.

            Fifth, on the conceptual mapping between phenomenal properties and properties of the field, according to McFadden’s cemi a single field would exist each moment that constitutes the phenomenal, and would change over time just as the phenomenal does.

            Sixth, according to his theory a visual image will not form in your visual cortex (like a theater?), though should be highly associated with affecting the EM field in ways that concern vision.

            Seventh, you deciding whether or not to reach for a perceived apple will depend upon your interpretation of various inputs (in the form of valence, sense, and memory), and construction of scenarios about what will make you feel best. Theoretically all of this exists in various elements of a single EM field.

            Eighth, you remembering a past experience later would provide EM field alteration to a given field such that a hollow conception of the original experience would now exist as input information for the experiencer.

            Ninth, reporting that perception would amount to a field which constitutes associated words (as in “I perceived an apple”), with a desire to presently express them orally, and this desire would be taken as input to the brain computer which dutifully operates associated muscles as the second one desires.

          8. Eric,
            What qualifiers would you find acceptable for “dualistic” in reference to your model? I’ve tried “naturalistic dualism”, “physical dualism”, and a few others that you seemed to find problematic. McFadden used “matter / energy dualism” for his EM theory, but if I understand your subsequent remarks, that might be too pigeonholing for your theory.

            On intuitions, my remarks there were based on where you started above.
            “How invested are you in the idea that as an experiencing entity, you may effectively be referred to as “brain” itself? I don’t consider myself that way, but rather as an experiencing entity which exists by means of a brain.”
            That to me, beginning with how you “consider” yourself something other than a brain, certainly seemed like an intuitive starting point.

            Using your numbering:
            1-2. We’ve had long conversations over the years where I attempted to suss out whether your second computer was a separate ontological thing, or something emergent, a higher level abstraction of brain operations. I thought with the embrace of EM theories, you had swung decisively toward something ontological. But here you appear to be opening the door again to emergence. Would it be fair to say you’re noncommittal on this aspect? I will say “dualistic” seems less applicable for the emergent scenario, except maybe in terms of an “epistemic dualism” which seems to apply to just about any theory.

            3. I remain unsure what “informationism” is. I’m a computationalist and functionalist, in the sense of seeing both in neural operations. My impression is that “informationism” is simply that without your second computer. On falsifiability, I’ll just note that the functionalist outlook rests heavily on cognitive neuroscience, which uses extensive empirical data.

            4-5. So the idea would be to disrupt the EM field without disrupting the underlying neural processing. I’ll grant if someone could do that and it led to differences in conscious awareness, I’d find that compelling. Of course, given that we measure neural activity through EM field effects, reliably demonstrating that we haven’t altered neural activity (as TMS would do) would be a challenge. Using James of Seattle’s analogy, it’s like altering the sound of the dominoes falling without changing the actual falling of the dominoes, and finding a way, in a dark room, to verify that’s what we did.

            6. We have extensive evidence of image maps forming in the visual cortex, in the sense that the pattern on the retina is topologically projected to early sensory regions. We also have extensive data showing how the information is discerned from V1-V6. Of course, the question is where the experience of the image happens, but we know that experience is affected by changes in the processing in these regions.

            7-9. These seem vaguely like the neural explanation with EM fields bolted on. I still don’t see the necessity of those fields. Again, I’ll grant that it’s conceivable they could play a factor (other than noise) but I just don’t see the data pushing that conclusion, at least not yet.

          9. I’ve never been able to see how you get consciousness out of neurons firing without something else added to the explanation. Neurons firing are just electrochemical reactions by themselves, so almost any battery or circuit would have a lot of the same properties. I like McFadden’s dualism because it seems to have the added something else to get to consciousness. So as Eric says, it might be an initial “hard problem” solution.

            I think your view is that it’s just neurons firing, nothing else required to complete the explanation. Neurons fire in the spinal cord, the gut, the heart, and throughout a lot of the body. Is all of this conscious? What makes the brain special? If it is information, then are you positing a matter/information dualism?

          10. I do think the answer is the information processing architecture. I actually think that will be the answer even if EM fields end up being involved. Otherwise, we’re still at the same question you’re asking about neural processing. What is it about that kind of processing, either neural firing or EM field activity, that explains consciousness?

            As far as I can tell, the only thing adding some form of electromagnetic transmission adds is speed, but I’m not aware of any data showing that kind of speed is needed for mental processes. (The opposite is actually what we see in the data. Watching a movie, for instance, depends on the limitations of how fast our brain can process information, otherwise we might perceive a sequence of still pictures.)

            On matter/information dualism, that assumes information is something other than matter or energy. But information is matter and energy. It’s just a way of talking about the patterns and structures of that matter and energy. So the information architecture view is not substance dualism. It’s also not property dualism, since there are no non-physical properties. Some people do talk about dualism from the multi-realizability aspects of this view, but if so, it’s a dualism similar to what exists in the device you’re using to read this.

          11. “On matter/information dualism, that assumes information is something other than matter or energy”.

            But then you add:

            “It’s just a way of talking about the patterns and structures of that matter and energy”.

            Patterns and structures? So it could be built completely on a mechanical basis, like a sophisticated but old-fashioned adding machine, for example. BTW, there is a theory out there saying exactly that: that the brain works from mechanical movements.

            https://www.scientificamerican.com/article/brain-cells-communicate-with-mechanical-pulses-not-electric-signals/

          12. This gets into what I was saying above about multi-realizability. I do think the patterns and structures are what’s important. And they should, at least in principle, be implementable on alternate substrates. But that substrate has to be something with causal efficacy. Even if we take the platonic view (which I personally don’t), the structures and patterns in abstract, in and of themselves, would have no causal power. To have any causal power, they must be in some kind of physical substrate.

            I would note that multi-realizability isn’t an absolute thing, in the sense that not all substrates are equivalent in terms of efficiency and performance.

            That Sci-Am article appears to be paywalled. I suppose you could interpret the movement of ions, vesicles, and molecular interactions as mechanical, but it seems pretty firmly established that the reason for those movements are chemical and electrical.

          13. I think you can find the SciAm article elsewhere. I know I read it at one point. The researcher profiled in it thinks actually the physical movements in the cytoskeleton are responsible for how the brain works. BTW, there was another paper out recently with something similar:

            “In a paper published by Frontiers in Molecular Neuroscience, Dr Ben Goult from Kent’s School of Biosciences describes how his new theory views the brain as an organic supercomputer running a complex binary code with neuronal cells working as a mechanical computer. He explains how a vast network of information-storing memory molecules operating as switches is built into each and every synapse of the brain, representing a complex binary code. This identifies a physical location for data storage in the brain and suggests memories are written in the shape of molecules in the synaptic scaffolds”.

            https://www.sciencedaily.com/releases/2021/03/210301112334.htm

            The guy may be on to something but I would think, however, that the bendings and transformations of the proteins noted in the article would also likely affect the electromagnetic moments of the molecules too.

            “To have any causal power, they must be in some kind of physical substrate”.

            But that’s the thing: how do multiple circuits and neurons have unitary causal power? Sure, one neuron can trigger another, but when there are thousands firing, even in a coordinated manner, how does one integrated result arise from it?

          14. Those theories sound interesting, although I’ve become pretty leery of phrases like “a revolutionary new theory”, and press releases that make basic mistakes such as referring to “trillions of neurons” aren’t confidence inspiring. I’ll become more interested if a substantial portion of the neuroscience field starts viewing any of them as plausible.

            On multiple circuits having unitary causal power, if I’m understanding what you mean, it depends on which theory you favor. HOTTs have it all converging on PFC integration hubs and utilized in planning. GWT has it broadcast throughout the specialty systems, aligning them in terms of content being attended to. And of course there are other answers. Coordinated firing only requires reciprocal connections, which are pervasive in the system.

            Adding another medium to this isn’t really an alternative answer. It just adds another substrate for whatever the answer is to operate over. We still need a causal account of what’s happening.

          15. Mike,
            On dualism, it’s simply that a person skilled in the art of rhetoric may use the term (even if naturally qualified), to falsely imply that someone else’s ideas are mere “intuition”, while their ideas are instead “evidence based”. Unless a person is referring to substance dualism rather than “two causal systems which function together”, I don’t see what this term could add to these discussions beyond actual substance dualism insinuations. Note that my model of brain function is already entitled directly as “Dual Computers”, and so I actively advertise this element of it.

            In any case, you consider yourself to exist “as brain itself” while I consider myself to exist “by means of brain”. Science should need to harden up substantially before it might intelligently weigh in here. As you know there are a number of ways that I’d like to help it do so.

            I think I now understand your uncertainty of my conception of this second computer, whether ontological or a higher level emergence. I’m definitely committal. Try this: I consider all of reality in itself (and even numbers and other terms ultimately) to exist ontologically. (Terms exist as a product of thought.) As an experiencer of existence rather than some sort of god however, I can only intelligently speak of things epistemically rather than ontologically. So all of my statements are mere models, and whether the first computer or the second. They exist as potentially effective crutches rather than as reality itself.

            I speak of brain “producing” rather than “existing as” subjective experiencing, because sometimes this dynamic exists by means of a functioning brain, while other times a functioning brain does not produce any experiencing at all. You could use this crutch as well, as in “Sometimes the brain processes information into other information such that a subjective dynamic results, while other times it doesn’t”. In that case you might also delve into my dual computers model in a working level capacity without yielding to any physics based medium such as EM fields. That would probably be too ambitious however since I portray your side in a non-natural way.

            I consider “computationalism” and “functionalism” too vague and misleading for effective use. Thus I’m trying to popularize the term “informationism” instead. This is the belief that subjective experiencing exists by means of certain information in a given medium that’s properly processed into other information. Though it’s an extremely common position in cognitive science today, you’re the only person I know of with the integrity to actually “own” the implications of my thumb pain thought experiment. I presume that most simply cheer you on rather than get into the line of fire given how ridiculous the implications happen to be.

            On transcranial magnetic stimulation, to me that’s a bit like cheating. Here scientists cause a collection of neurons to fire in targeted parts of the brain, and this sometimes helps people with various mental disorders, but what’s happening isn’t understood beyond the fact that some extra neurons fire. We’d need some experimentally validated models to effectively speculate about why this can help certain people with mental disorders.

            Instead of inciting neuron firing, my plan would be to get a feel for the number of neurons which tend to fire synchronously, and so the type of radiation which might effectively alter a theorized EM field of experience. To get a sense of this theorized field we’d use standard electromagnetic detection of synchronous neuron firings, and then implant a transmitter in the head that could directly create EM radiation far more strongly and variably than single neurons do. The point would be to simulate radiation that groups of neurons could produce, which might thus alter a single dynamic field which constitutes all subjectiveness. So here the evidence that it works wouldn’t rely upon technologies such as MRI, but rather upon whether or not certain transmissions affect the subjective dynamic of a person in various ways.

            Your sense of the visual cortex seems to correspond with mine. Afterwards the difference is that in my model there’s a physics-based medium in which images would exist, namely an EM field, whereas in your model there’s not. Thus you must contend with supporting notions such as a subjective dynamic in Chinese rooms, Chinese nations, the United States as a whole, and even certain information on paper that’s properly converted into other information on paper.

          16. Eric,
            The way to counter the intuition charge is to cite the data your views are based on. On “dualism”, fair enough. I won’t use that word in relation to your theory.

            My use of “ontological” wasn’t a good way to express what I was talking about. I should have said “physically distinct” or something along those lines.

            On TMS, my point is that it wouldn’t be sufficient. Sounds like we agree there. Although if you implant something that creates a stronger EM surge than what occurs naturally, you’re doing pretty much the same thing, just from inside the skull rather than outside.

          17. Mike,
            I’m not quite saying that my position, that naturalism requires a physics-based medium for a subjective dynamic to exist, isn’t an intuition. I’m saying that its converse is an intuition as well. Furthermore, clearly some intuitions are better than others. Sometimes science gets things wrong, that is, until it can be paradigm-shifted into a better spot. I think that’s what’s needed regarding the notion of mediumless subjectivity. It’s unfalsifiable specifically because no physics-based medium exists to potentially assess. If you consider magical notions in general, such as astrology or psychic mind reading, you’ll find this to be a common theme. We faithfully believe such unfalsifiable notions, that is, until they’re debunked by means of substrate-based explanations. So yes, I think my intuition here happens to be superior to the intuition that many cognitive scientists faithfully support, which is to say, mediumless subjectivity.

            On my transmitter idea, I was really just spitballing in the moment, though this might have some legs. It came from a recollection I had of McFadden’s account of why standard EM radiation, such as from power lines and cell phones, as well as the static kind from MRI, doesn’t affect our endogenous EM fields. Alternating magnetic fields, on the other hand, do transmit energy into the head, and are of course used therapeutically with TMS to cause neuron firing. The blockquote from McFadden in the following comment of mine discusses the physics here. https://selfawarepatterns.com/2021/02/14/the-right-reason-to-doubt-the-simulation-hypothesis/#comment-136359

            Regardless, I shouldn’t have theorized that “stronger” emissions would be needed from an implanted transmitter. Probably the only way to get this job done would be to have the transmitter produce the same energy levels as firing neurons, though at the synchronous firing rates found in brain function. Theoretically, if done right, a standing EM wave of consciousness would be affected.

            A problem might be that the zone parameters happen to be extremely tight. On the plus side, however, once implanted (and perhaps with a cord to a full computer), it should be possible to try any number of combinations (that is, if the wrong stuff happens to be benign). So various well-paid people might be hooked up for years as different variables are tried, all in the hope that one day a subject might say, “Wait a minute, that seemed strange. Try that last sequence again.”

          18. Eric,
            The intuition that consciousness must be something in addition to the brain is an innate one. Ginger Campbell, in her latest episode, interviews Iris Berent, who discusses this exact topic. You might find it interesting.
            https://brainsciencepodcast.com/bsp/182-berent

            On the other hand, the conclusion that it’s all neural activity (with support from glia) is pretty counter-intuitive. It’s not something arrived at through intuition, except possibly an intuition born of familiarity with the data (the well-accepted, reproducible neuroscientific data).

            Certainly there remain gaps in that data where one can still shelter versions of the old intuition, but that’s “theory of the gaps” thinking. It could conceivably turn out to be a lucky guess, but it’s not following where the data is currently pointing.

          19. That was an interesting coincidence, Mike. I noticed your response just after I listened to that podcast. At some point I did skip to the last 10 minutes, but I think I got the gist. I didn’t feel that my position was refuted at all, however. As far as I could tell, it certainly wasn’t suggested that subjectivity should exist by means of substrate-neutral information processing. Let me know if I missed something about that.

            I wasn’t familiar with the essentialism term. To me this sounds like some kind of Platonism. Then there’s dualism, which is supposed to be the opposite, or my perception of the substrateless subjectivity paradigm that I described above. Instead I believe that reality functions by means of causal dynamics, and thus subjectivity will require a medium of some sort, like all else verified through science.

            Do you consider light to exist as something in addition to an associated lightbulb? Of course not. Under the proper conditions you consider light causally mandated by the lightbulb, even if we don’t refer to light itself as “lightbulb”. My perception of subjectivity is no different. Furthermore, it may be that my quite simple thought experiment could finally help move science away from its implicit “information as a second kind of stuff” perspective here, at least if it were more widely considered.

          20. Explaining the binding problem with EM fields is a completely evidence-free proposal at this point, although it has an agreeable science fictional feeling to it.

            A simple explanation for the unitary nature of consciousness is that it’s produced by a single brain structure. One of the reasons why the brainstem production of consciousness is compelling is that it resolves the binding problem simply and organically. Right alongside that solution is a compelling explanation of the streaming nature of consciousness involving the substantia nigra.

          21. I think the architecture of the brain is sufficient to explain the apparent unity of consciousness. All of the major regions in the midbrain-basal forebrain-cortical system are heavily interconnected with each other. The system seems built for unity with numerous integration hubs.

            The thing about appealing to the small size of the brainstem structures (or any other small region of the brain) is that they still have at least tens of millions of neurons with their own set of interconnecting nuclei. We’d still be faced with a system whose subcomponents are not themselves conscious, and with explaining how consciousness emerges from them. Except that now we have a lot less substrate to work with.

          22. I’m not sure how “single” the brainstem is. It has four parts, and the reticular formation extends through other parts of the brain. I think the brainstem theory is confusing the fact that it plays a key role in arousal and wakefulness, which are essential for consciousness, with consciousness itself.

          23. Mike, “All of the major regions in the midbrain-basal forebrain-cortical system are heavily interconnected with each other … with numerous integration hubs” is why there’s a binding problem in the first place. How do all of those regions coordinate to produce a single unified experience and, further, a flowing presentation with a single rate of streaming? The single brain structure production of conscious feelings is the simplest (and perhaps the only) credible proposal consistent with experimental and observational evidence.

            I don’t believe the entire brainstem produces (“displays” for short) those feelings, but rather a dedicated brainstem substructure with a specific type of neuronal organization and inter-connectivity. All parts of the brain have tens of millions of neurons, so that cannot be a reason to rule any given structure in or out. The reticular formation is often the suspect, but of course we don’t know.

            And consciousness wouldn’t ‘emerge’ from whatever structure that is … feelings would be specifically created by the structure from previously resolved content images received from the content resolver, largely the cortex.

            James, why is the brainstem key to wakefulness? Might not the brainstem activate the cortex to engage its vast content resolution capability?

          24. My view is that consciousness appears when a critical mass (TBD) of neurons begins to oscillate in a coordinated fashion at rates in the general range of 8-100 hertz. It may be any neurons, or only certain types, such as neurons structured like pyramidal cells. So potentially the brainstem by itself could meet this condition. The brainstem does, however, seem to be the trigger for the production of the quantities of neurotransmitters that are required for the faster oscillatory behavior to occur. This is what is associated with arousal and the awake state.
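
            As a toy illustration of what a synchronization threshold can look like, here’s a minimal Kuramoto-model sketch. The Kuramoto model is a generic model of coupled oscillators, not the specific mechanism proposed here, and the oscillator count, coupling strengths, and the 8-100 Hz band are illustrative assumptions:

            ```python
            import numpy as np

            rng = np.random.default_rng(0)
            N = 1000                                        # oscillators ("neurons")
            omega = 2 * np.pi * rng.uniform(8.0, 100.0, N)  # natural frequencies, 8-100 Hz
            dt = 1e-4                                       # integration step (s)

            def coherence_after(K, seconds=2.0):
                """Simulate mean-field Kuramoto dynamics and return the order
                parameter r = |mean(exp(i*theta))|: 0 = incoherent, 1 = locked."""
                theta = rng.uniform(0, 2 * np.pi, N)
                for _ in range(int(seconds / dt)):
                    z = np.exp(1j * theta).mean()           # population mean phase vector
                    # Each oscillator is pulled toward the population's mean phase.
                    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
                return np.abs(np.exp(1j * theta).mean())

            print(f"weak coupling (K=50): r = {coherence_after(50):.2f}")      # stays incoherent
            print(f"strong coupling (K=2000): r = {coherence_after(2000):.2f}")  # phase-locks
            ```

            Below a critical coupling strength the population stays incoherent, while above it a macroscopic fraction phase-locks, which is one way to picture a “critical mass” crossing a threshold into coordinated oscillation.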

            I think an interesting test for Solms’ theory (I guess you’re familiar with Mark Solms) would be whether it might be possible to trigger production of neurotransmitters in the cortex, possibly by some sort of electrical stimulation, in people with brainstem damage. If that produced some sign of consciousness, then the brainstem theory would be disproven. If it didn’t, then there would be strong evidence in its favor.

          25. James, is there any evidence to support your view that “consciousness appears when a critical mass (TBD) of neurons begin to oscillate in a coordinated fashion at rates in the general range of 8-100 hertz”? Why do you find that explanation compelling? Is this a sort of EM theory?

            Also, as far as I know, “people with brainstem damage” tend to be comatose or dead. Do you know of contrary information? People exhibit consciousness with massive swaths of cortical tissue missing but not so with brainstem assaults. Suicide attempts that blow away the brainstem with a gunshot aimed at the back of the mouth are extremely likely to succeed, while those who choose to shoot themselves in the temple often remain alive with significant brain damage.

            (This is an absurdly long thread … if you reply, you might start a new one).

    2. Eric, I see you’re still resistant to the definition of consciousness as “a simulation in feelings” … although no one commenting on this blog (or Schwitzgebel’s) has shown that to be false.

      1. Stephen,
        Though I didn’t specifically describe consciousness as “a simulation in feelings”, it’s not at all clear to me that I disagree with your sentiment. Consciousness may effectively be referred to as a simulation (as in “imitation”), given that what we perceive doesn’t actually exist as such, but presumably does often tend to represent what’s actually real, at least somewhat, thus helping various forms of life survive better.

        Then as for feelings, sure, that could reasonably be defined as the essence of subjectivity. The tightest description of consciousness that I’ve come across is Schwitzgebel’s “definition by example” response to the illusionist Keith Frankish, found here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.pdf

        I consider it a masterpiece. The only true discrepancy between us and you that I know of in this regard is your adamancy that our common conception of “consciousness” depends upon biological dynamics, and I take it you think the brainstem harbors the proper sort of biology. But if so, and if natural, wouldn’t we have evolved this structure through chemistry/physics just like the rest of biology? And if evolution did build it from natural stuff, then shouldn’t it be possible for those same dynamics to be engineered as well? Until science can experimentally verify how it is that the brain creates a subjective dynamic, it’s surely premature to presume that the solution is inherently organic.

        1. Eric, I prefer to think of the simulation as a model or a representation in feelings of distinctly different inputs. It’s not an imitation. Your “does often tend to represent what’s actually real” should be “always represents.” Nothing in the world is what we feel it to be.

          As for feelings being “the essence of subjectivity,” perhaps you’ve read my comment of the 26th to Mike that begins “Mike, you seem to believe the word ‘feelings’ …” and the following one that begins “A tautology …”. Although I wouldn’t phrase it as such, I’d probably agree that feelings are all of subjectivity. Mike has yet to answer my question, “Can you identify a single content of consciousness that cannot be understood as a feeling?”—would you care to take a shot?

          Note that in Schwitzgebel’s definition by example Abstract, all of the positive examples cited are feelings.

          My “simulation in feelings” is not a ‘sentiment’ or ‘conception’ of consciousness but a definition of consciousness. Clearly your conception includes the possibility of the production of non-biological feelings, but a definition of a phenomenon is limited to facts of the matter; it’s no place for the conjectures or predictions you seem to desire.

          It’s not premature to presume that consciousness is biological—it’s incorrect to claim that it’s not. If and when the facts of the matter of consciousness are revised the definition certainly would be modified to include the new findings, although what would really need modifying then would be the definition of a ‘feeling’ as both biological and non-biological.

          1. Stephen,
            What you’re describing to me here are essentially semantic nuances. As I’ve said both here and over there, you, Schwitzgebel, and I use the same essential definition for “consciousness”, except that you’re waiting for humanity to figure out, and then actually build, that sort of thing before permitting such a possibility in your definition itself. Well, I guess that’s fine if so stipulated, though the definition we use not only addresses present human ignorance and inability, but accounts for the possibility that humans or anything greater might be able to create such a “simulation in feelings” dynamic one day, or at least conceptually. It’s just semantics, man! I’m a “big picture” guy who thus seeks to marginalize semantic worries like this. Is any of this problematic?

          2. Eric, a definition is hardly a semantic nuance. It’s a statement of the meaning of a term, in this case ‘consciousness,’ preferably as exact as possible. The biological/non-biological distinction is hardly a nuance. The biological assertion is a fact. Your non-biological insistence would make my definition factually false, hardly the effect of a nuance.

            Whether or not non-biological feelings are possible is irrelevant for definitional purposes, regardless of your and Schwitzgebel’s opinions. You cannot define your desires into existence.

            You apparently would like to toss it all off, as if understanding a word’s actual meaning is irrelevant. If we do that in this case, then we do it everywhere and truth would die. Although I suspect Philosophy would continue to thrive … 😉

          3. Eric, it’s also worth noting that your desire to create non-biological feelings and therefore consciousness is fraught with ethical peril. Such a feeling would be felt, of course, by some non-biological entity. What would you do to ensure that the feeling created wasn’t painful or unpleasant? And how would you know?

  6. Mike, you interpret John Locke’s “the perception of what passes in a man’s own mind” as a definition of consciousness which “makes consciousness inherently about introspection.”

    I disagree that Locke is presenting a definition. He’s instead describing the contents of consciousness—perceptions—rather than the phenomenon of consciousness. I suspect that conflating these two entirely different things is the dominant confusion in consciousness thinking. This confusion exists in all of the so-called “scientific theories of consciousness” whose metaphorical non-biological mechanisms operate to resolve the contents of consciousness only. We are left to imagine the unstated “magic” that transitions those resolved contents to experienced feelings in biological brains.

    Your new “hierarchy” (which, lacking a stated ordering principle, appears to be another simple list) is strictly about perceptions or their absence. Perceptions are, as in Locke’s statement, contents of consciousness, which is what you have listed. So the answer to your question, “… where in this hierarchy is consciousness?” is that consciousness is not in the hierarchy at all.

    Odds ‘n ends:

    I suspect the ‘self’—the first-person ‘I’ for humans—is a feeling arising from embodiment that conditions and contributes to the contents of consciousness, but ‘introspection’ is likely a human-only capability.
    And I’m compelled to repeat, once again, that it “feels like something” to have an experience, because consciousness is a feeling.

    1. Stephen,
      Here’s the exact quote from Locke, with some surrounding sentences. (From An Essay Concerning Humane Understanding, Book II, Chapter 1, Section 19; emphasis added.)

      If they say that a man is always conscious to himself of thinking, I ask, How they know it? Consciousness is the perception of what passes in a man’s own mind. Can another man perceive that I am conscious of anything, when I perceive it not myself? No man’s knowledge here can go beyond his experience.

      http://www.gutenberg.org/files/10615/10615-h/10615-h.htm

      I agree that feelings are an important part of consciousness, but we generally don’t consider patients with affect disorders (akinetic mutism, abulia, etc.) to lack consciousness if they show awareness of their surroundings.

  7. Mike, you seem to believe the word ‘feelings’ refers to only emotional feelings. My Damasio-like “consciousness is a feeling” claim refers to:

    1. Emotional feelings
    2. Physical feelings (sensations)
    3. Sensory-inhibited variants of physical feelings like thought in words (speech-inhibited) and pictures (vision-inhibited), and sensory-inhibited hallucinations and dreams as well.

    Any contents of consciousness (qualia, etc.) you can name are feelings. Recall the core of my definition of consciousness: “a simulation in feelings.”

    As to Locke, I suggest a 17th century description of consciousness can safely be omitted from consideration as a credible definition of the term. I believe I’ve previously shown that actual definitions of the phenomenon of consciousness are extremely rare, discounting common-usage dictionary definitions like ‘awareness’. You suggested a few (and your “hierarchy of definitions” lists many) that are not definitional but are simply ideas about consciousness, as in the case of panpsychism.

        1. Mike, this small ‘Murphy’ comment was intended to follow my comment below that starts “A tautology”. Sometimes the ‘Reply’ links are easy to confuse and/or don’t operate in the intended way … probably user error though.

          1. No worries Stephen. Unfortunately the WP commenting system leaves much to be desired. It doesn’t give me a good way to move comments around. But I understand your meaning.

  8. A tautology is saying the same thing twice using different words, like “a pedestrian traveling on foot.” Clearly a ‘feeling’ is a well defined phenomenon that everyone can understand from their own experience—I haven’t redefined the word ‘feeling’ in any way or claimed that ‘feeling’ means ‘consciousness.’ Damasio and others note that not all feelings become conscious, by the way.

    To say consciousness is a feeling means that all of the contents of consciousness are feelings, as I’ve explained above. Can you identify a single content of consciousness that cannot be understood as a feeling? Do you know what it feels like to see green? To say ‘Hello’? To think?

    The point of observing that all conscious contents are feelings is to emphasize the simplified observation that in creating consciousness, the brain is producing qualitative variations of just one thing—feelings. This crucial simplification clarifies this otherwise very messy topic immensely, as well as lending support to proposals that a single substructure produces (actualizes) all conscious feelings.

    In the link you provide, Paul Austin Murphy writes:

    Moreover, many who talk or write about consciousness never actually get around to defining the word ‘consciousness’ at all. True, they may have their own tacit or unexpressed pet definitions deep within their minds; though they never explicate or articulate such definitions precisely or in any detail.

    Bingo!—my point exactly … although it’s far more pervasive than the ‘many’ Murphy believes. It’s almost all. And it’s the absence of a clear definition of consciousness (like “simulation in feelings”) that “… traps us in the mud.”

    Regarding Murphy’s assertion:

    “We can say, instead, that definitions of consciousness should include both what consciousness does and how consciousness seems.”

    As far as I know, “what consciousness does” is an unknown at this point and consequently cannot and should not be definitional.

  9. Stephen,

    Starting a new thread.

    Since the brainstem controls various vital functions, damage to many parts of it will result in death. Damage to other parts (just a few neurons, I think Solms points out) can result in an irreversible coma. The question in those cases is whether consciousness itself is disabled, or whether wakefulness and arousal, the preconditions for consciousness, are disabled. If other parts of the brain (assuming they’re undamaged) were stimulated in those cases, would consciousness arise or not? Do you, or does anyone, know if that test has been done? According to your theory, it would not.

    I looked around some and found some things but the extent to which they address exactly the right test isn’t clear.

    “One hundred and seven patients in vegetative state (VS) were evaluated neurologically and electrophysiologically over 3 months (90 days) after the onset of brain injury. Among these patients, 21 were treated with deep brain stimulation (DBS). The stimulation sites were the mesencephalic reticular formation (two patients) and centromedian–parafascicularis nucleus complex (19 cases). Eight of the patients recovered from VS and were able to obey verbal commands at 13 and 10 months in the case of head trauma and at 19, 14, 13, 12, 12 and 8 months in the case of vascular disease after comatose brain injury, and no patients without DBS recovered from VS spontaneously within 24 months after brain injury.”.

    https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1460-9568.2010.07412.x

    “After 3–6 months of chronic deep brain stimulation, the prolonged coma scale rose in four of the eight cases and three cases emerged from the persistent vegetative state. Transmitter substances and their metabolites were also found to be increased in the CSF after chronic deep-brain stimulation. Based on these findings, chronic deep-brain stimulation represents a useful kind of treatment that can lead to emergence from a persistent vegetative state, if the candidate is selected by electrophysiological studies 2 months after the initial insult and if the stimulation is applied for more than 6–8 months using a high-safety chronic deep-brain stimulating instrument”.

    https://www.tandfonline.com/doi/abs/10.3109/02699059009026185

    “Monti and his colleagues report preliminary results from a trial in which they used ultrasound to noninvasively stimulate an area known as the thalamus in patients with long-term disorders of consciousness. Of the three patients included in the write-up, two showed behavioral improvements, such as responding to simple commands and, for one, gaining the ability to motion yes or no in answer to questions.”

    https://www.the-scientist.com/news-opinion/brain-stimulation-tested-to-awaken-coma-patients-68428

    “In this context, electroceuticals, a new category of therapeutic agents which act by targeting the neural circuits with electromagnetic stimulations, started to develop in the field of DoC. We performed a systematic review of the studies evaluating therapeutics relying on the direct or indirect electro-magnetic stimulation of the brain in DoC patients. Current evidence seems to support the efficacy of deep brain stimulation (DBS) and non-invasive brain stimulation (NIBS) on consciousness in some of these patients”.

    https://www.frontiersin.org/articles/10.3389/fnins.2019.00223/full

    If you look at even the Wikipedia entry on the reticular formation, you can see that its key role is the triggering of faster brain oscillations that extend across the brain and are associated with wakefulness and arousal.

    “The main function of the ARAS is to modify and potentiate thalamic and cortical function such that electroencephalogram (EEG) desynchronization ensues.[D][26][27] There are distinct differences in the brain’s electrical activity during periods of wakefulness and sleep: Low voltage fast burst brain waves (EEG desynchronization) are associated with wakefulness and REM sleep (which are electrophysiologically similar); high voltage slow waves are found during non-REM sleep. Generally speaking, when thalamic relay neurons are in burst mode the EEG is synchronized and when they are in tonic mode it is desynchronized.[27] Stimulation of the ARAS produces EEG desynchronization by suppressing slow cortical waves (0.3–1 Hz), delta waves (1–4 Hz), and spindle wave oscillations (11–14 Hz) and by promoting gamma band (20 – 40 Hz) oscillations”.
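
    As a rough illustration of how the synchronized/desynchronized distinction in that excerpt is typically quantified, here’s a minimal band-power sketch over a synthetic EEG-like trace. The signal, sampling rate, and amplitudes are illustrative assumptions; the delta and gamma band edges follow the excerpt:

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 250                                     # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)                 # 10 s of synthetic "EEG"
    rng = np.random.default_rng(1)

    # Wake-like trace: weak slow (delta) component, stronger fast (gamma) one.
    eeg = (0.3 * np.sin(2 * np.pi * 2 * t)       # 2 Hz delta wave
           + 1.0 * np.sin(2 * np.pi * 30 * t)    # 30 Hz gamma wave
           + 0.5 * rng.standard_normal(t.size))  # broadband noise

    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # power spectral density

    def band_power(lo, hi):
        """Integrate the PSD between lo and hi Hz."""
        mask = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[mask], freqs[mask])

    delta, gamma = band_power(1, 4), band_power(20, 40)
    print(f"delta: {delta:.3f}  gamma: {gamma:.3f}")
    print("desynchronized (wake-like)" if gamma > delta else "synchronized (sleep-like)")
    ```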

    1. James, many thanks for the information. Interesting possibilities.

      I don’t have much to say other than to note that our ignorance of the integrated operations of the brain subsystems that support consciousness means that, at best, we see many correlations with consciousness, including these faster brain oscillations, but we’re a long way from credible conclusions. In addition to your suggestion that damage to the brainstem might result in the brain’s inability to sustain the faster oscillations, it’s also possible that such damage might leave the brainstem unable to receive resolved pre-conscious images or create conscious images from them.

      BTW, wakefulness and arousal are the preconditions for consciousness when we’re awake, but are those faster brain oscillations present while we’re dreaming? It would also be interesting to know about these oscillations in partial-to-fully decorticated animal experiments.

      My preference for brainstem consciousness (i.e., brainstem produces feelings) is based on experimental and observational data and bolstered by my thought experiments about the evolution of brain structure after the brainstem complex, the oldest brain structure, was running the show. For instance, what might be the brainstem’s functional relationship with a primordial cortex? Would a fully functional organism feel nothing of itself and its environment until an initial cluster of cortical tissue evolved?

      Obviously, the brain doesn’t know it’s conscious so continued evolution of those supporting structures would be dependent on their contribution to the reproductive success of the organism. Perhaps consciousness is the easiest way to control precision movement or is simply a side-effect of the development of such control. All fascinating to contemplate …

      1. “are those faster brain oscillations present while we’re dreaming?”

        Yes.

        BTW, there are even oscillations in the 20 hertz range in the mushroom bodies of locusts when they’re presented with odorants.

        “Presentation of an airborne odor, but not air alone, to an antenna evoked spatially coherent field potential oscillations in the ipsilateral mushroom body, with a frequency of approximately 20 Hz. The frequency of these oscillations was independent of the nature of the odorant. Short bouts of oscillations sometimes occurred spontaneously, that is, in the absence of odorant stimulation. Autocorrelograms of the local field potentials in the absence of olfactory stimulation revealed small peaks at +/- 50 msec, suggesting an intrinsic tendency of the mushroom body networks to oscillate at 20 Hz.”.

        https://pubmed.ncbi.nlm.nih.gov/8182454/
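
        For what it’s worth, the +/- 50 msec autocorrelogram peaks and the ~20 Hz figure are two views of the same thing, since a 20 Hz rhythm has a 50 ms period (1/20 s). Here’s a minimal sketch with a synthetic field potential, all parameters illustrative:

        ```python
        import numpy as np

        fs = 1000                                   # samples per second (1 ms bins)
        t = np.arange(0, 5, 1 / fs)
        rng = np.random.default_rng(2)
        lfp = np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)  # 20 Hz + noise

        x = lfp - lfp.mean()
        ac = np.correlate(x, x, mode="full")[x.size - 1:]  # autocorrelation, lags >= 0
        ac /= ac[0]                                        # normalize: 1.0 at zero lag

        lag = 25 + np.argmax(ac[25:75])             # first peak, searched over 25-75 ms
        print(f"first autocorrelation peak: {1000 * lag / fs:.0f} ms (~{fs / lag:.0f} Hz)")
        ```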

      2. “Would a fully functional organism feel nothing of itself and its environment until an initial cluster of cortical tissue evolved?”

        My view is the cortex is not necessary for consciousness. However, that doesn’t mean that the cortex, if present, does not participate in generating the consciousness of the organism.

        As I said, oscillating neurons of sufficient critical mass are the requirement. The cortex, as part of an oscillating and integrated set of neurons, would contribute. I see it arising from the whole brain, or at least from many different parts of the brain.

  10. Thanks.

    To sum up the studies, it seems to me that the more general case is that consciousness does not exist unless faster brain oscillations are present. Damage anywhere that disrupts the ability of the brain to produce those faster oscillations results in coma or some sort of consciousness disorder.

    The tools for correcting this in the case of brain damage seem to be fairly crude, but in the more general case, restoration or improvement of consciousness is associated with the faster oscillations, no matter how it is accomplished.

    That by itself isn’t an EM field argument. A number of theories would be supported by those findings. But to me it provides a plausible explanation of why damage in the brainstem would result in coma – because the damage leaves the brain unable to sustain the faster oscillations, perhaps because neurotransmitter generation is too low or the stimulation of the rest of the brain is inadequate.

    1. This discussion somewhat reminds me of the paper on possible islands of awareness: https://selfawarepatterns.com/2020/02/20/islands-of-awareness/
      In particular, the hemispherotomy scenario, about which I made these observations at the time:

      A hemispherotomy is sometimes performed on a patient with severe epileptic seizures. It involves severing the connections between the damaged hemisphere and the other side, as well as its connections with the brainstem, thalamus, and other subcortical structures. However, a hemispherotomy, unlike a hemispherectomy, leaves the tissue in place, with all of its vascular connections.

      Could such a disconnected hemisphere be conscious? The authors note that, under normal circumstances, without the activating signals coming up from the RAS (reticular activating system) in the brainstem, the activity in the disconnected tissue has very low firing rates, equivalent to a deep dreamless sleep. But, they ask, what would happen if electrodes were inserted and used to stimulate the hemisphere? Might it then regain some consciousness?

      The authors discuss the role of subcortical regions in consciousness. It’s well established that they provide crucial support, but what is the nature of that support? Are they causal, constitutive, or both? Causal means they just cause awareness in cortical tissue but don’t participate in generating or consuming the content. Constitutive means they do.

      Personally, I think with a disconnected thalamus, the question is somewhat moot. Such a hemisphere’s ability to communicate with its disparate regions would be heavily compromised. I tend to doubt any awareness is possible under those conditions. Only if the subcortical connections were kept intact, with only the RAS disconnected, might it be possible to re-stimulate some form of consciousness.

      1. ” But, they ask, what would happen if electrodes were inserted and used to stimulate the hemisphere? Might it then regain some consciousness?”

        That’s exactly the sort of experiment I’m thinking about. Of course, it might need to be some sort of pulsed stimulation, and maybe something that stimulates production of neurotransmitters. In some cases the latter may be difficult, because it seems some neurotransmitters are mostly produced in the brainstem, which is additional evidence of the brainstem’s role in arousal and wakefulness. So the experiment may need to be combined with bathing the tissue in neurotransmitters.

        1. In a hemispherotomy-type scenario, but leaving the basal forebrain connected while disconnected from a still-functioning brainstem, it seems like the neurotransmitters and neuromodulators would still propagate through the vascular system, although that’s complete speculation on my part. The problem would be establishing that consciousness is present if there’s no motor output.

          You could try instead to work with an intact brain using a regional anesthetic. Maybe it would be possible to leave the producing glands functional, as well as the motor output portions of the midbrain, while inhibiting just the activating signaling from the RAS.

          Given the location of these structures, and the invasiveness of these experiments, this is likely only something that would happen in animal tests, if it’s even possible with current technology.

  11. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
