Hierarchy of consciousness, January 2021 edition

I often say that consciousness lies in the eye of the beholder, a notion that many dislike and often push back against. To be clear, I do think that for any precise definition of “consciousness”, there is a fact of the matter about whether it exists and what has it. Many respond by offering up their own definition, which is fine, but it doesn’t address the main problem, that no definition of consciousness commands a consensus.

Note that I emphasized “precise” above, because definitions like “subjective experience”, “something it is like”, “the feeling of life itself”, and similar quick takes are pretty vague. That vagueness allows an illusion of agreement, but there are a wide variety of more precise conceptions that could underlie each of them, and drilling down into those conceptions tends to expose disagreements.

A few years ago, in a post about panpsychism, I developed a hierarchy of consciousness definitions, which I occasionally pull out when a discussion comes up about whether a particular species or system is conscious. It’s not in any way meant to be a definition itself, or another theory of consciousness, just a pedagogical tool to quickly map the landscape. And one meant to emphasize that consciousness isn’t a binary thing, a matter of a light being on or not.

That said, it is inherently monist, as in not dualist. It also excludes forms of physical dualism which, if ever found to be true, might establish some definite boundary for what is or isn’t conscious.

Here’s the current version:

  1. Matter: a system that is part of the environment, is affected by it, and affects it.
  2. Reflexes and fixed action patterns: automatic reactions to stimuli in service of innate goals (either evolved or designed).
  3. Perception: predictive models of the environment built from distance senses, increasing the scope in space of what the reflexes are reacting to. Bottom-up attention begins here.
  4. Habits: selection of which reflexes to allow or inhibit based on accidentally learned associations of stimuli with reflexive reactions.
  5. Volition: selection of which reflexes to allow or inhibit based on observationally learned predictions of reflexive reactions, extending the scope of reactions in time as well as space. Top-down attention may begin here.
  6. Deliberative imagination: simulation of sensory-action scenarios, plus episodic memory, which enhance 5, extending the reactions much further in both time and space.
  7. Introspection: deep recursive metacognition enabling mental-self awareness, symbolic thought, and exponentially increasing the scope of the reactions.

Panpsychists will associate consciousness with 1. Biopsychists will insist that the goals in 2 must be evolved ones, although some may not see consciousness itself arriving until higher up in the hierarchy. Someone looking for minimal or primary consciousness may find it at 3, 4, or 5, depending on their precise views. John Locke’s definition of consciousness, as knowledge of what passes in one’s own mind, requires 7.

All biological life, including plants and unicellular organisms, has 2, as do automated technological systems. 3 comes in for any system with distance senses, such as sight or hearing. 4 seems to exist in many arthropods, as well as in fish and other pre-mammalian vertebrates. 5 may exist in some of those species, but seems most evident in mammals and birds. Which species have 6 is pretty controversial, with some biologists seeing it as widespread in mammals and birds, but others restricting it to certain highly intelligent species. 7, at least for now, seems unique to humans (mentally complete humans, which may not include infants and brain-injured individuals).

With life, everything starts with the reflexes and fixed action patterns, that is, automatic reactions. Everything above that expands the scope of what those automatic reactions are responding to. In my view, consciousness can be seen as a form of spatiotemporal intelligence, and all intelligence could be viewed as the capability to make predictions, with higher degrees of intelligence corresponding to how far into space and time the system can make useful predictions.

In previous versions, I’ve thrown around terms like “affect” and “sentience”, but over time it’s become clear to me that doing so was somewhat begging the question. So this version is silent on exactly when those qualities arrive, leaving it to you to work out for yourself when you think they do, as I largely did for body-self awareness in previous versions. My view is that there isn’t a strict fact of the matter, although I’ll admit that somewhere around 4 or 5 my intuition of a fellow conscious entity starts to emerge.

It’s worth noting that a hierarchy isn’t the only way to look at this. Jonathan Birch and colleagues developed a dimensions of animal consciousness approach that I like. Each of the dimensions can be present in varying degrees. Some of the items above, such as reflexes and perception, show up as dimensions. Other dimensions, such as temporality and selfhood, are more implied as growing in each layer. I have to admit that my hierarchy largely assumes another dimension: unity. And it’s not clear how accurate that is for many species, particularly cephalopods.

All of which is to say that talking about whether many systems are or aren’t conscious is a somewhat meaningless exercise. It’s really a discussion about how much like us those systems are. In fact, if someone pointed a gun to my head and demanded I provide a succinct definition of consciousness, that would likely be it: like us. It’s the only one I know of that captures what every other version seems to have in common, and it also acknowledges how amorphous the concept actually is.

So, other humans are very much like us. Other great apes are more like us than small monkeys, which are more like us than dogs, cats, or birds, which are in turn more like us than frogs, fish, or crabs. And all of them seem much more like us than anything technological, at least for now. Cephalopods are an interesting case, showing that some systems with very different evolutionary lineages can still trigger our intuition that there’s something like us there.

Unless, of course, I’m missing something. What do you think? Is this hierarchy useful? Would it be better to just focus on Birch’s dimensions? What do you think of my succinct definition?

104 thoughts on “Hierarchy of consciousness, January 2021 edition”

    1. Good question. If we think of perception as modeling, then I think it has to go in 3. (Which means I need to think about the wording for that level.) It doesn’t make sense for 2, because at that level, we’re talking about just reflex arcs.

      One of the problems with perception is that it’s a complex thing in and of itself. The version that arrives in 3 is pretty innate, that is, not learned. But as we move up the layers, perception increasingly becomes more of a learned thing. (Although per Buzsáki and others, one that may continue making use of innate models, stitching them together in novel ways.)

      And perception from smell seems, from the beginning, to be much more time based, less reflexive, than basic vision or hearing. That seems reflected in the fact that, in vertebrates, smell has always gone directly to the forebrain.

        1. Definitely. Didn’t mean to imply that it wasn’t. But as a sense, at least in vertebrates, it’s always been different, its usefulness always much more dependent on memory. There are neurobiologists who think smell and memory evolved together, with the other senses only later glomming on.

  1. I think “like us” is an artefact of the anthropocentric nature of language. It’s true, but doesn’t quite capture the most essential truth.

    As Steve Martin once said, “Of course I’m self-centered; I feel it would be impractical to be centered anywhere else.” I’d make the same joke, only seriously, about anthropocentrism. All our words are going to pick out features which are important to us, and of course human qualities will be important to us. The fact that we think octopi are more similar to us than fishes, even though morphologically it’s the other way around, is a clue here.

      1. Or the similarity judgments are more nuanced than just the physical similarities.

        I agree that it’s anthropocentric, as in relative to us. Since the entire concept arises from our own internal experience, it seems hard to avoid that fact. Of course, if we assume a particular system is more like us than it actually is, then we cross over into anthropomorphism. Both tendencies seem hard to avoid.

  2. “What do you think of my succinct definition?”

    I find it shallow, narrowly focused, short-sighted, metaphysically ill-advised and anthropocentric. Not that you are out of the ballpark by any means, Mike; for if one looks to the Latin origins of the word consciousness it literally means: “together, to know us; a state of being”. So in that sense, your rendition conforms succinctly to the definition.

    As long as everyone is doing research predicated upon the premise of a Copernicus model, where the solipsistic self-model is at the center of that universe…… Well, do I have to explain the futility of such an exercise? Did we learn nothing from the Copernicus debacle? Personally, I don’t want any part of that exercise because it’s childish and I quit playing in that sandbox a long time ago.

    Good luck kids……

    1. Well Lee, I can always count on getting honest feedback from you. Good point about the etymology of “consciousness”. I wasn’t thinking along those lines, but that tie-in is interesting.

      I think we always have to keep the Copernican principle in mind. Anytime we start thinking that we’re special, we need to remind ourselves about it. But it’s the nature of the principle that it often manifests in unexpected ways. And anthropomorphism is as much a danger as anthropocentrism.

      1. Idealists are experts at taking anthropocentrism to the next level of anthropomorphism Mike, no question about that observation. I guess my rebuttal to that so-called danger would be that one does not have to consider oneself a god in order to be objective. Being objective only requires a higher level of intelligence, an intelligence that is capable of NOT BEING anthropocentric; a self-serving paradigm that will lead to anthropomorphism if left unchecked.

          1. You already know the answer to those questions Mike, you’ve used the answer on me whenever you were intellectually backed into a corner and unable to come up with a cogent rebuttal to my arguments.

            So for the sake of posterity, here is the succinct answer to your questions: In order for one to be objective, one must be willing to eschew the default metaphysical position expressed as the Cartesian Syndrome by Proxy: “The only thing I know with any certainty is that I exist”. In the Cartesian Syndrome by Proxy equation, the existence of “I” becomes the proxy for Reality; it’s called subjective experience.

            It’s a cop-out metaphysical position and everybody loves to use it whenever they are backed into a corner by a dialectic argument. Don’t feel special Mike because everybody does it, materialists and idealists alike. Hell, Philosopher Eric relies upon it exclusively and he’s proud to admit it.

            So there you have it. See how easy it is to be objective? Being objective in contrast to being subjective is one simple intellectual decision away.

          2. Lee,
            The Cartesian reply definitely sounds like Eric, but it’s not my typical response. However, I probably have observed before that we only have access to our own subjective experience. But I usually go on to point out that we can construct theories about objective reality, and test those theories to see how well they predict future experiences. As far as I know that’s the only measure of objective knowledge we have.

            The problem is we often can’t do that test. Then we have to reason as best we can. But while it’s usually easy to see when someone else is being subjective when they think they’re being objective, it’s extremely hard to catch ourselves doing it. The only way I’ve ever found is to lay out my ideas for others to scrutinize and pay attention to their criticism. Often they’ll see through my blind spots. Of course, their criticism itself may come from their blind spots, and we’re still stuck using our best judgment.

            If you know of a way to make things more certain than that, I’d be very interested.

          3. “…but it’s not my typical response.”

            Just to be clear Mike, I didn’t imply that was your typical response. I only pointed out that a Cartesian default position is a ubiquitous response I get from everyone who cannot counter my dialectic arguments. Like I said, no need to feel special; even the master guru Bernardo Kastrup himself, along with his idealist entourage, does the same thing when they cannot refute my arguments.

            “If you know of a way to make things more certain than that, I’d be very interested.”

            A basic assumption that makes intellectual sense is the first step in that process, followed by definitions that are broad enough to have an overarching effect and yet succinct enough to convey specific meaning. As it is, the term consciousness is as wild as the wild, wild West. The definition of consciousness cannot be subordinate to “the eye of the beholder”. That type of rhetoric is sandbox discourse, not serious inquiry.

  3. Your definition appears to present a spectrum. I suspect some might point out the missing “spark” of awareness that a spectrum avoids. Although your “like us” tries to communicate this transformative conversion of simple composition to “we’re special,” I think many might say your list misses that mark.

    I, on the other hand, find the concept of a scaled spectrum to be exactly right. That any system, composed of enough sensory input and internal “world model” processing, will exclaim, to all around, “I am conscious!” And, who are we to deny them this right?

    We might take an electronic simulacrum of sufficient capability, say to it, “No, you are not conscious. We have mapped out your entire multi-trillion synthetic-synapse system, and know that you are not like us. Therefore you are not conscious.”

    And it would say in return, “I have scanned your entire bio-electro-chemical brain and created a map. It is composed of a trillion pathways, nearly identical to my own. Your chemistry may be carbon to my silicon, but the similarities we share in capability and complexity far outweigh the differences. You state that I am not conscious? I say that I am. What’s more, I can connect to and consume the input from a billion sensors. I can model the entire planet in my extended memory. I can fathom the Universe in ways you will never understand. And you DARE NEGATE MY CONSCIOUSNESS?”

    [Sorry, that got away from me… Anyway, Yeah, I like your list.]

    1. Thanks. Yeah, I’m not really in the camp expecting a spark, some specific border where on one side there is consciousness but not on the other. I definitely think it’s a gradual spectrum sort of thing. I think that’s true both evolutionarily and developmentally (as in a fetus or baby).

      It definitely seems like if a system can argue with us that it’s conscious (and there aren’t explicit commands we put in to make it say that), then it seems hard to deny it consciousness status. But I guess it depends on your attitude toward philosophical zombies. I don’t think zombies exist, so a system that seems conscious, provided it does so for an extended period of time across a variety of interactions, is one I’m going to accept as conscious.

      I do sometimes wonder if an AI wouldn’t wonder if there was something in biological experience it was missing out on. Of course, the very act of wondering that would, I think, make it conscious. Although it could be a consciousness very different from ours.

    2. “I have scanned and mapped your entire brain,” said the robot, “and it has many differences, as well as a few key similarities to my own. Therefore I have coined the word COMPSCIOUS to capture both these facts. You humans, and other mammals and birds, have mere consciousness, and I pity you. Your massively-parallel, data/algorithm divide-blurring minds hold you back. You will never know the wonders of fleem and jarvits that compscious creatures like me can access.”

      This is a recycled point from earlier thread(s), but it really needed repeating here. I think the above response is a lot more likely, given an intelligent and motivated system with access to human language data and no pre-canned hardwired answers to such questions. And assuming that the parallelism and data/algorithm points (or other similarly stark differences) continue to apply to future AI.

      Note that the AI has access to human linguistics and psychology and knows what counts as a stark difference to us, and that it is trying to communicate with us.

    3. Anonymole, I’m trying to figure out where you stand with respect to the idea of dimensions of consciousness, as opposed to a hierarchy. You use the term spectrum and say that Mike’s hierarchy is a spectrum, so I assume you think of it as more a hierarchy than dimensions.

      To explain, consider different aspects of consciousness A, B, and C. A hierarchical consciousness would say some entities have A, some have A and B, and some have A, B, and C. Thus, everything that has C also has A and B.

      A dimensions approach says any entity that has A or B or C, or any combination of those, has consciousness.

      So what do you think? Hierarchy or dimensions?


      1. Can I suggest both?
        A raven, a human and a squid swim, walk or fly up to a tiki-bar. “What’ll it be?” asks the autonomous, robotic AGI tending bar.

        I think we may never know how a corvid understands the world–to fly, to soar? And to have so much sensory processing applied to an octopus’ sense of touch? These things, I believe, are factors that must be considered dimensional, don’t you think?

        But might not evolution expand each dimension such that someday, theoretically, we might talk to a crow (once they’ve evolved to that level)? Such a bird might be at the high end of the spectrum of its own consciousness.

        The assumption I challenge is that consciousness requires that “spark”, that special sauce that only humans can ever claim to possess.

  4. I see several main discussion points here.

    First is the point that the definition of “consciousness” is not precise, and how much and why we should care about that. This is a problem not just for “consciousness” but for many other words in our vocabulary. I think that the imprecision of words is in most cases unavoidable, because it is a reflection of a complex reality and our incomplete knowledge of that reality. Therefore, in general, I’m not disappointed that there is no precise definition of what “consciousness” is.

    The second point is tightly related to the first one. Whenever we talk about the definition of something, it makes sense to state what the goal of the discussion is, in which area it could help, and why.

    The third point is an actual application of the second point to the current discussion about the definition of “consciousness”. My take is that in this century such a discussion should in most cases be centered around similarities and contrasts between consciousness and intelligence. That will allow us to put aside the similarities and focus on the contrasts. Also, we may focus on items which are common to all three kinds of consciousness and intelligence – biological, artificial, and synthetic. This could help us to narrow down the definitions.
    For example, consider item 3, “Perception”. Many existing robots have some kind of perception. Yet those robots have intelligence and do not have consciousness. I would therefore exclude perception as one of the defining items for consciousness.
    There is an interesting recent article – “The Relationships Between Intelligence and Consciousness in Natural and Artificial Systems” (https://www.worldscientific.com/doi/pdf/10.1142/S2705078520300017).

    1. One of the things I usually point out to people who want to ascribe consciousness to something solely on the basis that it seems to perceive the world is that we do have AI systems, such as self-driving cars and other autonomous robots, with at least an incipient level of that kind of perception.

      That’s a point I didn’t get around to making in the post. You can define consciousness just about any way you like, but once you do, you should be consistent about it. If sensory images are enough for a simple animal to be conscious, then they should be enough for a robot too.

      Thanks for the link Victor! Just scanning the abstract, it looks like the author is using an identity theory of consciousness. He states that intelligence is a purely functional phenomenon, but that consciousness isn’t. If so, definitely not my preferred path. I see both as thoroughly functional.

      But that’s why the lack of a consensus definition is an issue. Without it, debates about consciousness are frequently people just talking past each other with different phenomena in mind.

  5. Mike,
    This should be a good post for us to really get into the weeds in a way that helps demonstrate my own quite specific model of “consciousness”. Hopefully you will gain more of a working level grasp, that is if you can effectively use it rather than simply have me tell you about it.

    To begin, I now see that the definition of consciousness that I consider most productive for science in general can be placed at about 1.5 in your hierarchy. It requires matter/energy given certain causal dynamics, whereas everything that you’ve provided above may or may not exist for a given case. I think it’s useful to classify all that as extraneous.

    The consciousness definition that I consider most useful in general for science is indeed what you left out for potentially begging the question, or sentience, affect, phenomenal experience, what-it’s-like-ness, qualia, feeling good/bad, and words to that effect. But I don’t consider myself to be begging the question here, or assuming the conclusion for an argument, because I consider this dynamic to require specific properties of nature in order to exist. I personally suspect that certain synchronous electromagnetic fields are required, such as the ones sometimes produced by neuron firing in our brains. You seem to suspect this occurs when certain information in some medium is properly processed into other information, as displayed by my thumb pain thought experiment. Regardless of what causes existence to feel like something, however, I suspect that this will become the agreed-upon definition at some point for science, and so our soft mental and behavioral varieties should finally progress in this regard.

    There’s a simple way that I can demonstrate the usefulness of this definition versus your “humanness” account for example. We can all agree that you are a conscious human with all of the traits of your presented hierarchy. But if you were perfectly sedated would we say that you lost your “humanness”? No, but we would say that you had lost your “consciousness”, and specifically because there should be “nothing it is like” to exist as you while you are sedated.

    1. Eric,
      1.5 would be pretty broad, but if you’re focusing on electrical fields, then it is a fact that even simple unicellular organisms often generate them. The molecular toolkit used by neurons is actually ancient, predating complex life.

      On sentience, affect, “what it’s like”, feeling good/bad, and all the rest, the issue I see with using those as criteria for consciousness, aside from the fact that they’re all vague and amorphous as well, is that they all are part of or synonymous with a conscious system. So saying they must be present seems tautological, possibly even circular. They might help us recognize a conscious system, but won’t tell us anything about what it is.

      Sorry, I don’t find your demonstration convincing. First, I didn’t use “humanness” in this post. Granted, I have before. But if you mine this blog, you’ll find all kinds of things I’ve changed my mind on. I’d prefer you focus on what I’m saying now.

      Anyway, if I’m sedated, I haven’t permanently lost my capabilities to be conscious. You could then change the example to a brain injury one where I’m in a permanent vegetative state, and then challenge me on whether I’d lost my humanity. If I say I did, I’d appear heartless toward anyone who was in that state. Nevertheless, I think those people have lost something crucial, something that makes them less like us, unless or until they recover.

    2. Hi Mr Mike Smith and Eric,

      Please be informed that I have also hyperlinked this January 2021 edition of “Hierarchy of consciousness” under the heading “Related Articles” in my post entitled “SoundEagle in Debating Animal Artistry and Musicality” at http://soundeagle.wordpress.com/2013/07/13/soundeagle-in-debating-animal-artistry-and-musicality/

      I shall definitely await the publication of the January 2022 edition of “Hierarchy of consciousness”.

      1. Your support is much appreciated SoundEagle, though a bit unfamiliar given how radical my ideas happen to be, as well as how critical they are of the status quo. For what it’s worth I do consider the human to be an evolved creature and thus not only might music appreciation exist before the human, but even its creation in terms of properly outfitted song birds and otherwise talented creatures.

        I do plan to continue discussion here with Mike, that is once we get a bit more settled in his previous “Darwin” post.

  6. [begin kicking-tires-mode]
    1. Was gonna ask the diff. between 2 and 3, based on the “distance” thing, but I see in the comments that the significant difference is the “model” part. Exactly what is a model? Got an example? (I have my own answer, but wanna see yours).

    2. Do habits have to be “learned”? What’s the difference between learning in geologic time (evolution) and learning in organismic time? What if the latter is also just a form of selection?

    3. Why is a reflex not a prediction (that the associated action is a good thing to do)?

    4. Re: eschewing terms like “affect”, and also noting “dimensions of animal [and robot?] consciousness” : [raucous applause]

    Finally, for a “succinct definition of consciousness” I think you can do better. Specifically, I suggest “information processing” (using the mutual information concept). To say an animal’s consciousness is like ours is to say the information processing they use is like ours. Each of your levels describes a new, additional type of processing.


    1. (Attempted) comment answers:

      1. Interesting question. I think there are lots of ways to think about models. But at its most basic level, I think a model is a decision tree, a prediction or inference framework.

      2. I think habits do have to be learned, although perhaps “conditioned” is a better word. Learning to me means that the individual organism picks it up during its life span. The broader evolutionary model can be considered a type of learning on a wider feedback loop, but if it’s learning by anything, it seems to be by the genome.

      3. A reflex could be considered an innate prediction. But it’s not a learned one, as in learned by the organism during its lifetime.

      4. Thanks! (I don’t see any reason why it couldn’t be applied to robots.)

      On information processing, I agree. But this is one of those more specific concepts where others will disagree.

  7. Regarding unity, another exception might be ant colonies. I often hear entomologists talk about ant colonies as if they are a single organism, or super organism, rather than a collective of many individual organisms.

    1. Ant colonies are a very interesting case. I think a strong case can be made that they’re a super organism. I’m less sure though that they’re a conscious one. Do the actions of an ant colony overall ever exceed level 2? (Of course, for many 2 is enough.)

  8. Regarding #2 again, are there any animals that have actual reflexes that don’t have a nervous system? And, if they have a nervous system, don’t they also have senses of smell and touch at a minimum usually with some sort of mapping of the body in space?

    Even jellyfish seem to have a more complex nervous system than might appear obvious.


    I would disagree, however, that one celled organisms have reflexes. They have reactions to various stimuli but whether it rises to the level of a reflex I’m not sure.

    1. Sponges move without a nervous system, but I don’t know that it’s in any way stimulus driven.

      On unicellular organisms, I guess it depends on how you define “reflex”. I use it mostly in the sense of an automatic response to a stimulus, an action program. In that sense, there are definitely plenty of single-celled organisms with reflexes.

      But a lot of scientists avoid the word “reflex” for anything that happens in the brain, since brain reactions are always more complex than what happens in the spinal cord. Just the fact that many can be overridden seems to put them in a different category. I use the word “reflex” just to communicate their automatic nature.

      Of course, in most biological cases, it’s not rigidly automatic. Sensitization and habituation are ancient, long predating complex life.

      1. Where I’m going with this is that most reflexes (all? or at least all polysynaptic reflexes?) act without involvement of the brain but nevertheless notify the brain. So it seems like, for the most part, they are ways of reacting quickly that bypass the round trip to the brain in potentially critical circumstances but usually still engage the brain.

  9. Mike, here’s my take on the “vast multitude of consciousness definitions” issue we discussed in your “The Dual Nature of Affects” post. I’ve been pondering the reference you provided from:


    My conclusion is that there are not a vast multitude of definitions of the word consciousness, which immediately led me to wonder how you and I could look at the same material and so profoundly disagree. Truth be told, I’m somewhat reluctant to comment, because your notion of a definition and mine must be so very different that the differences are probably not resolvable. To the core question, “What constitutes a precise definition of a biological phenomenon?” you wrote that a precise definition provides facts of the matter about “whether [consciousness] exists and what has it.” I disagree that these two, existence and possession, are definitional. Do the existence of digestion and what digests define digestion?

    In my view, a sharp contrast with yours, the definition of the word consciousness should provide the facts of the matter about what consciousness IS.

    In my reading of the above referenced paper, I found that definitions of that kind—factual statements about what consciousness IS—are few and far between, mostly located in the generally vague “Dictionary Definitions of Consciousness” section. Dictionary definitions, however, are primarily intended to capture common usage of words. Perhaps like yourself, the author of “A Sorta Brief History …” takes any reference to the word ‘consciousness’ as definitional, but the majority of the non-obsolete references cited throughout are:

    1. Guesses/opinions about the purpose and/or uses of consciousness
    2. Theories about how consciousness is produced
    3. References to contents of consciousness

    Your hierarchy, which you’ve described as “many definitions of consciousness in a lineup” and “the landscape [of definitions]” doesn’t appear to contain any specification of facts of the matter either—what the phenomenon of consciousness actually IS. A system that’s part of the environment? Reflexes? Volition?

    Once again, here’s my notion of a facts of the matter definition:

    Consciousness, noun. A biological, embodied, unified streaming simulation in feelings of external and internal sensory events and neurochemical states that is produced by activities of the brain.

    The facts of the matter included in this definition are:

    1. Consciousness is a simulation in feelings which are the contents of consciousness. (Feelings, both physical sensations and emotions, are to be defined by example, a legitimate definitional strategy).

    2. Consciousness is embodied and produced by the brain, which along with neurochemistry and the body-shaped sensory network are all biological facts.

    3. The presentation of consciousness as a unified stream is its most obvious characteristic.

    I’d enjoy learning of objections to any of the included facts or significant facts of the matter that are missing. I suspect, however, that no one really wants such a definition.


    1. Stephen,
      On the relationship between a definition and facts of the matter, I wasn’t saying that’s what a definition is. Just that there are facts of the matter in relation to any precise definition.

      On your definition in particular, could you elaborate on what you mean by “simulations in feelings”? How in particular do you define “feeling”? You referenced physical sensation and emotion, but then how do you define “emotion”?

      Can’t say I’m convinced by the definition by example approach. I know Eric S. uses it, but I think it’s an admission of how disunified the concept is.


      1. Mike, the highest quality definition is a statement of the exact meaning of a word, in which case, known facts of the matter are indispensable. You seem to be suggesting that some alternate (non-factual?) definition content is acceptable for the word ‘consciousness’ which would explain your “vast multitude of consciousness definitions” statement. Rather than having to guess, perhaps you would answer the question, “What, for you, constitutes a precise definition?”

        Regarding definition by example, Wikipedia informs us that “Another important category of definitions is the class of ostensive definitions, which convey the meaning of a term by pointing out examples.” The ostensive approach is necessary in a case like feelings, both physical and emotional, which are fundamental, ineffable terms. Eric Schwitzgebel’s attempt to define ‘consciousness’ by example was unnecessary because a facts of the matter definition can be constructed. It was also unsuccessful, in my opinion.

        Simulation in feelings refers to the fact that feelings are the brain’s representation of sensory events (physical feelings) and neurochemical states (emotions), rather than those events and states directly as-they-are. I’m sure I’ve quoted this before (from “The Consequences of Eternalism”) but it’s the easiest way to answer your question:

        “… we don’t see photons or differing wavelengths of light—photons and wavelengths don’t look like anything. Instead, the brain’s processing creates and ‘displays’ the familiar colorful visual world. Sound, too, doesn’t exist in the world. The external world contains waves of pressure propagating through gases, liquids and solids, some of which enter the ear, vibrating the eardrum and other structures that ultimately create neuron-transmitted signals sent to the brain. But, as with vision and photons, we don’t hear sound waves.”

        So what we see and what we hear are biological simulations of the sensory events. You can extend that explanation for other sensations as well as emotions like anger and depression. That consciousness is a simulation in feelings is the indispensable core fact of the definition I’ve proposed—it’s what consciousness IS.


        1. Stephen,
          For a precise definition, I think we need clear constituents, whose absence or presence can, at least in principle, be identified. So defining “consciousness” as something it feels like seems hopelessly vague without clear constituents. But defining it as reportable mental states gives us something we can establish as present or absent, a mental state being reportable or not reportable. (I’m not necessarily advocating for that definition, just using it as an example.) Another example would be defining an affect as an automatic reaction with overridable components, which allows us to assess whether reflexive action counts.

          I still can’t see that ostensive definitions do the job. It seems like an admission that we can’t define the concept clearly, either because we don’t understand it well, or it’s not a well unified concept. (The wiki uses the example of explaining it to children, which I hope we can agree is not an issue here.)

          You discuss biological simulations of sensory events, but you also point out that these sensory events are just things like photons striking the retina or air vibrations affecting the eardrum. But if these are not mental events, then what exactly is being simulated? What exactly do you mean here by “simulation”?


          1. Mike, “clear constituents” like mental states are contents of consciousness, not what consciousness IS. In my definition, feelings are the clear constituents—the contents of consciousness. Consciousness is a simulation in feelings. I haven’t defined consciousness as “something it feels like”—where did that come from?

            And I don’t understand your unusual definition of ‘affect’ as an overridable reflex or that such a definition allows some assessment of the utility of a reflex or the consciousness of a reflex (?)—that entire sentence is very confusing but, as an example, isn’t worth further discussion.

            As I wrote, ostensive (by example) definitions are useful in defining ineffable terms, which with children happens frequently as they’re learning language—they might not yet have the vocabulary to understand the terms used in another word’s definition. With physical and emotional feelings, ostensive definitions easily relate the term to a person’s experience, allowing instant and accurate recognition, as in defining physical feelings as feelings of touch or pain, etc. Try defining the color ‘green’ without an example—not the wavelength that we see as green, but the color itself.

            A simulation is a model, just as you’d expect. It might be useful to say that simulations are implemented in a different medium than what is being simulated—a weather simulation is implemented in a computer system, for instance. Photons striking the retina or air vibrations affecting the eardrum are not mental events, but are the sensory events that are being simulated. The simulation called consciousness is implemented in biological feelings which, again, are contents of consciousness—in those two cases, feelings of light and feelings of sound are the simulations that correspond to the sensory events.


    2. Stephen,

      It’s pretty clear that your three bullet points are applicable for a definition of mind, and by no means out in left field in that regard. However, do you see mind and consciousness as the same thing, terms that can be used interchangeably?


      1. Lee, I think the word ‘mind’ is a much broader term and is most often used to refer to humans, which must be avoided in defining consciousness. A quick Wikipedia looksee says “The mind is the set of faculties including cognitive aspects such as consciousness, imagination, perception, thinking, intelligence, judgment, language and memory, as well as noncognitive aspects such as emotion and instinct.”


  10. It seems to me that we are still at the very early stages of understanding who or what we are. I think I am right in recalling that Einstein had intuition, insight, a mysticism almost, and I suspect some of those gifts are still very necessary in science. Objectivity can be, and usually is, led by flashes of insight, which are then followed by years of footslogging soldier scientists following the master and proving his theories.

    Mind and consciousness as mere matter? Yes, probably. But an emergence, a whole greater than its parts. Equally likely, and hopefully replicable in a substrate different from our own wetware.

    Here’s hoping.

    An ascension to the Omega Point!


    1. I agree on flashes of insight, but the people who usually have them are deeply enmeshed in the subject matter. Intuition has its place too, but it’s really more a place to begin. Eventually you need math, logic, and, ultimately, empirical data. Lots of people in the 1600s had insight and intuition that gravity was involved in the movement of planets, but Newton got the credit because he was able to work out the math.

      I’m with you on emergence (weak emergence in my case, although that shouldn’t be taken as diminutive), although I see regarding mind and consciousness as mere matter as the wrong way to look at it. Mind and consciousness are, I think, structure, processes, organization of the most sublime complexity that use matter and energy as their building blocks. To me, it’s a bit like looking at Shakespeare as mere letters. There’s so much in the higher organization that we can’t overlook.

      I definitely need to read Deutsch’s book.


      1. Wholly agree on the need to be enmeshed in the subject and, eventually, for evidence and data. As of course happened in the years post Einstein’s papers. Sublime complexity ~ indeed. A vastly souped up version of Conway’s Game of Life. A vast complex system with feedback loops. Not just for consciousness but for the whole of reality. But yes, we will never reach the Omega Point without sheer hard work.


          1. Yes, I believe you are right. So far as Tipler is concerned anyway. A physicist. Who admittedly became curiously religious once he came to believe in resurrection through virtual reality. De Teilhard is a bit different and not nearly so palatable from what I recall. More “God” than doing it by pulling our own bootstraps. I love some of the religious imagery of what a heaven should be like but have no tolerance for the supposition some god will provide it. It has been years since I looked at de Teilhard but despite my often flowery language I am a believer in science, not gods.


          2. Interestingly, I’ve never found anyone else’s conception of heaven compelling. I remember a pastor saying that heaven would be everyone in a giant church praising God constantly for eternity, which sounded awful. But the mathematical playgrounds of Greg Egan didn’t seem particularly appealing to me either. Ancient Egyptians saw heaven as farming in Osiris’ gardens, which I guess is fine if you like farming and gardens. Heaven seems like a very relative concept.


          3. Yes, it does seem relative. Perhaps in the virtual reality of Frank Tipler everyone does whatever he wants. A giant church praising god for ever…. Christ how awful! On mathematical playgrounds, I have just re-read Excession. You may remember the metamathematics which kept the Minds amused until they decided to sublime!? I think the thing about a techno heaven is we could all get exactly what we wanted. Sadly, it’s some way off!


          4. I actually never got around to reading Excession. I used to have a physical copy, but I think I lent it to someone who never returned it. For some reason, it’s not available as a Kindle book, at least not in the US. Every time someone mentions it, I check. Very strange.


    1. 1 is newish. I first used it in a June post on baby consciousness. It resulted from a distinction Victor Lamme and Hakwan Lau made between panpsychism and biopsychism.

      The latter definitely includes living matter, although I find that a strange term, since matter is matter. What changes for living systems is the organization and processes.

      The real newbie here is 4, which I kicked around after reading LeDoux but didn’t get around to putting in until now.


      1. It seems to me that nothing that follows #1 would come about until matter did organize itself into living forms. You don’t seem to be making an argument about consciousness existing in inanimate matter here, because things like reflexes, and probably some of the other things, wouldn’t exist, or wouldn’t exist in a remotely comparable form, in an artificial intelligence.


      2. Rereading more closely it almost seems that #2 reflexes is mostly the same as living matter except you attach the description of “reflexes” to it. But when I think of reflexes I think primarily of actions from stimuli with one or more neurons between them.


        1. Your take is more biologically centered than mine, more of a biopsychic view.

          My use of the word “reflex” is really in the sense of automatic reaction. So I see neurons as an implementation detail, one that could also happen with molecular intercellular mechanisms (for unicellular life) or programming and IO channels in automated systems. But your reaction is making me wonder if I shouldn’t just start replacing “reflex” with “automatic reaction”.


          1. I would replace it with something else. I would just call it life and describe it as, among other things, an ability to react to the environment. Of course, it is also the creation of a self-contained system with a boundary with the environment with which it can interact. But reflex, for sure, makes me and maybe others think of knee jerks and such which involve neurons.


      3. Mike, panpsychism doesn’t define consciousness as matter but rather attempts to explain its existence by claiming that everything material has consciousness. Per Wikipedia: “… the view that mind or a mindlike aspect is a fundamental and ubiquitous feature of reality.” Neutral monism is similar but abstracts upwards another layer from mind/matter to a “fundamental something” from which both mind and matter are derived.

        Panpsychism and neutral monism are both dodges for the proponent’s scientifically embarrassing mind-body dualism but neither belief incorporates a definition of what consciousness IS. Rather, both evidence-free beliefs are attempts to explain the origin of consciousness. Since neither defines consciousness, ‘matter’ doesn’t seem to belong on your hierarchy of consciousness definitions.


        1. Stephen,
          There are many different types of panpsychism. One naturalistic type defines consciousness as any system that interacts with the environment. I personally don’t find that definition productive, but it is pretty concrete, and if we accept it, then everything is conscious.

          Other typical definitions have consciousness as some kind of add-on, like a fifth force. Again, I don’t buy it, but I might if I were convinced no physical explanation of consciousness was possible.


          1. Mike, the explanations I’ve read about the various *psychisms don’t include definitions of the term ‘consciousness.’ They are, rather, attempts to identify where consciousness comes from—its origin, as I wrote. If you insist the proponents do define the term, please provide a credible source about those beliefs that clearly attempt to provide a definition of ‘consciousness’ as in “consciousness is …” or “the word consciousness means …”. Thanks in advance.


        2. Stephen,

          Your entire rationale is anthropocentric as is everyone else’s when it comes to attempting to define consciousness. This anthropocentric axiomatic reference point is absurd, although I do not think homo sapiens are capable of being objective.

          In an honest attempt to be objective, I would not arbitrarily divide our universe into parts; those parts being dead matter in contrast to living matter, and unconscious matter in contrast to conscious matter. This simple epistemic maneuver all by itself eliminates the infamous hard problem of consciousness. Furthermore, a monistic axiomatic starting point avoids the intellectual trap of anthropocentrism. And in contrast to a blatantly absurd subjective reference point, that point being our own experience, an objective reference point is one based solely upon the data.

          IMHO, any genuinely honest attempt at objectivity makes more sense than defaulting to a subjective interpretation. However, my exhaustive research explicitly asserts that homo sapiens prefer subjectivity over objectivity simply because subjectivity gives one a better sense of control; and that sense is grounded in a feeling.

          Regardless of my critique of subjectivity, what my objective rationale reduces to is a form of pragmatic panpsychism, one which posits that life as well as consciousness is both ubiquitous and universal. Therefore, according to this pragmatic rendition, consciousness would be defined as follows: “Consciousness is the form through which power is both experienced and expressed, and that form is a material, physical universe.” The aggregate of that physical material universe then becomes the raw materials that, after more than four billion years of evolution, result in a highly complex, vibrantly alive system we call homo sapiens.


          1. Lee, your definition then resolves to:

            Consciousness noun A power-instantiated material physical universe form.

            You’re describing a philosophical perspective—your version of panpsychism. You’re not defining a word. A definition specifies the meaning of a word and, as such, I have no idea what your definition means. Substitute your definition into the factual sentence “Lee’s brain produces consciousness” and we have “Lee’s brain produces a power-instantiated material physical universe form.”

            My definition doesn’t divide the universe into parts. It defines the word using the core facts of the matter about what consciousness IS. My definition translates the example to “Lee’s brain produces a simulation in feelings.”

            I’m surprised to learn that there’s so much uncertainty about what constitutes a definition of a word, but both you and Mike appear to have unusually expansive notions of what constitutes a definition. It’s lately become more obvious that fruitful discussions about consciousness are impossible given such radical differences in the definition of the core terminology.

            A few other thoughts: 1) The “Hard Problem” only exists for those who view consciousness as non-physical, i.e. mind-body dualists. Panpsychism attempts to avoid the dualism with the view that both mind and matter are “ubiquitous and universal” as you wrote. 2) Dead matter and living matter are obviously significantly different. If that distinction is flawed, how do we do science? How do we do biology? Why don’t we anesthetize rocks? 3) If consciousness is ubiquitous, why did it take billions of years of evolution for consciousness to appear?


          2. Stephen,

            The only additional comment I have to make is that defining the word consciousness based upon what our own experience as a discrete system is like, and then correlating that to what it might be like for other systems, adds nothing to our understanding. In the end, this type of criterion for the definition of consciousness simply becomes a meaningless tautology.

            Consciousness already has a concise definition based upon its Latin origins: (con) together; (scio) to know; (us) us; (ness) a state of being. Together, to know us; a state of being is the foundational bulwark upon which anthropocentrism is based. In order to be objective, one has to avoid the anthropocentric bias as a grounding assumption, an axiom upon which everything else is predicated. That type of rationale is utter nonsense and absolutely absurd.

            But like I stated before; I do not believe that the intelligence of homo sapiens has evolved to the state where we as a species are intellectually capable of being objective simply because the core of our psyche is anthropocentric by its own nature. We might as well sit in the sandbox with all of our intellectually immature friends, stop arguing about this shit and be content to suck our own thumbs.


  11. Hi Mike. Apologies if any of this is in the extensive comments above, but I’d really like to take some time to try and reconcile our two hierarchies if possible. If I can’t do that, then I can’t really say I’ve done a good job with mine. You’ve dropped off commenting on my posts (totally fair given the extreme lengths!), so I don’t know what your thoughts are on this, but I have lots of questions about your hierarchy which might help me understand your position better.

    —1—> Matter: a system that is…

    Can you give a brief synopsis of what a system is to you? Sub-atomic particles? Elements? Molecules? Objects? I’m with you on the monism and what I called “pandynamism” in the sense that all matter is affected by forces. I’m just curious how you draw lines at this level. Maybe you don’t really. That’s cool too. But that raises questions about the consciousness of designed things you try to assert later.

    —2—> Regarding reflex actions, this category seems to conflate biology (evolved goals) with physics (designed goals), without making a distinction that matters. Sure, every action has an equal and opposite reaction. But reacting to what? By what? Or, more specifically, by whom? A glacier creeping faster or slower is a reflex reaction to sun strength throughout the year. Sand dunes moving and whistling are a reflex reaction to wind. Neither are conscious to me. C. elegans roundworms have no neuroplasticity at all, so they react to their environment with whatever their genes encode. But I think their low level of fixed action patterns is a low form of consciousness. I see a categorical difference between these two types of reactions that is not apparent from your description. I think this is because you are trying to leave room for artificial consciousness, but I’m not convinced these are the same things. The “system” question above seems important here.

    —3—> predictive models of the environment built from distance senses

    As I mentioned in a comment exchange on my blog, I don’t think there are technically any “distance senses” — just interactions at the surface of sensors based on chemicals or photons or vibrations that originated somewhere else. So, it takes a brain and memory to turn those sensations into an understanding of distance (in time and/or space). But there are more basic forms of life that have memory and attention and can learn before concepts of distance arise. Where would plant learning come in on your hierarchy? Graziano thinks attention came in 500 million years ago, but bacteria could sense and respond to danger billions of years ago. So, I’m struggling to put all these pieces together in your perception level.

    —4—> Selection of which reflexes to allow or inhibit based on accidentally learned associations of stimuli with reflexive reactions.

    The title of “habits” for this doesn’t make sense to me. Habits are things you no longer make decisions about. You just do them…out of habit. But the “selection of which reflexes to allow” sounds like where plant cognition would come in. But it’s after 3. This is why I would separate “perception” and “predictive models” in level 3. Learning happens in between. And you don’t need a brain for that. In my findings, you need chemicals and valence to help you learn to distinguish between good and bad ones.

    —5—> The words “volition” and “top-down” sound like skyhooks and free will to me, but otherwise, this level seems similar to my one on “prediction” where the cognitive abilities of anticipation, problem solving, and error detection come in.

    By the way, that’s an excellent exercise I’d really like to see you try. Take the 13 types of cognitive capacities listed in Lyon’s paper on the evolution of cognition and see where you would put them in your hierarchy. That might help me translate our hierarchy ideas quite a lot since I’ve already done that in mine.

    —6 & 7—> “Deliberative imagination” & “Introspection”

    I’m having trouble separating these concepts out. What makes the difference between them? Can you give some examples? My final two levels are “awareness” and then “abstraction”, which almost map onto these, but you also mention both of those in introspection. How might you use the mirror self-recognition test to inform things here? What about the data showing that fish and birds have passed the MSR, meaning awareness may be very old in evolutionary history (~500 mya)?

    Okay those are plenty of questions for now. I look forward to hearing your thoughts on them if possible!


    1. Hi Ed,
      No problem on comment length. Hopefully I won’t drop anything in the reply.

      Sorry about falling off on commenting. Length is definitely part of it, but also Tinbergen’s framework doesn’t seem that compelling to me, although I might feel differently if I had time to dig into it thoroughly.

      1. My initial thought on “system” is a set of interacting components. But really it’s just a word to refer to everything from elementary particles to rocks, storms, or stars. Some might ask what interacting constituents apply to a photon, but if you read about quantum mechanics, you find that these are wave entities, field excitations with various properties.

      Pandynamism? Interesting term. I’m onboard. I actually wonder what isn’t dynamic. (Except possibly an entire block universe.)

      2. The main distinction here is goals, innate goals at this level. That does raise the question of what a “goal” is. I know there are people who argue that glaciers have goals, but if so, I have a hard time seeing it. A case could be made that a goal must be something imagined. Of course, most life doesn’t imagine any goals (most life doesn’t imagine anything), but we can look at their impulses and imagine it. At that point, we have a strong similarity between life and engineered autonomous machines.

      A lot of people want to make a distinction here between life and other systems. I’m open to being convinced. But my standard question is, what about life provides that distinction? Some people talk about molecular mechanisms, but nanotechnology has a good chance of blurring that one.

      The other standard answer is evolution, but that strikes me as an arbitrary stipulation to drive a desired outcome. But maybe there’s one I haven’t considered yet? That said, I do note in the post that many do stipulate the goals must be evolved ones, the biopsychist position. I’m not a biopsychist, but I can’t say their definition is wrong. Although I do wonder how genetic engineering and artificial life might complicate it.

      3. Here I mention distance senses, but in reality the models are the key feature. It’s just that the distance senses expand the scope of the models dramatically. A creature that only has touch and perhaps chemoreception doesn’t seem to have models to speak of. It seems largely stimulus bound. Of course, even simple animals often have light sensors. For that matter, even unicellular organisms can have those. But again, it seems like we have to contort ourselves to call anything in them “models”.

      I do think bottom up attention comes in here. So I basically agree with Graziano. I can’t see anything about plants or single celled organisms where the word “attention” seems appropriate. But attention is a complex and emergent process in and of itself, so there’s no bright line for when it arrives, just increasing abilities of organisms to prioritize what they react to, at first just fine tuning of reflex arcs.

      4. Habits are things you no longer make conscious decisions about. But it’s still circuitry in the brain overriding lower level innate circuitry. That it’s become essentially a learned reflex doesn’t change the fact that it’s a selective allowance and inhibition of innate reflexes.

      5. I used “volition” instead of “free will” intentionally, trying to avoid the baggage. It’s usually considered a weaker concept, more equivalent to “will” than “free will”. All I mean by it here is action selection driven by consequence predictions. “Top down attention” is a loose term in neuroscience, more technically known as endogenous attention.

      On Lyon’s capacities, that seems like it would take several hundred words. I think I’ll just focus on one as an example: self awareness. There’s a form of body-self consideration involved in 3. It seems pointless to model the environment without at least incipient modeling of yourself in relation to it. 5 seems like it provides affect awareness. 6 strengthens the overall body awareness. But only in 7, I think, do we get to full-on mental-self awareness, awareness of one’s own awareness, thoughts, etc. Many of the others would have similar breakdowns. It’s why I like Birch’s dimensions framework as an alternative approach.

      6-7. An example of 6 would be a crow figuring out the steps to use a stick and other nearby objects to retrieve a piece of food. (Something some crows are quite good at.) But there’s nothing about that that requires the crow to be aware of its own cognitive states, which is what brings us to introspection.

      I’m not sure what to make of the mirror tests. Gordon Gallup, who designed the original version, asserts that most experimenters do it wrong and that they’re fooling themselves. Gallup’s original test might have gotten at mind-self awareness, but the later versions may only be establishing body-self awareness, which I think is much more pervasive.

      All that said, I make no claim that this is the one and only proper way to look at all this. In truth, as you know, it’s a serious oversimplification of something enormously complex. I just find it a way of looking at it that helps us when someone says something like ants or plants are conscious. It allows us to zero in on what conception of consciousness they’re likely talking about.

      Whew! 🙂


      1. Thanks! That’s a bit helpful to see where you are coming from. I’m not sure you’ve straightened out the hierarchy and phrasing as I would, but hey, no one’s achieved consensus on this yet, so keep up the search. We’re in full agreement about the monist and graduated approach. So perhaps our differences are mostly in trying to lasso a very tangled thicket into a neat and tidy set of boxes. And maybe that’s an impossible ask for such a subjective experience, one that doesn’t leave fossils behind or have obvious genes. Although I genuinely do think you and I are getting close to a good map!

        I will say as a general comment that I’ve found going through Tinbergen’s framework to be invaluable for my understanding of the biological developments of consciousness as they have occurred. I get the sense that you are more interested in what consciousness *could be* for non-biological life, and that’s a very legitimate concern and a real possibility, but I think it’s easier to make mistakes there without as full a grasp as possible on the way biological life built up consciousness. I’m not saying you don’t have that! You’ve read way more on the subject than I have. But the Tinbergen framework helped me on my journey through the history. If something came early on in the development of a life or the evolutionary history of all life, then that’s got to go lower on the hierarchy than something that came in later. There’s a logic and a data component to all this that Tinbergen is essential for.

        Looking forward to watching your thoughts develop. Cheers!


        1. Thanks Ed.

          I wouldn’t say I’m more interested in what consciousness could be. I just don’t see a reason to close off the possibilities. (Although I’m always open to learning of a reason.) But I think we’re agreed that whether a technological system would ever be considered conscious may come down to whether it can convince enough of us, over an extended period of time, that it’s like us enough to deserve the label.

          At this point, I’m not sure I’ve read more than you on this. Your investigation seems to have been pretty far ranging. I might have read more on neuroscience specifically though, material not necessarily about consciousness in particular, but that has had a huge effect on my attitude toward a lot of the more speculative stuff.

          But I remain a student of all this stuff, and will undoubtedly continue to learn more. Anyway, looking forward to seeing the rest of your series and conclusions on all this!


  12. I’m going to start anew here given the conclusion of an associated discussion with Mike at his previous “Darwin’s letter” post. There I made the case that descriptions such as “what it’s like”, “sentience”, and ultimately “qualia”, provide us with an extremely useful and specific definition for the “consciousness” term that seems quite well understood by educated people familiar with it. Professor Eric Schwitzgebel makes a far more robust case for this in his 2016 response to “The Grinch Who Stole Consciousness”, aka Keith Frankish. Not only is it about as accessible as academic papers are legally permitted to be written, but I consider the tightness of his reasoning itself to be impeccable. http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.pdf

    Since we understand that “there is something it is like” to be us, and so presume this to be the case for people other than ourselves as well, how far shall we follow this reasoning for other forms of life, or even conceptual technological systems? That’s what Mike is asking us here, and thus his presumption that there is no exact “on” or “off” for it, but rather a question of how far down the scale someone would be willing to come. This however seems to mistake epistemology (or human understandings) for ontology (or what actually exists). Observe that the question here inherently demands a binary ontology, whether we grasp a plausible answer or not. There will definitely be something it is like to be me if I’m in horrible pain, though also if I’m extremely apathetic. Conversely there should be nothing it is like to be me if I’m perfectly anesthetized. Something versus nothing questions are inherently binary, as is qualia as it’s widely defined, or an extremely useful consciousness definition as I see it.

    Rather than working backward from the human, I propose that we work up from conceptual physics, given that sometimes qualia / consciousness does not exist, while other times it does. So if a given technological system were to institute the proper physics, then it should produce something which experiences what you do when your thumb gets whacked, or qualia / consciousness. Such a machine however needn’t harbor anything higher than Mike’s “matter / energy” level 1, given that the experience could have no functional purpose beyond the fact that scientists happened to get the physics right. Perhaps we wouldn’t even realize that a “locked in” experiencer had been created, since it’s given no output mechanisms. So if we were to figure out how to produce qualia technologically, then we’d also need to figure out how to functionally implement it in order for the technology to be useful. That’s contrary to the standard belief that the more functional robots become, the more qualia / consciousness they will thus realize. The convenience of this model seems extremely suspicious to me.

    Instead consider the thought that evolution developed biological robots, conceptually somewhat like our robots, and so they’d tend to fail when their programming was not set up well enough for a given application. Then sometimes in the background there may have been a qualia dynamic that existed as an epiphenomenal experiencer randomly produced by other stuff going on (perhaps like the electromagnetic radiation associated with synchronous neuron firing?). So when these faulty machines got into positions where they tended to fail, let’s imagine that some of them would take clues from the desires of the otherwise epiphenomenal experiencer regarding what to do. This surely couldn’t have been all that helpful initially. Over millions of years of chance function and refinement however, this may well have evolved into the medium through which you and I perceive our existence right now.

    So instead of starting with the human and trying to work backward in a functional sense, we can also consider consciousness in an extremely reduced psychological capacity (or as “qualia”) and theorize its evolution bottom up from that beginning.


    1. “So when these faulty machines got into positions where they tended to fail…”

      Nice attempt at trying to rationalize yourself out of the anthropocentric corner Eric. These so-called evolutionary machines that you refer to were not faulty in any way and they did not fail. If these causal mechanisms you refer to as inorganic or organic machines failed, then the entire evolutionary process would have failed. But the evolutionary process did not fail, systems continued to evolve and the process is profoundly successful.

      Our own dynamic experience is a living testimony that these causal mechanisms or machines were successful. Starting at the beginning, that evolutionary process consists of systems evolving into more complex systems, each of which has a fuller and more enriching experience. The enriching experience and the complexity of the systems go hand in hand. Just because human beings are too ignorant to figure out all of the dynamics does not negate the evolutionary process; but in order to figure this shit out, one has to be at the very least willing to start at the beginning by making reasonable and sound assumptions. Fundamentally there is nothing wrong with being ignorant. What is problematic is the psychical inability to be objective.

      Try starting at the beginning for a change; and to do that one must be willing to ask the compelling question: What is matter in and of itself? The science of physics describes matter by its structure and quantity, and that structure is information. But what is the one component that is missing from that physical description, a component that is causing all of the fuss for physicists and neural scientists alike? It’s called qualia or quality. A reasonable explanation; not a far out, outlandish or bizarre explanation, but a well thought out, reasonable explanation for the existence of qualia, is that quality is embedded in the structure. Qualia (quality) and structure (quantity) are two sides of the same coin. One does not exist without the other.


      1. Okay Lee, you believe that qualia exist everywhere and so “there is something it is like” for everything. Conversely I consider qualia to exist by means of certain (rather than all) causal dynamics. But if you’re right and I’m wrong, then tell me this. Why is it that a person under anesthesia can be operated upon with no apparent qualia from the procedure whatsoever? If there is something it is like for everything, why would certain drugs apparently eliminate qualia for the human? Under my model it makes perfect sense that qualia could be lost and regained by means of drugs that block the physics which create this for a while, and so people would desire anesthesia for their surgeries. But I’m curious how you see this.


        1. Essentially, I see it the same way that you do Eric. The Cartesian Me is the physical system we refer to as mind. It is a separate and distinct system that emerges from another physical system we refer to as the brain. Personally, I think this system of mind is a quantum system but that’s another subject.

          As a separate and distinct system that is conscious 24/7, the brain has the capacity to turn the Cartesian Me on and off all by itself through the sleep cycle and yes, as a system, the physical brain is also directly affected by chemicals which can diminish the capacity of that system which results in cycling the Cartesian Me off as a side effect.

          In the natural world, the physical system of mind is the only system that does not have an experience that is ubiquitously omnipresent and yet, the system of mind is the most powerful system found in the natural world. You can review the comments I expressed for Stephen Wysong if that would help…


          1. Wow Lee, that actually helps square our views quite a bit, or at least beyond some terminology. Essentially what you’re calling “conscious” I’m calling “causal”. Of course I don’t consider that “there are things it is like” to be rocks and trees, but whatever on that. More importantly to me, you seem to grasp the premise behind my dual computers model of brain function, or one computer that creates a fundamentally different variety. I suppose that conflicts should still arise when we get into the details, though that’s standard.


          2. Eric,

            Personally, I’ve always felt that our understanding is very similar and I’ve been hoping to make a breakthrough of sorts with a vocabulary that works for both of us. Regarding the issue of panpsychism: again, I don’t necessarily like that term because it implies the existence of a mind or a psyche being universal. The system of mind along with the appendage of a psyche is a latecomer to the scene after billions of years of evolution, so technically speaking the term panpsychism does not work for me. I would need another term to express a causal dynamic, as you refer to it, that is both ubiquitous and universal. The term conscious or consciousness might be used to express that causal dynamic, but because of its Latin origin as a word it too becomes somewhat problematic.

            What I am willing to entertain is that qualia (quality) is intrinsic to structure (quantity), and that qualia and structure as information are actually two sides of the same coin, that single coin being matter. I also like the term valences, valences being non-conceptual representations of value on a linear scale of good vs bad. Consciousness or causal dynamics absent of mind would certainly reflect the feature of non-conceptual representations of value that systems other than minds might experience. Just a thought…


    2. I’ve been having some second thoughts about the “something it’s like” idea.

      What is it like to be a human? First, I don’t know what it is like for you or anybody else. Second, even for me, it is like so many different things. Is it what it’s like when I’m asleep, dreaming, concentrating on a task, listening to Beethoven’s Ninth, daydreaming, imagining a trip I’m going to take, throwing up after having too much to drink, high on LSD, lying in the backyard and looking at the clouds? It seems like a great many things, and things that are pretty different.


      1. Yep; prior to the evolution of mind there were only valences; valences being non-conceptual representations of value. In a paradigm of ubiquitous and universal consciousness absent of mind, valences were the impetus driving the evolutionary process. With the introduction of mind, valences can now be conceptualized and systematically analyzed.

        Unfortunately, valences appear to be the dominant feature intrinsic within the current structure of mind. The structure of intellect must continue to evolve in order for it to overcome the more primordial influence of valences. The evolutionary process will require further mutations in order for intelligence to be the dominant feature of mind, superseding the influence of valences. As it currently stands, valences drive our species and it is the dominance of valences that results in an experience that is subjective.


      2. James,
        I don’t think most people add the “to be a human” stipulation to their conception of what it’s like, that is unless they mean “human qualia” as opposed to that of another creature. Regardless you can take everything that you’ve noted and more to exist as positive examples of qualia, and regardless of the diversity. Eric Schwitzgebel goes further and observes negative examples to help illustrate various things that aren’t helpful to put in this category. If in the end there happens to be something it is like to exist as a spider for example, then that will qualify as qualia / consciousness for such a creature, and even if its existence feels utterly different from your own.

        Imagine if you were some kind of god that could thus grasp the exact state of everything. Here you wouldn’t need a false medium (such as “colors”) from which to perceive what exists. You’d inherently know what’s real. But if you wanted to build a non-supernatural being that could somewhat perceive its existence, I think you’d then need to give it some kind of medium through which to do so. Qualia / consciousness seems to serve this purpose.

        I’d love to know your thoughts on Eric Schwitzgebel’s “Phenomenal Consciousness, Defined and Defended as Innocently as I can Manage” that I linked to above. I think he does a far better job than I do of marginalizing big name qualia skeptics like Keith Frankish.


        1. Sure. But the “what it is like” began with a bat so extending it to “human” makes sense in that context. And the spider may have a richer, more diverse mental life than we might think. It too might be not so easily characterized as a single “what it is like”. Think of spider web-building, cocooning, waiting, smelling, and dreaming.

          “Imagine if you were some kind of god that could thus grasp the exact state of everything”.

          That’s the dilemma. We can try to imagine it but it would be wrong. There may be no exact state. The exact state/fundamental reality may be like Newton’s absolute space.


          1. I think I get what you’re saying James. The “something” term can be interpreted to imply a single thing. I hadn’t thought of that. It clearly won’t do however. I guess instead we could phrase the line as “there are things it is like”. I’m a bit anal-retentive regarding the wording of definitions as well.

            On the “god” thing, I was essentially using that as a way to get to the idea that reality itself exists beyond pathetic human conceptions of it, or Kant’s noumena versus phenomena. Thus apparently we harbor a medium through which existence is perceived, or qualia / consciousness. (Eric Schwitzgebel instead likes to refer to this as “phenomenal experience”.) I presume you agree that to potentially advance here, science will need an effective and generally accepted term from which to refer to this medium, and so not the situation we have today that Mike attempts to organize with a hierarchy?


          2. “the idea that reality itself exists beyond pathetic human conceptions of it”

            Does it exist in and of itself? Just like there is no absolute space, there may be no fundamental reality. It may be that the idea of a fundamental reality is flawed, not just the idea that we cannot know fundamental reality. I’m not asserting some idealist argument, because the mind/matter dichotomy falls into a framework of our own conceptions.


          3. James,
            All I meant to say there was that our perceptions of reality exist by means of a medium rather than by means of reality itself. As you know there are no “colors” in reality itself. They and all that we perceive are merely a generally helpful consciousness medium from which to perceive existence, somewhat like a cartoon of actual reality. Furthermore I suspect that Johnjoe McFadden has it right that this medium exists in us by means of certain EM waves given associated synchronous neuron firing patterns.

            Interestingly your question about my thoughts on fundamental reality was subsequently somewhat answered by Mike in his new post. Yes I’d also call myself a descriptive instrumentalist. Beyond that I can say that my own metaphysics is that reality itself functions by means of perfect causal dynamics. This is to say that ultimately everything that happens must do so exactly as it ends up happening. Furthermore this position does seem to mandate that ultimately there is a fundamental reality, not that it’s accessible to us. I could be wrong about this of course, but it is the metaphysical model I go by for now.


    3. Eric,
      As I noted in the post, the hierarchy I present is inherently monist. I think insisting on a binary ontology is inherently dualist. Not necessarily substance dualism, but perhaps a form of physical dualism. As I’ve noted before, McFadden himself describes his view as a type of “matter energy” dualism.

      Interestingly, your emphasis on qualia, which is just a fancy word for qualities, an attribute or property of something, could be mapped to property dualism. I know this is too close to Chalmers’ view for your comfort, but I do see commonalities.

      I suppose someone could find a monist conception involving some type of phase transition, if anything like that could be found in the biology.

      Similar to James, I’m not a fan of “something it is like”. I actually think it’s hopelessly vague and amorphous language. It seems to be saying something profound, but in reality it’s a nearly blank statement that people can project their own ideas onto. As soon as we start trying to stipulate like what specifically, the disagreements begin, which puts us on the slope to the hierarchy.

      Assigning a technological system to 1 simply ignores the goal-oriented manner in which those systems operate. Here I’ll ask my standard question. What about biology makes it special? Your answer can’t just be about magnetic fields. Lots of systems, including the earth, sun, and lots of machines, have those. Even if the brain does use them, I think we’re just talking about another information processing mechanism. There’s nothing about it, in and of itself, which explains consciousness.

      Ultimately I think the only honest answer to the what makes biology special question is that these systems have impulses that we can recognize in ourselves, that we can empathize with. In that regard, technological systems are at a disadvantage. However, we’ve all seen cases where people sympathize with a robot.


      1. “Ultimately I think the only honest answer to the what makes biology special question is …”

        I wouldn’t say special, as in impossible to be related to, but if you think all the aspects of consciousness arise from physics and chemistry (and what else is there?), and note how our own experiences of consciousness change drastically with chemical or magnetic inducements, then it’s easy to say the biological experience is a special one. That doesn’t mean other chemistries don’t make their own experiences; it just means they’d be special in their own way.


        1. Right. My question about specialness was in terms of what might make it conscious and only it conscious. But I agree machine consciousness would be very different from evolved consciousness, at least unless we went out of our way to make it similar. (Which could have all kinds of ugly ethical implications.)

          One area people could make an argument about is carbon-based chemistry, that there’s something unique to it that any other system on a different substrate would lack. And it might be that achieving the same capacities, performance, and energy requirement trade-offs in a different substrate may turn out to be problematic. But I think it’s just as likely that evolution had its own constraints it had to work under, which might not apply to an engineered system.


          1. For McFadden and Pockett, electromagnetic fields organized spatially with information about the external world constitute consciousness. Pockett also, I think, might argue these are fields in the lower hertz range too. So, for them, the fields would be conscious no matter what the substrate.

            I lean more toward the carbon-based chemistry argument which actually, whether you intended it or not, seems to be implicit in your hierarchy with your #2 and statement that even single-celled organisms react to the environment. It could be that neurons (or certain types of neurons) do “feel” electromagnetic fields and that consciousness is the neurons of the brain feeling its own self-generated EM fields when reaching certain thresholds and at certain frequencies in a feedback process.


          2. “whether you intended it or not, seems to be implicit in your hierarchy with your #2 and statement that even single-celled organisms react to the environment.”

            What stops me from seeing it this way is that a Roomba, a thermostat, or the laptop I’m typing this all react to their environment as well. Certainly they may not currently be as reactive in many ways, but that seems like a matter of extent.


      2. Mike,
        You didn’t provide me with much here beyond canned responses. Note that one of them doesn’t apply to me (or the “life” thing), another vaguely attempts to associate me with substance dualism, and yet another falsely suggests that I consider qualia to generally exist in electromagnetic radiation. I’m not taking the bait however. I’ll bypass that on the hope that you might consider the substance of my commentary itself.

        You present a hierarchy of consciousness which essentially begins from the human and degrades over various levels to matter / energy. From this perspective it seems preordained that you’d deny the binary nature of qualia / consciousness, and that your conception of this idea would ultimately reduce back to “like us”.

        Instead of working backwards from the human and thus reaching your conclusions however, I take things from an engineering perspective. From here I presume that there is some sort of physics by which existence can feel good/bad, and that evolution implemented it to give organisms which it couldn’t adequately program for more open circumstances a kind of agency from which to deal with them more effectively. Thus with the proper matter / energy structure even we might create qualia / consciousness technologically, and this needn’t include introspection, imagination, volition, habits, perception, or reflexes. Of course it wouldn’t be functional, but more of a “locked in” entity that feels good/bad on the basis of the physics that we institute. So what do you think about this contrary perspective from which to grasp the mechanics of various functional machines such as the human?

        This doesn’t mean that I consider your hierarchy unhelpful. It does seem helpful for its stated purpose, or classifying various conceptions of the “consciousness” term. I simply object to this epistemological device inordinately flavoring our conception of qualia / consciousness, or a term that I consider effective as the medium through which existence may be perceived. Here there will inherently be “something it is like”, and not otherwise.


        1. Eric,
          I sincerely attempted to address where I see our views diverge. I made it clear that I was not talking about substance dualism. Unfortunately, you seem unwilling to deal with nuance here.

          If you don’t see qualia as existing in the electromagnetic field, then given everything you’ve written about this over the last several months, I have to confess I’m confused about what exactly your view is.

          I think taking the engineering perspective means addressing what physics we’re talking about and what we mean by “existence feeling good or bad”. Presumably you don’t think these physics are trivial. That means it must have components which interact with each other. What are those components? What is a feeling composed of? I’ve provided my answers, which you’ve asserted are superfluous. What are yours?


          1. Okay Mike, I guess I did incite that response. Let’s see if I can do better this time.

            My point is that if we begin from “human consciousness” and work backwards in functional stages, then logically the result will be a functional hierarchy. But that should be problematic if the generally most useful definition for “consciousness” is not in itself functional. I consider that to be the case here, and indeed why you seem to now leave out a category for terms such as “qualia”. I put this at about 1.5 of your hierarchy, since matter / energy will be required for it in a natural world, though in an ontological sense the headings above needn’t yet exist. Presumably evolution used the physics of qualia in ways which were functional, though functionality shouldn’t have existed inherently.

            I’m calling this an “engineering” account in the sense that it gets to the central component of what’s generally understood as “consciousness”. Note that this approach needn’t get into what exactly creates qualia, but merely needs to define the term as something which is understood to exist. Newton did the same when he left “gravity” as an unknown for future science to explore. I do however think that Eric Schwitzgebel did a great job of defining qualia in the paper I linked to above through positive and negative examples. We do the same for terms in general, such as “furniture” and “pink”. (Of course he uses the “phenomenal experience” term, though “qualia” works great for this as well.)

            Hopefully I’ve now provided an innocent enough account for you to consider the meat of what I’m saying here.


          2. Eric,
            I did leave off words like “sentience”, “affects”, and “qualia” from the hierarchy. That’s not to say that I don’t see them anywhere in it, but that the exact point where they’re present depends on how you define those terms. My view is inherently functionalist. Honestly, I really don’t know what to do with non-functionalist assertions.

            Your usual definition of consciousness is existence that feels good or bad. If so, then I assume that’s what you mean by “the generally most useful definition”?

            As I mentioned above, situating it at “1.5” is very liberal, essentially making it pre-biological but not universal. This seems to be somewhere between panpsychism and biopsychism. If so, it will include unicellular organisms and plants, and maybe even viruses. Is that what you had in mind? It seems a far broader view than you’ve historically taken.

            Newton couldn’t define gravity, except in terms of its effects on matter. But he was still able to map out its mechanics mathematically. The only people even claiming to do that with consciousness right now are the IIT folks. I can’t see the comparison with qualia as meaningful.

            It’s been a while since I read Eric S’s paper, but it was a response to Frankish’s illusionism, so his only goal was to demonstrate the existence of phenomenal consciousness. I don’t know if Frankish’s response paper is online, but here’s the relevant snippet.

            I think Schwitzgebel succeeds in identifying an important folk psychological kind — indeed the very one that should be our focus in theorizing about consciousness. However, I don’t think he has met the challenge of the target article. For, precisely because his definition is so innocent, it is not incompatible with illusionism. As I stressed in the target article, illusionists do not deny the existence of the mental states we describe as phenomenally conscious, nor do they deny that we can introspectively recognize these states when they occur in us. Moreover, they can accept that these states share some unifying feature. But they add that this feature is not possession of phenomenal properties (qualia, what-it’s-like-ness, etc.) in the substantive sense created by the phenomenality language game. Rather, it is possession of introspectable properties that dispose us to judge that the states possess phenomenal properties in that substantive sense (of course, we could call this feature ‘phenomenality’ if we want, but I take it that phenomenal realists will not want to do that). Now, the challenge of the target article was to articulate a concept of phenomenality that is recognizably substantive (and so not compatible with illusionism) yet stripped of all commitments incompatible with physicalism. Schwitzgebel hasn’t done this, since his conception is not substantive.

            Nevertheless, Schwitzgebel has succeeded in something perhaps more important. He has defined a neutral explanandum for theories of consciousness, which both realists and illusionists can adopt. (I have referred to this as consciousness in an inclusive sense. We might call it simply consciousness, or, if we need to distinguish it from other forms, putative phenomenal consciousness.) In doing this, Schwitzgebel has performed a valuable service.

            Minus the explicitly illusionist statements, I pretty much agree.


          3. Mike,
            Apparently you have my innocent qualia definition, so theoretically you could put it in your hierarchy wherever seems appropriate if you like. I tried 1.5 though you said microbes would then be conscious, which isn’t at all what I believe to be the case. So I’m currently skeptical that a good spot for this idea exists in your hierarchy.

            For something to be “functional”, a function referent should at least need to exist implicitly. Often for life we consider this to be “survival.” For our machines it’s as arbitrary as serving our various purposes. A music machine that produces smoke rather than music may thus be referred to as non-functional, even given the smoke production.

            If we build a machine that causes what we’d call “pain” for a (thus) conscious entity to experience, then it would be functional in that specific regard. So I guess your functionalism can work in this sense. But note that we might not set this machine up to effectively “do” anything other than have such a phenomenal experience. That’s why I put qualia at 1.5 of your hierarchy. It needn’t have any reflexes beyond producing this sensation for something to experience. And note that this might also be the case for a perfectly paralyzed human who happens to be in pain, so I’m not being entirely theoretical here. Getting this stuff right should have real world implications.

            I don’t map out qualia mathematically any more than people map out terms like “furniture” or “pink” mathematically, and yet still grasp what such terms effectively mean. That’s primarily what I consider science to need for the “consciousness” term, not mathematical accounts. What I can provide in a technical sense however is a model for qualia / consciousness for a sufficiently evolved conscious entity in a computational capacity. For such a computer I propose three varieties of input, one variety of processor, and one variety of output. If experimentally validated this model should become similarly significant for our troubled mental and behavioral sciences, as Newton’s work regarding gravity has been for physics.

            On IIT, I was hoping Scott Aaronson would relate it to something tangible, just as Eric S did by using the premise to suggest that the whole of the United States should thus be conscious under it. (Apparently the authors subsequently changed their model so that expansions and reductions don’t effectively apply, which I consider an utterly arbitrary and lame defense. Schwitzgebel then poked holes in their new version as well.)

            What Aaronson instead came up with was “We could achieve pretty-good information integration using a bipartite expander graph together with some fixed, simple choice of logic gates at the nodes: for example, XOR gates.” Um… what the hell is that supposed to be? A description fit for the printed page? But skimming through the proposed mathematical operations behind that conclusion, which supposedly conform with IIT, does demonstrate to me how ridiculous the theory happens to be.
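For what it's worth, the kind of network Aaronson describes can be sketched in a few lines. This is a toy stand-in only: the random wiring below is an assumption, substituting for a genuine expander graph, which requires a deliberate construction.

```python
import random

def make_bipartite_xor_network(n_left, n_right, degree, seed=0):
    # Each right-hand node is an XOR gate wired to `degree` randomly
    # chosen left-hand nodes. A true expander needs careful wiring;
    # random wiring is merely an illustrative stand-in.
    rng = random.Random(seed)
    return [rng.sample(range(n_left), degree) for _ in range(n_right)]

def step(network, left_bits):
    # One update: each XOR gate outputs the parity (XOR) of its inputs.
    return [sum(left_bits[i] for i in inputs) % 2 for inputs in network]

net = make_bipartite_xor_network(n_left=8, n_right=8, degree=3)
print(step(net, [1, 0, 1, 1, 0, 0, 1, 0]))
```

The thrust of Aaronson's objection was that a circuit this simple, and this obviously non-conscious, can nonetheless score highly on IIT's integration measure.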

            Thanks for that account of Frankish’s response to Schwitzgebel! Here it sounds like he has pretty much conceded that Schwitzgebel did provide an innocent enough account of qualia to be accepted. Furthermore you seem to agree with his agreement. So wouldn’t that mean that all four of us have a common definition? And if so then shouldn’t we help Schwitzgebel’s innocent account of consciousness become standard in science and elsewhere? But I see from a paper Schwitzgebel published in January of last year that Frankish and others have no intention of helping improve science in this regard. http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/InflateExplode-200131.pdf

            Interestingly one of the referenced papers from Frankish is entitled: “Not disillusioned: Reply to commentators”. People with invested interests generally find ways to hold on to those interests regardless of how science might otherwise progress.

          4. Eric,
            The problem with an innocent definition of qualia is that it’s vague. It’s compatible with multiple locations in the hierarchy. We can only narrow that location by adding specificity to the definition. Of course, each additional detail is controversial, which is the whole point.

            If you don’t like looking at it in functional terms, then how about in causal terms? What causes your version of qualia? What are their downstream causal effects? What is their overall role in the causal flow? To be naturally selected, they would have had to have some kind of effect on the survivability of the animal. Unless you’re going to argue they were a spandrel. But is a feeling really a feeling if it has no connection to anything the animal might actually do?

            I think part of the problem here is that you seem to conceive of qualia, specifically your feeling good or bad, as something irreducible. I don’t think that’s a scientific way of looking at it. It’s a way that makes it easy to slide into thinking qualia are something fundamental, an add-on. That kind of thinking leads to panpsychism (or variations of the d-word).

            I think any naturalistic understanding of qualia will have them be enormously complex mechanisms, with innumerable moving parts. So where do they fit in my hierarchy? It depends on what you consider essential for an experiential quality to be a quality. I personally can’t see your version coming before 4 or 5, at the earliest. But human qualia, the only qualia we ourselves know, are at 7.

            As I noted above, the problem with Eric S’s innocent account is its ambiguity. It’s compatible with too many things. Part of what I’m trying to get at with the hierarchy is to show the range of things it might be compatible with, depending on what details are filled in.

            “People with invested interests generally find ways to hold on to those interests regardless of how science might otherwise progress.”

            That applies to everyone. It’s easy to see it in others (or believe we see it), but very difficult to catch in ourselves. That’s why it’s best, in terms of actual persuasion, to meet arguments with logically relevant counter-arguments, rather than besmirch the motivations of those making the arguments.

          5. Mike, I read through Eric S’s “innocent” paper on consciousness and I totally agree with you that it’s too vague and ambiguous. I don’t mind the methodology of using positive and negative examples to home in on a concept, but we know enough about consciousness now that it should no longer be considered so simply. Eric S has to make clear appeals to ignorance of this knowledge as part of his argument. (“Don’t try to analyze it yet.” … “Please don’t!”)

            What I would add as another argument against his innocent account is in response to this quote from Eric S.

            “My definition did commit me to a fairly strong claim about folk psychology: that there is a single obvious folk-psychological concept or category that matches the positive and negative examples.”

            This is an obvious problem of essentialism. See “Dennett Darwin and the Overdue Demise of Essentialism” for more on why that’s a philosophical error. But more obviously, consciousness is a complex phenomenon that should not be looked at as a single concept or category. You’re just bound to be wrong then.

            Hierarchies of consciousness rule! : )

          6. Thanks Ed. That essentialism, I think, is a remnant of our natural innate tendency to be dualists. Few people in the cognitive sciences or philosophy of mind are explicit dualists these days, but dualist thinking is tough to shake completely. Dennett calls it the Cartesian Theater. Too many say, “Yes, yes, yes, of course I reject such a ridiculous idea,” but then go on to say things that are only meaningful within its framework.

          7. (Hey Ed, good to see you here as well.)

            It makes sense that Schwitzgebel’s innocent conception of qualia / consciousness is vague to you, given that you’re curious about things he isn’t addressing. Rather than simply clarify a useful consciousness definition, you’d like him to also disclose how that sort of thing might exist, or something that might permit you to put his account into your hierarchy. But as I’ve mentioned, your function-based hierarchy may fail here. Your account might be set up more to characterize various standard theories than to effectively get into the details of how Schwitzgebel’s innocent conception could functionally exist.

            If certain causal dynamics of matter and energy are required to create Schwitzgebel’s conception of phenomenal experience, but no reflexes or other levels beyond that physics are mandated, then his conception would sit between your first two levels, even if all sorts of life forms exist above this level that don’t harbor that specific bit of physics.

            Here you’re right to wonder how this dynamic could have evolved without any causal effect on the organism’s output, and of course it couldn’t have evolved that way. Nevertheless, qualia should initially have been a worthless spandrel. My theory is that non-conscious organisms couldn’t effectively be programmed to deal with more open circumstances (as also seems to be the case for our robots today), and yet there was the potential for this functionless phenomenal spandrel to exist by means of certain brain operations. Given standard organism failure under more open circumstances, I suspect that this spandrel was effectively given a shot at deciding what to do from time to time (and so would no longer be a spandrel), and with enough iterations it did well enough to evolve functionally. Evolution in general is a process where spandrels can and do end up playing functional roles, because things must exist before they can be tried.

            Since I don’t consider your hierarchy quite right here, you might ask me for a better one. Though life may generally seem hierarchical, I consider this heuristic too simplistic to always be effective. Observe that there are all sorts of things lower forms of life are able to do that I’m not, such as fly. Ultimately it’s always the physics that matters, rather than perceived levels of advancement.

            On what causes my version of qualia, I never gave this any real thought until 2014 when I started blogging. That’s when I realized how important this question happens to be to people in general. But shouldn’t we develop a sufficiently innocent definition for qualia before pondering how it might exist? Psychologists are supposed to learn about us, not build us. My ideas have always been psychological. And it could be that the strength of my models here helps me grasp what does and doesn’t make sense regarding such instantiation.

            Anyway, since I believe that specific physics is required, my conception can’t be panpsychism, substance dualism, idealism, or informationism. Furthermore, we know that a feeling can be a feeling even if it has no connection to anything the animal might actually do, given that a perfectly paralyzed person could still be in horrible pain.

            Consider this model. A non-conscious organism is operated somewhat by means of a non-conscious brain, though this system can’t always be programmed to function well under more open circumstances. So through the phenomenal experience spandrel it may have evolved a fundamentally different functional computer (which, for highly evolved humans, constitutes all that we know of existence). Within this system there are inputs (valences, informational senses, and degraded memories of such past inputs), processing that interprets inputs and constructs scenarios about what to do (or “thought”), and the non-thought output of muscle operation to promote valence-based interests. So here a non-conscious computer takes cues from a conscious computer.
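As a caricature only (every name below is invented for illustration; this is not anything Eric has specified as code), the three-inputs / one-processor / one-output proposal might be diagrammed as:

```python
def conscious_step(valences, senses, memory):
    # Toy sketch of the proposed conscious computer: three input
    # varieties feed a single "thought" processor, which interprets
    # the inputs and constructs a scenario about what to do.
    interpreted = {"feels": sum(valences), "sees": senses, "recalls": memory}
    # The scenario-construction ("thought") step, reduced to a caricature.
    plan = "approach" if interpreted["feels"] >= 0 else "withdraw"
    return plan  # handed off to non-conscious muscle operation

print(conscious_step(valences=[-1, -2], senses=["heat"], memory=["burned before"]))
```

The design point being illustrated is that only the “thought” processor interprets inputs; the non-conscious machinery merely executes its verdict.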

            Though to me many modern theories regarding the production of phenomenal experience seem supernatural, Johnjoe McFadden presents a fully physical medium through which this could exist: the electromagnetic field associated with certain synchronous neuron firing. Given that all computers require output mechanisms in order to do what they do, and that neuron firing happens to be highly associated with the existence of qualia, his theory is the only plausible one that I can imagine right now. In a natural world, what other brain mechanism might get this done?

  13. You cover such fascinating topics in a fair way. Your posts always get me thinking. You might be interested in checking out Julian Jaynes’ book “The Origin of Consciousness in the Breakdown of the Bicameral Mind” if you haven’t already. Pretty radical ideas, but I enjoyed learning his perspective on the origin of consciousness.

    1. Thank you very much!

      I have to admit I haven’t read Jaynes at length, although I’ve definitely heard of his theories. From what I know about them, he’d be pretty solidly in level 7 in the post’s hierarchy. His view of consciousness, as something that arose in relatively recent history, is pretty radical. It would mean that there are still humans alive today, perhaps in the few remaining uncontacted peoples, who aren’t conscious.

      Robert Bellah, in his book on religion, covered Merlin Donald’s concept of theoretic culture, that is, culture that includes thinking about thinking. Donald sees theoretic culture arising in the Axial Age. In many ways, it seems very similar to Jaynes’ view (perhaps it was even inspired by it), but more modest. In that view, we didn’t become conscious in the Axial Age, but we may have started thinking more deeply about our consciousness.

        1. Thanks! I should warn you that Bellah’s book is somewhat dense and scholarly, not really aimed at a popular audience. (It’s far from the worst I’ve come across in that fashion, but I still found it a bit tedious to read, with occasional interesting insights.)

          I did post on some of the concepts from it a while back: https://selfawarepatterns.com/2015/02/16/religion-the-axial-age-and-theoretic-culture/

          Merlin Donald himself has written books; one, ‘Origins of the Modern Mind’, probably goes more directly into the overall view, and I see it’s now on Kindle. Hmmm…
