Consciousness lies in the eye of the beholder

There are few things that everyone who ponders consciousness can agree on.  It’s a topic where debates on the very definition of the subject are common.  The only definitions that seem to command near universal assent are the ones oriented toward phenomenology, such as “subjective experience” or “something it is like.”  And even then, the question of whether these are real or illusory is hotly debated.

Moving beyond phenomenology, many people still hold to substance dualism, the idea that the mind cannot be explained with mere physics, chemistry, biology, etc., and that something else is needed.  We appear to have a strong innate intuition for this view.  I think it comes from the fact that our mental model of a mind bears little relation to our model of the physical brain.  It leads to the “hard problem of consciousness.”

But the hard problem appears to actually just be a psychological one, a difficulty in accepting what over a century and a half of neuroscience is telling us, that there is no evidence for dualism.

Many people accept the above logic intellectually, but still retain latent dualistic intuitions.  Well, I guess we all retain those intuitions to some extent, but not everyone remembers to discount them the same way we discount our intuitions that the earth is stationary, that humans aren’t animals, or that space and time are absolute.

In summary, there is no evidence for a spiritual ghost in the machine, nor is there any for an electromagnetic ghost, a quantum ghost, or even a physical one in the sense of a particular location in the brain holding the soul or psyche.  There is just the machine and what it does.

You could make the case that there is an overall informational ghost, but that would be true only to the extent that the “ghost” of Microsoft Windows is in the laptop I’m typing this post on.

This has implications for the concept of consciousness that I think many resist, even many stone cold materialists.  We have subjective experience that is generated by the capabilities of our nervous system.  Our own experience is the only one we ever get access to.  We can only infer the existence of similar experiences in other systems.  (In philosophy, this is known as the problem of other minds.)

Consciousness is a label we affix to a collection of capabilities that the information system we call our mind possesses.  (The exact composition of which is itself a matter of ongoing debate.) When we ask if something else is conscious, I think what we’re really asking is if it processes information similar to the way we do and has similar drives.

So, when Bob ponders whether Alice is conscious, he’s basically thinking about how much Bob-ness she has. When Alice ponders Bob’s consciousness, she’s thinking about how much Alice-ness he has. When humans ponder animal consciousness, we’re wondering how much human-ness they have.  And when we ponder machine consciousness, we’re wondering how much life-ness they might have.

This, incidentally, is very natural for us as social creatures.  Pondering how much another entity thinks like us likely goes back at least to the earliest social species.  Perhaps earlier animals even had an incipient theory of mind for prey and predators.  This mode of thinking, to widely varying degrees, may be very ancient.

But it’s always a matter of judgment because no two systems process information in exactly the same way. Even different members of the same species are going to vary. And the further from mentally complete humans we move, the less like us they process information, and the more in doubt their us-ness is.

This is just a special case of the fact that whether a particular system implements a particular function is always a matter of judgment.  To say that it isn’t is to invoke teleology, the idea that natural systems have some inherent purpose.  But teleology was abandoned in science centuries ago, because it could never be objectively demonstrated.  Function is an interpretation.

From the similarities, we decide how much moral consideration a particular system should have. If we decide that it should have it, we tend to think of it as conscious.  Consider all the cases where someone argues that a creature is conscious or sentient, that it’s like us, to make the case that it should be treated better.  But if there is no objective morality, then it follows that there is no objective consciousness.

A commonly expressed objection to this is that it’s circular and subject to infinite regress.  But this can be said for any evolved trait.  How could the trait, particularly a social one, get started if it first had to be present in a parent or in partners?  The answer is generally that the trait evolved gradually.  The same can be said for consciousness.  There was never a first conscious creature, just increasing capabilities until a point was reached where we might be tempted to apply the label “conscious” to it.  But the first animal “worthy” of that label would not have been dramatically different from its parents.

All of which is to say, I think asking whether a system is conscious, as though consciousness is a quality it either possesses or doesn’t, is meaningless.  Such a question is really about whether it has a soul, an inherently dualistic notion.  Our judgment on this will come down to how much like us it is, how human it is.  When put that way, the answer seems somewhat obvious.  Some species, such as chimpanzees, obviously are a lot more like us than others, such as fish or snails, but all are currently much closer to us than any technological system.

This raises the question of whether we would ever consider a machine intelligence to be conscious unless it had very human-like, or at least life-like, qualities.  When Alan Turing proposed his famous test (now known as the Turing Test), he did so to move the debate on whether machines could think from philosophy to science.  But he may have identified the only true measure of other minds we can ever employ.  Some critics objected that Turing was really testing for how human-like a system was, but that may have been the very point.

It seems that whether any given system is “conscious” is something that lies in the eye of the beholder.

Unless of course I’m missing something?

This entry was posted in Mind and AI. Bookmark the permalink.

92 Responses to Consciousness lies in the eye of the beholder

  1. cpluzc says:

    What’s more likely: a ‘subjective experience’, or a ‘criticising truth’?

  2. cadxx says:

    There is a tendency to dismiss the things for which there is no detector. Science does not have a consciousness detector, therefore consciousness does not exist. It’s a little sad and illogical but it can be used to fool us all into believing that a machine can think. A machine is by definition an item designed and engineered by humans. A human by contrast is a ???? designed and built by ????.
    Dictionary.com “machine”
    noun “an apparatus consisting of interrelated parts with separate functions, used in the performance of some kind of work: a sewing machine.”
    How do you get a sewing machine to build a sewing machine?
    You can’t reverse engineer a living thing because the act of doing so removes the thing you are looking for…you have to kill it.
    I think the Victorians were closer to the truth, the missing “vital spark”.

    • You may be right. In the end, all science can do is make careful observations and develop theories that make predictions for future observations. If a particular concept can’t be observed, and its absence makes no difference in the accuracy of predictions, then it’s usually judged to not be there.

      Of course, someone could make observations tomorrow that force a reconsideration. Or it might eventually turn out that no theory can predict what happens in a brain unless some nonphysical model of the mind is included.

      It’s interesting that you mention the vital spark, since biological vitalism was once considered just as hard a problem as consciousness. Eventually biologists discovered that it was all just chemistry and electricity in motion. Vitalism just wasn’t a necessary component to their models. Again, future observations could always change that, although we’re much further along in microbiology, and hardly anyone is still looking for anything like vitalism.

    • A machine is by definition an item designed and engineered by humans. A human by contrast is a ???? designed and built by ????.
      It’s fine to define a machine as something designed and engineered by humans, but then you need another word which means almost exactly the same except for the “by humans” part. How about: a human is a natural machine designed and built by natural selection.

      • cadxx says:

        I have a number of problems with natural selection. I am aware of the evolution or god binary but I’m not religious and I cannot support a faith-based scientific dogma that appears to me to be no different from a religion.
        Some years ago (early 2000s) I spent about eighteen months exchanging views on Talk Origins and, would you believe, I was contacted by Richard Dawkins no less. He wanted to know why I was attacking others on the website. I pointed out that in fact I had twelve or thirteen people attacking me. I invited him to join in the game but he declined.
        https://nextexx.com/a-critical-evaluation-of-evolution/

        • Lee Roetcisoender says:

          @cadxx..

          Touche! Dogmatism is alive and well on all fronts, be it the religious, scientific or skeptical communities. Keep in mind that individuals are not interested in the discovery of unbiased “truths”; they are only interested in garnering support for their own bigoted beliefs, because a community of like-minded individuals “reinforces” those prejudices.

          “The discovery of truth is prevented more effectively, not by the false appearance things present and which mislead into error, not directly by weakness of the reasoning powers, but by preconceived opinion, by prejudice.”
          Arthur Schopenhauer

          • cadxx says:

            At the risk of being repetitive, my home page makes it quite clear that I personally think skeptics have pathological denial problems. The word sceptical no longer means what it used to mean; it’s now an excuse (originally used by science and later by sceptics) for taking on the mentality of a medieval monk.

            Arthur Schopenhauer
            “Proceeding from the transcendental idealism of Immanuel Kant, Schopenhauer developed an atheistic metaphysical and ethical system that has been described as an exemplary manifestation of philosophical pessimism…” https://en.wikipedia.org/wiki/Arthur_Schopenhauer

            Compare

            Nihilism (/ˈnaɪ(h)ɪlɪzəm, ˈniː-/; from Latin nihil, meaning ‘nothing’) is the philosophical viewpoint that suggests the denial or lack of belief towards the reputedly meaningful aspects of life. Most commonly, nihilism is presented in the form of existential nihilism, which argues that life is without objective meaning, purpose, or intrinsic value. Moral nihilists assert that there is no inherent morality, and that accepted moral values are abstractly contrived. Nihilism may also take epistemological, ontological, or metaphysical forms, meaning respectively that, in some aspect, knowledge is not possible, or reality does not actually exist. https://en.wikipedia.org/wiki/Nihilism

            With the exception of the moral BS, this sums up my problem with mainstream science, education and the mainstream media.

        • @cadxx

          Passé! I’m sorry you have problems with natural selection. Natural selection makes sense to me, and I haven’t seen good arguments to the contrary. I looked at the link you provided, and it seems mostly an attempt to point at things which natural selection doesn’t explain, like fossils in the wrong places. But nowhere do I see an alternate theory that in fact does explain those things plus everything else.

          My current understanding of consciousness involves mechanisms that have a purpose (archeo/teleonomic/whatever). While it’s not vital to my understanding that the purpose be generated by natural selection, that theory is currently the best one. If you have a better one, please let me know.

          *

          • cadxx says:

            I don’t want to muddy the waters with my own personal theories; there are more than enough of those. I can however give you the most likely answer. Although it may seem at first glance to be the least likely, it does, for me at least, tick all the boxes. The most likely contender that is not my own idea, taking all the problems into account, is the intervention theory: the idea that we were, at some point in the past (around 300,000 y ago), genetically engineered.

            As for there being no answers or alternative theories on my web page: if you look at my home page you will see that my intent is to encourage people to think for themselves. There is a big difference between thinking and remembering what you were taught to believe. The fact that many don’t understand this is the very reason religion does so well.

  3. Wyrd Smythe says:

    “There are few things that everyone who ponders consciousness can agree on.”

    I disagree!! 😀 😀

  4. Lee Roetcisoender says:

    “…teleology was abandoned in science centuries ago, because it could never be objectively demonstrated.”

    That fateful decision was clearly an arbitrary one because teleology can be objectively demonstrated through established scientific methods. For example: Teleology is characterized under two distinct categories, theology or philosophy. Since art is a form of expression, any explicit teleology intrinsic within a work of art falls under the category of theology rather than philosophy. Theological teleology asserts a design and purpose such as a work of art which is clearly an expression, whereas philosophical teleology asserts the explanation of phenomena by the purpose they serve rather than by a postulated cause, i.e., the surface appeal, beauty and/or aesthetics of the work of art.

    Therefore, any expression, such as a work of art satisfies the criteria for both theological and philosophical teleology. Our phenomenal realm is clearly an expression and can easily be demonstrated in two distinct ways. First, by utilizing established scientific methods and second, by utilizing what we know about expression through the science of art. Art and science are intrinsically united, born of the same mother, the need to express…

    • I don’t know that anyone would disagree that purpose can be demonstrated in art, or any human creation. The difficulty comes in for natural objects. What is the purpose of the moon? Or Mars? Or a hurricane? To our modern way of thinking, these questions seem pointless, although they didn’t to a 14th century natural philosopher.

      I do think we can speak about teleonomy, the appearance of purpose, in evolved biological systems. Although Daniel Dennett argues that we should feel free to discuss actual teleology in biology, to talk about undesigned purpose. I’m neutral on that particular debate, as long as we know what we mean in those discussions.

      • Mike, because I think the concept of teleonomy is absolutely necessary for understanding consciousness, I’m going to push back a little on calling it “the appearance of purpose”. It’s not just appearance. The difference between teleology and teleonomy is the same as the difference between natural design and artificial design. Both are actual design.

        *

        • James, that doesn’t seem to match the definition of “teleonomy” in Wikipedia or the most common dictionaries. But ultimately it’s not a distinction I feel strongly about. I do know that just about anytime I talk about the purpose of an adaptation, I get grief from someone about how evolution has no purpose.

          • As you may have guessed, I think you should feel strongly about it, and fight against the griefers. 🙂 The Wikipedia article seems fine except for the word “apparent”. I think Dawkins has it right when he talks about archeo purpose (teleonomy) vs. neo purpose (teleology), especially in that neo purpose is a kind of meta-purpose, with archeo purpose at its source.

            *

      • Lee Roetcisoender says:

        Stated another way Mike, our phenomenal realm is literally a work of art, and that postulate can be easily demonstrated scientifically by what we already know about the underlying form of any expression. The principles of expression are consistently uniform and can be applied across the entire spectrum of art; for science itself is an art form and art itself is a science. Now, having stated that, I’m not saying that anyone is going to be comfortable with the findings…

        I mean, get real. Why does our scope of inquiry have to be so limited? We don’t necessarily cringe when someone postulates that our phenomenal realm might be a computerized simulation, so why should it seem bizarre to consider that our phenomenal world is an expression, especially when the underlying principles of that theory can be scientifically demonstrated???

        • “our phenomenal realm is literally a work of art and that postulate can be easily demonstrated scientifically”

          Lee, I wonder if you could provide some examples. Is there an artist? Or is this similar to James’ point about natural design?

          • Lee Roetcisoender says:

            “Is there an artist? Or is this similar to James’ point about natural design?”

            Those questions are irrelevant, and besides, I’m not sure what “natural” design means. The only tenable question on the table is: “What are the underlying, qualitative properties of an expression, any expression?” Once those quantifiable principles are established and well defined, it’s only a matter of doing the math. Should we neglect Occam’s Razor?

            Examples? I used to tell all of my grad students that if they wanted the answers to difficult questions, they should do the hard work and figure it out for themselves. But I suppose that attitude contributed to me being forced into early retirement too. Humility and gratitude were never two of my strongest suits…

          • Lee, natural design is the process of getting eyeballs via natural selection. What do you mean by “an expression”?

            *

  5. Mike, I followed along agreeing with the OP, saying to myself, yeah, okay, sure. But then you started making statements which I think aren’t quite right. I’ll try and deal with them one at a time. In the words of the prophet: buckle your seat belt, it’s going to be a bumpy night.

    Our own experience is the only one we ever get access to. We can only infer the existence of similar experiences in other systems.

    Consider replacing the word “experience” with “heart” in the above quote. It’s still true. But this does not mean we cannot develop an understanding of what a heart is and does. And if we have such an understanding, the only thing that prevents us from doing more than inferring a heart in others is the ethics of going and looking. If we have an understanding of what makes a process an experience, we could likewise go looking for it inside the brain (or wherever seems appropriate, like computers).

    *

    • James,
      I was referring to the fact that the only system whose internal experience we can access is our own. And while we can study the cognitive capabilities of our own brain, and compare those capabilities to those of other species, we can’t imagine their experience from their point of view. We can engage in flights of fancy that we’re imagining it, but we always do so from the perspective of a human being in their place, never from their actual perspective.
      We can’t escape the human perspective.

      • Mike, I guess my point is, um, so what? I can’t feel your feelings, just like I can’t digest your food or climb your climb. But, in (my?) theory, I can know everything about your feelings, and potentially more than you do. (Not practically, of course).

        *

          The point is that our subjective experience is built on human capabilities. The very meaning of “subjective experience” for us is defined by those capabilities. Imagining a world where we have only a subset of those capabilities, as well as different ones, as, say, a mouse or a fish would have, means we can’t know if the way they process information results in anything that we might recognize as inner experience.

            Given we cannot experience the same things as, say, bats, I don’t think it follows that we cannot know the ways they process information and cannot compare and contrast with how we process information.

            We cannot feel their feels, but that doesn’t mean we can’t know about their feels.

            *

          • We can definitely scientifically study how they process information in comparison to how we process information, and I think we should follow that as far as it goes. We can even use that information to infer what their feeling states might be. We just can’t test that inference.

  6. “Function is an interpretation.”

    Yes, but it is not necessarily an arbitrary interpretation. The “function” of a thing is frequently part of the explanation of why a certain thing exists, e.g., eyeballs and chloroplasts. The explanation can be teleological or teleonomic.

    *

    • I’ll concede that some interpretations of functions are easier than others, that is, they take less energy. But an explanation of why a thing exists is a theory, and all theories are provisional, subject to revision on new information.

  7. Mark Titus says:

    I think the term “consciousness” will gradually fall into disuse. The term “matter” meant something a couple of centuries ago, but has since been replaced by the terms “atom,” “molecule,” “carbon,” “oxygen,” and so on. The word turned out to identify nothing at all—or too much.

    We don’t have any idea at this point how that fate may befall “consciousness” (although it has already happened to some extent to the term “mind”).

    Of course the words “consciousness” and “mind,” like the word “matter,” will always remain useful for ordinary communication.

    • You might be right. Although unlike “vitalism”, the use of the word “consciousness” seems much more embedded in our language. But its origins are definitely tangled up with religious conceptions of the soul. I’m struck by its almost complete absence as a concept in many neurobiological studies, except for the occasional colloquial use.

      Elkhonon Goldberg in his book on the frontal lobes, ‘The New Executive Brain’, characterized consciousness as an obsolete concept that we only still talk about because, “old gods die hard.”

  8. “All of which is to say, I think asking whether a system is conscious, as though consciousness is a quality it either possesses or doesn’t, is meaningless.”

    Mike, you give up too easily. You could say the same about locomotion. All we can do is compare other species and how their locomotion differs from ours. This doesn’t seem right. We can figure out how locomotion works. We can also figure out how consciousness works.

    *

    • We can agree on a definition for locomotion. We seem unable to agree on one for consciousness. However, we can study how perception, memory, imagination, emotions, and similarly more focused mechanisms work.

      But is introspection necessary for consciousness? Is there any way to scientifically answer that question? Particularly since we can’t even ask anyone who’s lost it about how their experience is affected, because they’ve lost the ability to know that.

      • “Is there any way to scientifically answer that question?”

        Simple answer, yes. Once we decide what “introspection” means, we can scientifically determine if it exists in any given system.

        The problem is not that we are not able to agree on a definition of consciousness. Of course we can. It’s just that everyone seems to have a different one, which is fine. For any given definition, you should be able to scientifically determine whether consciousness, by that definition, exists in a given system.

        *

  9. paultorek says:

    I agree in broad outline with “When we ask if something else is conscious, I think what we’re really asking is if it processes information similar to the way we do and has similar drives.” But I have serious qualms about “Our own experience is the only one we ever get access to.” Note that “experience” is a word in the English language. As an element of language, it’s subject to norms of use within a community – norms that allow me to speak about your experiences and you to speak about mine. As a child, your mother saw you smile and said “You’re happy”. Through lessons like that you learned the meaning of “happiness”, and other experiential words.

    You can, of course, question the importance of what we share. You can entertain the thought that your experiences are importantly different from mine, to the point that you need to invent new language for the special qualities of yours. But that’s what you’d be doing: inventing new terms.

    As for the continuum of proto-consciousness and consciousness – I agree that there’s a continuum. There’s also a continuum from child to adult, from nonhuman ape to human, and many more. That doesn’t make the concepts “adult” or “human” meaningless. There’s such a thing as dawn and dusk, yet the difference between night and day is like the difference between night and day.

    • Thanks Paul. On experience, I was referring to subjective experience (I just left off “subjective” in that sentence, but had used it in the previous one), the private inner experience. The word “experience” is decidedly problematic, and I’ve called attention to it before. But this is an area where the limitations of language constantly bedevil us.

      On the continuum of proto-consciousness and consciousness (excellent description by the way!), I was really targeting the supposed binary nature of consciousness, that it’s something that’s either completely there or completely absent. Otherwise, I agree with everything you said in that last paragraph.

  10. Wyrd Smythe says:

    “…what over a century and a half of neuroscience is telling us, that there is no evidence for dualism.”

    Substance dualism, for any detectable substance, seems ruled out, but do you consider emergence as dualism? Or take the reductionist view that high-level emergence is always explained by low-level behaviors (I lean that way)?

    “In summary, there is no evidence for a spiritual ghost in the machine,…”

    As you know, lack of evidence is not evidence of lack.

    Obviously, you are not compelled by the gap between the subjective reality of consciousness and our attempts to understand (let alone replicate) it. Which is perfectly fair; there is likewise a vast gap between QFT and GR, which we know is due to our technological limits.

    “We can only infer the existence of similar experiences in other systems.”

    We establish pretty strong grounds for a belief in those similar experiences, though, don’t we?

    The bigger, broader ones are easily shared: “Did it hurt like hell when you stubbed your toe there? Yeah? Me, too!” “Isn’t pizza delicious!” “Don’t you love staring at the fire?”

    The thing about theory of mind is that we recognize that others are the same, that we share experience.

    “And when we ponder machine consciousness, we’re wondering how much life-ness they might have.”

    That’s a really interesting point! Is it possible consciousness is a unified experience? If we did the SF mind-share thing, would we find being the other very familiar or very alien?

    I’ll use my laser analogy again. Various materials can be made to lase, but what they produce (laser light) is unified. It’s just photons.

    Maybe at a certain level of “brainness” consciousness emerges, but physical constraints aside, it’s the same basic thing for every brain.

    Or, like fingerprints and many other body things, it’s a bit different for everyone. Sharing another mind would be alien and weird.

    As you say, we have no way to really know right now. (Will we ever is a good question, too.)

    “…whether a particular system implements a particular function is always a matter of judgment. To say that it isn’t is to invoke teleology,…”

    I think there’s a middle ground. Things evolve to have a specific identifiable function because that function provides value. That function isn’t teleological, but it is clear and purposeful, not a matter of interpretation.

    “There was never a first conscious creature, just increasing capabilities…”

    This is fascinating to contemplate.

    If the laser analogy has any truth, consciousness could be something of a discontinuity. (Parke Godwin’s Waiting for the Galactic Bus has a great account of consciousness sparking to life in primitive humans, although in this case aided by highly advanced alien brothers trapped on early Earth.)

    Or maybe it’s like material that weakly lases, the light is dim and not fully coherent. As the substrate improves, so does the light.

    Whatever consciousness is, it sure gave its holders a major advantage. We inhabit every ecological niche possible, and our imagination and capabilities have literally moved mountains.

    I also wonder why no other animal has ever evolved anything like it.

    “It seems that whether any given system is ‘conscious’ is something that lies in the eye of the beholder.”

    Maybe. I think we need to figure out what consciousness actually is. It might turn out to be very identifiable once we understand it.

    As a check point, we agree (I believe) that a sufficiently brain-like object would likely give rise to consciousness. (Do we agree it’s not guaranteed?) I’m pretty sure we both agree making such an object is strictly an engineering problem we will solve. Such an object will be made, and then the question should be answered (if it hasn’t by then).

    I believe we still disagree (like completely) on whether mind can be productively simulated or replicated via algorithms. In other words, that mind is possible in terms of distinct functions.

    You asked me recently why I found my view compelling, but I never returned the favor.

    Assuming you still believe in algorithmic mind (I think you may have used the term “vehement”), what do you find compelling about the view given the lack of evidence for it?

    • “but do you consider emergence as dualism? Or take the reductionist view that high-level emergence is always explained by low-level behaviors (I lean that way)?”

      I very much accept emergence, albeit in its weaker conception that it is more about how, due to the limitations of our mind, we’re forced to switch models as we scale up in layers of abstraction. And I dislike referring to the higher level models as an “illusion” when they’re often just as predictive as the lower level ones. I am a reductionist, but not an eliminativist.

      However, I prefer not to invoke emergence by itself as an explanation. Of course consciousness is emergent, but if we can’t talk about how it emerges, then we really haven’t explained anything. Thermodynamics is emergent from particle kinetics, but we understand, at least in principle, how that emergence happens.

      “As you know, lack of evidence is not evidence of lack.”

      Definitely, and as I admitted to someone else on this thread, there could be data tomorrow that provides it, or we could reach an impasse where the only way to explain what’s happening is to add some kind of super-physical concept to our theories. I’m not expecting this, but it could happen.

      “The thing about theory of mind is that we recognize that others are the same, that we share experience.”

      Definitely. And I think that’s at the center of our intuition of consciousness.

      “Whatever consciousness is, it sure gave its holders a major advantage. We inhabit every ecological niche possible, and our imagination and capabilities have literally moved mountains.
      I also wonder why no other animal has ever evolved anything like it.”

      This seems to be using a definition of consciousness that only includes humans (which of course is a view many people have). I would agree that human level consciousness has definitely given us a lot of advantages.

      “As a checkpoint, we agree (I believe) that a sufficiently brain-like object would likely give rise to consciousness. (Do we agree it’s not guaranteed?)”

      I think consciousness is a collection and hierarchy of capabilities. So far, only brain-like objects have demonstrated them. If a system had those capabilities, I think it would be conscious. I wouldn’t say a system could have those capabilities and not be conscious. Which is to say, I don’t buy p-zombies.

      “Assuming you still believe in algorithmic mind (I think you may have used the term “vehement”) what do you find compelling about the view given the lack of evidence for it?”

      Haha! I think the actual word I used was “relentless”, specifically that my views were relentlessly functional. Why do I find it compelling? Well, setting aside different definitions of “computation”, I do think the evidence for it is pervasive and compelling.

      I’ve read a ton of neuroscience in the last few years. Many neurobiologists are very careful to make a distinction between digital computers and nervous systems, but most of them talk freely of neural circuits and computation. A brain is not a general purpose computer. It is not itself a Turing machine. But viewing it as a computational system is productive and predictive. Of course, it’s also very much a biological system, and that always has to be kept in mind.

      I do still think it’s possible, in principle, to simulate its operation in a computer, although it’s not clear whether it will ever be practical. I think in order to do it, we’d need to use a massively parallel architecture. Eventually if you ramp that up enough, you get something that starts resembling an actual brain. I don’t see any reason why a replacement brain may not be possible someday, with “someday” probably being centuries from now.

      Liked by 2 people

      • Wyrd Smythe says:

        “Thermodynamics is emergent from particle kinetics, but we understand, at least in principle, how that emergence happens.”

        Yep. We’re completely on the same page here.

        “This seems to be using a definition of consciousness that only includes humans…”

        As far as we know, we’re the only ones around that have it.

        Really what I’m speaking to is the apparent rarity of a putative evolutionary trait that was such a massive success for the species that stumbled onto it. It essentially allowed us to transcend our basic nature and live in ways and places impossible without our ability.

        “I think consciousness is a collection and hierarchy of capabilities.”

        Which really seems like a form of dualism to me. There is the meat, and it implements this suite of capabilities. The processor and the algorithm are separate.

        I think it’s more holistic, so I guess that makes me the monist here. ROFL! 😀

        “A brain is not a general purpose computer. It is not itself a Turing machine.”

        I assume you mean it differently than it seems, because that’s the very point I’ve been making. The brain is not a Turing machine, therefore consciousness is not algorithmic, therefore consciousness cannot be replicated as a software simulation.

        Not what you meant, I’m sure!

        “But viewing it as a computational system is productive and predictive.”

        Other than DL, in what way? AI is facing a potential Second Winter over failure to find the Holy Grail.

        “Eventually if you ramp that up enough, you get something that starts resembling an actual brain.”

        Okay, thanks. Looks like we remain on opposite sides on this one.

        I see a crucial difference between simulating something and replicating it. You see those, at least with consciousness, as potentially having the same result.

        Given that we’ve covered this ground pretty well in the past five years, I may not have a lot to say going forward, unless there’s something new that comes along. (I’ve noticed that, once I finished my big series explaining my position, I haven’t written much since. Said what I meant to, I guess.)

        It’s kind of like high energy physics these days: We need new data!

        Liked by 1 person

        • “Really what I’m speaking to is the apparent rarity of a putative evolutionary trait that was such a massive success for the species that stumbled onto it.”

          Personally, I think the answer is symbolic thought enabled by recursive metacognition. It’s the one capability we seem to have that other species don’t. That and a primate body plan adapted to walking upright, freeing our hands to manipulate the environment.

          “The processor and the algorithm are separate.”

          They are in general purpose computing systems, but nervous systems never made that divide.

          “The brain is not a Turing machine, therefore consciousness is not algorithmic, therefore consciousness cannot be replicated as a software simulation.”

          Yeah, I disagree with your therefore’s here. The brain can’t run just any computational algorithm, just its own specialized processes. But there’s no particular reason why the reverse wouldn’t be true: a general-purpose computing platform could run the brain’s processes. (Subject to the limitations I described above.)

          “Other than DL, in what way?”

          I meant that as far as understanding what’s happening, looking at it as a computational system is productive. For example, when studying the vision-processing regions in the occipital lobe, treating them as a multilayered neural network that systematically extracts meaning from the visual field works. It helps in deciding where to look and how to investigate.

          “I may not have a lot to say going forward, unless there’s something new that comes along.”

          No worries. Thanks for engaging in this thread!

          “It’s kind of like high energy physics these days: We need new data!”

          The good news is that neuroscience is making steady progress. A book on the brain today has a lot more information in it than one from just 10 years ago. The picture is blurry, but it’s steadily getting sharper.

          Liked by 1 person

          • Wyrd Smythe says:

            Me: “The processor and the algorithm are separate.”

            You: “They are in general purpose computing systems, but nervous systems never made that divide.”

            FWIW, this is the heart of it. I agree an algorithm can be reified in machinery, and when it is, it effectively becomes the processor.

            My point is that, even reified, even combined, the algorithm necessarily still exists.

            So a claim that mind can be simulated is a claim that consciousness is algorithmic.

            “Yeah, I disagree with your therefore’s here.”

            I don’t think they’re arguable; they form a logical syllogism. All you can do is dispute the validity of the premise (brain != TM). Which I think you always have, or at least so I have understood.

            The consequence of your view is that the brain is a Turing machine. It has to be. (That’s not saying it’s a UTM, just a massively huge TM; a single giant algorithm reified in meat.)

            Liked by 1 person

          • A lot of this comes down to what your attitude is towards analog computation, and whether you regard it as real computation. I do, as does most of neuroscience apparently, but I can see the arguments against it. And then whether you regard the inevitable approximation a digital system would have to make of the analog system’s operations as a showstopper.

            Like

          • Wyrd Smythe says:

            Yep, that last bit is the sticking point. 🙂

            To be clear, I have no problem with analog computation so long as we all understand it operates on completely different principles from discrete computation. Least free energy principles versus algorithmic principles, respectively. I do see those differences as irreconcilable.

            And it is my perception that discrete cannot, even in principle, accomplish what analog does. That, as you say, is the showstopper.

            Like

          • Mike said

            Personally, I think the answer is symbolic thought enabled by recursive metacognition.

            This is two things: 1. Symbolic thought, 2. Recursive metacognition. Are you saying you can’t have the first without the second?

            I ask because my personal opinion is that consciousness begins with the use of symbols, and anything with a basic nervous system does this. Recursive metacognition is a very high level use of symbols which requires the ability to generate arbitrary concepts, which allows for a concept of self, and which seems to be the forte of, and possibly exclusive to, humans.

            *

            Like

          • James, I’m referring to the conception of symbolic thought used by anthropologists. I think there is a difference between thought and processing. Certainly thought is built on top of processes, and I can see the argument that those processes involve the use of symbols.

            But anthropological symbolic thought is the mental use of symbols as stand-ins for perceptions, feelings, or actions. (Which does require metacognitive access to those perceptions, feelings, and actions.) Thoughts in terms of these first-order concepts may be built on symbolic processing. But to be symbolic thought, the thinking itself must concern itself with those symbols.

            Hope that makes sense. This is a difficult distinction to convey.

            Like

          • I’m referring to the conception of symbolic thought used by anthropologists. I think there is a difference between thought and processing. Certainly thought is built on top of processes, and I can see the argument that those processes involve the use of symbols.

            I really have no clue what anthropologists might mean by symbolic thought, but just going by what you say above, I think you’re overshooting the mark.

            Thoughts in terms of these first order concepts may be built on symbolic processing. But to be symbolic thought, the thinking itself must concern itself with those symbols.

            What if the first order concepts are built on symbolic processing, and symbolic thought is also the same kind of processing but using the output of the former as input?

            What I’m saying is maybe the key process is having a symbol as input, and recursion after that is just a matter of additional cognitive ability.

            *

            Like

          • “What if the first order concepts are built on symbolic processing, and symbolic thought is also the same kind of processing but using the output of the former as input?”

            That’s close to the point I was trying to make. The only part I’m not sure about is “and symbolic thought is also the same kind of processing.” If you mean it’s all neural processing, then I’m on board.

            But if you’re saying that symbolic thought is exactly the same as more primal thought, then I don’t know that that jibes with neuroscience, which shows that specific locations in the brain light up when language, metacognition, or symbolic thought is in action. Lesions in Wernicke’s area or Broca’s area knock out language, but also the ability to think abstractly.

            I’m also a little uneasy with the idea of the lower level symbolic processing. How would you define those lower level symbols? What would be an example? Are they discrete like an assembly language instruction, or amorphous like a sensory image?

            Like

          • Mike, if you’re uneasy now, it might be a good time to take a Dramamine hit. Let’s go.

            A lower level symbol, or just a symbol, is an arbitrary physical thing which has the (teleological or teleonomic!) function of representing something. As an oversimplified (and anatomically incorrect) example, a cone cell in the retina absorbs a red photon and in response generates neurotransmitter X. That neurotransmitter is a symbol. It is a symbolic input to the next neuron in the chain, and it can be interpreted to represent that a red photon was processed. How it is interpreted is a (teleonomic?) function of the mechanism which uses it as input. In its simplest form you, Mike, would call these interpretations reflex processes.

            Now a mechanism can consist of more than one neuron. We just want to identify the inputs and outputs. Some mechanisms can take more than one symbol as input and generate a single symbol as output. That output would be a symbolic concept and represent the combination of the input concepts, and this is what’s happening in visual and other sensory processing. Pixel symbols are combined to get line symbols, motion symbols, etc.

            Consider that it’s possible for a mechanism to take two such symbol inputs and generate an output concept symbol, but in such a way that the new output symbol can be a new input symbol for that same mechanism. I think this is where you get your recursion, but it’s still just symbol manipulation.
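
            The layering just described (pixel symbols combined into line symbols, and a mechanism whose output can become its own input) can be made concrete with a toy sketch. To be clear, everything in it, the names, the string “symbols”, and the mechanism factory, is invented purely for illustration; it claims nothing about real neural machinery.

```python
# Toy sketch of symbol-combining "mechanisms". A symbol here is just a
# string; in the analogy it would be a neurotransmitter event that the
# receiving mechanism interprets.

def make_mechanism(name):
    """Build a mechanism: a function mapping input symbols to one output symbol."""
    def combine(*symbols):
        return f"{name}({'+'.join(symbols)})"
    return combine

# First-order mechanisms: pixel symbols are combined into a line symbol.
line_mechanism = make_mechanism("line")
line_symbol = line_mechanism("pixel_a", "pixel_b")  # "line(pixel_a+pixel_b)"

# A higher-order mechanism whose output can be fed back in as input,
# building concept symbols out of concept symbols (the recursion).
concept = make_mechanism("concept")
first_order = concept(line_symbol, "motion_left")
second_order = concept(first_order, "color_red")  # output reused as input
print(second_order)
```

            Running it prints a nested concept symbol, which is just the point that the recursion here is nothing more than symbols feeding mechanisms that emit new symbols.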

            Finally, let’s consider a mechanism with many (millions?) possible symbolic inputs. Let’s further assume that this mechanism can combine any two or more of these inputs “arbitrarily” to generate arbitrary new concept symbols. I suggest this situation describes qualia. If a mechanism can have more than one symbolic input, and each symbolic input is physically identical (neurotransmitters) but can still be individually identified as separate from the others, then any practical reference to an individual input would be a reference to the meaning of that particular symbol. Any reference to the input of a process involving symbolic input X would be described as the qualia or feeling of X. To “have a feeling” or “qualia” of X is to be undergoing a process with X as the input.

            Okay Mike, how do you feel? Dramamine help?

            *

            Like

          • Haha! Ok, thanks James. I see what you’re getting at now. (And I’m happy to say it didn’t require Dramamine 🙂 )

            I guess the distinction I would make is between pre-cognitive and cognitive symbols. For example, if I perceive a cat in front of me, that perception is built on a vast array, hierarchy, and galaxy of symbolic processing in the way you describe. But cognitively for me, that’s just a raw perception. When I then learn that the sound of the word “cat” applies to that perception (or more specifically to a perceived object), that association is more of the processing you’re talking about, but it’s also a cognitive symbol.

            The association hierarchy and galaxy continue, but now in cognitive space. It’s this capability that seems to set humans apart. Some animals, such as great apes, seem to have it to a very limited degree, but a human two year old quickly surpasses even the brightest of them. It requires a level of recursive metacognition that doesn’t seem to exist in other species.

            Like

          • Mike, as others have said, glad we’re on the same page.

            Serious question: did you really understand my last paragraph? Do you accept that as the explanation of qualia?

            *

            Like

          • James, unless I’m missing something, it seems in accordance with my understanding. Neurons communicate with each other over synapses, so any qualia are going to be built on those communications. The experience of seeing a deer will likely be triggered by convergences of such signals, with perhaps a synapse firing for deer color, another for deer shape, yet another for deer like movement, etc, culminating in a deer neuron firing.

            Are you familiar with Damasio’s convergence-divergence zones? They’re nexuses of association hierarchies that come down to a relatively small collection of neurons, essentially the focal point of a mental concept.
            http://willcov.com/bio-consciousness/diagrams/Damasio%20CDR%20diagram.htm

            Like

      • paultorek says:

        “And I dislike referring to the higher level models as an “illusion” when they’re often just as predictive as the lower level ones. I am a reductionist, but not an eliminativist.”

        Hear hear!

        Liked by 1 person

  11. Excellent post. I think the trick may be in how one defines the problem of consciousness (our minds or others). Some definitions may lead us down a more fruitful path; others might lead us to a dead end. I think that equating consciousness with “the subjective experience of consciousness” may very well lead us to a dead end (to a concept that cannot be further decomposed or validated). I might suggest equating consciousness with “the illusion of consciousness”.

    Illusions may be of the form “perceiving something that does not exist” (as with some hallucinations) or of the form “perceiving something that exists as something else” (as with an optical illusion). When I think of the illusion of consciousness, I think of the second form. Perhaps what we call consciousness is an illusion that comes from a drive to integrate all our disparate thoughts. There may be an evolutionary advantage to integrating or summarizing our thoughts in order to manage them.

    The concept of “consciousness” and the concept of “I” probably serve similar functions. I’m not so sure there’s any advantage to differentiating between them unless we want to talk about somebody else’s “I”. I suspect the I-concept is necessary for systems to move or act. We think anything that is possible for us to think, but we would be paralyzed by the cancellation of all those thoughts going in different directions, without some “I” choosing one or a few of those thoughts and acting on them.

    Liked by 1 person

    • Thanks Mike!

      Consciousness as an illusion is something I’ve toyed with on and off over the years. And I know there is a large literature that takes that tack. I generally agree ontologically with the illusionists, but I find the word “illusion” problematic. A case can be made that if subjective experience is an illusion, then the illusion is the experience.

      And people tend to stop listening once they perceive you’re saying consciousness doesn’t exist. On the other hand, people’s conception of consciousness is often hopelessly tangled up with dualistic notions, so often the type of consciousness they’re thinking of actually doesn’t exist.

      On the “I” concept, are you familiar with Antonio Damasio’s theories of self? He posits that there are three broad levels: a protoself implemented in the brainstem, a core self implemented in the thalamus and cerebral cortex, and an autobiographical self which depends heavily on memories. It’s an interesting take that is compatible with a lot of neural theories on consciousness.

      Like

        Thanks for responding. You received so many comments that I didn’t expect you to get around to mine. I didn’t mean to imply that “the subjective experience of consciousness” substantively differed from “the illusion of consciousness”; merely that “the subjective experience …” seemed to me to be a question-stopper, whereas “the illusion …” seemed to provide a bit more fecundity for pursuing the illusive subject of consciousness. No, I’m not familiar with Damasio, but I’ll definitely look him up. Thanks!

        Like

        • My pleasure. If it’s not obvious, the discussion is actually my favorite part of blogging.

          I know where you’re coming from on the subjective experience / illusion front. It reminds me of something David Chalmers, the originator of the idea of the hard problem of consciousness, wrote about last year.

          He coined the “meta-problem of consciousness”, the problem of why we think there is a hard problem, which I think is getting to the same point you’re making. It reminds me of Michael Graziano saying that what we really should be studying is why the brain thinks it has some version of experience that can’t be accounted for physically.

          Liked by 1 person

  12. Callan says:

    Brian calls to crowd “You’re all conscious!”

    Crowd chants back in unison “We’re all conscious!”

    Lone voice “I’m not”

    Liked by 2 people

  13. Steve Ruis says:

    I tend to the position that consciousness is an emergent property of the brain. I also avow that a brain without its connections to a nervous system would soon become dysfunctional, so where do we draw the line as to the existence of consciousness? It seems to require a brain of substantial complexity (not necessarily size) connected to a nervous system providing substantial inputs to the brain. So, where does the consciousness reside? In the total system IMHO, of course. We tend to place our consciousness close to our eyes, as they provide a truly immense stream of data, so immense that it must be culled rapidly so as to not overwhelm the limited data storage capacity we have. Some religions have our “soul” or consciousness residing near our stomachs or gonads. This seems to suggest an overconcentration of attention placed upon food and sex, but …

    We have been studying this problem seriously, with adequate tools, for such a short time that I expect many, many surprises are in our future.

    Liked by 1 person

    • I generally agree, although I’m always reluctant to invoke emergence by itself as an explanation. It seems obvious to me that what we interpret as consciousness is emergent from lower level phenomena, but by itself that doesn’t strike me as much of an explanation. I want to know how it emerges, similar to how we know that temperature emerges from particle kinetics.

      I do think we’re making progress on this. A lot more than people generally like to admit.

      Like

  14. Fizan says:

    “..many people still hold to substance dualism, the idea that the mind cannot be explained with mere physics, chemistry, biology, etc, that something else is needed. We appear to have a strong innate intuition for this view.”

    My own view (so far) is that the mind cannot be explained with mere physics, chemistry, biology, etc. But NOT that we definitely need something else to explain it. It might very well be that the mind isn’t a problem to be solved by the mind. In other words, it may simply be unexplainable. But we appear to have a strong innate intuition that everything can be explained.

    Liked by 1 person

    • I don’t know if it’s so much an intuition as a strong desire. But I’ll admit that there are two very different camps on this. Some, like the one I’m in, definitely want the mind explored and explained as far as possible. Others see the entire endeavor as forlorn and seem uneasy with any statements about what’s been learned.

      I do think we suffer from a perspective challenge in examining ourselves, particularly in attempting to explain subjective experience. Studying subjective experience, in and of itself, may never be productive. We have trouble even agreeing on what we’re trying to study.

      On the other hand, I think we can learn a lot by objectively studying the brain and behavior, including self reported perceptions, and the interrelated correlations, as though they were an outside system. It’s an approach that many dislike, but it seems to have been a fruitful one.

      Liked by 1 person

      • Fizan says:

        Yes, I agree with you. We do learn a lot by objectively studying the brain and behaviour. That’s why we should continue to do it.

        On the other hand, I’ve also formulated (for myself) that learning is one process that the mind does; another is feeling. It does not seem evident (or even possible) that we can use learning on its own to get to feeling (which is what a lot of people attempt to do, but in all such attempts feelings are sneaked in one way or another).

        Liked by 1 person

        • The way I think of it is, let’s study the brain to see how it produces the behavior we associate with feelings, including talking about them. It may be as close as we’ll be able to get.

          Liked by 1 person

          • Fizan says:

            Yes and perhaps an assumption there is that once we understand this brain behaviour completely we may be able to replicate it (say in an artificial system) and create feeling?

            Ethical dimension aside I would like to see that happen just to see if it works or not.

            Liked by 1 person

  15. Stephen Wysong says:

    Hokey Smokes Bullwinkles!

    After reading all 71 posts in this thread, I notice that Damasio’s simple, physical definition of consciousness wasn’t mentioned at all:

    Consciousness is, in general, the feeling of what happens.

    All consciousness is biological and rooted in Core (Primary) Consciousness, which is an ever-present feeling of being embodied and centered in the world. Consciousness has contents, which are feelings that are associated with every sensory track, both interoceptive and exteroceptive. We all know what a feeling is—a feeling of a touch on the hand; a feeling of a pain in the leg or a tickle on the sole of the foot; a feeling of a sound or a sight or a thought.

    Any animal feeling any feeling is conscious. There is something that it feels like to be a bat.

    The notable characteristic of consciousness is the flow, or stream of consciousness. It’s also apparent that consciousness is wholly a simulation—that there’s no color in the world is a clue to that realization.

    So there’s a definition and description of a wholly physical consciousness. And it avoids the Really Hard Problem, which is trying to understand the morass that is Consciousness Philosophy—Integrated Information Theory, Panpsychism, the incoherent “what-it’s-likeness” and all the rest.

    Liked by 1 person

    • Stephen Wysong says:

      Ah, the powerful result of forgetting a closing italics HTML tag. The italics should have stopped after the “feels”: “There is something that it feels like to be a bat.”

      Like

      • Fixed! (Let me know if I didn’t fix it right.)

        Like

        • Stephen Wysong says:

          Perfect Mike! Thanks. I’m taking a thinking holiday today but I’ll respond to your “feeling” question soon. The dictionary definitions are a bit sloppy but I find it a useful term because people all seem to understand what bodily feelings like “touch” are, regardless of their neuroscience knowledge.

          Like

        • Stephen Wysong says:

          BTW, if you fetched that Stuckey et al paper “Reversing the Arrow of Explanation” from my Google Drive, section 4, “Implications of RBW for the Experience of Time …”, starts with nine “phenomenological features of temporal experience”, as they put it, the first two being:

          1) Why, at any given moment, is our conscious awareness confined to such a small part of spacetime that we dub “the now?”

          2) What gives some experiences, by definition those that feel real to us at any given moment, their feeling of presentness? Why do we experience some pains as memories and others as having the property of “nowness” or presentness?

          If you run short of blog topics, perhaps these “mysteries” of consciousness would be interesting to discuss. One per article maybe or it’ll all get jumbled up in the responses.

          Enjoy!

          Like

    • The problem with “the feeling of what happens” definition is in agreeing what’s involved in a feeling. Is it just a signal from the peripheral nervous system to the brain?

      Does the hammer striking the patellar tendon in the knee and causing the customary knee jerk reflex count as a feeling? Even when it happens in a patient that is brain dead? Or if the stimulus triggers a reflex higher up the spinal cord, does that in and of itself count as the feeling? If the signal makes it to the brain, but only to lower level circuitry that only produces another reflex, are we at the feeling yet? Or does it require that the signal be put into a broader context?

      I’m reading through Merker’s paper. I find his definition of consciousness interesting: integration for action. But here’s a question: does integration for action automatically mean a feeling is present? Or does it require integration for action planning?

      Like

      • Stephen Wysong says:

        Have patience Mike … I’ll respond soon. An unusually busy life and other of your SelfAwarePatterns articles have lately claimed my attention. I need to reread some of Merker to reply to your closing questions but I’ll compose a response almost forthwith … 😉

        To lessen the suspense a bit, though, I can say that the hallmark of a feeling is that it is felt by the organism.

        Like

        • No worries Stephen. I can certainly understand being busy.

          Of course, you know what my next question will be. What does “felt” mean? 🙂

          Also, having now gone through the paper and skimmed the associated commentary, I’m thinking about a post dedicated to Merker’s views, assuming my own time constraints allow it.

          Like

          • Stephen Wysong says:

            I downloaded Merker’s complete article almost exactly two years ago and the footers aren’t personalized. I don’t have an academic affiliation of any sort, but thanks for the heads-up.

            While we’re both still waiting for my “consciousness as feelings” reply, here’s an interesting and very relevant quote about brainstem consciousness from Peter Watts’ Blindsight that I posted a while ago on Schwitzgebel’s blog (italics mine):

            “I went to ConSensus for enlightenment and found a whole other self buried below the limbic system, below the hindbrain, below even the cerebellum. It lived in the brain stem and it was older than the vertebrates themselves. It was self-contained: it heard and saw and felt, independent of all those other parts layered over top like evolutionary afterthoughts. It dwelt on nothing but its own survival. It had no time for planning or abstract analysis, spared effort for only the most rudimentary sensory processing. But it was fast, and it was dedicated, and it could react to threats in a fraction of the time it took its smarter roommates to even become aware of them.

            And even when it couldn’t—when the obstinate, unyielding neocortex refused to let it off the leash—still it tried to pass on what it saw, and Isaac Szpindel experienced an ineffable sense of where to reach. In a way, he had a stripped-down version of the Gang in his head. Everyone did.”

            I wonder if Watts had read Merker?

            Also, here’s a great website I just discovered with a section of articles on Consciousness in Animals: https://animalstudiesrepository.org/

          • Stephen Wysong says:

            PART 1 of 2

            Mike, my goal has been to formulate a largely non-technical consciousness hypothesis that captures the essential operation and features of consciousness in ordinary language—I’m certainly not a neuro-specialist and I imagine most or all of your blog contributors are not domain specialists either and I prefer an explanation that can be widely understood. By the way, this is my first attempt at describing the “consciousness as feelings” hypothesis with any specificity, so I trust you’ll understand any draft-level crudity.

            Merker provides a few definitions of consciousness, such as: “state or condition presupposed by any experience whatsoever” and “To the extent that any percept, simple or sophisticated, is experienced, it is conscious, and similarly for any feeling, even if vague, or any impulse to action, however inchoate” and “… the ‘medium’ of any and all possible experience.”

            I focus on feelings in my definition of consciousness because physical feelings like touch, hot/cold and pain need little explanation, unlike “mind,” “subjective experience” and so on. Being ineffable, no feeling can be described, but a physical feeling can be experienced directly by us all, so it’s a good place to start. Touch the back of your left hand with your right forefinger … if you feel a touch, you are conscious of the touch. An animal is conscious if it feels any physical feeling and there is no consciousness if no feeling at all is felt.

            The key realization that, in my opinion, greatly simplifies an understanding of consciousness is that those “non-physical” instances of consciousness like seeing and thinking are also feelings and are rooted in physical feelings. Understanding thought, seeing and hearing as feelings means that the organ of consciousness produces feelings which I find to be a clarifying simplification. Thinking in words is really vocalization-inhibited speech. Thinking in visual images like Temple Grandin is sight-inhibited vision. I believe much confusion has arisen because thinking, in particular, doesn’t seem to have the same character as physical bodily feelings but, instead, seems ethereal and non-physical, perhaps soul-like. As we know, proceeding down that road yields much religion, much more philosophy and the dreaded dualism.

            I believe that core, or primary/animal consciousness—the feeling of being embodied and centered in a world—is the ground state of consciousness, the fundamental feeling that’s overlaid with feelings resulting from sensory events both intero- and exteroceptive. Core consciousness is so pervasive that it’s taken for granted, as it were, and rarely noticed, but it becomes obvious with the slightest bit of meditative silence and inactivity. If that feeling were to suddenly disappear, I’m certain you’d notice and equally certain you’d freak out.

          • Stephen Wysong says:

            PART 2 of 2

            My Neural Tissue Configuration (NTC) idea, which I’ve explained elsewhere in a block universe context, proposes that a specific feeling is one-and-the-same as a specific configuration of neural tissue in some brain structure. The configuration itself IS the feeling. Imagine a collection of body-mapped “sheets” of specialized neural tissue in which each member takes on successive configurations that are feelings and you’ll have the idea. If that brain structure is at all operational, core consciousness is the base configuration and the organism is conscious.
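            The identity claim in the paragraph above can be made concrete with a toy sketch. This is purely illustrative (the class and function names are mine, not Stephen's, and nothing here models real neural tissue): a "sheet" configuration is just a snapshot of activation levels over body-mapped sites, and under the NTC hypothesis two identical configurations are not two representations of one feeling—they are one and the same feeling.

```python
# Toy illustration of the NTC identity claim: a feeling IS a configuration
# of a body-mapped sheet. Names here are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass(frozen=True)
class SheetConfiguration:
    """A snapshot of a body-mapped sheet; per the hypothesis, this IS the feeling."""
    sites: tuple  # activation level at each body-mapped site


def touch(site_index, n_sites=8, intensity=1.0):
    """Sensory input drives the sheet into a particular configuration."""
    return SheetConfiguration(
        tuple(intensity if i == site_index else 0.0 for i in range(n_sites))
    )


# Touching the same body-mapped site twice yields the identical configuration,
# so under the identity claim it is literally the same feeling:
assert touch(3) == touch(3)
# A different site yields a different configuration, hence a different feeling:
assert touch(3) != touch(4)
```

The point of the sketch is only that "the configuration itself IS the feeling" is a well-formed identity claim, not a causal one: nothing extra needs to be added to the configuration to produce the feeling.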

            By the way, I don’t disagree that our consciousness is hugely dependent on cortical processing—I view that processing as resolving nearly all of our conscious content into pre-conscious images that are transmitted to and then serially “displayed” by the original organ of consciousness, the Brainstem Complex (BCx). I suspect that the mammalian/primate and certainly the human BCx has evolved to become so dependent on the detailed richness of cortical input that it no longer produces its original range of conscious images. Hence phenomena like Blindsight where damage to the visual cortex disables pre-conscious image formation and the BCx, although still receiving (and “recognizing”) input via the “old” ocular pathway, no longer produces conscious visual images from that information. (Unless you nick it somewhere as in that cat experiment.) This cortical function model explains the delay in feeling the cortically-stimulated “touch” in Libet’s experiment—the cortex needs time to “figure out” the context and possible remembered associations of the touch before forming the pre-conscious image, while the BCx immediately “displays” the feeling.

            The BCx is evolutionarily ancient and as “Body Central” is perfectly placed to be the control nexus for the animal centered in a world. Of course the brain as an organ doesn’t know about its consciousness but it might be that the same tissue configuration that manages life processes produces feelings as a side-effect. If those life management NTCs succeed over time so that the animal successfully reproduces, they’ll be conserved and enhanced, giving rise to more robust and detailed conscious images. Cortical-like functionality like that of the retina clearly contributes to that success so the evolution of additional specialized cortical tissue is easy to understand.

            As to your questions, a reflex is an automated sequence of movements in response to a stimulus and, as such, the reflex per se is not conscious but, in a functioning brain, the feelings of both the reflex stimulus and the ensuing bodily movements are conscious by definition.

            Perhaps “integration for action” is accomplished by the networks Panksepp writes about: “Adequate evidence exists for seven primary-process emotional networks concentrated in subcortical regions of the brain—SEEKING, RAGE, FEAR, LUST, CARE, GRIEF, and PLAY.” But until those processes result in NTCs, no feeling is present.

          • Stephen,
            I appreciate the detailed explanation. I guess my question is, what about the NTC makes it a feeling? If a robot has sensory signals coming in to indicate the state of its appendages, is that a feeling? If not, why not? What specifically is necessary for a feeling to be a feeling?

            (If you recall, I provided my own answer in a post a few months ago, but I’m interested in yours.)

          • Stephen Wysong says:

            The idea that a particular NTC IS a particular feeling means just that. All that is required for a feeling to be produced is for the NTC to be in place … the feeling and the NTC are one and the same. Sensory signals propagated via nervous tissue are part of the inputs that cause NTC tissues to take on their particular configurations but are not themselves the NTC tissue. A feeling of a touch on some body surface IS a body-mapped NTC that takes on its “shape” (i.e., configuration) as a consequence of sensory signals, possibly enhanced by the context of those inputs as determined by cortical processing.

            If consciousness is a wholly physical phenomenon then, wherever in the brain the conscious “display” is produced, we must ultimately find living cells organized into tissues performing that “display” functionality. I don’t know how others conceive consciousness in a purely physical sense, but I get the impression that some think it’s a sort of widespread cortical bioelectrochemical “aura” … how do you conceive of it Mike?

            The NTC idea is the first conjecture I’m aware of to propose a purely physical brain-tissue-based explanation of feelings. Other than the evolutionary and experimental evidence I find compelling, a structure composed of NTC tissue could be located in any appropriately connected brain structure.

            I’m considering your Merker post and the ensuing conversation. I’ll contribute my two or three cents worth soon.

          • I think your idea of the NTC being the feeling is definitely right, but I think it applies to more than just feelings. Every mental concept is an NTC somewhere. For us to hold a mental concept is for there to be an NTC of it somewhere in our brain. So we agree on that point.

            (Have you heard of Damasio’s CDZ and CDR concepts? CDZ (convergence-divergence zones) are regions where neural signals converge for a particular concept, whether that concept be a perceptual one, a feeling, or an action, or even a sequence of events. Of course, the CDZ itself is not the concept, just the culmination of the overall hierarchy that converges on that point. CDRs are convergence-divergence regions. Damasio sees CDZs existing in the “many thousands” but there only being a few CDRs in the brain. It seems like there is resonance there with what you’re talking about.)

            You ask how I conceive of it. The question for me is, why does an NTC occur, and in particular, why does it occur where it occurs? My conception of a feeling comes from asking why we have them. (“Why” in the sense of why they were naturally selected.) Along those lines, what adaptation would a feeling provide within a reflex arc? I can’t see any. (But maybe you can?) Its value seems to me to be as input into imaginative simulations where multiple sequences of action are being evaluated. It spurs those simulations and is used to evaluate potential action plans.

            Lisa Feldman Barrett considers a feeling to be a prediction. I’m not sure I see it that way, but I do think it’s an important input into the brain’s prediction framework.

          • Stephen Wysong says:

            Mike, regarding your suggestion that “Every mental concept is an NTC somewhere,” surely there are representations in memory and other places in the brain that we would think correspond somehow to mental concepts. I think of them as “stories” in a broad sense of the word—you may recall my footnote about the brain being a “Story Engine” in “Breadcrumbs”. I haven’t studied how concepts and memories are physically represented, although any such representation likely involves configurations of brain tissues or regions as Damasio suggests.

            But I intend the NTC I’m proposing to be specifically a configuration that corresponds to, and is an actual feeling. Perhaps I should prepend a ‘C’ for consciousness—CNTC. My conception is that CNTCs are a response to sensory information in the BCx or a consequence of pre-conscious “images” that are the results of cortical processing.

            Why do we have CNTCs/feelings? I’m not sure there are answerable “why” questions in evolution, whose general message seems to be “if an adaptation doesn’t kill you before you successfully reproduce, it stands a chance of being transmitted and that adaptation might be useful in the process of living through successful reproduction, but that’s a possibility, not a requirement.” For an animal in motion in the world, however, a felt simulation of its embodied self centered in that world seems quite necessary in my view.

            BTW, that “Story Engine” concept includes the notion of story comparison which I see as a straightforward method for achieving what you call “imagination” involving “imaginative simulations”. Again, from “Breadcrumbs”:

            “I believe that Greene’s remark, “… it seems to unfold into a coherent story.” hints at an organizational insight about the brain’s functioning—the Story. We think in stories. We understand ourselves and the external world in stories. Our “Self” is a story. … I extend the meaning of the word “story” to additionally refer to non-narrative conceptions and productions. Our fundamental mental tool is the embodied metaphor … each of which is a story of relationships and their implications. Science constructs stories of how the world works, partially through the use of mathematical stories. Artworks in all media are stories. … musical expressions are stories. I suspect we construct our memories in multi-sensory gestalts that are stories and our pattern-recognition intelligence compares those remembered stories. Interestingly, a recent attempt to train an AI in moral decision making failed until the approach was modified to teach the AI stories with embedded moral decisions.”

            So, what you call “imagination” becomes an outcome of story processing, primarily story comparison, and doesn’t require a simulation process. As a simple example, using a set of metaphorical stories A, B and C: if A is like B and B is like C, it’s easy to conclude not only that A may also be like C, but also that characteristics of C not directly involved in the root comparison may apply to A. That process yields creativity.

            A prediction mechanism is possible if the brain identifies current events as a story that compares with a remembered story—the outcome of the story in memory then becomes a viable, although remembered, “prediction” of the outcome of the current events. Combinations of those “creative” and “predictive” processes yield remarkable mental concepts without requiring an open-ended simulation capability.

            Note that story pattern matching is the fundamental operation in both these examples—Intelligence!
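            The comparison-and-prediction loop described above can be sketched in a few lines. This is my own minimal illustration (the representation of a "story" as a feature set and the overlap measure are assumptions, not anything from Stephen's "Breadcrumbs"): current events are matched against remembered stories, and the outcome of the best match serves as the "prediction."

```python
# Minimal sketch of prediction via story pattern matching, assuming stories
# can be approximated as feature sets and comparison as set overlap.

def similarity(a, b):
    """Jaccard overlap between two stories' feature sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b)


def predict(current, memory):
    """Return the outcome of the remembered story most like the current one."""
    best = max(memory, key=lambda story: similarity(current, story["features"]))
    return best["outcome"]


# A tiny remembered-story memory:
memory = [
    {"features": {"dark clouds", "wind", "pressure drop"}, "outcome": "rain"},
    {"features": {"clear sky", "heat", "no wind"}, "outcome": "sun"},
]

# Current events match the first remembered story best, so its outcome
# is reused as the prediction:
print(predict({"dark clouds", "wind"}, memory))  # → rain
```

Note that nothing here simulates forward in time; the "prediction" is simply the retrieved outcome of the closest remembered story, which is exactly the distinction being drawn from open-ended simulation.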

        • Stephen Wysong says:

          Mike, you mentioned skimming the “associated commentary” in Merker’s paper so I assume you have the final article version as published in “Behavioral and Brain Sciences” rather than the draft version also lurking around the Internet that’s missing the commentary. In any case, here’s my Google Drive URL for the published version, which might be useful for other readers should they still be following our ramblings:

          https://drive.google.com/file/d/173p-DgahwJc7jpTo7Eql01bDrjpqO0e2/view?usp=sharing

          I’m still working on my reply …

          • Stephen, that is the one I went through. Just fyi, the publisher put my university’s account info in the page footers when I downloaded it. If yours has similar info, you might want to be careful about publicly sharing it.

  16. Pingback: Kurzgesagt on the origin of consciousness | SelfAwarePatterns
