What counts as consciousness?

One of the things I’m reminded of every few years is that difficult determinations often look clearer when you consider them in a wider scope.  Years ago, when I was trying to figure out whether conservative or progressive political policies were better, I discovered that widening my investigation to history helped immensely, and widening it even further, to the history of other developed countries, helped even more.  Many of the typical conservative hangups looked parochial in that broader context.

The same thing happened when I was trying to decide how worried to be about artificial intelligence.  Many of the people who are worried about it are familiar with the technology, and their concerns carry weight with the general population.  But learning about neuroscience and evolutionary psychology put those concerns in a much broader context and, at least for me, rendered most of them moot.

Consciousness is one of those topics that people have been writing and debating about for centuries.  But I’ve found that many of the philosophical ideas often kicked around wither in the light of neurological case studies and neuroscience more broadly.  We’ve gained a lot of insight into consciousness by looking carefully at the human brain, particularly in cases where it gets damaged or malfunctions in some way.  But an even broader approach is to look at consciousness in animals, particularly in terms of evolution.

This is the approach used by Todd Feinberg and Jon Mallatt in their new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’.  (This is the first of what I hope will be a series of posts inspired by their fascinating, albeit technical, book.)  The good thing about studying animal consciousness is that it gives us a much broader array of systems to study.  And animals can be studied in many more ways, ways that are often ethically unacceptable for humans.  (Some of those ways I personally find unacceptable, but the knowledge gleaned from them is real.)

Of course, the biggest issue with studying animal consciousness is that we lose the primary advantage of focusing on human consciousness: we know that we ourselves are conscious, and it is uncontroversial to assume that other neurologically intact humans are conscious as well.

But the farther we move away from healthy adult Homo sapiens, the more tenuous this assumption becomes.  We have to be careful not to project our own experience onto animals or other systems.  It’s reasonable to assume that animal experience is not human experience, particularly as we move down the intelligence chain.  This raises an interesting question: how much of human experience can we dispense with and still coherently use the label “consciousness”?

In their book, Feinberg and Mallatt make clear that they’re not attempting to explain human-level consciousness, but that they are aiming at the hard problem of consciousness, the one that asks why there is “something it is like” to be a conscious being.  They equate this with what they consider primary, or sensory, consciousness.

But it’s not clear to me that what we mean by “something it is like” is so easily divorced from higher-level conscious capabilities.  It might be that without the ability to reflect on our experience, it is not “like anything” to be one of these creatures.  As Thomas Nagel pointed out years ago, we can never know what it’s like to be a bat.  But it’s possible that neither can an actual bat, if it doesn’t have at least some level of introspective ability.  For that and other reasons, we have to be cautious in assuming that animals have an inner experience.

Still, anyone who has ever cared for a pet knows that the intuition of animal consciousness is very powerful.  Whatever mental life animals possess, we sense in them fellow beings in a way we don’t with plants or computer systems.  This isn’t true of all animals, of course.  I don’t sense any consciousness in a worm, a starfish, or an oyster, which makes sense since none of these animals have brains.

But pretty much any animal with eyes tends to trigger my intuition that there is some inner life there, something that is seeing and has some kind of intentionality, a worldview of some kind, even if it’s a limited one.  This is a common intuition, which is why it’s not unusual for movies to show an opening eye to indicate that some thinking feeling thing is present.  According to Feinberg and Mallatt, this turns out to be a reasonably good indicator.

High-resolution eyes with lenses, as opposed to simple light sensors, are costly constructs in terms of complexity and energy, and evolution rarely wastes such resources.  But without mental images, eyes would in fact be a waste.  And mental imagery, another costly feature, would itself be useless without modeling of external objects and the environment, along with the animal’s body and its interactions with that environment.  And that modeling would itself be useless unless it served as a guide to possible actions the animal might take.
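To make that dependency chain concrete, here’s a deliberately toy sketch in Python. Everything in it, names and numbers alike, is my own illustration; none of it comes from Feinberg and Mallatt’s book. The idea: sensing only pays for itself if it updates an internal model of the world and the body, and the model only pays for itself if it’s used to evaluate candidate actions.

```python
# Toy sketch of the sense -> model -> act dependency chain.
# All names and values here are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """A crude internal map: known objects plus the agent's own position."""
    objects: dict = field(default_factory=dict)  # name -> (x, y)
    self_pos: tuple = (0.0, 0.0)

    def update(self, sensed_objects, self_pos):
        # Eyes are only worth their cost if they update a model...
        self.objects.update(sensed_objects)
        self.self_pos = self_pos

    def score(self, move):
        # ...and the model is only worth its cost if it evaluates actions.
        # Here: prefer moves that close the distance to known food.
        x = self.self_pos[0] + move[0]
        y = self.self_pos[1] + move[1]
        food_x, food_y = self.objects.get("food", (x, y))
        return -((food_x - x) ** 2 + (food_y - y) ** 2)

def choose_action(model, moves):
    """The payoff of the whole chain: the model guides action selection."""
    return max(moves, key=model.score)

model = WorldModel()
model.update({"food": (3.0, 1.0)}, self_pos=(0.0, 0.0))
print(choose_action(model, [(1, 0), (-1, 0), (0, 1), (0, -1)]))  # (1, 0)
```

The code itself is trivial; the point is that removing any layer makes the layers beneath it worthless, which is why costly image-forming eyes are evidence that the rest of the stack is present.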

None of this is to say that the modeling done by the brain of a lamprey, one of the simplest vertebrates that Feinberg and Mallatt conclude may be conscious, is anything like that done by a human brain.  Without a doubt, the lamprey’s models are far less rich, but then the lamprey has no real need of human-like models.  All that matters is whether its models are effective in allowing it to navigate its environment, and they generally appear to be.

[Image: Lamprey. Credit: Tiit Hunt via Wikipedia]

But do these capabilities count as consciousness?  A lamprey doesn’t have a cerebrum, where human and mammal consciousness appears to reside, and sub-cortical processes in humans are below the level of consciousness.  But a mammal with its cerebrum removed or destroyed is a severely disabled creature, unable to navigate its world and survive on its own.  A lamprey does have that ability, indicating that the necessary modeling is taking place somewhere in its more primitive brain.

This makes sense from an evolutionary point of view.  Primary consciousness must have some adaptive value.  It seems reasonable (although admittedly speculative) to assume that it is consciousness which allows animals to have a wide repertoire of available actions to navigate the world, find food and mates, and avoid predators.  These capabilities were likely important catalysts leading to the evolution of complex brains, and consciousness, during the Cambrian explosion.

We’re not talking here about human-level consciousness.  Feinberg and Mallatt use the analogy of an airliner and an ox cart: the experience of riding an ox cart is not the experience of riding on an airliner, but they’re both transportation.  Likewise, the experience of a lamprey is not like the experience of a human, but they both have experience.

But again, does this really count as consciousness?  What I’ve alluded to here is called exteroceptive consciousness, one of three types of primary consciousness described in Feinberg and Mallatt’s book.  The other two are interoceptive consciousness and affective consciousness; I’ll describe all three in more detail in another post.  After some consideration, I’m inclined to accept it as part of core consciousness, although I would completely understand if someone insisted on the label “proto-consciousness”.  Ultimately, the exact labeling here is a matter of convention about how to describe a certain point on the evolutionary spectrum.

But this raises another interesting question: is the Google self-driving car conscious?  It doesn’t have eyes exactly, but it does use LIDAR to model its environment, and its own interactions with that environment.  Of course, the Google car’s models are currently far less effective than a lamprey’s, at least relative to their respective environments, and the motivations of a self-driving car are very different from those of a living animal.  But as Google and other technology companies improve these systems, might we eventually reach a point where it makes sense to consider them to have a sort of primal consciousness?
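For anyone curious what “using LIDAR to model its environment” amounts to mechanically, here’s a minimal sketch of the general occupancy-grid idea. This is emphatically not Google’s actual stack; every function name and number below is a stand-in I made up for illustration. Each LIDAR return is a range and an angle, and the returns from one sweep get binned into a coarse map of which cells around the car contain obstacles.

```python
# Minimal occupancy-grid sketch; illustrative only, not any real
# vehicle's code. Each LIDAR return is a (range, angle) pair.
import math

def to_occupancy_grid(ranges, angles, cell_size=0.5, grid_dim=20):
    """Bin one sweep of polar LIDAR returns into a 2D obstacle map
    centered on the sensor."""
    half = grid_dim // 2
    grid = [[0] * grid_dim for _ in range(grid_dim)]
    for r, a in zip(ranges, angles):
        # Convert each polar return to Cartesian, relative to the car.
        x, y = r * math.cos(a), r * math.sin(a)
        row, col = int(y / cell_size) + half, int(x / cell_size) + half
        if 0 <= row < grid_dim and 0 <= col < grid_dim:
            grid[row][col] = 1  # mark the cell as occupied
    return grid

# One simulated sweep: a wall roughly 3 meters ahead of the sensor.
angles = [math.radians(d) for d in range(-10, 11)]
grid = to_occupancy_grid([3.0] * len(angles), angles)
```

A planner would then treat the occupied cells as regions the car’s own modeled footprint must avoid, which is the “its own interactions with that environment” part of the modeling.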

37 thoughts on “What counts as consciousness?”

  1. Building on the 2012 Cambridge Declaration on Consciousness (which stresses that the required neurological apparatus for total awareness of pain, and the emotional states allied to that, arose in evolution as early as the invertebrate radiation), Professor Marc Bekoff has since proposed an even wider declaration, a Universal Declaration on Animal Sentience, where sentience (and by extension a total awareness of suffering) is defined as the “ability to feel, perceive, or be conscious, or to experience subjectivity.” Taken as is, it’s a definition that would extend to include motile, single-celled protozoa who, despite not possessing a single neuron, can and do resist all assaults launched against their existence, which, at least superficially, demonstrates that this gelatinous blob of sensible organic material knows its suffering.


    1. Interesting. I’m not sure when the invertebrate radiation happened, but if it was prior to the rise of vertebrates, then based on what I’ve read in F&M (more details coming), the Cambridge Declaration seems a little broad. But they may be attempting to make sure that all vertebrates and arthropods are included for protection.

      Bekoff’s declaration, if he does interpret sentience to include protozoa, seems unmanageable as any kind of ethical guide. By that standard, even vegans are killers, and anyone who uses Lysol may be inflicting suffering.

      But the relatively straightforward autonomous nature of these life forms, and their lack of internal patterns isomorphic with the outside world (i.e. models), make me skeptical that they know they are suffering damage or destruction in any meaningful sense of the word “know”. Of course, this is an area where we can’t escape our intuitions, and intuitions can change.


      1. The invertebrate radiation means insects, such as the fruit fly, and cephalopod molluscs, such as the octopus, Nautilus, and cuttlefish. As for Bekoff, even his broader definition is nothing compared to Integrated Information Theory, which holds that atoms are conscious, albeit in a manner inaccessible to us. Arguably, Tegmark goes even further with his fourth state of matter.

        Whatever the case might be, we’re in interesting times.


        1. Thanks for the clarification. F&M would actually agree that fruit flies, cephalopods, and the like are conscious, although in the case of the fruit fly, they’re concerned about the lack of complexity in insect brains. Still, they do seem to effectively model their environment and navigate around in it.

          I’ve been dismissive of IIT for a long time, perceiving its definition of consciousness as too broad. Definitions don’t seem particularly useful if they don’t exclude anything from the phenomenon being defined. That said, F&M’s favorable mention of it, plus a lot of imploring from others that I should read Tononi at length, finally made me add his book, ‘Phi’, to my Kindle. I plan to read it in the near future.

          Definitely agree on interesting times!


  2. I think that John has hit on the core issues: Is there pre-reflective consciousness and if so, is that what we want to talk about? The trouble is that if we say that consciousness is recursion – state to state comparison – then consciousness seems to pervade almost everything.
    If we say that recursion by itself doesn’t count, and only state to concept comparisons count, then how are we to tease that apart from its recursive basis?
    In other words, how is reflection not a state-dependent activity, i.e. fancy recursion?


    1. It may be that even primary consciousness includes at least incipient reflectivity. Remember that modeling the environment isn’t useful unless you model yourself and your interaction with that environment. It’s not the theory of mind reflectivity we have, but it seems probable we won’t find that outside of social species.

      I’m not clear on what you mean by state to concept comparisons.

      I suppose modeling could be considered a form of recursion, but then so would reflectivity, with loss of resolution on each layer. But we do refer to our “inner world”. What is that if not recursion? One view of consciousness might be the organism reacting to its own expansively predictive model of the environment rather than the immediate environment itself.


      1. I mean comparison of an anticipated state of affairs with current experience. That is reflection at its most basic level.
        I agree with you, and differ somewhat with Nietzsche and others, on the nature of non-reflective consciousness. I think there is a distinction between state-state and state-concept processes of comparison – at the very least, the latter implies the ability to form a concept.
        But the former is not unconditioned. I don’t think that our thoughts come to us unbidden or that we are a bundle of experiences. Our ‘direct’ experience of the world has an aspectual shape, and that finally shapes our concepts.
        So it seems to me that the activity of non-reflective consciousness is the basic thing, and it colors everything in our concepts. We shouldn’t be surprised that when we reflect upon it, it seems to be everywhere.


  3. This certainly looks to be an interesting series of articles, Mike, and I so appreciate the way you précis books to save me reading them! 😉 This one would likely be too dry and technical for my tastes and moribund capacities, though I did read Dennett’s Kinds of Minds, which seems to explore related issues in a very accessible manner.

    When you say that “we have to be cautious in assuming that animals have an inner experience”, then what do you mean by ‘inner experience’? Doesn’t the phrase presuppose that awareness of phenomena does not (somehow) partake actively ‘within’ (as it were) the observed externalities themselves, and is that perhaps a risky presumption? I know that folk like Honderich would say so, and I have some sympathy with his objection.

    We know the story of the senses and their involvement in phenomena, but we also know – I have to say ‘introspectively’ here, unfortunately – that there’s an illuminative/lucent and non-objectifying aspect to awareness. No one knows whether that’s spatially referenced or not, because in and of itself as a kind of Tabula Rasa ‘experience’, it isn’t, and it doesn’t make spatial references. [Strictly speaking, nothing is experienced as an object.] The obvious objection to this idea of an ‘objectless awareness’ is that memory has ceased functioning and so the mind can’t re-present pure perceptions as known objects, although I’m far from convinced on that being the whole story.

    You’ve pulled me up before for positing things outside of experience-ability, but I do insist on this one point of an objectless awareness, and questioning the assertion that it’s necessarily local and that all awareness is therefore an ‘inner experience’. The state of an objectless awareness is not observable per se, because to observe it (as a re-presentation or ‘just then’ memory) makes it an object of consciousness, and obviously all consciousness [i.e. being with knowledge] is brain dependent. Anyway, ‘inner experience’, what does that really mean and how do we know it’s isolative and discrete in respect to what is outside of the cranium?


    1. Thanks Hariod. I do hope you find the series interesting.

      “then what do you mean by ‘inner experience’?”
      I only meant the subjective perception of inner experience. In truth, this book has substantially weakened my view of the criticality of theory-of-mind modeling for that inner experience. I still think it’s a major factor in human consciousness, but if it were missing, I think we’d still be conscious. Modeling the environment is only useful to an organism if that model includes itself, which implies at least an incipient self-awareness, and therefore at least the beginning of inner experience.

      I’m not sure about objectless awareness, but I only covered one type of awareness in this post: awareness of the outside world. There is also awareness of internal body states, and awareness of dispositional reactions to the information from the other types of awareness, such as affects or emotions. My usual reaction to the idea of objectless awareness would be to wonder whether it’s not internal interoceptive awareness being mistaken for something else.

      But as I mentioned last time we discussed this, given that the brain is active even when there’s no incoming information, even when a person’s spinal cord has been severed, we can’t completely rule it out. Of course, these are situations where the brain has had a lifetime of information coming in, so it might simply be rehashing past experiences.

      The question is what kind of awareness would a brain have if it were born locked in, with no sensory access, external or internal? I don’t know the answer, or how we could ever find out.

      But maybe there are aspects of this I’m missing?


      1. “The question is what kind of awareness would a brain have if it were born locked in, with no sensory access, external or internal?” Well, retained memory notwithstanding, it’s possible to simulate or at least approximate that in very deep and rarified states of concentration which are well documented, but that doesn’t preclude your interoceptive awareness and assuming some variant of that which is itself not object dependent. Still, interoceptive awareness by common definition is indeed object dependent, so that would leave the explanation being that what is regarded as an ‘objectless awareness’ is in fact an extremely subtle feeling that the brain has of itself – a subtle proprioception of its blood flow, or something, yet which is not re-presented as an object of consciousness (a percept) but remains a real-time proprioceptive feeling?


        1. One of the problems with contemplating something like this is that experience is often subjectively irreducible. We simply can’t know, introspectively, if information from sensory pathways isn’t part of a state we might subjectively perceive as completely calm and blank. I know the few times I’ve tried meditation, I was very aware of my breathing. (I was actually told to focus on that by my guide.)

          From everything I’ve read, the lion’s share of the evidence is that the brain doesn’t receive any sensory information about itself. We can’t feel the activation of the neural networks taking place. Brain surgeons can reportedly work with the patient awake; in fact, they often have to, in order to ensure they don’t damage some crucial aspect of cognition.

          F&M argue that this is one reason for the subjective/objective divide that leads to the hard problem of consciousness. Even if a neuroscientist is watching live scans of their own brain as they think thoughts (which I think has been done), they won’t feel the patterns they’re observing, unlike, say, watching an X-ray of your bicep while you raise your arm.


          1. I’ve heard that a lot – that brains don’t feel at all. So what’s a headache? o_O

            I just read Ian McEwan’s Saturday which is about the life of a fictional London neurosurgeon. McEwan spent two years doing research with a working neurosurgeon, and the result’s a great blend of story-telling and the kind of things we’re discussing just here. 🙂


    2. “So what’s a headache?”
      My understanding is that a headache comes from the pain receptors in the head and neck but outside of the brain.

      Thanks for the book recommendation! Sounds interesting. I’ll check it out.


      1. Yup, meninges, trigeminal nerve.
        Is time an object (if so, the brain ‘feels’ what’s going on in the suprachiasmatic nucleus and other time-keepers)? Is self an object, and is there awareness without at least nascent self-awareness, i.e. orientation?
        It all seems to come down to whether or not one thinks it useful, or even possible, to speak of a mind without contents.


        1. Well, we don’t sense time itself as a phenomenon, Keith, but infer it subjectively from other changing objects of phenomena (as we do gravity). As self-awareness references temporally (as beingness objects) and spatially (in detecting otherness objects), and as those object measures can be suspended in certain mental states, then self-awareness can be too – although critically, awareness (pure lucidity) doesn’t disappear along with it.

          I think it may be useful to consider the lucid mind absenting contents, as not to do so leaves us with the presumption that what we think of as consciousness (being with knowledge) is the sole domain of awareness – i.e. that awareness is nothing other than psychical representations (consciousness). This of course makes Consciousness Theorising a bit more manageable, because we can observe the connections being made and see what happens as or when connections are interrupted. That keeps us firmly in studying the functionality of consciousness and away from (dare I use the expression?) the Hard Problem. And I suppose we have to allow that there may be one?

          One question for you, if you will: Mike has advised me that the brain has no pain receptors, but may I ask, does it have movement receptors, or somesuch? Thanks!


        2. Thanks Keith. I couldn’t remember if the meninges themselves had sensory receptors, and a quick Google before my reply to Hariod didn’t immediately make it obvious.

          As I mentioned to Hariod above, I’m actually somewhat agnostic on a contentless mind. I don’t see any indications for it, but I also can’t see any neurological reason why it might not conceivably be possible. Given that the brain starts processing sensory information (albeit not conscious information) in the fetal stages, I’m not sure if there is any way for us to ever know, at least until we have a thorough and comprehensive understanding of the entire brain.


          1. No motion sensors in the brain.
            I have to disagree about the experience of time being a mere property of phenomena, or even perhaps, an epiphenomenon.
            We seem to have some direct experience of now, just as we have a direct experience of green – privately and without recourse to any theory of now or theory of green.
            I believe (at the risk of posthumously correcting someone much smarter than myself) that that is what Nietzsche meant to get at when he said that our thoughts come to us unbidden.
            We cannot reflectively access our motivations, as they present to us in the moment, as the moment.
            That doesn’t mean that they are inexplicable or that we can’t reflect upon our motives.
            It just means that one can’t say, “OK, now I’m going to direct my intention on X.”
            It is already directed – directly, if you will.
            I’m not sure what pure lucidity or a mind that’s not ‘about’ anything might be.
            End of rave.


  4. Interesting insights here! Especially this:

    “But pretty much any animal with eyes tends to trigger my intuition that there is some inner life there, something that is seeing and has some kind of intentionality, a worldview of some kind, even if it’s a limited one. This is a common intuition, which is why it’s not unusual for movies to show an opening eye to indicate that some thinking feeling thing is present.”

    I’ve always known that certain creatures, especially terrier mixes by the name of Geordie, have an inner life that can’t be denied except by those who want to be pedants. (The problem of other minds is always there, it will always be, but c’mon, that’s boring at this point right?) It never occurred to me that the eyes—the windows of the soul—might be tied in with all sorts of other characteristics such as mapping and sense of environment. An interesting thing to ponder.

    You also brought up the “what it is like to be an X” question in regards to different types of mental mapping. Geordie’s is very different from mine in that he hears and smells way better than I do. I rely on vision, which is terrible when we’re trying to get to the treat lady’s house and all the houses look identical to me. (Not kidding about this. I get confused even after I’ve been to someone’s house on multiple occasions. I have to consciously consider some salient feature such as a decoration in order to locate a house I’ve been to many times.) Geordie, on the other hand, goes directly to the treat lady’s house without any hesitancy. I could probably blindfold myself and he’d lead me there, except I’d risk getting hit by a car. The point is, I can obliquely understand his sense of navigating the world even though I don’t share it. I know he picks up on scent and I can make pretty good guesses about what sort of thing he’d be interested in. It’s really not all that complicated. I think it gets more complicated when we don’t know what motivations are operating, which is the case for a lot of creatures. And there’s always the assumption that life seeks more life, that motivations can be boiled down to survival, but we should be careful not to exclude other possibilities.

    “This raises an interesting question: how much of human experience can we dispense with and still coherently use the label “consciousness”?”

    I think we’re stuck with an anthropomorphic understanding of consciousness, if we want to retain the word’s flexibility (and I think we do). As you said, we have no problem identifying consciousness in living creatures…insofar as they resemble us, as a sort of continuum. I don’t see a way out of this anthropomorphism without giving the word “consciousness” a narrow definition that betrays a certain theoretical bias. Which is fine for some purposes, but the definition has a hard time transferring to other contexts; it seems highly contestable, shocking even. Plus there’s a danger of circularity. On the other hand, using a narrow definition in a stipulative way doesn’t seem problematic when you’re trying to do science, even if it’s a working definition. Same goes for philosophy, where consciousness gets discussed in a different way and scientific assumptions, as assumptions, would not go over well. And if you were to move this philosophical understanding of consciousness (likely a contentious and tenuous sort of understanding) into the sphere of everyday interactions, people would be right to find it tedious and beside the point. Consider the way people say, “We’re raising consciousness…” instead of “awareness.” Or in spiritual groups when people talk about consciousness, I’m pretty sure they mean something very different from some objective definition. In fact, they might not want that objectivity; they might not even want a clear definition. They’re using the word in a way that’s meant to be expansive and possibly thought-provoking.

    “Is the Google self-driving car conscious? It doesn’t have eyes exactly, but it does use LIDAR to model its environment, and its own interactions with that environment.”

    You could say that, but you’d have a lot of explaining to do. I don’t know that it would be worth it. 🙂


    1. Thanks Tina. I was hoping to see your insights on this series. I hope you’ll consider reading the rest of it when you have time. If not, you might want to consider checking out the last post as, along with this initial one, it bookends the series pretty well conceptually. https://selfawarepatterns.com/2016/09/21/the-range-of-conscious-systems-and-the-hard-problem/

      “The problem of other minds is always there, it will always be, but c’mon, that’s boring at this point right?”
      Well said. I felt the need to get this post off my chest, but I was fairly well convinced by F&M that all vertebrates are conscious, at least at a primal level, and probably all arthropods and cephalopods as well. (F&M themselves are cautious about insects, worrying about whether their brains are complex enough, but I think their behavior demonstrates a primal consciousness.)

      Geordie is actually a brainiac compared to the simplest conscious creatures. A lamprey’s conscious experience is at a far lower resolution and depth than his. Geordie, as a member of a social species, probably also has at least an incipient theory of mind, which there’s no evidence that I’m aware of that the lamprey has. A lamprey has a few million neurons, while Geordie has hundreds of millions. For comparison, we have 86 billion neurons in our brain.

      The smell thing is interesting. F&M speculate that it led to the rise of memory, seeing smell as being somewhat entangled with it. Smell has always connected directly to the telencephalon, the structure that later became our cerebrum. This is unique among the senses, and complex memories and learning seem to live in the cerebrum, indicating that the early telencephalon and memory may initially have evolved to recognize smells, what some have called a “smell brain”. Later (in evolution) the unique telencephalon functionality may have proven useful for handling higher order processing of all sensory information, leading to it becoming the center of consciousness.

      As you note, smell for us (and most primates) isn’t much of a thing anymore. But we may owe it for the development of memory. I’m not entirely sure myself how much I buy this particular proposition since some types of semantic memory seem to be sub-cortical, but it’s an interesting one.

      Interesting thoughts about definitions. In truth, a lot of scientists seem to just stay away from the concept of consciousness, preferring to focus on relating neural circuits to behavioral capabilities, although obviously F&M didn’t share that inhibition. I agree that many people don’t necessarily want an objective definition, but I think any attempt at one should pay attention to the intuitive way the concept gets used in our everyday language, since that reveals our intuitions about it. I’m skeptical of prospective definitions that jettison too many of those intuitions.

      What this book made me realize is that, although a lamprey’s modeling is orders of magnitude less sophisticated than ours, it still triggers our intuition of a fellow consciousness, which is what led to the speculation about self-driving cars. The cars model their environment and themselves as a guide to action. It wouldn’t be human-level consciousness, but it might well be lamprey level (or higher). Although as Steve pointed out on that final post, the car’s motivations may be too inorganic to ever really feel intuitively conscious to us.


      1. Sorry about not keeping up with your posts. I’ve been running behind on pretty much everything lately.

        It’s hard to imagine how our sense of smell could affect memory so much, but as you point out, we don’t rely on it much anymore. I tend to remember scents, but I can’t always place them. Rarely do they bring to mind any sort of memory, but it’s interesting to ponder what scents mean to animals. I wonder if their dreams include scents? What a trip that would be.

        I once wrote a post about AI with anthropomorphism as the crux of how we view others and construct ethical views, and how our views might play out with AI. I think with the self driving car, Steve’s right…it’s not organic, biological, so we are less likely to consider it conscious even if it does exhibit some of the same behaviors. We’d need a much higher degree of evidence, I would think.


        1. On not keeping up with posts, no worries at all. Just whenever you have time and / or the interest.

          On the self driving car, yeah, I’m actually starting to swing back to a position I held when I first started this blog, that part of what we intuitively regard as consciousness inescapably involves a system with similar instincts and motivations, something we can recognize at least some kind of common experience with.

          The problem with requiring evidence is: what kind of evidence would ever suffice to convince a skeptic that there is indeed a kind of consciousness there? That’s the problem with consciousness. Its very subjectivity makes it resistant to evidential investigation. All we can do is find evidence for behavior and attributes that may or may not trigger our intuition of another consciousness, but we can never prove that it’s actually there.


          1. I do have the interest to read your posts! I’ve been out of the blogosphere lately, and once you fall behind, it’s hard to catch up.

            Common experience seems to be the only way we can access consciousness in others. As you say, there’s no way to prove to a skeptic that consciousness even exists. To be honest, though, I’d like to see that skeptic in his or her daily life to see how consistent this skepticism really is. I doubt anyone would treat other human beings as if they questioned their consciousness. In other words, this strong skepticism seems perverse to hold as a philosophy, although it may be useful to think about, of course. Maybe once. 🙂

            Heidegger thought that we didn’t need to find evidence, that we, at our very foundation, were already with other conscious beings in the world. He says many things I disagree with, but I think he made a profound contribution here. It’s not as if we look someone up and down, make a little checklist of similarities to ourselves, and only then determine whether or not someone’s conscious. Still, the checklist—as I’ll call it for now—makes sense for those bizarre cases when we aren’t sure and for doing philosophy and thought experiments.

            For AI, we have a very different situation which calls for such thought experiments. The nature of AI makes us question the nature of consciousness and casts a shadow of doubt that wouldn’t be there in organic, biological creatures. This doubt doesn’t seem at all strange or extremely skeptical. So I think for AI, we would need perhaps stronger evidence—behavioral evidence—than we might require with biological creatures. I think it’s possible that we could enter a threshold where doubting AI consciousness would seem just as perverse as denying that a dog is conscious, but AI might have to demonstrate behavior in a very consistent way, perhaps in a way even more consistent than what we’d require of biological creatures of the same order of intelligence. Who knows where that threshold is. I’m just speculating that it would be higher, and maybe it would shift along with the way we use and invent technology.


          2. Thanks Tina. Again, no worries.

            I think Heidegger may have overlooked the possibility that our mental evaluation of another conscious system still involves a checklist, one that we just go through subconsciously. Like many cognitive functions (probably most), we aren’t consciously aware of all the machinations, just the result. The trick is teasing apart what that intuitive checklist actually is, and recognizing that it’s not set in stone.

            For instance, when I first started programming computers in the late 70s, I had a powerful sense that there was a consciousness of some type in the computer. That feeling faded as I learned just how a computer worked, and that its seeming agency was something that had to be carefully programmed. But as I’ve learned about neuroscience, genetics, and evolution, the distinction between biological and technological systems has come to seem more about sophistication and capacities than sharp breaks.

            For AI consciousness, I remember reading something several years ago that noted that the last people who would accept a machine as conscious would probably be the engineers and programmers who designed it. They’d have too much access to the inner workings to see actual agency, at least unless they perhaps had a neuroscience background.


          3. Heidegger definitely overlooked the possibility that whatever’s going on “under the hood,” so to speak, is really just such a checklist. Of course, he meant to overlook that possibility.

            I understand that for you the distinction between biological and technological might not be so sharp, since you’ve been thinking about what makes for consciousness for a long while and in great detail. As you say, programmers and engineers might be the last to accept AI consciousness. That makes sense. I think for the rest of us, there’d be a great deal of grey area, of vagueness and differences of opinion, just as there are differences of opinion in the way we ought to treat animals.


  5. I stumbled on your blog as I research topics like consciousness and spirituality while working on my next book, tentatively titled: Transformational Awakening. Thank you for many insightful comments and links which will have me following related tangents for quite some time.


  6. Mike,

    It’s quite difficult to find the time to read and consider all these interesting topics you post. High praise to you for keeping me coming back again and again.

    Allow me to broach this topic of consciousness from a different angle. From my earliest memories of having first heard the word consciousness as a late teenager, my first impression was one of not having any idea of what the book or person was talking about when the word consciousness was used. It was always in the context of some foreign (unknown) object with a feature such as ‘expanded’, ‘cosmic’, or as some primary agent that was above and beyond anything to do with ‘me’. Through my philosophical and religious studies, I began to see definitions that were mostly about ‘becoming’ conscious, ‘being’ conscious, etc. Some schools believe that consciousness is the ultimate truth, calling it pure consciousness. None of these definitions or beliefs have ever satisfied me or made much of a difference in my life. Now, I’m here reading about the ‘hard problem’ and the evolution of the brain, and the differences between human and animal brains and whether we can call any animal ‘conscious’.

    Well, I have to tell you that I am still not convinced that there is anything called consciousness apart from the brain activities that go on in all of us. Subjectivity, which most people would call ‘themselves’, ‘me’, is clearly brain function. It is impossible to reduce anything about your ‘self’ apart from what you think or imagine it to be. There is no organ or entity called self. In the same context, there is no organ called consciousness. It is an assumed ‘something’. To go further, we assume that we are alive. But who is alive? Nothing but the activity of the brain and the interpretation of its information. That interpretation is based on knowledge, and knowledge is relative to the input of cultural conditioning that we choose to assume is real and true. We are full of erroneous beliefs, and this is true of the very people doing neuro research and many of the conclusions they draw up in their theories. The whole process of wanting to know is stimulated in us by this conditioning. Can we really discover anything that is real by sifting through the past, which is the content of our mind? In fact, the idea of real itself may just be another erroneous belief passed down through millennia of cultural inheritance.

    Is all of this anything else but the activity of our brains sorting through all of its received impressions and simply coming up with a picture/model of what is? We are back to your simulation machine, Mike. Do we have to be stuck with this word consciousness which really doesn’t do it for me unless I want to give you an idea that you could possibly agree or disagree with? Are we not just mired in language and imagery pretending that we understand anything at all? This is not to stop you or anyone else from pursuing whatever might interest you and drawing any conclusion that you would like. But, I don’t see what any of this has to do with consciousness. It’s just a word. Thanks for indulging me.


    1. Jeff,
      I definitely agree that consciousness isn’t anything other and apart from what the brain does. And I’ve often wondered if it’s a coherent enough concept to warrant serious discussion, so I know where you’re at.

      For a long time, most scientists, including neuroscientists, did everything they could to avoid mentioning it. They might talk about how visual processing in the brain works, or how memories were stored, and many other subjects, but the overall topic of consciousness itself was judged too ambiguous a subject, and too loaded with spiritual baggage, to explore scientifically. This has changed, but a lot of scientists still carefully avoid discussing it unless they have to.

      What keeps me interested in it is, there is something different about how awake animal brains process information vs how other systems (current robots, storms, etc) do it. We intuitively sense the difference. And there is something different about the way I’m thinking about this reply vs the processing my brain is doing to keep my body at the right temperature, to keep it upright in the chair, or even to control the details of my finger movements on the keyboard.

      A big part of this is the primal motivations of the system (one philosopher quipped that consciousness is about how something reacts to being poked with a stick), the ability of that system to build models of the world and itself, and its capacity to consider alternative actions for meeting its goals.

      The more we can understand these distinctions, the better insight we may have to understanding the experiences of other animals, or getting insight into when a machine is reaching a point where we should consider it a fellow being. But I have to admit to just having a primal curiosity about what makes us, us.


      1. Mike,

        I understand the impulse that many of us have of taking something apart, like a radio, to discover how it is put together. But in the case of the brain, in particular, we cannot really take it apart, arrange the parts on the table, and connect the dots. The organism is a totality operating in tandem with many systems, constantly recording, imaging, and communicating on a level that is impossible for this observer to observe. We can only observe a fraction of what is going on. Thought looks at and discusses itself only in its reflective nature. It’s a mechanism that doesn’t seem capable of seeing beyond itself and what it knows and experiences. In most moments, we don’t even see this. We believe that through thinking, analysing, etc., we can arrive at some kind of moment of illumination about ourselves. But if there is no entity that is a self, what is thought supposed to discover? The only thing it can discover is how it is just a conditioned movement that creates a story with no real basis. It’s what dreams are made from, but we don’t believe that dreams are reality. Somehow, we believe that our waking state is real, yet it seems to be made of the same stuff that dreams are made from. We have divided our experience and created an experiencer, that little devil inside of us. Discovering what that little devil is or isn’t could be the beginning of some real wisdom.


        1. Jeff,
          “But, in the case of the brain, in particular, we cannot really take it apart, arrange the parts on the table, and connect the dots. ”
          Many find this disturbing, but in the case of non-human animals, that’s often roughly what scientists do, sometimes intentionally damaging a section of an animal’s brain, or implanting electrodes and stimulating various regions, or introducing types of neurotransmitters in various regions, all to see the effects. Of course, the limitation is that animals can’t self report their subjective experience, only the changes in behavior.

          But the information gained from animal research can be triangulated with human studies, where a combination of neural imaging and self report can help to close the gaps. Imaging isn’t without problems, but when its results coincide with animal studies, the certitude of the information is higher.

          On the self, I can see the illusion arguments. What does seem to exist though, is the brain’s representation of it. This isn’t a culturally contingent thing. The brain’s evolutionary wiring leads to body image maps, and understanding the body’s relationship to the overall environment revealed by distance senses, which even the simplest vertebrates seem able to do, is an incipient form of self-awareness. Of course, as a social species our theory of mind models give us a much more sophisticated conception of self. The self may be an illusion, but if so, it seems to be one we’re hardwired to buy into.


  7. Mike,

    Some computer troubles today. My last response didn’t show up here.

    You said: ‘Of course, as a social species our theory of mind models give us a much more sophisticated conception of self. The self may be an illusion, but if so, it seems to be one we’re hardwired to buy into.’

    I think the hardwiring is different than the social constructs we are taught from a very young age. We are indoctrinated into language and behavior in order to get along and adapt socially. Here is the problem in a nutshell. What we are indoctrinated into is conditioned and reinforced every moment, so we begin to believe this subjective information that is not from our innate wiring about who or what we are. Is it possible to emerge from this conditioning into a different point of view? This is the real issue for me. Language and behavioral conditioning/habits create and color our subjectivity. We spend our life trying to adapt, to be liked, thinking about our feelings, etc. Animals do not do this; although they can be trained to obey, they don’t mull or brood over how they feel about things. Plus, the myriad conflicting desires that we have vying for a front-row seat are not seen in animals. We are not born with these conflicts. They are developed since birth and given the flavor of our culture and its interests. Concepts/ideas are toyed with, and we live in a world of ideas without seeing how we give reality to this cycle of illusion. We can only deal with this illusion of subjectivity if we stop creating conceptual models about it. The hardwiring takes care of itself. No need to worry about your heart or blood or breath, or how any of the autonomic mechanisms work, or that which is hidden from our ordinary ‘awareness’. Our ordinary ‘mind’ is where to begin and focus, not on conceptual problems and models. Abstraction is not the way. I can’t stress this enough.


    1. Jeff,
      Sorry you had troubles. Hope it wasn’t WordPress. Usually when someone complains that their comment didn’t appear, it’s because the Spam folder ate it, but I didn’t see any sign of a comment from you there.

      Definitely, cultural indoctrination colors everything about our worldview. And it’s extremely difficult for us to step outside our cultural context. This topic always reminds me of the 19th century anthropologists, who probably thought they were exemplars of objective thinking, but were so embedded in the pervasive racist sentiment of that time that they couldn’t see how much it tainted their theories. It makes you wonder what taints people in the 22nd century will see in our theories.

      Still, modern psychologists and sociologists seem to be more aware of this problem and are making efforts to counteract it. They now recognize that college undergrads (traditionally the easiest subjects to access) are not representative of humanity overall, that they’re too WEIRD (western, educated, industrialized, rich, and democratic) to tell us about the mindset of people who aren’t. It’s led to a lot more cross-cultural studies which, although far from perfect, are making results more rigorous.

      “We spend our life trying to adapt, to be liked, thinking about our feelings, etc. Animals do not do this although they can be trained to obey, but they don’t mull or brood over how they feel about things.”

      Actually, other highly intelligent social animals also do these things, particularly other primates such as chimpanzees. They seem to crave group acceptance, hold grudges, get in long running feuds, and suffer many of the other ills of human society, even warfare in some species. Of course, their societies are not as complex as human ones, but they do exist. The differences between us and them are more in magnitude than sharp distinction.

      I’d also disagree that we’re not born with conflicts. All of us are born with both selfish and pro-social instincts and live in the tension between those instincts, with people often feeling that various points on the spectrum between complete selfishness and complete pro-social stances are the right and proper place. The political philosophies most aligned with complete selfishness are the anarchists. The ones aligned with complete pro-social stances are the fascists and related philosophies. Most of us exist at various compromise points in between.

      “Our ordinary ‘mind’ is where to begin and focus, not on conceptual problems and models.”

      The problem is that our “ordinary mind” is a data modeling machine. Most of the details of this modeling aren’t consciously accessible; we only perceive the results, but our very perception of those results is a model, an image map isomorphic with the outside world (hopefully).

      Not quite sure what you mean when you say abstraction is not the way. I wonder if you wouldn’t mind elaborating.


  8. Mike,

    What I mean by your ordinary mind is what you are thinking about, perceiving, cognizing in the present moment. This is accessible if you pay attention and relax with no particular focus or agenda to fulfill. You are simply noticing your thoughts, feelings, as well as sounds, bodily sensations, etc. This is not the same as engaging in fantasizing or analyzing what you are noticing. The perceptions are also not engaged, they come and go. Conclusions are just more thoughts, they are devoid of meaning. This kind of activity opens into something very different than looking for answers or results to problems. This simple cognizance allows access to the very nature of your perceptions and what you call your self. Without this kind of view, this primal awareness, you will always be looking for an answer to an imaginary problem and never breaking the cycle of this illusion. It is not enough to deduce that there is no entity called ‘you’ through analysis or intellect. You come to the point where nothing substantial exists, no solidity in anything. This cognition allows you to live in a way without allowing the measuring of any experience you might have to seem real and solid. This measuring is how we build our models which are just images that are like ghosts, empty, not to be seen as anything real just as you don’t think your dreams are real when you awaken. All conclusions which seemed so firm and real take a back seat to this cognition. It is a simple wakefulness that is present. An abstraction would be to draw a concept out of this wakefulness which would not be the same thing as this wakefulness, and think it has some kind of substance, meaning. I hope I didn’t get too abstract, Mike. But, I have faith that you can decipher all of this as your brain seems much sharper than mine.


    1. Thanks Jeff.

      That sounds similar to the mental state Hariod often describes as an objectless awareness, which itself seems like the emptiness that Buddhists reportedly meditate into. This is obviously some brain state (or perhaps range of states), but I’m not sure I can speak to it in any knowledgeable manner.

      Interestingly, I’m currently reading a book by Jaak Panksepp about primal emotions. His first and most important primal emotion is one he calls SEEKING, but that I interpret as raw motivation, or propensity for action. The state you’re describing seems like one where that has been quieted, although the very endeavor to achieve that state is itself a SEEKING impulse.


      1. Mike,
        It seems to me that the only way to understand or discover what our own experience is, is to be it. Remaining an observer, an interpreter, an outside agency, gives rise to more and more theories. To study the works of researchers, philosophers, or mystics is always to remain apart from the totality of life if we don’t approach our own experience directly. At first, it may seem daunting, with all the cliches and conclusions we’ve read about constantly distracting from the actuality of cognizing what is. What may start off as seeking may lead to some very unexpected insights into the nature of one’s self. So there is no need to escape the impulse to seek. What is important is to allow that impulse to be, without attaching some kind of meaning to it. When we do this with all experience, the brain begins to ‘see’ things differently. Nothing really changes but your point of view, which is not attached to any state or idea. This has to be lived by you in order to be understood. Otherwise, you can only grasp an intellectual image of what I’m talking about. This is why contemplation of your own experience is necessary, not believing and thinking about the results of researchers and philosophers. You ARE creation, and the results of it. Your own BEING is a key to all this. It goes right to the heart of the matter. And, yes, Hariod is on to this too, but he will have his own way of describing it. Some Buddhist teachings come closest to describing the heart of the matter, but so do others. You are not what you think or what you measure, and yet you are not other than what you think and what you measure. When this clicks, the lights get turned on. 🙂


        1. Jeff,
          I have nothing against introspectively examining our own experiences. I do it all the time. But I think the insights from philosophy, psychology, and neuroscience, as well as my own personal insights from years as a programmer, all make that introspection more productive.

          But one of those insights is that introspection, in and of itself, shouldn’t receive blanket trust. It’s a part of the puzzle, but not the whole thing. I think we have to be open to information from every direction that bears fruit.

          I’m also not a big believer that we should privilege our primal emotions and desires, that there’s something inherently virtuous about them. Many of those impulses are evolutionarily fine tuned adaptive responses, and should indeed be heeded. But many others can lead us astray in a modern world so different from the hunter-gatherer environment we evolved in.

          Reason is ultimately a tool of emotion, but reason’s role is to enable us to decide which of our emotional impulses, which often can be in conflict with each other, we should indulge in, and which we should resist.

          Of course, our ability to do that is at its best when we’re calm, and it pays to have strategies to recognize when we’re not calm, and to try to reach a calmer state before making important decisions. In that sense, I have nothing against meditation or similar practices that often have that effect.

