The range of conscious systems and the hard problem

This is the fifth and final post in a series inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience‘.  The previous posts were:

In the first post of this series, I noted that F&M (Feinberg and Mallatt) were not attempting to explain human level consciousness, and that bears repeating.  When talking about animal consciousness, we have to be careful not to project the full breadth of human experience onto other animals.

While a zebrafish, with its millions of neurons, has sensory consciousness, its conscious experience is not in the same league as that of a dog, with 160 million neurons in its cerebrum alone, much less that of humans, with our average of 21 billion cerebral neurons.

Space Invaders, c. 1978. Source: Wikipedia

One analogy that might illustrate the differences here is to compare 1970s era video games, running on systems with a few kilobytes of memory, to video games in the 1990s running on megabytes of memory, and then again to modern video games with gigabytes to work with.  In all cases, you experience a game, but the 1970s variety was blocky dots interacting with each other (think Pong or Space Invaders), the 1990s versions were cartoons (early versions of Mortal Kombat), and the modern versions are like being immersed in a live action movie (the latest versions of Call of Duty).

Call of Duty, c. 2014. Source: Wikipedia

That’s not to say the zebrafish perceives its experience as low resolution, since even its ability to perceive is itself low resolution.  It perceives reality only in the way it can, and isn’t aware of the detail and meaning it misses.

All that said, the zebrafish and many other species do model their environment (exteroceptive awareness), their internal body states (interoceptive awareness), and their reflexive reactions to what’s in the models (affective awareness).  These models give them an inner world, which, to the degree it’s effective, enables them to survive in their outer world.
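
To make that idea a bit more concrete, here’s a purely illustrative toy sketch in Python.  Nothing in it comes from F&M: the class, fields, and thresholds are all invented, and a real nervous system obviously works nothing like a dictionary.  The point is only to show the three kinds of models and how an “inner world” built from them could guide action.

```python
# Toy sketch only: invented names and thresholds, not anything from F&M.
class InnerWorld:
    def __init__(self):
        self.exteroceptive = {}   # model of the outside environment
        self.interoceptive = {}   # model of internal body states
        self.affective = {}       # valenced reactions to what's in the other two models

    def update(self, senses, body_signals):
        """Build and refresh the models from incoming signals."""
        self.exteroceptive.update(senses)
        self.interoceptive.update(body_signals)
        # Affects are reactions to the models, not to the raw stimuli themselves.
        if self.exteroceptive.get("predator_nearby"):
            self.affective["fear"] = 1.0
        if self.interoceptive.get("energy", 1.0) < 0.3:
            self.affective["hunger"] = 1.0

    def choose_action(self):
        """The inner world serves as a guide to acting in the outer world."""
        fear = self.affective.get("fear", 0.0)
        hunger = self.affective.get("hunger", 0.0)
        if fear > hunger:
            return "flee"
        if hunger > 0:
            return "forage"
        return "rest"

fish = InnerWorld()
fish.update({"predator_nearby": False}, {"energy": 0.2})
print(fish.choose_action())  # "forage"
```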

A lot of human mental processing takes place subconsciously, which might tempt us to wonder how conscious any of the zebrafish’s processing really is.  But when human consciousness is compromised by injury or disease, we become severely incapacitated, unable to navigate the world or take care of ourselves in any sustained manner.  The zebrafish and related species can do those things, which indicates that consciousness is crucial and that organisms like zebrafish and lampreys have some form of it.

Considering all this has also made me realize that what we call self awareness isn’t an either-or thing, either fully present or entirely absent.  Modeling the environment seems pointless if you don’t have at least a rudimentary representation of your own physical existence and its relation to that environment.  Add in awareness of internal body states and emotional reactions, and at least incipient self awareness seems like an integral aspect of consciousness, even the most primitive kind.

(When I first started this blog, I was open to the possibility that self awareness was something only a few species had, mostly due to the results of the mirror test.  But I now think the mirror test is more about intelligence than self awareness, measuring an animal’s ability to understand that it’s seeing itself in the mirror.)

All of which seems to indicate that many of the differences in consciousness between us and species such as lampreys are matters of degree rather than sharp distinctions.  Of course, the difference between the earliest conscious creatures and pre-conscious ones is also not a sharp one.  There was likely never a first conscious creature, just increasingly sophisticated senses and reflexes, gradually morphing into model driven actions, until there were creatures we’d consider to have primitive consciousness.

This lack of a sharp break bothers many people, who want consciousness to be something objectively fundamental to reality.  Some solve this dilemma with panpsychism, the view that everything in the universe has consciousness, with animals just having it to a far greater degree than plants, rocks, or protons.

Others conclude that consciousness is an illusion, a mistaken concept that needs to go the way of biological vitalism.  Best not to mention it, but instead to focus on the information processing necessary to produce certain behaviors.  Many scientists seem to take this approach in their professional papers.

But I’m interested in the differences between systems we intuitively see as conscious and those we don’t.  Concluding that they’re all conscious, or that none of them are, doesn’t seem like progress.  I think the most productive approach is to regard consciousness as a suite of information processing functions.  This does mean there’s an unavoidable aspect of interpretation as to which systems have these functions.  But that type of difficulty already exists for many other categories, such as the distinctions between life and non-life (see viruses).

While F&M weren’t interested in tackling human consciousness, they were interested in addressing the hard problem of consciousness.  Why does it feel “like something” to be certain kinds of systems?  Why is all this information processing accompanied by experience?

I think making any progress on this question requires that we be willing to ask a closely related question: what are feelings?  What exactly is experience?

The most plausible answer is that experience is the process of building, updating, and accessing these models.  If we accept that answer, then the hard problem question becomes: why does this modeling happen?  The second post in this series discussed an evolutionary answer.

This makes sense when you consider the broader way we use words like “experience” to mean having had extensive sensory access to a topic in order to achieve an expert understanding of it; in other words, to build superior internal models of it.

I can’t say I’m optimistic that those troubled by the hard problem will accept this unpacking of the word “experience”.  The reason is that experience is subjectively irreducible.  We can’t experience the mechanics of how we experience, just the result, so for many, the idea that this is what experience is simply won’t ring true.

The flip side of the subjective irreducibility of experience is that an observer of a system can never directly access that system’s subjective state, can never truly know its internal experience or feelings.  We can never know what it’s like to be a bat, no matter how much we learn about its nervous system.

While F&M acknowledge that this subjective-objective divide can’t be closed, they express hope that it can be bridged.  I fear the best that can be done with it is to clarify it, but maybe that’s what they mean by “bridged”.  Those who regard the divide as a problem will likely continue to do so.  Myself, I’ve always regarded the divide as a very profound fact, but not an obstacle to an objective understanding of consciousness.

In conclusion, F&M’s broader evolutionary approach has woken me from my anthropocentric slumber, changing my views on consciousness in two major ways.  First, it’s not enough for a system to model itself for us to consider it conscious; it must also model its environment and the relation between the two, in essence build an inner world as a guide to its actions.  Second, that modeling can be orders of magnitude less sophisticated than what humans do and still trigger our intuition of a fellow conscious being.

Which seems to lower the bar for achieving minimal consciousness in a technological system.  Unless we find a compelling reason to narrow our definition of consciousness, it seems plausible to consider that some autonomous robotic systems have a primitive form of it, albeit without biological motivations.  Self driving cars are the obvious example, systems that build models of the environment as a guide to their actions.
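
For the software-minded, here’s a minimal, hypothetical sketch of that kind of perceive-model-act loop.  It isn’t how any real autonomous vehicle works; the function, fields, and thresholds are all made up.  It only illustrates the idea of acting on an internal model of the environment rather than on raw sensory input.

```python
# Hypothetical sketch of a model-guided action loop; not a real driving stack.
def drive_step(world_model, sensor_frame, destination):
    """One perceive-model-act cycle."""
    # Update the internal model of the environment from the latest sensor frame.
    world_model["obstacle_distances"] = sensor_frame.get("obstacle_distances", [])
    world_model["position"] = sensor_frame.get("position")

    # Decisions are made against the model, not against the raw sensor stream.
    if any(d < 5.0 for d in world_model["obstacle_distances"]):
        return "brake"
    if world_model["position"] == destination:
        return "stop"
    return "proceed"

model = {}
print(drive_step(model, {"obstacle_distances": [3.2], "position": (0, 0)}, (10, 10)))  # "brake"
```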

Unless of course I’m overlooking something?

40 thoughts on “The range of conscious systems and the hard problem”

  1. I think you need to re-examine some of the logic at play, specifically with regard to “A lot of human mental processing takes place subconsciously, which might tempt us to wonder how conscious any of the zebrafish’s processing really is. But when human consciousness is compromised by injury or disease, we become severely incapacitated, unable to navigate the world or take care of ourselves in any sustained manner. The zebrafish and related species can do those things, which indicates that consciousness is crucial and that organisms like zebrafish and lampreys have some form of it.”

    When a human being loses “consciousness” they lose awareness of the external world, conscious or not. When a human being is trying to hit a 95 mph fastball, they are operating subconsciously on information transmitted through the senses. Acting consciously or subconsciously with the external world requires sensory awareness of what is going on. When that awareness is lost and we say “someone lost consciousness” they lost a lot more than consciousness, they lost sensory input with the world outside. I do not recommend using a turn of speech as an argument that a zebrafish possesses consciousness.

    1. Steve, I was referring to neurological cases where people lose aspects of their consciousness, such as hemispatial neglect or one of the agnosia conditions, not situations where someone is knocked out completely. If you still think there’s a turn of speech issue here, maybe you could elaborate?

    2. ‘When that awareness is lost and we say “someone lost consciousness” they lost a lot more than consciousness, they lost sensory input with the world outside.’ – Are you sure, Steve? If being asleep is my having lost consciousness, then why do I wake up when the alarm clock rings?

  2. I think a brain is absolutely essential for consciousness to manifest itself. I also think consciousness and self awareness are not one and the same! Can you be self aware without a consciousness? Can you have a consciousness without being self aware? While not definitive, it has been implied that unfeeling Psychopaths and Sociopaths have no conscious but they most certainly are self aware! So does self awareness originate in the brain? Sounds like a silly question, right? I mean, how can one be self aware without a brain? I would argue all creatures with brains have some level of consciousness, but very few, and some would argue only the human animal, are truly self aware! So then the question is, if self awareness originates in the brain and relates to or is a function of consciousness, why aren’t all creatures self aware? The hard problem won’t be solved until we think quantumly! I think quantum consciousness or quantum self awareness is indeed the answer to all things related to the human brain! In fact I think the human brain might be a quantum computer! I believe, mystically if you will, that perhaps Biocentrism has a measure of truth, and perhaps a large measure of truth! Here is the mystical part, and go ahead and laugh, but if one can propose a multiverse of many earths and many copies of us, I can propose my nonsensical hypothesis too! I think when our organisms die, our bodies, a process of transferring information between dimensions occurs and we simply wake up as ourselves in whatever present we are living in in that dimension.

    1. Hi Matt,
      On psychopaths and sociopaths, I think you are confusing consciousness with a conscience. It’s a conscience, the sense of moral right and wrong, that they are said not to possess, although I think it’s more objective to say they don’t have a sense of empathy. I haven’t seen anyone argue that they don’t have consciousness.

      On self awareness, I suppose it depends on how you define that term. If you define it narrowly enough to only include the precise type that humans possess, then the argument that only humans are self aware becomes somewhat tautological. But the difference between human self awareness and chimpanzee, elephant, dolphin, or whale self awareness isn’t very wide. And as I argued in the post, even the simplest conscious creatures have to have a notion that there is them and there is the rest of the environment. Otherwise knowledge of the environment seems pointless.

      I’m actually not a strong proponent of multiverse theories. In truth, I’m skeptical of most of them. The most plausible to me are bubble universes that come as a direct consequence of eternal inflation, but even that is dependent on a theory that is still in the hypothesis stage.

      On mystical theories, you’re certainly free to put them forward. However I’m a skeptic, which means that I need either evidence or compelling logic to accept a proposition, and I can’t say that I see any for those kinds of views. I’m prepared to change my mind on evidence vetted by scientific experts, but not until then.

      1. Yes, you’re right, I did confuse the serial killer issue, but hey, what the heck, I’m just a layman on this stuff. Also it’s nice to see someone actually skeptical of some scientific theories like the multiverse, as well as being skeptical of mystics. But I would argue against your conclusion, or what I think is your conclusion, that all animals are self aware on some level. I don’t think science bears that out. I also don’t think science proves animals like elephants, dolphins and so on are even close to human level consciousness or self awareness. Human beings alone occupy the top niche of consciousness and self awareness. Finally there’s Biocentrism, a legitimate scientific hypothesis that has about as much evidence to support it as the multiverse theory and an equally unlikely chance of ever being proven true. There are limits to what science can know. I know science hates this. I think the closest science will ever come to a theory of everything is when they finally discover that consciousness and self awareness, however they are related, are quantum events existing on quantum levels, and whatever else that might entail. I think someday folks will get just enough evidence not to laugh at the Biocentrism Hypothesis.

  3. I like your definition of experience as the process of building, updating and accessing models of the self and the environment. However if that leads to the conclusion that a self-driving car is conscious, then I fear that some part of the logic is wrong. To me, a spider sitting on the dashboard of a self-driving car feels intuitively more sentient than the car itself.

    Why is that? One reason is that the car has no goals. It remains in a completely passive state until a human operator commands it to drive to a specific destination. When it arrives at that destination, it reverts once more to a state of indefinite waiting. Contrast this with the spider, which has a rich internal life, with self-guided motivations, objectives, and perhaps even feelings.

    1. Excellent point. That’s largely what I meant by the “albeit without biological motivations” part, but the post was already too long to expand on it.

      The car does have goals, it does have an agenda, but its agenda is completely subsumed into ours. The spider’s agenda isn’t. It has its own selfish genes motivating its own actions. And its motivations are ones that can arouse sympathy in us since we can see the similarities with our own. (Of course, the spider is far enough from us genetically that most of us will quickly destroy it if it crosses our own agenda in any way.)

      But not the car’s. We see little commonality, if any, with its motivations. Indeed, we’d be upset if it had its own agenda. It would make us want a new car. We want its agenda to be taking us to our destination while keeping us safe. We also want it to avoid damage and have a long life, but only within the framework of its service to us. It won’t care when its service to us ends.

      It may be that our intuition of consciousness can’t be triggered across that gulf, at least not on a sustained basis. (On a shorter basis, when testing robotic land mine clearers, military personnel ended the tests early because letting the little robots continue after they had legs blown off seemed cruel.)

    2. So, the car is only conscious while it is driving towards the entered destination. Outside of that time, it is unconscious. I thought that was a given already. How does that diminish its being conscious while driving though?

      There is the part where the car was designed to be conscious, while the spider evolved to be conscious, which makes the spider’s case more interesting and worthy of notice. In some ways the spider’s sensory input is also broader and more general, including temperatures and smells and vibrations and what not. It does not compare well in terms of location awareness and velocities and distances, though… I guess it is fair to say the two are different and non-overlapping enough that it’s hard to objectively grade them. Future man-made consciousness might resolve that though.

  4. Nice series. I agree with your general approach, so would probably say that.

    I think there is a significant gap before human self awareness. That is not to deny that animals make extensive models of their world and their selves and behaviors, which is similar to much of human awareness. Speaking of robotic cars, they may continue to lack a human type of awareness and self representation because they do not have the more animal-like bodily centered representations and emotional drives. But who knows as we continue to ramp them up.

    Along those lines, the next interesting discussion for me is how language allows human world/self models to complicate and multiply. This allows our models to consistently or quickly represent “John in NY in the US on Earth, married to Jim . . . .” As well as model ourselves on a time scale, such as “I wish to go fishing tomorrow” and “will eventually die.” Labeling and explicit categorization may be necessary for such widespread models. I think that leads to a type of self awareness that far outstrips the awareness of dogs, early rudimentary-languaged-hominids, and google cars. Human experience in that way is more globally aware and more self aware. Crudely, the dog has awareness of the wanting of a bone; the chimp is aware that he has to placate this other chimp to fulfill his own food desire; and humans are aware that “I paid $10.50 for a tasty burrito yesterday and will probably do similarly again in a week”.

    But, nonetheless, sensory and bodily representations are integral parts to our awakening, and in their own right are a type of awareness.

    1. Thanks Lyndon. Glad you enjoyed the series.

      There are a lot of people who think that language is integral to consciousness, and therefore that animals aren’t conscious. I’m not convinced that it is the central defining feature of consciousness. But I think what’s behind language, symbolic thinking, is a central feature of what’s unique about human intelligence.

      Our ability to imagine, say, another continent without ever experiencing it first hand, or black holes, galaxies, or other universes, largely comes from having the ability to think symbolically. Language is just the most obvious manifestation of that ability. It seems to be the one feature that is unique to humanity, with neanderthals being the only other species that might have had it, and calling them a completely separate species seems less and less justified as we learn more.

      One thing I’ve wondered about in all this is the distinction between consciousness and intelligence. Clearly having more intelligence enhances consciousness, but only for certain types of systems. A data warehousing system can have a high degree of intelligence but never do the things we intuitively think of as composing consciousness. And along with Steve’s point, it might be that without creature type motivations, no system will seem conscious to us in any sustained fashion.

  5. I had started to get into the symbolism versus language issue but cut it out for brevity, and because I am not sure I understand all that it entails. We seem to have better models (awareness) of the world because of our scientific theories, say seeing a diagram of an object. However, it is difficult to imagine our complex scientific understanding and symbolism today without language. But we can imagine early humans drawing crude pictures and comprehending such without more complex language arising, and that these kinds of symbolic structures aided their models and awareness of the world. It seems unlikely that rudimentary symbolic use would have given rise to our modern consciousness without language arising and allowing for even more complex symbolic use. Speaking of that, we could probably form an empty historical theory about the co-arising of greater symbolism along with the arising of language, or having the one lead to the other.

    I do not necessarily mean to deny consciousness to animals. There may however be some important divide between being aware of (or modelling) the world, and being aware that you are aware of the world. Trying to parse what exactly consciousness in and of itself is or what historically we have meant by the concept is a tangle.

    I agree with your intelligence comments.

    1. On animal consciousness, definitely. That was the point I tried to make at the beginning of the post. There are substantial differences between the simplest conscious creatures and the simplest mammals such as a mouse, and between them and intelligent species like great apes, dolphins, and elephants. There’s a difference between us and chimps, but it isn’t an order of magnitude difference.

      All of these species feel and perceive, they all model themselves and their environment, but I think those higher up on the intelligence scale have models that are deeper and broader. A human has a far deeper appreciation of their existence and experience than a mouse does.

  6. Nice final post on the F&M book Mike, and I get a sense that these ideas will be considered a good bit more in your future posts. Sounds good to me! In some ways I have competing ideas however.

    For example, regarding consciousness as a continuum of information processing capacity, I see that their model didn’t bring you hope that a clear distinction is warranted. From the model which I’ve developed however, this line can essentially be placed at “affective sensory consciousness,” or what I consider to be the punishment/reward which drives conscious function. Then presumably a species without “affect” could still have mind, though only in a non-conscious sense. Perhaps modern ants function in this “robotic” manner. Then lower still, I consider plants and microbes to be purely “mechanical,” or to have no central processing “mind” at all. (From this model I suspect that the vast majority of the human functions “mechanically,” then the vast majority of its mind functions “non-consciously,” while “conscious” existence is all that we can actually perceive.)

    As for claims that F&M have somewhat addressed the hard problem of consciousness, in this regard you seem to have essentially said “whatever.” I’d second that. We surely have far more important (and attainable) things to straighten out.

    I’ll be working my way back through your previous posts to potentially get a sense of how I might better interest you and your readers with my ideas. This does seem to be a wonderful environment!

    1. Thanks Eric!

      On having a clear break, I think if you talked with Feinberg and Mallatt, they would say that they do see a clear break, with the rise of image maps. And I can see the distinction that you’re making. (More comments on it below.) The problem is that both criteria are a certain category of functionality. And whether any specific case of functionality makes the grade, particularly cases on the border of our criteria, will always be a matter of interpretation and debate. This seems like an unavoidable issue for any functionalist understanding of the mind.

      When talking about affects, I think we have to be careful to make a distinction between an affect, perception of a reaction, and the reaction itself. Non-conscious organisms have autonomous reactions. Event A triggers action A, event B triggers action B, event C triggers action C, etc. But what led us to evolve a stage where we were aware of the reaction (an affect) that we may or may not take action on?

      I think the sensory models are what led to it. With the rise of the models, we started reacting to the model built from perceiving the event rather than the event itself. As the models became more information dense, a situation developed where a model might have information in it that implied event A, B, and C at the same time, leading to reactions A, B, and C concurrently, except that some of the reactions may have been contradictory, forcing the need for trade-off considerations, an extra step between event stimulus and action. This trade-off process required tight coordination between the sensory model systems and the affective systems, which is why we experience it all as a unified mental experience.
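
      To picture the contrast, here’s a toy illustration.  It’s my own framing rather than F&M’s, and the events, weights, and function names are all invented; it only shows the difference between a one-to-one reflexive mapping and a model-based arbitration over conflicting urges.

```python
# Toy contrast between reflexive reactions and model-based trade-offs.
# Invented events and weights, purely for illustration.
REFLEXES = {
    "looming_shadow": ("flee", 0.9),
    "food_scent": ("approach", 0.6),
    "mate_signal": ("approach", 0.5),
}

def react_reflexively(event):
    # Non-conscious style: one event, one programmed response.
    return REFLEXES[event][0]

def react_via_model(model_contents):
    # Model-based style: the model may imply several events at once, so the
    # conflicting action tendencies have to be weighed against each other.
    urges = [REFLEXES[e] for e in model_contents if e in REFLEXES]
    if not urges:
        return "do_nothing"
    action, _ = max(urges, key=lambda pair: pair[1])
    return action

print(react_reflexively("food_scent"))                    # "approach"
print(react_via_model(["food_scent", "looming_shadow"]))  # "flee" wins the trade-off
```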

      One thing this is making me wonder is, is the difference between conscious actions and unconscious ones simply that the conscious ones require that trade-off processing? I’m not sure. We can zone out when driving to work, which surely requires trade-off processing at some level. But if any part of the driving process requires significant trade-off considerations, such as whether to take an alternate route when a highway is jammed, it seems like I briefly become conscious during that stage. I haven’t thought this out carefully; this is just me thinking out loud at this stage.

      Anyway, I’d be delighted to discuss your views on this or any post.

      1. Mike,

        I’ve been happily thinking about your response all day. I see that you have some very coherent models, which should greatly aid your comprehension of mine, as well as the converse. I’d rather not get too technical for the moment (since we are starting from the middle), but I will say some things about your “driving” scenario.

        Let’s imagine that modern ants have no consciousness (even though I suspect that they do). Thus here inputs would go through a processor to produce various associated outputs, essentially as our computers function. Then let’s imagine non-conscious minds which become so advanced, that distinctly different “conscious” minds are created within them that thus feature their own modes of input, processing, and output. Furthermore let’s say that the human conscious processor happens to be relatively puny, thus forcing us to practice a great deal in order to learn to drive a car. Here I imagine the non-conscious processor aiding the conscious processor (graphically represented with a “learned line”). Thus experienced drivers seem able to somewhat zone out and let “a computer” work the pedals and such, while obviously leaving the conscious mind a role as well.

        To go just a bit further, I suspect that “autonomy” was actually why consciousness evolved. This seems to have forced life to figure things out for itself, given that phenomenal experience gave subjects “skin in the game,” or punishment/reward. Otherwise I suspect that evolution would have needed to directly program us to function as we do, such as how we drive cars and read blogs.

        I can’t wait to finally start going through your site for the weekend!

        1. Thanks Eric. As always I’m grateful for your kind words.

          Ant consciousness is an interesting question. They do seem to have awareness of the outside world and each other, and the ability to learn to some degree, although that varies between species. I agree with you that they probably are conscious, and given how tiny their brains are (with 250,000 neurons according to Wikipedia), it really demonstrates that primary consciousness is not at all about processing power or capacity. Our smartphones passed the capacity of insect brains long ago. The fact that our smartphones don’t trigger our intuition of consciousness seems to be because we haven’t programmed them to behave like a living thing.

          Occasionally people speculate as to whether ant colonies overall are conscious. This is interesting because it really challenges our intuition of consciousness. It’s not clear to me that an ant colony models itself and its environment, although a case could be made that its scouts are distance senses and their communication back to the colony is modeling. Except that the communication appears to be in the form of pheromone trails. It’s easy for me to see the colony as a super-organism, but not necessarily as a conscious one.

          On the driving thing, I totally agree and see our pondering as compatible. I think when we learn something, like driving to work, it becomes possible for our reactions to it to be mentally reflexive, and thus subconscious. It’s only when those reflexes won’t suffice that we need to bring consciousness to bear on it, away from whatever it was modeling while the reflexive systems worked.

          Just to clarify, when I said “autonomous” in relation to reflexes above, I meant automatic and independent of consciousness. But what I currently think consciousness evolved for is to maximize the amount of information that organisms can react to, in essence to increase the causal information involved in decisions, and to maximize the scope of the environment, both spatially and temporally, that can be taken into account.

          At least those are my current thoughts on it. They might be different tomorrow 🙂

          BTW, when looking at my past posts, some of them, particularly the older ones, may not represent my current thinking. In particular, I used to be quite taken with Michael Graziano’s attention schema theory of consciousness. I haven’t written it off entirely, but it doesn’t hold the appeal to me it once did. Reading Damasio weakened its hold on me, but F&M really made me decide that it’s not the pivotal insight I thought it was.

          Anyway, looking forward to any thoughts you might have.

          1. Mike,

            I would have been quite displeased if you’d thought that your beliefs hadn’t evolved over your time here! I made it back to your 5/24/15 “I was wrong” post, where you essentially stated that we must always question ourselves. Of course this further validates your “Unless I’ve missed something?” catch phrase. For the moment I have little cause to delve into your 646 previous posts — I’ve gotten everything that I wanted.

            Furthermore I’m quite impressed with the caliber of your readership! Where’s the bickering and bitchiness in that lot? Are they human? For some reason I failed to notice a normal drive to place theory of mind victory over rational discourse, or even the opposite — passive parsimony. It will be an honor to get to know these people better.

            As for me, I should disclose that I’ve developed various radical beliefs over the adult half of my life. They’re founded essentially from your current position, I think, though with an added layer of models as well. We should have plenty of time to get into that soon enough however.

    2. Thanks Eric. I’m actually glad you didn’t stray into the 2014 archives. I used to clog up the blog with a lot of link sharing, something I now keep contained to the twitter feed. I try to keep the more substantive stuff available in the labeled categories.

      One of the things I was pleasantly surprised to discover when I started blogging is the incredibly thoughtful WordPress blogging community. The discourse from them is far above what I used to encounter in a lot of other online communities. (Or what I still occasionally encounter when I wander over to the big media sites.) There are occasionally sharp disagreements, but I’ve found outright trolling to be very rare.

      Radical beliefs are welcome here. There’s a mathematical Platonist that occasionally visits, and a few mystics, although most of my readership are science or philosophy oriented. They’re generally a thoughtful bunch for anyone who can describe the reasons for their outlook.

      Anyway, happy to have you join the discussions!

      1. Well Mike, I’ve attended WordPress sites almost exclusively since January 2014, but still consider your site special. Perhaps I haven’t ventured around enough, but will hopefully find similar environments at the sites of others here. (Mine’s not so much “blog,” but rather “manifesto.”) As things stand I’ve put in a good bit of time at Conscious Entities, Sciencia Salon, The Electric Agora, and Plato’s Footnotes. Mystics? Yea they can be fun. As for Platonism, I remain quite indebted to one MUH Platonist who tutored me over several months in a time of need. Nevertheless when I say that I’m “radical,” also know that my ideas do seem to essentially correspond with yours, though from that point I believe that I’ve developed something quite revolutionary. If valid this suggests that our modern mental and behavioral sciences remain essentially as physics was before the rise of Sir Isaac Newton. Unfortunately for me, invested professionals tend to regard this notion with disrespect. Are they pacified once it’s realized that my ideas can be quite difficult to counter? While logic might suggest this, in practice I find that that’s the point where things tend to get extra sour!

        Even if this does happen to be a great community however, another issue is that my general commentary here should only provide bits and pieces of the full picture that I’d like to display. Therefore I also seek private discussions with interested people. I need partners to help develop my ideas wherever they are weak, as well as to promote them wherever they are strong. It is my hope that you and/or others here will become interested enough to help. We shall see…

        1. Thanks Eric.

          Looking forward to learning more about your views. I read the “Cocktail Party Version” on your site. It sounds like a morality based on evolutionary instincts. (Although I may be utterly misunderstanding it.)

          Have you read Jonathan Haidt’s ‘The Righteous Mind’? He approaches morality from a foundations aspect, foundations grounded in evolutionary instincts, although the link between those instincts and actual moral positions is like the link between taste buds and culinary preferences, in other words, heavily influenced by culture.

          1. Thanks for taking a look Mike, though I must apologise for continuing to present material which was written when I had very little understanding of the perspectives and terms used in the academy. (Actually so far I’ve only improved a bit in this regard.) I’m sure that your Jonathan Haidt speculation was a reasonable guess, though no, my project is not similar. (In the past I haven’t had much incentive to improve my site, given that few have bothered with it, though your reading should provide sufficient motivation.) In short, I believe that science must theorize the parameters of good/bad existence that’s common to all that harbors it. I believe that such an acknowledged principle would then be used to substantially help our mental and behavioral sciences.

            Back in my college days I decided that if so many brilliant people in these fields had essentially failed to harden them up, then perhaps they contained various unfortunate conventions? Thus I decided to become generally educated in separate areas (to potentially minimize my exposure to problematic conventions), while continuing to work on my ideas. After about 25 years I did become sufficiently satisfied, and fortunately by that time there were blogs to expose me to long avoided conventions.

            Apparently I need to emphasize that my ideas do not address what’s moral or immoral, but rather that they seek to describe the realities of good/bad existence for any given subject. This is not “ought,” but rather “is,” or it’s “description” rather than “prescription.” Lately I’ve been using the acronym of “ASTU.” The “amoral” references what I’ve just mentioned. The “subjective” mandates that in all cases a unique subject must be identified (whether the people of a city, or perhaps its cats, or perhaps the life of one person, or even that person’s existence over ten seconds). Then “total utilitarianism” is included to mean that each and every positive to negative unit of utility will determine good/bad for the subject when compiled over the period. Simple right? It’s a theoretical absolute value for anything at all.

            From this premise I’ve gone on to develop quite a few associated models. “Consciousness” is the one that I’m most proud of, though I like ’em all. I’ll naturally find reasons to present them with your applicable posts, as well as expand when others show an interest.

            (Unfortunately I may now be coming on too strongly. I was hoping to instead begin a bit more subtly this time!)

          2. Thanks Eric. You’re not coming on too strong at all. I invited your response and very much appreciate it.

            Although it did just occur to me that I should have asked questions about your post in that post’s comment thread. Sorry about that. If I have any other questions, I’ll be sure to ask them there.

            Looking forward to hearing more about your views!

  7. I like to think I’m logical. I also admit I could be wrong about all I believe and think about! But I don’t believe the universe is simply an act of natural processes as we understand them! I think human beings are a special by-product of whatever process is going on in the universe. Some day the uniqueness that science is missing may be discovered, and if so science will have to rethink some of its basic principles, and it may be that what life is will have to be rethought. Who knows, maybe even what death is will have to be rethought. We have come a long way scientifically, but I think it’s going to end up being only the tip of what we are as a species. If you want to go way out on a limb, I think it’s completely plausible we live in a matrix-like computer simulation created by something.

  8. I like the idea that there are gradations of consciousness and self-awareness. That makes a lot more sense to me than the notion that some computer system can suddenly switch over from being just a computer to being a full artificial intelligence.

    1. I agree. In truth, I think Steve is right, that the consciousness of engineered systems that behave more like tools than living things will long be controversial, and ultimately a matter of philosophical disposition. Although even a tool that can talk in terms of itself may eventually convince most of us that it’s having some kind of experience.

  9. Great post, Mike. I like the computer game analogy, which makes clear that sophistication and intelligence aren’t really the driving factors of what we intuit as consciousness.

    I haven’t read all the comments here or the posts leading up to this one, so I hope I’m not going over something you’ve already discussed. One thing I think Steve touched on is the way we intuit consciousness, which plays into your set up that it’s not necessarily tied to intelligence (spiders being not as intelligent as the best self-driving car we can imagine.) We also have to consider the way we view artificial systems vs. biological beings. The fact that we created one and not the other will certainly come into play regardless of the behavior exhibited. We can’t simply say, “Well, this behavior equals that, therefore this is conscious since that is conscious.” There are so many things that affect the way we think of these matters, whether or not we’re being objective or fair. We might insist on a sort of one-to-one comparison of behavior and also compare the inner workings and modeling and say that if they’re the same, they’ve both reached the same level of consciousness (or lack of consciousness). We might insist that both have survival mechanisms and are goal-driven. This seems perfectly reasonable! And yet, there’s something missing…something that doesn’t seem to square with the way we intuit consciousness.

    I think the shadow of doubt cast on AI will not be overcome by such comparisons, not until AI reaches some threshold that makes it repugnant to us to consider it as anything but ‘like us.’ This threshold might not be a stringent thing, it might not really make sense in some objectively verifiable way. As you mentioned in a reply to Steve: “…when testing robotic land mine clearers, military personnel ended the tests early because letting the little robots continue after they had legs blown off seemed cruel.” I assume you mean the robot’s legs were blown off? I hope that’s what you meant. 🙂 Here we see that we’re not exactly objective or critical thinkers when it comes to viewing others, and the idea of harming an unfeeling robot is already repugnant. Yet, self driving cars might not feel the same way to us, even if they do have all the inner and outer workings required to achieve the level of consciousness exhibited by a presumed conscious biological creature. The robot—which might not even come close to sharing objective criteria for consciousness—gets our sympathy by virtue of having legs!

    I’m not saying you’re wrong about consciousness, only that our intuition of it seems to evade a reasonable objective analysis. And even if we are being reasonable, the idea that we created whatever it is in question will certainly make a difference.

    1. Thanks Tina! Excellent insights on assessing consciousness.

      “We might insist that both have survival mechanisms and are goal-driven.”
      I actually do think goals (or at least preferences) are a necessary component of consciousness. Without them, there is no affect awareness, and without affects, you only have a model building engine. Such an engine definitely has its uses (see the current deep learning networks, albeit still a shadow of what biological systems can do), but I doubt many of us would regard it as conscious.

      But I personally don’t think it’s necessary for a system to care about its own survival, at least in a manner that trumps its primary function, for it to be conscious. I’m learning though that most people’s intuitions do require that. Which probably explains the widespread anxiety about AGIs (artificial general intelligence). For most people, the “general” part seems to require a survival instinct similar to organic systems.

      “I assume you mean the robot’s legs were blown off? ”
      LOLS! Definitely. The more sobering part is that human soldiers (not to mention civilians) have historically had legs blown off from landmines. Given that, in real war, I’m sure the military would quickly get over any inhibition about watching a robot minesweeper get its legs blown off, particularly since the minesweeper robot won’t mind its missing legs or even complete destruction.

      “I’m not saying you’re wrong about consciousness, only that our intuition of it seems to evade a reasonable objective analysis. ”
      Actually, I agree completely. It’s becoming increasingly evident to me that consciousness, in and of itself, isn’t a scientific concept, but a philosophical one. Science will increasingly be able to discover the information flows in brains and how they lead to certain behaviors. And those behaviors will increasingly be reproducible in technological systems, but to what extent consciousness is present will always be an intuition, and something of a philosophical decision. No matter how convincing a machine is, there will always be those who insist it lacks a soul.

      1. I kind of don’t want AI to care deeply about its survival, I have to admit. That just seems creepy, even if it doesn’t decide to take over the universe or anything like that. Just…why? Why have it? I can understand a narrow self-preservation of sorts, but I don’t want to see robots going around procreating and doing all the crazy things we do to make sure their offspring get on well in life. (No soccer mom robots, please.) Whether this cuts off the possibility of general intelligence or not is totally outside my realm of knowledge. My intuition tells me that self-preservation is just the sort of criterion a specialist would require to call something conscious. Goals might be required, otherwise the robot or whatever would just seem aimless and stupid, and being a human creation, we’d be unlikely to give it the benefit of the doubt.

        “No matter how convincing a machine is, there will always be those who insist it lacks a soul.”

        Apparently those who know how the machine works! Or maybe they’ll be the first to draw objective comparisons, demonstrating to the public that there is no difference between this AI and, say, your dog? Who knows.

        The rest of us will do whatever feels right for us at the time. I sense that there’s some semi-magic formula that will lead a great number of people to treat AI as if it were conscious, and I doubt it will be some objective criteria that we’re all on the lookout for, but something totally irrelevant, something that will seem to come from nowhere. Everything will depend on the number of times the AI blinks or some such ridiculous thing. Or if it’s cute.

        Ha, that’s it. I’m betting on cute. As in puppy cute. If an AI designer wants to convince the world that the creation he’s presenting to the world is conscious, he’d better make it look like something you just want to squeeze because it’s so adorable. Not something humanoid. Not something insanely intelligent, not something clever, not something that can write poetry. Cute. Cute-ness overrides a great number of obstacles, and even somewhat skeptical people could find themselves succumbing to childish babble out of pure fun and enjoyment. They might later say they don’t really think this machine is conscious, but over time that skepticism could collapse as more and more people treat this creation as if it were conscious. Then the very idea of consciousness becomes stretched, the way it has with animals (remember when people used to think animals were not conscious?) It’ll make a great number of people angry, especially those who’ve spent numerous hours homing in on those very qualities that make up human consciousness. But the rest of the world will move on.

        Kind of similar to grammatical changes. Grammarians still get up in arms over certain things, specialized battles are fought, but eventually they too roll over as the rest of the world does whatever it wants to do. Few people remember that the saying, “I’m nauseous,” means “I make others want to throw up.” (“I’m nauseated” doesn’t seem to get used as often now.)

        Well, that’s my two cents. Not really based on philosophy, just a wild guess. 🙂

        1. On AI prioritizing concern for its survival, I think the main point is it won’t be useful for the things we actually want AI for. I can’t think of any pragmatic purpose that would be enhanced by it. Even in cases where we want the AI to act like a human (think sexbot or just emotional companion), we don’t want it to act exactly like a human, such as getting bored by our company or angry when we leave the toothpaste uncapped.

          On cuteness, a few years ago MIT researchers let some people play with cute robots for an hour, then tried to get them to destroy the robots with knives, hammers, etc. The people refused, with one person only doing so after the researchers threatened to destroy all the robots unless the people destroyed at least one. Afterward, there was apparently a pall in the room. http://www.bbc.com/future/story/20131127-would-you-murder-a-robot

          I like the comparison with grammar. (Never realized the “nauseous” vs “nauseated” distinction was a thing.) It recognizes that our intuitions, along with our language about those intuitions, are both malleable over time. People in the 18th century had no trouble seeing a cat get tortured because they didn’t think the cat could truly suffer. But medieval courts sometimes tried animals as criminals, and prehistoric humans appear to have worshiped some animals. All of which indicate that we’ve never been terribly rational about this.

          Ultimately the discussion about consciousness is a discussion about how like us a system is, or at least how sympathetic it is. Very alien systems will probably always have a disadvantage in that, although like you said, the things that might make them sympathetic will not necessarily be at all logical.

          1. That cute robot experiment seems to undercut the Milgram experiment in a curious way. Maybe we care more for cute robots than human strangers? Interesting.

            I wonder about a robot that gets bored with us. We might find that amusing. Although it might be best to leave the toothpaste control freak stuff out of the code.

          2. I know some people who care more about animals (particularly cute ones) than people, so it’s not hard for me to imagine people caring more about cute robots. I think one reason why non-humans often get our sympathy is that they’re never going to be our social rivals, whereas people often are.

            Speaking from each person’s perspective, a robot getting bored with you might be amusing, but one getting bored with me will quickly slate itself for being replaced by a different or newer model. Unless maybe the robot’s purpose is to emulate an actual person’s responses for training me on how to handle people. I could see that maybe being used to train counselors and the like.

          3. On social rivals, I can see that to some extent. Also, there’s something to be said for finding comparisons interesting when it’s not expected. For instance, people like to ask Siri all kinds of silly questions, or curse, or whatever, just to find out what the response will be. You wouldn’t do this to a person, hopefully. (Although children do.)

            That’s an interesting idea, a behavioral learning robot. I can see that working with people with autism, as they might feel more comfortable with role-playing with a robot.

          4. iPads and similar tablet devices have often been a boon for autistic children, who can interact with them much more easily than they can with other humans. But it probably doesn’t help them much with actually learning to relate with people (to the extent it’s possible depending on where they are on the autistic spectrum). An infinitely patient robot with levels of humanness might be just what many of them need.

  10. Hey Mike,

    I’m not sure if this directly addresses your article, but I think there’s evidence that consciousness might be a gradual way of replacing instinct. Take for example the Cyclosa spiders. These very weird bugs build fake spiders in the middle of their webs as a distraction to predators. I think you’d be very hard pressed to ascribe this behavior to the spider’s consciousness. https://www.wired.com/2012/12/spider-building-spider/

    Human beings obviously do similar stuff. Think of the fake, rubber tanks that George Patton commanded before D-Day, but in the human case the deception is the product of planning, empathizing with an enemy’s point of view and social discourse (generals sitting in a planning room, for example). Of course, the advantage to getting your deception strategies from George Patton rather than a spider is that Patton can modify or start from scratch whenever he feels like it while the spider has just the one way to trick its enemies.

    This makes me think that consciousness might be, on an adaptational level, a replacement for genetically programmed behaviors. It would certainly explain why human babies have so few instinctual behaviors.

    (Total tangent here, but it absolutely boggles my mind to imagine that DNA codes for building a fake spider. I would love to take a look at the “fake spider” protein, if it even exists. This sort of thing makes me think our understanding of the genetic basis of behavior is very underdeveloped.)

    1. Hey Ben,
      That’s really interesting with the spiders. Thanks for sharing that article!

      On consciousness and instincts, rather than “replacing”, I think I’d use the word “enhancing.” Since writing this series, I’ve gradually developed a layered understanding of cognition:
      1. Reflexes, instinctive programmatic reactions to stimuli.
      2. Perception from distance senses (sight, smell, hearing), which increases the scope in space of what the reflexes are reacting to.
      3. Attention, which prioritizes what the reflexes are reacting to.
      4. Imagination, scenario simulations, which increase the scope in time as well as space to what the reflexes are reacting to.
      5. Volition, which uses information from 4 to decide which reflexes to allow or inhibit, transforming them into feelings, dispositions to act rather than programmatic responses.
      6. Metacognition, introspection, self reflection, which enhances 4 and 5 with a feedback mechanism.
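
      Purely as an illustration, the layers can be pictured as a processing pipeline like the sketch below.  None of the functions or numbers are meant to be taken literally, and layer 1, the reflexes being allowed or inhibited, is left implicit.

```python
# Loose toy sketch of the layers as a pipeline; everything here is invented.
def perceive(stimuli):                  # layer 2: distance senses widen the scope in space
    return [s for s in stimuli if s["intensity"] > 0.1]

def attend(percepts):                   # layer 3: attention prioritizes what to react to
    return max(percepts, key=lambda p: p["intensity"], default=None)

def imagine(focus, horizon=3):          # layer 4: simulate a few steps ahead in time
    if focus is None:
        return []
    return [f"{focus['kind']}_outcome_{i}" for i in range(horizon)]

def decide(scenarios):                  # layer 5: volition allows or inhibits the reflex
    return "inhibit_reflex" if any("bad" in s for s in scenarios) else "allow_reflex"

def cognition_step(stimuli):
    decision = decide(imagine(attend(perceive(stimuli))))
    # layer 6: metacognition would be a feedback loop over the steps above (omitted).
    return decision

print(cognition_step([{"kind": "shadow", "intensity": 0.8}]))  # "allow_reflex"
```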

      I agree that there probably isn’t much, if any, imagination involved in what those spiders are doing. It’s unlikely they simulate the effect the fake spiders will have on others. It’s likely something they do which is rooted much more in 1-3. That isn’t to say that spiders don’t have imagination and volition, but it seems far more limited than what mammals, much less humans, have. (And of course, there’s no evidence they have metacognition, but then there’s scant evidence non-primates have it either.)

      One note about human babies. They do have a lot of instincts. It’s just that human babies are born far earlier in their gestational cycle than most non-human animals, which makes them far more helpless, and less able to display instinctive behavior, than, say, a newborn calf. (From what I understand, the leading theory for why this is so involves the narrow birth canal in human females, which restricts how large a baby’s head can be at birth.)

      On DNA and behavior, I think it’s worth remembering all the layers in between, and how contingent observed behavior, even instinctive behavior, is on developmental and environmental factors. In the case of Cyclosa, change its environment enough, particularly during its gestation, and you might end up with very different behavior.

      1. I thought about this overnight and I like your model of consciousness, though I think it might be possible to add in a few more layers. Specifically, I was thinking about the literal, psychological power of narrative. Humans and a few other higher mammals have mirror neurons that allow us to internalize the experiences of other people, animals or even inanimate objects. We wince with pain when we watch Tom and Jerry clobber each other, for example.

        This is all fine and well, but the really interesting thing is that it gives us something close to the ability to read minds. There was an experiment a few years back where they put a storyteller in one MRI machine and had him narrate for a listener in another. What they found was an amazingly close correspondence between the brain states of the storyteller and the listener during the story. The storyteller was literally putting his mental state into somebody else’s brain.

        I realize you could fold this into number four, but the problem is that this ability to mirror and mind-read only happens in a very few species, many fewer than have demonstrated the ability to imagine as you defined it above.

        As for babies, perhaps I should elaborate. The ratio of purely instinctive to conditioned and learned behavior in young humans is much lower than in most other animals.

        I also think you can make a case that steps 4,5,6 and (if you accept my premise about narrative) 7 are indeed replacing instincts in at least some ways. Some examples I thought of include the following:

        During the Nicene Council, the church fathers had to go out of their way to stem the tide of people self-castrating.
        As long as there have been written records, there have been people trying to end sex. There have always been vestal virgins, celibate monks, eunuchs and/or cults of sexual purity. Today is no different. https://materialfeminista.milharal.org/files/2012/10/Political-Lesbianism-The-Case-Against-Heterosexuality-LRFG.pdf
        As long as there have been written records, there have been suicide pacts and death cults.
        People ranging from Plato to Buddha have argued that virtue lies in the ability to suppress and overcome instinctual behavior. Middle school might be better understood as “learning-to-overcome-the-instinctual-drives-of-puberty-school.”

        1. Thanks Ben. I should have emphasized that those layers weren’t meant to be comprehensive or necessarily authoritative. I developed them primarily to call attention to the fact that what we intuitively mean by “consciousness” is a hazy and shifting category of capabilities. I usually bring it up when something like panpsychism comes up, or we start talking about consciousness as a singular concept.

          As you note, the layers as I articulated them here don’t explicitly cover the self model, or the related models we build of other minds, a quality of social species. Similar to consciousness, I see self awareness, as well as awareness of other minds, as something that also comes in layers. There is awareness of our own bodies (which prevents animals from trying to eat themselves), awareness of our own emotions, and awareness of our own thoughts, the last of which only kicks in with layer 6.

          And you’re right that layer 4 is vast. Fish have imaginations that allow them to simulate a few seconds in the future. Most land animals can maybe do it a few minutes. Humans have the ability to do it days, months, or years into the future. But I think our ability to do that requires symbolic thought, which only becomes possible with layer 6 where we have access to mental life so we can build concepts of our concepts, models of our models.

          If I added a 7th layer, it would probably be symbolic thought itself, to emphasize that it’s enabled by layer 6. I think layers 4-7 would enable your layer 7, narrative. But as I noted above, these layers are really just mental crutches, not a rigorous theory, so I don’t object to alternate layers.

          On instincts and learned behavior, sounds like we’re on the same page. I would just note that when we override an instinct, we’re always doing it ultimately in service of some other instinct. From an evolutionary viewpoint, that instinct may be a misfiring one, such as when it causes people to self-castrate, but I think it’s an instinct nonetheless. Often what we’re really doing is overriding a short term instinct in favor of one that requires imagination and self reflection to realize will eventually be satisfied by the override.
