More on computer consciousness

After the discussion on my post the other day about consciousness being in the eye of the beholder, I realized that I probably should expand a bit on my hypothesis about what we would intuitively consider to be a conscious being.

We, as minds, are aware. We have awareness from our senses: sight, hearing, smell, touch, and taste. From the input of these senses, we build a model of the outside world. We are aware of this outside world, which enables us to respond to it, to plan and execute movement in order to avoid danger, secure food, procreate, and survive. We are more than just aware; we are aware of our own awareness, of our own consciousness. We are self-aware. (Well, at least to some degree.)

We are also aware of other minds. We have a theory of mind. We can infer their existence from the actions those minds take. The minds most like ours, the ones we can be most sure of, are those of our fellow human beings. Thanks to language, the ability to communicate in detail, we know more about the minds of other humans than about any other minds we encounter. As members of the same species, we recognize many of the same motivations, emotions, drives, instincts, and experiences. Since we all have similar programming, we can empathize with each other.

We also recognize minds in many other species. We recognize many, although not all, of the same drives, emotions, and programming in them that we also possess. We may see it most strongly in other primates, since they're the closest to us on the evolutionary tree. We see it to a lesser degree in other mammals, but most of us still have a strong intuition that dogs, cats, and bears are conscious. They seek food, have fears and joys, seek mates, and strive to survive. We recognize the common motivations and experiences we share with them.

Although we don't recognize it as strongly in insects, we still detect glimmers of similar experiences and motivations. We still intuit that they have some form of consciousness, although perhaps a more limited one. Many of us are bothered by the decline in the bee population, partially due to ecological concerns, but also because we feel some empathy for their plight. Although we may not hesitate to step on a roach, many of us find the idea of turning them into cyborgs disquieting. Whether or not we sympathize with them, we do empathize, at least to some degree.

At this point, we don't feel any empathy toward computers, not even to the degree that we might feel toward an insect. But why not? Modern laptops and desktops now have more processing power than the brain of a worm or an ant, and in many cases more than that of a bee. But no one mourns the disposal of the last generation of laptops, tablets, or cell phones. Why is that?

There are a lot of theories about why we are conscious and machines are not. Many of them posit that consciousness lies in the molecular adaptability of brains. Some even take this down to the quantum level. While it's possible we might eventually discover that the brain requires fidelity down to these levels of resolution to achieve what we commonly think of as consciousness, the preponderance of current scientific evidence doesn't point in that direction. Neurons and their synaptic connections appear to be the basic units of information processing and storage.

Others point out that computers can never know anything on their own, can never have anything original to say, and will never know the disappointment or worry of failure. It might pay to unpack these phrases a bit. What do we mean when we say we "know" something? Or that we have something to say "on our own"? Or that we are disappointed or worried?

Knowledge is justified true belief.  It is a model in the mind of a portion of reality.  It is essentially data. Consider your knowledge of the address of this site.  Why can’t we say that your web browser “knows” that address?  What distinguishes the data it holds about an outside entity from the data we hold about that entity?  (Aside from complexity.)
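
To make the "knowledge is essentially data" point concrete, here is a minimal sketch in Python. It is purely my own illustration, not anything from the post: the structure, field names, and URL are hypothetical stand-ins for whatever a real browser actually stores.

```python
# A hypothetical, simplified stand-in for a browser's stored "knowledge" of a site's address.
# Real browsers store far more (history, cache, cookies); this only illustrates "knowledge as data".
bookmarks = {
    "this site": {
        "url": "https://example-blog.invalid",  # placeholder address, not the real one
        "last_visited": "2013-11-27",
    },
}

def browser_knows_address(name: str) -> bool:
    """The browser 'knows' an address if it holds a stored, retrievable record of it."""
    entry = bookmarks.get(name)
    return entry is not None and "url" in entry

print(browser_knows_address("this site"))  # True: the address exists as data the browser can act on
```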

Ultimately, what can we really say "on our own"? That is, what can we really say that isn't a result of perceptions (inputs) from our environment, synthesized perhaps over a lifetime, or derived from those inputs? Our unique combination of those inputs does give us a unique perspective, but a computer's unique combination of inputs will also give it a unique perspective, separate from ours and from those of other computers, even ones of the same model or programming.

And why are we ever disappointed or worried? What are disappointment and worry? Aren't they emotions, instincts, our basic programming provided by evolution? Other than complexity, how is that different from the urgency my laptop shows when it's about to run out of battery charge? You can say that its urgency was programmed by someone, but wasn't our urgency about things programmed by evolution?
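
As a loose illustration of that analogy, here is a minimal Python sketch of programmed "urgency"; the thresholds and responses are invented for the example and don't reflect how any real operating system manages power.

```python
# A hypothetical sketch of a laptop's programmed "urgency" about its battery.
# Thresholds and responses are made up for illustration, not any real OS's behavior.
def battery_urgency(charge_percent: float) -> str:
    """Map a battery level to an increasingly 'urgent' response."""
    if charge_percent <= 5:
        return "hibernate immediately to avoid losing data"
    if charge_percent <= 20:
        return "warn the user and dim the screen"
    return "carry on normally"

for level in (80, 15, 3):
    print(f"{level}% charge -> {battery_urgency(level)}")
```

Whether the difference between that rule and our evolved sense of urgency is one of kind or only of complexity is exactly the question being raised here.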

There are certainly differences in degree here. My laptop's knowledge and ideas are not (yet) anything like mine. But I can't see a sharp distinction, except in one area: our basic programming. This programming is so central to our being that many of us equate it with consciousness itself. We intuitively think that when or if a computer ever achieves it, it will be concerned about its own existence, its own wellbeing.

But computers don’t have the evolutionary history that we do.  They’re not going to magically acquire these characteristics.  They’ll only get these motivations if we program them to have them.  Until we do, no matter how sophisticated they become, we likely won’t recognize any consciousness in them.  This raises the interesting ethical question of whether we should ever program them to have fear, perceptions of pain, or concern for their own wellbeing, and what our responsibilities to them would be once we did.

But, many will argue, a computer might be able to fool us into thinking it was conscious, but it still wouldn't have an inner, private experience. It wouldn't have qualia. But what are qualia? What is this inner experience we have? I have a thought experiment for you to perform. Warning: you might find it a bit disturbing.

Right now, you are awake and aware (at least I hope you are). Imagine that you suddenly lost your sight. Then suddenly went deaf. Then, a moment later, you lost your senses of touch, smell, and taste, including the feeling of your body breathing, the state of your stomach, and every other bodily sensation. Finally, you lost your memory. How much qualia, how much inner experience, is left?

Of course, computers already have memory and senses (keyboard, USB ports, network connections, etc.). What they don't have (yet) is the creature-level programming. What we call 'consciousness' is ultimately motivations and experiences similar to our own, ones we can intuitively feel that others have. When we can have that intuition toward machines, we'll consider them to have it.

Consciousness really is in the eye of the beholder.

h/t amanimal for the links to manbynature, Fitch’s nano-intentionality paper, and Jaques

19 thoughts on "More on computer consciousness"

  1. Thanks for the clarification and expansion. I agree that observing behavior geared toward survival and reproduction, or conversation that indicated those goals as priorities, would go a long way toward our feeling that we were observing or conversing with an entity that possessed some level of consciousness.

    But I wonder if that would be enough once it were known that the entity in question was a machine. We do tend, on the whole, to think we're pretty special, and our consciousness plays a large part in that perception. I think we might have some difficulty sharing that particular attribute with machines regardless of how convincing a performance we're presented with.

    My reply to your first post, which in essence was "What? A machine conscious? Don't be silly," might be a fairly common response, but who knows, it might be a while before the question presents itself and everything changes, even attitudes and biases.

    1. Good points. As people start to wonder whether machines actually are conscious, there will be my fellow programmers reminding you that "it's just a machine using algorithm xyz." Those reminders might lose some of their force as neuroscience progresses and the algorithms that make us what we are become clearer. (Or they could conceivably gain some force if human cognition did eventually turn out to be beyond science.)

      Two things might eventually demolish the distinction. One would be if the machines advance to become more engineered life than robotic. The other is if we ever achieve mind uploading. If grandma is inside a machine pleading that it's really her, it's going to be pretty tough to regard her as not being conscious.

      1. I apologize for being somewhat pessimistic as far as progress regarding humanity's self-identity goes, but we've a proven track record of unwarranted anthropocentric hubris. I think I only posted this a couple of times at HP, so in case you never saw it:

        ‘Minding the Animals: Ethology and the Obsolescence of Left Humanism’
        http://www.inclusivedemocracy.org/journal/vol5/vol5_no2_best_minding_animals.htm

        While it's rather lengthy, the relevant part is under the heading:

        ‘Modernity and its Discontinuities’

        … and references Bruce Mazlish's 'The Fourth Discontinuity: The Co-Evolution of Humans and Machines', which I think you might find of interest. Grandma … LOLOL 🙂

        1. Thanks. I think your pessimism is rational.

          I do think progress will happen, but it will be in fits and starts, with many reverses. In our times, we’ve seen the conservative backlashes against modernity. I imagine the backlash against any version of a transhuman future might be severe. But, as severe as it might be, I tend to think it will ultimately be temporary. (Temporary could still be a very long time from the standpoint of human culture, but I think brief by the standard of evolutionary time scales.)

          1. One of the scenarios I ran through my head was based on the paper wasps we have out back and their nests under the 2nd story eave. I imagined watching them from a distance crawling about their nest, some flying off, some returning, and one or two occasionally chewing on the wooden fence some 20 ft distant – basically just going about their waspy business.

            Then one day, as I actually have, I find a dead wasp on the ground, but to my surprise it's a mechanism, a little robot wasp. It's difficult to say for certain, but I think I'd have a hard time imagining it to possess a consciousness equivalent to an actual wasp's, even if it were surviving and reproducing like the real thing.

            Now I’m wondering if that might be a type of magical thinking of the essentialist variety:

            ‘Children prefer certain individuals over perfect duplicates’, Hood/Bloom 2008

            Hmm …

          2. I think I saw that experiment on a Science Channel show. It basically showed that children are innate dualists. When a toy was given a name, and apparently copied, they didn’t consider the copy to be the same named toy. Of course, if you’re a dualist, then a machine can never be conscious. There’s no doubt that dualism is intuitive. But, outside of everyday life, our intuitions are suspect.

  2. I don't know how it works — I just know that evolving consciousness in any particular direction — imagine it being a pathway — makes a difference in time. It's when the pathway is mostly in focus that the progress begins to slow — contrast is a good thing if the excitement of making choices is desirable — as for computers, the newest quantum computers may just about come to life, I suppose.
    ~ Eric

    1. Eric, not quite sure if I follow you. I’ll agree that quantum computers, if / when they get off the ground, are going to usher in a new era of capacity and performance that may very well tempt us to consider them “alive”. Eventually though, we might get there regardless.

  3. “… our intuitions are suspect.”

    They most certainly are – I can't remember if I've shared this with you, but if true, and I strongly suspect it may be, it's one of the more fundamental counterintuitive truths:

    'The mind's best trick: how we experience conscious will', Wegner 2003

    … also relevant to this post, I came across both of these just in the last couple days:

    ‘Can this sneaky chimp read minds?’
    http://www.bbc.com/future/story/20131125-can-this-sneaky-chimp-read-minds/1

    ‘Is it ok to torture or murder a robot?’
    http://www.bbc.com/future/story/20131127-would-you-murder-a-robot/all

  4. I can easily imagine a computer or robot with consciousness, behaving like a living creature, perhaps equal or superior in intellect to a human. But I have never encountered one. The computers I know are nothing like that. They are nothing like even primitive creatures. They are like machines – like my car for example. That’s because they were designed and built like machines.

    To say that a computer has more processing power than an ant’s brain is like saying that a car has more horsepower than a horse. They are completely different things.

    But also, single-celled organisms don't appear to behave like sentient beings – they are like machines. Watching insects move around, it's quite easy to think of them as machines too, if you use your imagination. So I think there is some kind of crossover point where machine-like behaviour switches into conscious behaviour.

    AI will emerge when we understand how to build machine intelligences. They won't appear simply from building more processing power. They won't emerge from tools like Google either, no matter how much big data and how many predictive tools are thrown at the problem.

    To think is a quite different matter, and one that we are some way from understanding. Probably such machines will have to build themselves, rather than being designed.

    1. I completely agree that AI won't magically emerge from more processing power. Concern about that is a little bit like worrying about the lunch I forgot in the refrigerator evolving into a monster.

      But you mentioned a crossover point where machine-like behavior switches into conscious behavior. I think that "conscious" behavior is a convention, an intuition, that we award to something when we can empathize with it.

      We “mistakenly” make that award all the time, but the only difference I can see between a “mistaken” awarding of consciousness and a “correct” award of it is that the “correct” one endures while the “mistaken” one is eventually shown to have been given to a deterministic process. Given that neuroscience may be starting to show that our mental processing is a deterministic process, that distinction may be in danger of falling apart.

  5. Yes, you’re probably right. Intuition can be unhelpful when applied outside its normal range of validity.

    Single-celled organisms seem to me to act more like simple machines than conscious entities, but if consciousness emerges from the interaction of many cells, then even creatures with tiny central nervous systems must be more like us than like machines.
