After discussion on my post the other day about consciousness being in the eye of the beholder, I realized that I should probably expand a bit on my hypothesis about what we would intuitively consider to be a conscious being.
We, as minds, are aware. We have awareness from our senses: sight, hearing, smell, touch, and taste. From the input of these senses, we build a model of the outside world. We are aware of this outside world, which enables us to respond to it, to plan and execute movement in order to avoid danger, secure food, procreate, and survive. But we are more than just aware: we are aware of our own awareness, of our own consciousness. We are self-aware. (Well, at least to some degree.)
We are also aware of other minds. We have a theory of mind. We can infer their existence from the actions taken by those minds. The minds most like ours, the ones we can be most sure of, are those of our fellow human beings. Thanks to language, the ability to communicate in detail, we know more about the minds of other humans than about any other minds we encounter. As members of the same species, we recognize many of the same motivations, emotions, drives, instincts, and experiences. Since we all have similar programming, we can empathize with each other.
We also recognize minds in many other species. We recognize many, although not all, of the same drives, emotions, and programming in them that we also possess. We may see it most strongly in other primates, since they’re the closest to us on the evolutionary tree. We see it to a lesser degree in other mammals, but still, most of us have a strong intuition that dogs, cats, and bears are conscious. They seek food, have fears and joys, seek mates, and strive to survive. We recognize the motivations and experiences we have in common with them.
Although we don’t recognize it as strongly in insects, we still detect glimmers of similar experiences and motivations. We still intuit that they have some form of consciousness, although perhaps a more limited one. Many of us are bothered by the decline in the bee population, partially due to ecological concerns, but also because we feel some empathy for their plight. Although we may not hesitate to step on a roach, many of us find the idea of turning them into cyborgs disquieting. Whether or not we sympathize with them, we do empathize, at least to some degree.
At this point, we don’t feel any empathy toward computers, not even to the degree that we might feel toward an insect. But why not? Modern laptops and desktops arguably have more raw processing power than the brain of a worm or an ant, and in many cases even more than a bee. Yet no one mourns the disposal of the last generation of laptops, tablets, or cell phones. Why is that?
There are a lot of theories about why we are conscious and machines are not. Many of them posit that consciousness lies in the molecular adaptability of brains. Some even take this down to the quantum level. While it’s possible we might eventually discover that the brain requires fidelity at these levels of resolution to achieve what we commonly think of as consciousness, the preponderance of the current scientific evidence doesn’t point in that direction. Neurons and synaptic connections appear to be the basic units of information processing and storage.
Others point out that computers can never know anything on their own, can never have anything original to say, will never know the disappointment or worry of failure. It might pay to unpack these phrases a bit. What do we mean when we say we “know” something? Or when we have something to say “on our own”? Or that we are disappointed or worried?
Knowledge is justified true belief. It is a model in the mind of a portion of reality. It is essentially data. Consider your knowledge of the address of this site. Why can’t we say that your web browser “knows” that address? What distinguishes the data it holds about an outside entity from the data we hold about that entity? (Aside from complexity.)
Ultimately, what can we really say “on our own”? That is, what can we really say that isn’t a result of perceptions (inputs) from our environment, synthesized perhaps over a lifetime, or derived from those inputs? Our unique combination of those inputs does give us a unique perspective, but a computer’s unique combination of inputs will also give it a unique perspective, separate from ours or from other computers’, even those of the same model or programming.
And why are we ever disappointed or worried? What is disappointment and worry? Aren’t they emotions, instincts, our basic programming provided by evolution? Other than complexity, how is that different from the urgency my laptop shows when it’s about to run out of battery charge? You can say that urgency was programmed by someone, but wasn’t our urgency about things programmed by evolution?
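The laptop analogy can be made concrete with a toy sketch. Everything here is invented for illustration (no real OS API works this way): the point is only that a machine’s “urgency” is nothing more than a programmed mapping from an internal state to a behavior, just as, on this view, our own urgency is a mapping installed by evolution.

```python
# Toy illustration: programmed "urgency" as a simple state-to-behavior rule.
# The function name and thresholds are hypothetical, chosen for the example.

def battery_response(charge_percent):
    """Map a battery level to a programmed 'urgency' behavior."""
    if charge_percent <= 5:
        # Self-preservation: save state before power is lost entirely.
        return "hibernate"
    elif charge_percent <= 20:
        # Escalating concern: signal the user for help.
        return "warn user"
    else:
        # No drive activated; carry on as normal.
        return "normal"

# Usage: the machine's "worry" appears only when its state triggers the rule.
print(battery_response(50))  # normal
print(battery_response(15))  # warn user
print(battery_response(3))   # hibernate
```

Whether evolution’s rules differ from these in kind, rather than merely in complexity, is exactly the question the paragraph above is asking.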
There are certainly differences in degree here. My laptop’s knowledge and ideas are not (yet) anything like mine. But I can’t see a sharp distinction, except in one area: our basic programming. This programming is so central to our being that many of us often equate it with consciousness itself. We intuitively think that when or if a computer ever achieves it, it will be concerned about its own existence, its own wellbeing.
But computers don’t have the evolutionary history that we do. They’re not going to magically acquire these characteristics. They’ll only get these motivations if we program them to have them. Until we do, no matter how sophisticated they become, we likely won’t recognize any consciousness in them. This raises the interesting ethical question of whether we should ever program them to have fear, perceptions of pain, or concern for their own wellbeing, and what our responsibilities to them would be once we did.
But, many will argue, a computer might be able to fool us into thinking it was conscious, yet it still wouldn’t have an inner private experience. It wouldn’t have qualia. But what are qualia? What is this inner experience we have? I have a thought experiment for you to perform. Warning: you might find it a bit disturbing.
Right now, you are awake and aware (at least I hope you are). Imagine that you suddenly lost your sight. Then suddenly went deaf. Then, a moment later, you lost your senses of touch, smell, and taste, including the feeling of your body breathing, the state of your stomach, and every other sensation. Finally, you lost your memory. How much of qualia, of inner experience, is left?
Of course, computers already have memory and senses (keyboard, USB ports, network connections, etc.). What they don’t have (yet) is the creature-level programming. What we call ‘consciousness’ is ultimately motivations and experiences similar to what we have, and what we can intuitively feel that others have. When we can have that intuition toward machines, we’ll consider them to have it.
Consciousness really is in the eye of the beholder.
h/t amanimal for the links to manbynature, Fitch’s nano-intentionality paper, and Jaques