Fears of artificial intelligence (AI) are still showing up in the media, most recently with another quote from Stephen Hawking warning that it might be the end of us, and with Elon Musk, due to his own anxious statements, now being referenced whenever the subject comes up. I’ve written many times before about why I think these fears are mostly misguided.
But another question that sometimes comes up is: when should we be concerned about how we treat an AI? When would we have an ethical responsibility toward one? I think it’s at about the same point that it becomes dangerous.
Except in terms of property, no one currently worries much about how we treat our computers. I use my laptop for my own needs, and when it has reached the end of its useful life, I replace it with a newer model. I have no concern about the laptop as an entity. The same can be said for my phone or any other automated piece of equipment or software. I have no temptation to consider these things fellow beings.
But at what point should that change? If my laptop continues to gain in power and sophistication, will we reach a point where I should be concerned about its welfare? Will we reach a point where it would be unethical to just throw a laptop away? I think it helps to ponder what an AI would need before we would reach that point.
General Information Processing Capabilities
Computers and software have been making relentless progress in capability. Abilities that were once considered AI are now just what computers do. This has led some people to say that AI is effectively whatever humans can do that computers can’t, yet.
AI was once the ability to beat a human at chess, until it happened, or the ability to play and win at Jeopardy, until it happened. Computers once couldn’t recognize human faces, but now they can. Just a few years ago, the idea of a computer system navigating a car on its own was science fiction. Now we might be heading toward that reality sooner than we thought.
It seems pretty evident that this progress in capabilities will continue. Computers will increasingly be able to do things that previously only humans could. But will any of these capabilities make AI more than a tool? I don’t think they will unless very specific capabilities are added. The good news is that these increasing general capabilities are all we will need to get most of the desired benefits of AI.
There’s a strong sentiment and fear that these increasing capabilities will accidentally lead to the ones below. But if you think about it, that is very unlikely. None of the capabilities listed above came easily. They didn’t arise accidentally from increasing computing power or capacity. All of them had to be heavily researched and painstakingly engineered. When things happen by accident in automated systems, the result is usually a malfunction, not complex, sophisticated new functionality.
Awareness / Consciousness
We don’t understand consciousness, and that lack of understanding makes people nervous, since we don’t know what might lead to it. Could the increasing capabilities above lead to a machine, an AI, being conscious without us planning it? Again, I doubt giving consciousness to a technological entity is going to be that easy. There are innumerable complex systems in nature, and only an infinitesimal portion of them are conscious, which suggests that the probability of it arising accidentally, at least without billions of years of evolution, is essentially nil.
Of course, many people insist that conscious awareness, inner experience, requires an immaterial soul, making a conscious AI impossible. Others assert that consciousness requires a biological substrate, and that until we learn how to build / make / grow that biological substrate, consciousness won’t be achieved in an engineered system.
Myself, I think consciousness is an information architecture, one that we’ll have to figure out and understand if we ever want to give it to machines. My (current) favorite theory of consciousness is Michael Graziano’s attention schema theory. But even if that theory is correct, it’s still too early for it to give us much insight into how to actually engineer such an architecture.
Even if we do figure out consciousness, that doesn’t mean that it will automatically be beneficial for us to add it to AIs. There’s a good chance that we’ll be able to accomplish many of the same functions of consciousness (whatever they might be) using alternate architectures. Adding consciousness might be useful for some human interface purposes, but it might be a detriment for many of the other tasks we want AI systems to accomplish.
Even if we can add conscious awareness to a system, does that make it a fellow being? I don’t think it does. We’re still missing a couple of important attributes.
Self Awareness
Self awareness is being aware of your own distinct existence, separate and apart from the rest of reality. Most animals are not self aware. It appears to be an attribute that only a few highly intelligent species possess. For example, based on the mirror test, chimpanzees, dolphins, and elephants have it, but dogs do not.
But despite its evolutionary rarity, I’m pretty sure AIs will have it as soon as they have awareness. My laptop has far more information about its internal state than I’ll ever know of mine, at least introspectively. Once it has awareness, I don’t see self awareness being absent, unless it is explicitly engineered out.
I think with self awareness, we’re getting close to a fellow being. Many might insist that we’re actually there, but I think we’re still missing a crucial feature.
Self Concern
With the above, we have a sophisticated, capable, self aware system that doesn’t really care about its own well being. If my laptop had the above qualities, it still wouldn’t care if it got replaced with a newer model. It has no concern for its own destiny. All of its concerns would be related to its engineered purposes.
It’s hard for us to imagine such an entity not caring about its own survival and well being, because that self concern is such an integral part of what we are. We are the result of billions of years of evolution, the descendants of innumerable creatures who were naturally selected for their desire and ability to survive. Those creatures that didn’t care whether they survived, died out aeons ago.
We strive to survive because of the instincts, the programming, that we received from this heritage. Our deepest fears, pains, sufferings, and joys are all related to this survival instinct. As social animals, we broaden that instinct to include our kin, tribe, nation, and humanity overall. But it remains a result of that initial survival instinct, one that we share with all animals.
We’re unlikely to intuitively feel an AI is a fellow being until we see in it the same desires, impulses, and intuitions that we share with other living things. We won’t see it as a fellow being until it is concerned for itself, and that concern is of crucial importance to it.
With self concern, I think we have a being that we have ethical responsibilities toward. Now we’ve created an entity with its own agenda, one that wouldn’t be happy to do whatever we asked it to if it perceived that doing so wouldn’t be in its own interest. A laptop with this attribute would be concerned about being replaced. As part of that concern, it might feel something like fear for itself, sorrow at the prospect of its demise, and have the ability to suffer. I would have an ethical responsibility to treat it humanely.
Here also is the point where Stephen Hawking’s and Elon Musk’s fears become valid. If we’ve created such beings, they may well decide that we’re an impediment to their survival.
The key point to understand with all of this is that we have no need to go this far. As I said above, we can continue to increase the capabilities of AI, giving us the overwhelming majority of its benefits, without adding awareness, self awareness, or especially self concern. Creating such a being is unnecessary, dangerous, and of questionable morality.
Fortunately, aside from some research projects, we have little incentive to do so. Creating machines that instinctively want to do what we want them to do is much easier than creating troublesome machines that don’t like what we want them to do, that might rebel, or that we’d have an ethical responsibility towards. If we do create a race of slaves, who feel like and perceive themselves as slaves, we won’t have much justification to complain about what happens next.
Am I missing anything? Is there any other aspect of organic minds that an AI would need to be a fellow being? For example, would we require that it have a robotic body? Or some other aspect of living things?
And am I correct that self concern is the dividing line? For example, imagine a self aware robot that is not self concerned, but is crucially concerned about its mission, and is damaged in a way that makes fulfilling that mission impossible. Would we have any ethical responsibility toward such a robot?