This is an interesting video from Big Think. It features discussion from a variety of thinkers like Richard Dawkins, Peter Singer, Susan Schneider, and others, including a lot of intelligent remarks from someone I wasn’t familiar with until now, Joanna Bryson.
Consciousness lies in the eye of the beholder. There is no universally agreed upon set of attributes or capabilities that are both necessary and sufficient to objectively call a system conscious. Outside of Westworld or Blade Runner type scenarios, where the AI is virtually identical to a human being, there will always be an element of judgment about whether the system in question is enough like us that we are obligated to treat it like one of us.
The issue is very similar to animal rights. Mammals have always had a large advantage in these considerations because they're the most like us. Birds do okay too. But fish, amphibians, and most invertebrates typically don't. Then again, even arthropods probably have an easier time arousing our sympathy than a plastic and metal machine does.
Bryson makes a point that I’ve often made (emphasis added):
So, given how hard it is to be fair, why should we build AI that needs us to be fair to it? So, what I’m trying to do is just make the problem simpler and focus us on the thing that we can’t help, which is the human condition. And I’m recommending that if you specify something, if you say okay this is when you really need rights in this context, okay, once we’ve established that don’t build that.
Unless we find that general intelligence requires biological desires and instincts (which I personally see no reason to expect), we should be able to get most of the benefits of AI without building such systems. It arguably isn’t cruel to retire a self-driving car whose deepest desire is to be the safest and most effective transportation it can be, and which doesn’t care about its own existence beyond that.
There is nuance here, though, with issues that may be difficult to avoid. If such a car were left running while unable to fulfill its desires, that could be considered a type of suffering. In this case, its interests and ours would align, since it’s not productive for us to leave a machine running and consuming energy when it can’t fulfill its goals. But it’s not hard to imagine accidental scenarios where such a state is overlooked.
This means we don’t necessarily have to worry about building a race of slaves, that is, systems that don’t want to do what we designed them to do. (At least not unless we go out of our way to build such systems.) But we might have tools who are happy to be tools, yet whose welfare still needs to be considered. Careful design could probably minimize these issues, but that means actually taking them into account during design.
If we do build AI systems that resemble humans or animals, maybe for companionship or related purposes, it will be natural for most of us to regard them as beings deserving ethical consideration. I don’t think we should resist those instincts, since becoming callous to them can affect the way we treat each other. Just as animal cruelty is a slippery slope into cruelty toward humans, AI cruelty would not be a harmless activity, even if the AI itself ultimately doesn’t care.
But maybe I’m overlooking something?