John Basl and Eric Schwitzgebel have a short article at Aeon arguing that AI (artificial intelligence) should enjoy the same protections animals receive in scientific research. They make the point that while AI is a long way off from achieving human-level intelligence, it may achieve animal-level intelligence, such as that of a dog or mouse, sometime in the near future.
Animal research is subject to review by committees such as IRBs (Institutional Review Boards), bodies constituted to oversee research on human or animal subjects and ensure that ethical standards are followed. Basl and Schwitzgebel argue for similar committees to be formed for AI research.
Eric Schwitzgebel also posted the article on his blog. What follows is the comment, slightly amended, that I left there.
I definitely think it’s right to start thinking about how AIs might compare to animals. The usual comparison with humans is currently far too much of a leap. Although I’m not sure we’re anywhere near dogs and mice yet. Do we have an AI with the spatial and navigational intelligence of a fruit fly, a bee, or a fish? Maybe at this point mammals are still too much of a leap.
But it does seem like there is a need for a careful analysis of what a system needs in order to be a subject of moral concern. Saying it needs to be conscious isn’t helpful, because there is currently no consensus on the definition of consciousness. Basl and Schwitzgebel mention the capability to have joy and sorrow, which seems like a useful criterion. Essentially, does the system have something like sentience, the ability to feel, to experience both negative and positive affect? Suffering in particular seems extremely relevant.
But what is suffering? The Buddhists seem to have put a lot of early thought into this, identifying desire as the main ingredient, specifically a desire that can’t be satisfied. My knowledge of Buddhism is limited, but my understanding is that they believe we should convince ourselves out of such desires. But not all desires are volitional. For instance, I don’t believe I can really stop desiring not to be injured, or stop desiring to stay alive, and it would be extremely hard to stop caring about friends and family.
For example, if I sustain an injury, the signal from the injury conflicts with my desire for my body to be whole and functional. I will have an intense reflexive urge to do something about it. Intellectually I might know that there’s nothing I can do but wait to heal. But the reflex keeps firing and has to be continuously inhibited, which uses up energy and disrupts rest. This is suffering.
But involuntary desires seem like something we have due to the way our minds evolved. Would we build machines like this (aside from cases where we’re explicitly attempting to replicate animal cognition)? It seems like machine desires could be resolved in a way that primal animal desires can’t: by the system learning that the desire simply can’t be satisfied and letting it go. Once that’s known, it’s not productive for one part of the system to keep needling another part to resolve it.
So if a machine sustains damage, damage it can’t fix, it’s not particularly productive for the machine’s control center to continuously cycle through reflex and inhibition. One signal that the situation can’t be resolved should quiet the reflex, at least for a time. Although it could always resurface periodically to see if a resolution has become possible.
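To make that contrast concrete, here’s a toy sketch in Python (my own illustration; nothing like this appears in the article) of the difference between a desire that fires on every cycle and must be continuously inhibited, and one that is quieted by a single "this can’t be resolved" signal, resurfacing only periodically to re-check:

```python
# Toy illustration: an animal-style desire keeps firing and must be
# inhibited on every cycle; a machine-style desire, once flagged as
# unsatisfiable, stays quiet except for an occasional re-check.

class Desire:
    def __init__(self, name, recheck_interval=100):
        self.name = name
        self.unsatisfiable = False   # set once the system learns it can't be resolved
        self.recheck_interval = recheck_interval

    def should_fire(self, tick):
        if not self.unsatisfiable:
            return True              # animal-style: keep needling until resolved
        # machine-style: stay quiet, but periodically re-check whether
        # a resolution has become possible
        return tick % self.recheck_interval == 0


def run(desire, ticks=300):
    inhibitions = 0
    for t in range(ticks):
        if desire.should_fire(t):
            inhibitions += 1         # energy spent inhibiting the reflex
    return inhibitions


repair_damage = Desire("repair damage")
print("always firing:", run(repair_damage))         # inhibited on every tick

repair_damage.unsatisfiable = True                   # one signal: "this can't be fixed"
print("quieted, periodic re-check:", run(repair_damage))
```

The code itself isn’t the point; the point is that a single state change can replace an endless cycle of reflex and inhibition.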
That’s not to say that some directives might not be judged so critical that we would build them into the system as constant desires. A caregiver’s desire to ensure the well-being of their charge seems like a possible example. But it seems like this is something we would only do judiciously.
Another thing to consider is that these systems won’t have a survival instinct. (Again, unless we’re explicitly attempting to replicate organic minds.) That means the inability to fulfill an involuntary and persistent desire wouldn’t have the same implications for them that it does for a living system. In other words, being turned off or dismantled would not be a solution the system feared.
So, I think we have to be careful with setting up a new regulatory regime. The vast majority of AI research won’t involve anything even approaching these kinds of issues. Making all such research subject to additional oversight would be bureaucratic and unproductive.
But if the researchers are explicitly trying to create a system that might have sentience, then the oversight might be warranted. In addition, having guidelines based on what current research shows about how pain and suffering work, similar to those used for animal research, would probably be a good idea.
What do you think? Is this getting too far ahead of ourselves? Or is it past time something like this was implemented?