At Nautilus, Joel Frohlich considers how we’ll know when an AI is conscious. He starts off by accepting David Chalmers’ concept of a philosophical zombie, but then makes this statement.
But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have.
He then goes on to describe what I’d call a Turing test for consciousness.
This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know if it’s ethical to unplug such an AI without knowing if it’s conscious, we better start listening for such questions now.
This reasoning seems to rest on a couple of major assumptions.
First is the idea that we’ll accidentally make an AI conscious. I think that is profoundly unlikely. We’re having a hard enough time making AIs that can successfully navigate around houses or road systems, not to mention ones that can simulate the consequences of real-world physical actions. None of these capabilities will arrive without a great deal of deliberate engineering.
The second assumption is that consciousness, like some kind of soul, is a quality a system either has or doesn’t have. We already have systems that, to some degree, take in information about the world and navigate around in it (self-driving cars, Mars rovers, etc.). This amounts to a basic form of exteroceptive awareness. To the extent such systems have internal sensors, they have a primitive form of interoceptive awareness. In the language of the previous post, these systems already have a sensorium more sophisticated than that of many organisms.
But their motorium, their ability to perform actions, remains largely rule-based, that is, reflexive. They don’t yet have the capability to simulate multiple courses of action (imagination) and assess the desirability of those courses, although the DeepMind people are working on it.
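To make that distinction concrete, here’s a minimal Python sketch. Every name in it is hypothetical and the “world model” is a one-line stand-in; the point is only the contrast between a reflexive rule and an agent that imagines outcomes before acting.

```python
# A toy sketch of the distinction above. All names are hypothetical
# illustrations, not any real robotics or DeepMind API.

def reflexive_act(percept: dict) -> str:
    """Rule-based reflex: map the current percept directly to an action."""
    return "turn_left" if percept["obstacle_ahead"] else "move_forward"

def simulate(percept: dict, action: str) -> dict:
    """Stand-in world model: imagine the outcome of an action without performing it."""
    collided = percept["obstacle_ahead"] and action == "move_forward"
    return {"collided": collided, "progress": 0 if collided else 1}

def deliberative_act(percept: dict, candidate_actions: list) -> str:
    """Imagination: simulate each candidate course of action, assess its desirability, pick the best."""
    def desirability(outcome: dict) -> int:
        return -10 if outcome["collided"] else outcome["progress"]
    return max(candidate_actions, key=lambda a: desirability(simulate(percept, a)))

if __name__ == "__main__":
    percept = {"obstacle_ahead": True}
    print(reflexive_act(percept))                                    # turn_left (hard-coded rule)
    print(deliberative_act(percept, ["move_forward", "turn_left"]))  # turn_left (chosen by comparing imagined outcomes)
```

Both agents do the same thing here, but only the second arrives at its action by weighing imagined consequences, which is the capability current systems largely lack.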
The abilities above provide a level of functionality that some might consider conscious, although such a system would still lack aspects that others will insist are crucial. So it might be better described as “proto-conscious.”
For a system to be conscious in the way animals are, it would also have to have a model of self, and care about that self. This self-concern comes naturally to us because having such a concern increases our chances of survival and reproduction. Organisms that lack that instinctive concern tend to be quickly selected out of the gene pool.
But for the AI to ask about its own consciousness, its model of self would need to include another model that monitors aspects of its own internal processing. In other words, it would need metacognition, introspection, self-reflection. Only once that is in place will it be capable of pondering its own consciousness, and be motivated to do so.
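As a rough illustration of what that extra layer adds, here’s a toy Python sketch (hypothetical names, nothing like a real implementation): the agent’s self-model is just a log of its own decisions, and introspection is a report on that log rather than on the external world.

```python
# A toy sketch of metacognition as described above: a second-order record of
# the agent's own processing that it can report on.

class IntrospectiveAgent:
    def __init__(self):
        self.trace = []  # the agent's record of its own internal processing

    def decide(self, percept: dict) -> str:
        """First-order processing: choose an action from a percept."""
        action = "turn_left" if percept.get("obstacle_ahead") else "move_forward"
        # Second-order monitoring: log the decision as an event about the agent itself.
        self.trace.append({"percept": percept, "action": action})
        return action

    def introspect(self) -> str:
        """Report on the agent's own recent processing, not on the external world."""
        if not self.trace:
            return "I haven't decided anything yet."
        last = self.trace[-1]
        return f"I chose {last['action']} because I perceived {last['percept']}."

agent = IntrospectiveAgent()
agent.decide({"obstacle_ahead": True})
print(agent.introspect())  # I chose turn_left because I perceived {'obstacle_ahead': True}
```

A real metacognitive system would obviously need far richer self-models than a decision log, but the shape is the same: processing about the system’s own processing, available for it to reflect on.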
These are not capabilities that will come easily or by accident. There will likely be numerous prototype failures that are near but not quite there. This means we’re likely to see increasingly sophisticated systems over time that trigger our intuition of consciousness more and more strongly. We’ll suspect these systems of being conscious long before they have the capability to wonder about their own consciousness, and we’ll be watching for signs of this kind of self-awareness as we try to instill it, like a parent listening for their child’s first successful utterance of a word (or, depending on your attitude, Frankenstein watching for the first signs of life in his creation).
It’s also worth wondering how prevalent systems with a sense of self will be. Certainly they will be created in labs, but most of us won’t want cars or robots that care about themselves, at least beyond their usefulness to their owners. And given all the ethical concerns with full consciousness, and the difficulties in accomplishing it, I think the proto-conscious stage is as far as we’ll bring common everyday AI systems, a stage that makes them powerful tools, but keeps them tools rather than slaves.
Unless, of course, I’m missing something?