As we’ve discussed in recent posts on consciousness, I think imagination has a crucial role to play in animal consciousness. It’s part of a hierarchy I currently use to keep the broad aspects of cognition straight in my mind (sketched in toy code after the list):
- Reflexes, instinctive or conditioned responses to stimuli
- Perception, which increases the scope of what the reflexes are reacting to
- Attention, which prioritizes what the reflexes are reacting to
- Imagination, action-scenario simulations, the results of which determine which reflexes to allow or inhibit
- Metacognition, introspection, self-reflection, and symbolic thought
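To make the hierarchy concrete, here’s a toy sketch of it as an agent loop. Everything in it (the stimuli, the payoffs, the function names) is invented for illustration; it’s a cartoon of the levels, not a model of any real mind or system.

```python
# Toy sketch of the hierarchy as an agent loop. All names and payoffs
# here are invented for illustration.

REFLEXES = {
    "heat": ["withdraw", "freeze"],    # level 1: candidate reflex responses
    "food": ["approach", "ignore"],
}

def perceive(raw):
    """Level 2: widen what the reflexes can react to (here, just classify)."""
    return "heat" if raw > 30 else "food"

def attend(percept):
    """Level 3: prioritize among percepts (trivially pass-through here)."""
    return percept

def imagine(action, stimulus):
    """Level 4: simulate an action's outcome and score it (toy payoffs)."""
    payoffs = {("heat", "withdraw"): 1, ("heat", "freeze"): -1,
               ("food", "approach"): 1, ("food", "ignore"): 0}
    return payoffs[(stimulus, action)]

def step(raw_input):
    stimulus = attend(perceive(raw_input))
    candidates = REFLEXES[stimulus]
    # Level 4 decides which reflex to allow and which to inhibit.
    return max(candidates, key=lambda a: imagine(a, stimulus))

print(step(40))  # -> "withdraw"
print(step(10))  # -> "approach"
```

Note that level 5 has no analogue in this sketch; nothing here inspects or reasons about its own processing.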
Generally, most vertebrate animals are at level 4, although with wide-ranging levels of sophistication. The imagination of your typical fish is likely very limited in comparison to the imagination of a dog or a rhesus monkey.
Computers have traditionally been at level 1, in the sense that they receive inputs and generate outputs. The algorithms between the inputs and outputs can get enormously complicated, but garden-variety computer systems haven’t moved much beyond this point.
However, newer cutting-edge autonomous systems are beginning to achieve level 2 and, depending on how you interpret their systems, level 3. For example, self-driving cars build models of their environment as a guide to action. These models are still relatively primitive, and still hitched to rule-based engines, essentially to reflexes, but it’s looking like that may eventually be enough to allow us to read or sleep during our morning commutes.
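As a sketch of what “a model hitched to a rule-based engine” means here, consider this minimal toy. The sensors, fields, and thresholds are all invented for illustration; real autonomous driving stacks are vastly more elaborate.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """Level 2: a model of the environment, fused from raw sensor inputs."""
    obstacle_distance_m: float
    speed_mps: float

def update_model(lidar_m: float, speedometer_mps: float) -> WorldModel:
    """Perception: turn raw readings into a (trivial) environment model."""
    return WorldModel(obstacle_distance_m=lidar_m, speed_mps=speedometer_mps)

def decide(model: WorldModel) -> str:
    """The rule-based engine: fixed reflexes that now react to the model
    rather than to raw sensor values."""
    if model.obstacle_distance_m < 2.0 * model.speed_mps:  # under 2 s to impact
        return "brake"
    return "maintain_speed"

print(decide(update_model(lidar_m=15.0, speedometer_mps=10.0)))  # -> "brake"
print(decide(update_model(lidar_m=80.0, speedometer_mps=10.0)))  # -> "maintain_speed"
```

The rules are still reflexes; what perception changes is the scope of what they react to.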
But what about level 4? As it turns out, the Google DeepMind people are now trying to add imagination to their systems.
> Researchers have started developing artificial intelligence with imagination – AI that can reason through decisions and make plans for the future, without being bound by human instructions.
>
> Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.
>
> The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven’t been specifically programmed for. Insert your usual fears of a robot uprising here.
I don’t doubt the truth of that last sentence, that this is probably going to freak some people out, such as perhaps a certain billionaire (<cough>Elon Musk</cough>). But it’s worth keeping in mind how primitive this will be for the foreseeable future.
> Despite the success of DeepMind’s testing, it’s still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it’s a promising start in developing AI that won’t put a glass of water on a table if it’s likely to spill over, plus all kinds of other, more useful scenarios.
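To make “imagining the consequences of actions before taking them” concrete, here’s a minimal rollout sketch. To be clear, this is the general idea only, not DeepMind’s actual architecture: their agents learn the transition model from experience, whereas this one is hand-written, and the world is a toy one-dimensional strip with a hazard on it.

```python
def transition_model(state: int, action: str) -> int:
    """Predict the next state: positions on a line, moves of one step."""
    return state + {"left": -1, "right": +1}[action]

def reward(state: int) -> float:
    """Position 3 is a hazard; otherwise, farther right is better."""
    return -10.0 if state == 3 else float(state)

def imagine_and_act(state: int, depth: int = 2):
    """Roll each action forward in the model, score the imagined
    trajectory, and return (best_action, best_value)."""
    if depth == 0:
        return None, 0.0
    best_action, best_value = None, float("-inf")
    for action in ("left", "right"):
        next_state = transition_model(state, action)
        value = reward(next_state) + imagine_and_act(next_state, depth - 1)[1]
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

print(imagine_and_act(1))  # -> ('right', 3.0)
print(imagine_and_act(2))  # -> ('left', 3.0): backs away from the imagined hazard at 3
```

Nothing here ever touches the real world until the simulated trajectories have been compared; that simulate-then-commit loop is the “imagination” being described.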
I’ve argued before why I think the robot uprising concerns are vastly overstated. But putting those reasons aside, we’re a long way from achieving human-level imagination or overall intelligence. People like Nick Bostrom point out that, taking a broad view, there’s not much difference between the intelligence of the village idiot and, say, Stephen Hawking, and that systems approaching human-level intelligence may shoot through the entire human range into superhuman intelligence very quickly.
But there are multiple orders of magnitude difference between the intelligence of a village idiot and a mouse, and multiple orders of magnitude difference between the mouse and a fruit fly. Our systems aren’t at fruit fly level yet. (Think how much better a Roomba might be if it were as intelligent as a cockroach.)
Still, adding imagination, albeit a very limited one, may be a very significant step. I think it’s imagination that provides the mechanism for inhibiting or allowing reflexive reactions, the mechanism that essentially turns a reflex into an affect, a feeling. These machine affect states may be utterly different from what any living systems experience, so different that we may not be able to recognize them as feelings, but the mechanism will be similar.
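Here’s a minimal sketch of that inhibit-or-allow mechanism, under the same caveat as the earlier toys: the reflex, the contexts, and the scores are all invented for illustration. The point is only the shape of the mechanism: a reflex proposes, imagination simulates, and the simulated valence gates whether the reflex fires.

```python
def reflex(stimulus: str) -> str:
    """The raw automatic response: snap at anything food-like."""
    return "snap" if stimulus == "food-like" else "rest"

def simulate(action: str, context: str) -> float:
    """Imagination: predict the valence of the proposed action's outcome."""
    outcomes = {
        ("snap", "safe"): +1.0,
        ("snap", "near-predator"): -5.0,   # imagined bad outcome
        ("rest", "safe"): 0.0,
        ("rest", "near-predator"): 0.0,
    }
    return outcomes[(action, context)]

def act(stimulus: str, context: str, threshold: float = 0.0) -> str:
    proposed = reflex(stimulus)
    # The simulated valence plays the role of the affect state: the
    # reflex no longer fires automatically but is gated by a felt
    # (simulated) evaluation of its consequences.
    if simulate(proposed, context) >= threshold:
        return proposed      # allow the reflex
    return "inhibit"         # veto it

print(act("food-like", "safe"))           # -> "snap"
print(act("food-like", "near-predator"))  # -> "inhibit"
```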
It may force us to reconsider our intuitive sense of what feelings are. Would it ever be productive to refer to machine affect states as “feelings”? Can a system whose instincts are radically different from a living system’s have feelings? Is there a fact-of-the-matter answer to this?