Adding imagination to AI

As we’ve discussed in recent posts on consciousness, I think imagination has a crucial role to play in animal consciousness.  It’s part of a hierarchy I currently use to keep the broad aspects of cognition straight in my mind.

  1. Reflexes, instinctive or conditioned responses to stimuli
  2. Perception, which increases the scope of what the reflexes are reacting to
  3. Attention, which prioritizes what the reflexes are reacting to
  4. Imagination, action scenario simulations, the results of which determine which reflexes to allow or inhibit
  5. Metacognition, introspection, self reflection, and symbolic thought
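
For the programmers out there, here's a minimal sketch of how I picture these levels stacking. It's purely illustrative; the class and method names are my own invention, not any real architecture, and the "models" are just placeholders.

```python
# A toy sketch of the hierarchy above. Names and structures are illustrative only.

class LayeredAgent:
    def __init__(self, reflexes, simulate):
        self.reflexes = reflexes    # level 1: functions mapping a stimulus to a candidate action
        self.simulate = simulate    # level 4: function scoring an action's imagined outcome

    def perceive(self, raw_input):
        # Level 2: build a (very crude) model of the environment from raw input.
        return list(raw_input)

    def attend(self, percepts):
        # Level 3: prioritize; here we just keep the first (most salient) percept.
        return percepts[:1]

    def act(self, raw_input):
        salient = self.attend(self.perceive(raw_input))
        candidates = [reflex(s) for s in salient for reflex in self.reflexes]
        # Level 4: imagine each candidate's outcome and allow only the best one.
        return max(candidates, key=self.simulate) if candidates else None

agent = LayeredAgent(
    reflexes=[lambda s: ("approach", s), lambda s: ("flee", s)],
    simulate=lambda action: 1.0 if action[0] == "approach" else 0.0,
)
print(agent.act(["food smell"]))   # -> ('approach', 'food smell')
```

Level 5 would be something that observes and models the agent's own deliberations, which is conspicuously missing from this sketch.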

Generally, most vertebrate animals are at level 4, although with wide-ranging levels of sophistication.  The imagination of your typical fish is likely very limited in comparison to the imagination of a dog, or a rhesus monkey.

Computers have traditionally been at level 1 in the sense that they receive inputs and generate outputs.  The algorithms between the inputs and outputs can get enormously complicated, but garden variety computer systems haven’t got much beyond this point.

However, newer cutting-edge autonomous systems are beginning to achieve level 2, and depending on how you interpret their systems, level 3.  For example, self-driving cars build models of their environment as a guide to action.  These models are still relatively primitive, and still hitched to rules-based engines, essentially to reflexes, but it’s looking like that may eventually be enough to allow us to read or sleep during our morning commutes.

But what about level 4?  As it turns out, the Google DeepMind people are now trying to add imagination to their systems.

Researchers have started developing artificial intelligence with imagination – AI that can reason through decisions and make plans for the future, without being bound by human instructions.

Another way to put it would be imagining the consequences of actions before taking them, something we take for granted but which is much harder for robots to do.

The team working at Google-owned lab DeepMind says this ability is going to be crucial in developing AI algorithms for the future, allowing systems to better adapt to changing conditions that they haven’t been specifically programmed for. Insert your usual fears of a robot uprising here.

I don’t doubt the truth of that last sentence, that this is going to probably freak some people out, such as perhaps a certain billionaire (<cough>Elon Musk</cough>).  But it’s worth keeping in mind how primitive this will be for the foreseeable future.

Despite the success of DeepMind’s testing, it’s still early days for the technology, and these games are still a long way from representing the complexity of the real world. Still, it’s a promising start in developing AI that won’t put a glass of water on a table if it’s likely to spill over, plus all kinds of other, more useful scenarios.
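
To make the glass-of-water example a bit more concrete, here's a rough sketch of what "imagining the consequences of actions before taking them" can look like computationally: run each candidate action through a predictive model of the environment and only commit to the action whose simulated outcome scores best. This is a toy with made-up names and a hand-coded model, not DeepMind's actual system.

```python
# Toy "imagine before you act" loop. All names and the world model are made up.

def imagine(state, action, predict):
    """Predict the next state for an action without actually executing it."""
    return predict(state, action)

def choose_action(candidates, state, predict, score):
    """Pick the candidate whose imagined outcome scores highest."""
    return max(candidates, key=lambda a: score(imagine(state, a, predict)))

# Example: don't place the glass where the model predicts a spill.
world = {"table_tilt": 0.3}
predict = lambda s, a: {"spilled": a == "place_glass" and s["table_tilt"] > 0.2}
score = lambda outcome: -1.0 if outcome["spilled"] else 1.0
print(choose_action(["place_glass", "hold_glass"], world, predict, score))  # -> 'hold_glass'
```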

I’ve argued before why I think the robot uprising concerns are vastly overstated.  But putting those reasons aside, we’re a long way from achieving human level imagination or overall intelligence.  People like Nick Bostrom point out that, taking a broad view, there’s not much difference between the intelligence of the village idiot and, say, Stephen Hawking, and that systems approaching human level intelligence may shoot through the entire human range into superhuman intelligence very quickly.

But there are multiple orders of magnitude difference between the intelligence of a village idiot and a mouse, and multiple orders of magnitude difference between the mouse and a fruit fly.  Our systems aren’t at fruit fly level yet.  (Think how much better a Roomba might be if it was as intelligent as a cockroach.)

Still, adding imagination, albeit a very limited one, may be a very significant step.  I think it’s imagination that provides the mechanism for inhibiting or allowing reflexive reactions, that essentially turns a reflex into an affect, a feeling.  These machine affect states may be utterly different from what any living systems experience, so different that we may not be able to recognize them as feelings, but the mechanism will be similar.

It may force us to reconsider our intuitive sense of what feelings are.  Would it ever be productive to refer to machine affect states as “feelings”?  Can a system whose instincts are radically different from a living system’s have feelings?  Is there a fact-of-the-matter answer to this?

21 thoughts on “Adding imagination to AI”

  1. I think imagination is what distinguished Homo sapiens from the other great apes. And I do not imagine that this is an unconquerable goal. Consider the chess-playing computer. What is imagination other than consideration of the possible futures moving away from the current state of the board (or some imagined future state of the board)? These computers do not contemplate all possible chess moves in all possible situations; they begin from the here and now and imagine outcomes along the lines of I go, he goes, I go, but if … etc. This seems to me to be the beginning of an imagining function.
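
    In code terms, what I have in mind is something like this minimal depth-limited lookahead (illustrative only; the move generator and evaluation function stand in for a real chess engine):

    ```python
    # "I go, he goes, I go, but if..." as a depth-limited lookahead. Illustrative only.

    def lookahead(state, depth, my_turn, moves, apply_move, evaluate):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)           # the imagined value of this future position
        futures = [lookahead(apply_move(state, m), depth - 1, not my_turn,
                             moves, apply_move, evaluate)
                   for m in legal]
        return max(futures) if my_turn else min(futures)
    ```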

    1. I can see that. All the dynamics seem much simpler (relatively speaking) for a chess program, since chess is a predefined system with predefined moves, but I agree that the principle may be the same. The difficulty seems to be in trying to do this with real world environments, which are messy and unstructured, and with multiple, at times competing, goals.

      1. Chess has simple, structured rules, but I believe that a big part of human intelligence is geared to imposing an imagined structure on the world. For example, I look around the office and see chairs and desks. I don’t see objects with complex shapes (unless I look hard at the surfaces of the objects). The imagined environment that we inhabit is much simpler than the real world, and that’s key to our ability to function.
        (By the way, I agree with your thesis. I am not trying to contradict anything.)

        1. I think that’s an insightful observation. We reduce the world to structures, to models, which are themselves simplifications of the reality. They’re effective because, simplified as they are, they’re predictive for most of our purposes.

          This reminds me of a discussion I had years ago about whether spheres actually exist outside of human imagination. There are many things that are spheroidal, but arguably no spheres “out there”, although Platonists might disagree.

            1. Because of the role imagination plays, we construct a “reality” in our minds and then map sensory data onto it. So, that complex shape becomes “chair.” As I mentioned, the ability to trace lines of change based upon parameters in existence is the beginning of an imagining function. We ain’t there yet. But remember back when chess computers used brute force to function? When computers were taught (aka programmed) to learn on their own by playing whole games in their “minds”, the contest between chess master and chess computer was over. I would suppose that as AIs develop, similar progress can be made by asking the computer to “Imagine” (cue John Lennon).

  2. Hi Mike,

    You wrote: “I think it’s imagination that provides the mechanism for inhibiting or allowing reflexive reactions, that essentially turns a reflex into an affect, a feeling.”

    It’s a really interesting line and I was wondering if you could elaborate a little. It sounded as though you picture feelings as interrupted reflex responses somehow? Do I have that correct?

    Also, on imagination as it is described here, would you agree human beings exhibit both conscious and unconscious processes of imagination? Or do you think, for instance, that my ability to walk down an unfamiliar street without tripping, or to set a glass down on an unfamiliar table without spilling it, while focused on a conversation for instance, is because those processes no longer require imagination on my part, e.g. what once required imagination is now simply governed by rules?

    Another example of imagination that is not highly conscious. You’re in a conversation and someone says something innocuous, the consequences of which undermine your favored position. But it’s not entirely obvious without really thinking about it. And yet… you sense something is amiss. You resist. You probe. Is there an unconscious imagination that dashes through the consequences and senses this subtle threat? Or the opposite. You attempt to be unbiased but say something with a very subtle bias. It just comes out without any conscious forethought.

    And the difficult question then: would an AI have both conscious and unconscious processes of imagination? How would we attempt to define the difference?

    Michael

    1. Hi Michael,
      Excellent questions!

      On feelings, it occurred to me after hitting Publish that I should have expanded that point a bit. Consider the following sequences:
      An organism receives stimulus A, which causes reflex A that results in action A.
      It later receives stimulus B, which causes reflex B, that results in action B.
      So far, this is a stimulus / reflex driven system.

      But suppose the organism receives stimulus A and B simultaneously, triggering reflexes A and B. But reflexes A and B conflict with each other. They can’t both be done, at least not immediately. An extra step is needed. The organism needs to consider its options; it needs to simulate various courses of action to decide what to do next.

      This extra step makes the reflexes no longer reflexes, but turns them into urges, dispositions, emotions, affects, or what we commonly call feelings. Put another way, a feeling is the communication of the inhibitable reflex to the imaginative simulation engine, both to initially spur the simulations, and then to assess the desirability of each one.
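
      If it helps, here's that sequence as a toy sketch (names entirely made up): a lone triggered reflex just fires, but conflicting reflexes become competing candidates that an imaginative step has to adjudicate, and it's at that point I'd start calling them urges or affects rather than reflexes.

      ```python
      # Toy version of the stimulus A / stimulus B conflict above. Illustrative only.

      def respond(stimuli, reflex_table, simulate_outcome):
          triggered = [reflex_table[s] for s in stimuli if s in reflex_table]
          if len(triggered) <= 1:
              return triggered[0] if triggered else None   # plain reflex: just act
          # Conflict: the triggered reflexes become competing urges ("affects"),
          # and an imaginative step decides which to allow and which to inhibit.
          return max(triggered, key=simulate_outcome)

      reflexes = {"stimulus_A": "action_A", "stimulus_B": "action_B"}
      imagined_value = {"action_A": 0.4, "action_B": 0.9}
      print(respond(["stimulus_A", "stimulus_B"], reflexes, imagined_value.get))  # -> 'action_B'
      ```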

      “Also, on imagination as it is described here, would you agree human beings exhibit both conscious and unconscious processes of imagination?”

      Yes! Imagination can be unconscious. It’s worth noting here how we distinguish between the conscious and the unconscious in human cognition: whether we can introspect it. Introspection may be a subset of the simulation engine, but it isn’t always triggered. What causes it to be triggered? I’m not sure, but I suspect it has something to do with the nature of the simulation, including, perhaps, how many resources and how much effort it is taking.

      On walking and setting a glass down on a table, I’d say in most cases, for an adult, that it doesn’t require imaginative simulations. The processing has been delegated to more habitual processing. At least until something novel happens, such as a wobbly table that “wakes up” the imaginative simulation engine to run a simulation on whether the drink should be put there.

      And, of course, I’m oversimplifying, because the allowing or inhibiting of a reflex can itself become habitual. The frontal lobes can delegate the allowance or inhibition of reflexes from the mid-brain regions to the basal ganglia. When this happens, it seems to be outside of our introspective scope. But forget this paragraph if it’s confusing.

      “And the difficult question then: would an AI have both conscious and unconscious processes of imagination? How would we attempt to define the difference?”

      I suspect a lot depends on how we architect it. Do we give it introspective access to every aspect of its processing? If so, then no, it wouldn’t have unconscious processes. Or do we restrict it in a manner similar to how our metacognition is constrained? (Maybe a fully introspective mind would be too weighed down by that much reflection.) If we do, then anything that happens outside of the metacognition would be in the unconscious.

      Interestingly, this might be more difficult than it sounds. Consider that a significant problem right now with current deep learning systems is that they can’t explain how they reach their conclusions. They don’t have access to their own modeling. They currently have no introspection at all, although there are attempts to add mechanisms to provide something like it.
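
      A crude way to picture that architectural choice (entirely hypothetical, not any real deep learning API): give the system an introspection interface that only exposes a chosen subset of its internal state. Whatever falls outside that subset would be, functionally, its unconscious.

      ```python
      # Hypothetical sketch of restricted introspective access. Invented names throughout.

      class IntrospectiveAgent:
          def __init__(self):
              self._state = {
                  "simulation_trace": [],   # deliberations we choose to expose
                  "attention_focus": None,  # also exposed
                  "weight_updates": [],     # low-level learning details kept hidden
              }
              self._introspectable = {"simulation_trace", "attention_focus"}

          def introspect(self):
              # The agent can only report on the exposed subset of its own processing;
              # everything else stays functionally "unconscious" to it.
              return {k: v for k, v in self._state.items() if k in self._introspectable}
      ```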

  3. If metallurgists develop an alloy of elements other than atomic number 79 into a yellow, heavy, non-oxidizing, malleable metal, is there a fact of the matter whether that’s gold? Yes, yes there is such a fact. Now consider “feelings” and in particular “fear”. Well “fear” may be relevantly like “gold”, in that there may be a unified neural explanation for all the paradigm cases of fear – and a computer which avoids certain stimuli probably won’t use that exact same neural structure/process. So it doesn’t feel fear. But “feeling” is a general category – more like “element” than “gold”. So I think the computers you’re envisioning would have feelings – we just wouldn’t be able to relate to them easily. All we could do, to understand the feeling, is map the input-output relations.

    1. That’s an interesting comparison. Gold has a precise chemical definition (79 protons), so we can definitely say whether the new material is it.

      But “feeling” to me seems more like the concept of a file system in a computer operating system. There are many possible implementations of file systems. It takes judgment to decide if a particular implementation counts as an example of a file system. But among knowledgeable programmers, it wouldn’t be a controversial judgment in most cases.

      Still, we could imagine a file system so radically different from anything that has been done before, that the judgment might be one that would split experts. What then? Is the existence of the file system a fact of the matter?

      That said, I can see your stance. If the implementation of feelings in a system is functionally similar enough to the ones in animals, it might not be controversial to call them feelings, even if the underlying details are very different, and the individual feelings are utterly different than ours, such as maybe an inhibitable desire to obtain the latest software patches.

  4. Very thought-provoking post as usual, Mike. I look forward to our progress in AI but (as you may know) I strongly incline against notions of such systems as DeepMind having imagination, and even more so of them having affective states. But I’ll leave a full analysis of my contentions for a later post 🙂
    Cheers

    1. James,
      Interesting. I only skimmed it, but from what I could see, his account doesn’t seem that different from mine either. He’s using the Shannon definition, which I don’t object to, and seems to see the meaning of information being about how a system uses it. I’m not inclined to use the word “interpret” the way he does, but definitional differences generally don’t excite me that much.

      The problem with definitions, of course, is that he can propose them, and people can reject them. Ultimately, definitions are what we agree for them to be. They are utterly relativistic. The only thing we can productively discuss is to what extent they meet most people’s intuitive sense of the word. (Which can often quickly be resolved by looking at quality dictionaries.)

      Thanks for sharing it! Were there any other points you wanted to discuss?

      1. Well, now that you ask … 🙂

        The significance of Haig’s paper is not the definitions but the framework, and especially the explanatory power of that framework. That framework explains where meaning comes from. That framework explains your levels. That framework explains why a monolithic “consciousness” is an illusion, and why that which you call “subconscious” is actually “consciousness” belonging to a subsystem whose input is not available to the system identified with the narrative or autobiographical self.

        So based on that framework, you could add a level 0: responses that are neither instinctive nor learned. You could also explain exactly what “perception” is, namely, the creation of percepts, i.e., symbolic signs.

        And now I would suggest maybe “imagination” is the ability to create concepts, and a concept is a percept which is a combination of two or more different percepts. I would also suggest that your levels 4 and 5 are essentially the same and differ only in matter of degree.

        So where do you want to start?

        *

        1. Hmmm. Well, the last point is the one I’m most curious about, that level 4 and 5 are essentially the same.

          I do see metacognition as being a component, or perhaps an enhancement, of imagination, since imaginative simulations seem to be introspection’s chief (perhaps only) scope of concern. Metacognition seems to be a sort of feedback mechanism, a way for the mind to model and simulate its own processing, to a limited degree. This seems borne out by the neuroscience I’ve read, which centers both capabilities in regions of the prefrontal cortex.

          But metacognition, introspection, self-reflectivity seems like a distinct capability apart from running action / sensory simulations. This capability has only been scientifically demonstrated in a few species, while the broader capability seems pervasive in the animal kingdom, albeit with widely varying levels of sophistication.

          How do you come to the conclusion that the two are the same?

          1. Hmmm, on further consideration I would like to retract that suggestion. That conclusion was based on the idea that the significant innovation at level 4 was the creation of concepts, and the features in level 5 involve just specific kinds of concepts, but I can accept that the emergence of a new kind of concept, e.g., a self-referential concept, can instigate a new level.

            FWIW, I am taking Haig’s paper as an independent “discovery”, and so validation, of the framework I developed for myself, which means only now am I thinking about how this framework applies to things like imagination and the higher level functions. I think I can expect false starts, red herrings, etc.

            *

    2. No worries James. I know where you’re at. I’d love to hear your speculation, particularly if it points to changes in my way of understanding this stuff.

      The concepts idea is interesting. I was listening to a podcast today with an interview of a neuroscientist who pointed out that concepts are predictions. So, concepts are perceptions which are predictions. But simulations are also predictions. Heck, it’s all prediction, which is what the brain does. It can make you throw up your arms and label the entire thing nothing but a collection of prediction circuits.

      But I think there are productive distinctions. We have perceptions, sensory models. And perceptions can be chained together into composite concepts, such as a dog. A dog is a collection of visual patterns, as well as sound ones, smells, touch sensations, etc, typically collected into multi-modal concepts in association cortices.

      But imaginative simulations are time sequenced concepts, such as an episodic memory, or a simulation of what might happen if we take a certain action. But crucially, the simulation engine in the frontal lobes farms the details of each concept in the simulation back to the perception machinery in the parietal, temporal, and occipital lobes, making the simulations a neocortex wide endeavor.
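
      In data structure terms, I picture it roughly like this (a sketch with invented names, not a claim about actual neural coding): a concept bundles percepts across modalities, and an imaginative simulation is an ordered sequence of concepts played forward in time.

      ```python
      # Rough sketch of percepts, multi-modal concepts, and time-sequenced simulations.
      # Invented names; not a model of actual cortical coding.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Percept:
          modality: str      # "visual", "auditory", "olfactory", ...
          pattern: str       # stand-in for the actual sensory pattern

      @dataclass
      class Concept:
          name: str
          percepts: List[Percept] = field(default_factory=list)   # multi-modal bundle

      @dataclass
      class Simulation:
          frames: List[Concept] = field(default_factory=list)     # concepts sequenced in time

      dog = Concept("dog", [Percept("visual", "furry shape"), Percept("auditory", "bark")])
      walk = Simulation([Concept("leash"), dog, Concept("park")])  # an imagined episode
      ```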

      So, you could view it all as prediction functionality, and some people do, but I see value in understanding the distinctions, at least for now. (I might feel differently as I learn more.)

      1. Some people like to think of the brain as a prediction machine. Others think of it as a simulation machine (ahem). And yet others think of it as an analogy machine. As I mentioned in a different thread, I’m in the last group.

        I think the analogy view is simpler, because it’s all just memories, more or less. Activation of a concept leads to an activation of the percepts that made that concept, albeit in less vivid detail than the original percepts. By the way, Chris Eliasmith demonstrated exactly this in his simulated neural networks.

        The prediction/simulation ideas seem more complicated to me. For example, you just referred to the “simulation engine in the frontal lobes [which] farms … details [to other parts of the cortex]”. How would the simulation engine work? What details would be sent to other cortical areas? In contrast, if the frontal lobes simply generate a concept, say the purchase of a dog, and the activation of this concept activates the separate concepts of purchase and dog, and the activation of those concepts activate various other concepts, you can see how this would light up the various regions of the cortex which were simply where the original percepts and concepts were created in the first place.

        *

        1. I think there are multiple productive ways of looking at the underlying reality. Antonio Damasio looks at it in terms of neural networks that are activated for each concept. When activated concurrently on a consistent basis, the networks form connections between them, to the extent that activating a small part of the network can lead to retro-activation of the other parts. He calls the nexus points for these networks CDZs (convergence-divergence zones).

          The CDZs and associated networks seem like the underlying reality for concepts. And activating one can lead to cascades that activate others. In other words, activating one concept can lead to activation of associated concepts, including action oriented ones if the connections carry over into the frontal lobes.

          In my mind, this could be said to describe everything that’s happening, but so would saying everything happens in terms of particle or chemical interactions, or describing the operations of Microsoft Windows in terms of transistor states. We can also talk about it at the level you describe, which is a notch higher in the layers of abstraction.

          The reason I find it productive to describe it at the layer I am, essentially at the data processing or functional architecture layer, is that it gives possible answers to the question of why these patterns of activity evolved, why they were naturally selected over other patterns of activity. Calling them “simulations”, “models”, and “predictions” is a quick way to communicate the adaptiveness of what is going on. It also gives a possible hint on what the technological equivalents might be.

          As to how exactly the simulations work, even though we know a lot about the lowest level and highest level details, no one yet understands all the layers of abstraction in between. If DeepMind or other AI researchers succeed, it might give neuroscientists clues on what to look for. Of course, if neuroscientists first figure out how it works in brains, they will have largely done the AI researchers’ work for them. In reality, AI researchers and computational neuroscientists will likely feed off each other’s incremental discoveries.
