Add feelings to AI to achieve general intelligence?

Neuroscientists Kingston Man and Antonio Damasio have a paper out arguing that the way to get artificial intelligence (AI) to the next level is to add in feelings.

“Today’s robots lack feelings,” Man and Damasio write in a new paper (subscription required) in Nature Machine Intelligence. “They are not designed to represent the internal state of their operations in a way that would permit them to experience that state in a mental space.”

So Man and Damasio propose a strategy for imbuing machines (such as robots or humanlike androids) with the “artificial equivalent of feeling.” At its core, this proposal calls for machines designed to observe the biological principle of homeostasis. That’s the idea that life must regulate itself to remain within a narrow range of suitable conditions — like keeping temperature and chemical balances within the limits of viability. An intelligent machine’s awareness of analogous features of its internal state would amount to the robotic version of feelings.
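
To make that concrete, here is a minimal sketch, in Python, of the kind of internal-state monitoring the proposal starts from. It's my own toy illustration, not code from the paper; the variable names, ranges, and the simple linear penalty are all assumptions. The idea is just that the machine tracks its own operating variables and converts deviations from viability into a signed "valence" signal that could then drive behavior.

```python
# Toy homeostatic monitor (hypothetical names, ranges, and penalty function).
# Each internal variable has a viable range; drifting outside it produces a
# negative "valence" signal that a control policy could use to set priorities.

VIABLE_RANGES = {
    "battery_charge": (0.2, 1.0),    # fraction of a full charge
    "core_temp_c":    (10.0, 45.0),  # degrees Celsius
}

def valence(state: dict) -> float:
    """Return 0.0 when every variable is in its viable range, and an
    increasingly negative number the further any variable drifts out."""
    total = 0.0
    for name, (low, high) in VIABLE_RANGES.items():
        value = state[name]
        if value < low:
            total -= (low - value) / (high - low)
        elif value > high:
            total -= (value - high) / (high - low)
    return total

# A low battery plus mild overheating yields a negative valence, which the
# machine could treat as a push to seek a charger and shed heat.
print(valence({"battery_charge": 0.1, "core_temp_c": 50.0}))
```

Man and Damasio are proposing something much richer than a reflexive monitor like this, including soft robotic bodies with distributed sensing, but the basic loop of representing the internal state and acting to keep it viable starts at about this level.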

Such feelings would not only motivate self-preserving behavior, Man and Damasio believe, but also inspire artificial intelligence to more closely emulate the real thing.

One of the biggest challenges in AI is figuring out how to generalize the lessons learned in specialized neural networks for use in other tasks.  Humans and animals do it all the time.  In that sense, Man and Damasio’s proposition is interesting.  Maybe having the system start with its own homeostasis would provide a foundation for that generalization.

On the other hand, I’ve often said I don’t worry too much about the dangers of AI because AIs wouldn’t have their own survival instinct.  Giving them one seems like it would open the door to those dangers.  Man and Damasio have a response to that.  Give it empathy.

“Stories about robots often end poorly for their human creators,” Man and Damasio acknowledge. But would a supersmart robot (with feelings) really pose Terminator-type dangers? “We suggest not,” they say, “provided, for example, that in addition to having access to its own feelings, it would be able to know about the feelings of others — that is, if it would be endowed with empathy.”

And so Man and Damasio suggest their own rules for robots: 1. Feel good. 2. Feel empathy.

Well, maybe, but as the Science News author notes, that seems optimistic.  It also raises the danger that rather than building a set of tools motivated to do what we want them to do, we might be creating a race of slaves, survival machines forced to do our bidding.  Both the danger and the possibility of slavery make me uneasy.

I’m also not entirely sure I buy the logic that putting feelings in will necessarily lead to general intelligence.  It seems more likely that it will just lead these systems to behave like animals.  Untold numbers of animal species evolved on Earth before one capable of complex abstract thought came along, and we seem far from inevitable.

Still, exploring in this direction might provide insights into human and animal intelligence and consciousness.  But it also makes John Basl’s and Eric Schwitzgebel’s concern about AI welfare seem more relevant and prescient.

47 thoughts on “Add feelings to AI to achieve general intelligence?”

  1. Being an amateur philosopher, I may be mistaken, but I assume you are talking about “inner feelings” as opposed to “outer feelings,” such as a feeling of pain or pleasure coming from one’s skin. Are not the inner feelings manifestations of the outer ones, in that, within the simulacrum of reality we create in our minds, we can create sources of love and whatnot with no outward physical stimulation? Therefore would not these AIs need outward senses to build a base of outer “feelings” in order to construct an inner world of “feelings”?

    1. I think they’re talking about any kind of valenced feelings. So pain, pleasure, hunger, anger, fear, etc., are all included.

      The AIs would definitely need sensory capabilities. The paper discusses progress in soft robotics and embedding sensors in the “skin” of the system, allowing it to be more aware of and respond in more detail to its environment.

  2. The more I think about it, the more I conclude that doing as the authors describe [based on above, haven’t read paper] might be the most heinous thing mankind could do. Sounds hyperbolic but the ramifications are enormous. Creating intelligent things that can suffer, and for no good reason.

    Intelligence does not depend on homeostasis. Homeostasis was the motivation to develop intelligence in our case, because that’s how natural selection works. We have our own motivation for creating intelligence.

    The next step for general intelligence is to have the pattern recognizers recognize space and time, i.e., be able to track objects and events. From that you can get the concepts of object permanence and causation.

    *

    1. I agree that intelligence doesn’t depend on homeostasis. But one thing I wonder about is, does suffering? Normally we as organic beings suffer when our homeostasis is threatened.

      But suppose our deepest desire, as, say, a maintenance bot, was to maintain a bridge, and for whatever reason, that wasn’t possible. Wouldn’t that lead to a type of suffering? Or is suffering something that can only relate to homeostasis? If so, why?

      Or is this a case where our ability to empathize will be limited? Might we have a situation where an AI is suffering, but we’re callous to it, because it isn’t our type of suffering?

      (Granted, the maintenance bot may have the ability to silence its desire with the observation that, at least for the time being, it’s impossible to satisfy. Although I could see a case where a designer may not want to give it that discretion, to ensure it keeps trying, just in case its assessment is wrong.)

      On pattern recognizers for space and time, that sounds interesting. Are you reading that somewhere, or is it your own conclusion? Just curious.

      1. I take your point that it’s not so much homeostasis as just stasis. But I think we only get into suffering when we tie a “feeling” of valence to it, which the authors seemed to be suggesting. If the robot’s goal is not so much to effect the repairs as to stop the “feeling” it gets when repairs are needed, then it can suffer.

        On pattern recognizers recognizing causation, that’s mainly my idea, but personal history shows that the very fact I’m thinking about it means that capable people have been working on it for several years. Also, Judea Pearl (The Book of Why) is thinking along those lines, and I think the AIs winning in video game environments have to be doing this to some extent. Also, the neuroscientists are still working out how people do it with grid cells, etc., getting closer all the time. I think one path is connecting what grid cells do to semantic pointers.

        *
        [as if I know squat about grid cells]

      2. On further thought, I think the definition of suffering is to have a goal but be frustrated in its accomplishment. So if your goal is to lose twenty pounds but you just can’t seem to get there, you are suffering. It’s just that when the goal is to stop a “feeling”, the suffering seems much worse.

        *

        1. I actually preferred your first definition of suffering. If I have a casual goal to stop at the grocery store on the way home, but can’t for some reason, it seems strange to say I’m suffering because of it. On the other hand, if I have a deep desire to go to the grocery store and can’t, and the desire continues nonetheless, maybe even intensifies, then I can see that as suffering.

          I know what you mean on coming up with ideas. Usually someone has already thought about them. (Sometimes it’s been thought about and already dismissed.)

          BTW, I’m currently reading Melanie Mitchell’s ‘Artificial Intelligence’. In the chapter I’m in, she describes a concept called a “word vector” which sounds a lot like the semantic pointer concept. It’ll be interesting to see if she later generalizes it to that concept.
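
          For anyone unfamiliar, a word vector is just a list of numbers assigned to a word, learned so that words used in similar contexts end up with similar lists, which lets similarity be measured numerically. Here’s a rough sketch with made-up three-dimensional vectors; real ones are learned from large corpora and run to hundreds of dimensions.

```python
import math

# Made-up 3-dimensional word vectors, purely for illustration.
vectors = {
    "king":  [0.9, 0.7, 0.1],
    "queen": [0.9, 0.6, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means similar usage."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high, ~0.99
print(cosine_similarity(vectors["king"], vectors["apple"]))  # low, ~0.30
```

          At least that’s the part that sounds semantic-pointer-like to me: meaning as a point in a high-dimensional vector space.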

  3. This understanding of feelings seems painfully simplistic.

    If you go back to the late Freud, he hypothesized a death instinct which to a large degree was a tendency to return to earlier states, to homeostasis, to the womb, to death. These aren’t exactly equivalent but they are similar or related. He came upon the idea in analysis of various compulsive disorders when an individual seeks to repeat behavior, sometimes actually painful behavior. However, the opposite of the death instinct was a life instinct or Eros which, of course, is related to sexuality and pleasure but more broadly is an urge to life and unification. The instincts merge and interact in complex ways as can be seen in sexual compulsive disorders.

    A lot of this is in Beyond the Pleasure Principle which is some interesting reading if you have the time.

    Bottom line is that feelings are complex and can manifest in opposite tendencies. We seldom have either unequivocal love or hate. We seldom want either complete homeostasis or change. We usually want complicated mixtures.

    FYI: This post doesn’t seem to show on my WordPress reader feed. Wasn’t that some sort of caching problem?

    1. I don’t know if anyone is arguing that we don’t have conflicting impulses and feelings. Or that a particular feeling isn’t a complex thing. I actually see them as representations of lower level impulses along with predictive consequences. And I think a major reason for cognition is to resolve all those conflicts.

      On the reader, thanks for letting me know. I noticed that the social media postings initially looked wrong, but they later resolved. Are you still not seeing it? WP did do something with the cache a while back, but the problem has come up sporadically since then. When I report it, they can typically see the post themselves, so they won’t do anything without getting into the account of the person who isn’t seeing it. If you keep seeing missing posts, please let me know.

      1. I’m not seeing it in the Reader, either, and I cleared my cache and cookies to make sure it wasn’t on my end. Oddly, the post shows up in the RSS feed.

        Even weirder, in the Reader, if I click on your site header, it takes me to a list of your posts and the new post does show up there.

        Definitely some bugs in the Reader. I don’t like the way it renders pages, either.

          1. Thanks! On what they said, here’s the body of the reply:

            I have forced this post to show up on the Reader feed and I can see it showing up. It looked to be a temporary issue and we’ve fixed it by forcing the post on the Reader feed from our end. Could you have someone check and confirm as well?

          2. Yeah, I suspect this will be back again at some point, at least until the bug itself gets fixed. The fact that it shows up in the RSS feed makes me think the issue is in the Reader itself.

          3. Thanks!

            In the old days, I sometimes posted two or three times a day. I’ve noticed it seems to happen when I have a lot of draft saves, which I had on the previous post, but not this one. I wonder if there’s a way to just clear those old drafts.

      2. I don’t think it is just a matter of conflicting or complex emotions. Frequently humans deliberately and consciously make bad choices, engage in self-sabotage, seek out the painful. The reasons are complex and not always understood but often the history of the individual, especially the childhood and traumatic experiences, will be involved but still the events are seldom deterministic. Different individuals going through the same or similar experiences may react to them differently and end up in different places as far as future behavior.

        1. There’s no doubt it will be complicated. The trick may be to start small.

          BTW, if you follow the link to the paper from the Science News article itself, it might let you in. (The link in my quote of the article won’t work.)

  4. As others have pointed out, sensing the operation of the system and adjusting to it already exists in simple forms. CPUs can slow down if overheating, and car engines adjust to driving conditions.

    Certainly the computer doesn’t think to itself, “Damn, I’m hot! I need to slow down.” It’s, as you say, a kind of reflexive behavior. If this, then that.
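
    Something like this toy throttle rule (made-up numbers; real firmware is more involved) is the entire extent of the “awareness” involved:

```python
# Toy "if this, then that" thermal reflex with hypothetical values.
# No representation of the state is kept, and nothing "feels" hot;
# the rule just maps a temperature reading to a clock speed.
MAX_TEMP_C = 90.0
NORMAL_CLOCK_GHZ = 3.5
THROTTLED_CLOCK_GHZ = 1.2

def adjust_clock(core_temp_c: float) -> float:
    """Return the clock speed to use for the current core temperature."""
    if core_temp_c > MAX_TEMP_C:
        return THROTTLED_CLOCK_GHZ  # too hot: slow down
    return NORMAL_CLOCK_GHZ        # otherwise run at full speed

print(adjust_clock(95.0))  # 1.2
print(adjust_clock(60.0))  # 3.5
```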

    As James Cross mentions, emotions are pretty big and complex for us. They tie to vast libraries of associations in our mind. (And those associations have associations and so on.) And those emotions are tied to various physical responses: sweating, palpitations, rapid breathing, and many more.

    When we feel an emotion we literally feel that emotion physically.

    As JamesOfSeattle points out, if we ever did make a machine capable of such complex associations and physical responses, we’d have to think about whether it wasn’t a sovereign being with rights.

    Does the bridge repair robot: [A] recognize its goal is blocked and just pause until conditions change; [B] keep trying in an unemotional loop in case that works (because why not?); [C] get “upset” and “frustrated” — pointless energy-wasting behaviors designed to do what exactly?

    The idea that machines can be reflexive isn’t new and certainly seems important when it comes to complex machines. (The Saturn V had a lot of sensors and reflexive abilities! Just launching any rocket successfully requires balancing on top of the thrust.)

    The real question might be what happens with a sufficiently clever machine with goals.

    AI is already solving video games on its own. Given complex enough systems with goals to do useful things, those goals can lead to unexpected results. (They already have.)

    It might be best to hope true AGI just isn’t possible…
    https://abstrusegoose.com/594
    (See the cartoon immediately following, too.)

    1. “When we feel an emotion we literally feel that emotion physically.”

      We do. As you noted, the initial reaction leads to physiological changes, which we then feel interoceptively, which sets up a resonance loop, one we can enter merely by having the interoceptive portion. The upside is that physical pain relievers can help with emotional pain.

      On the bridge robot, frustration would be a good indicator. Beneath every feeling are one or more reflexes. Frustration is the system giving in to those reflexes, even though they can’t be productive. Another would be the system making use of an option to make the feeling go away (in a manner similar to self-administered analgesics). Value trade-off decisions would be another good indicator.
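
      To illustrate the kind of value trade-off I mean, here’s a deliberately crude sketch. Everything in it is hypothetical; it only shows the shape of the decision, not how a real system would weigh it.

```python
# Toy value trade-off for the bridge robot (all names and numbers hypothetical).
# The robot chooses between persisting at a blocked repair and suppressing the
# "repair needed" signal for a while, based on which option costs less.

def choose_action(repair_urgency: float, blockage_severity: float) -> str:
    """Return the lower-cost action given how urgent the repair is and how
    thoroughly the current repair attempt is blocked (both in [0, 1])."""
    cost_of_persisting = blockage_severity   # effort wasted while blocked
    cost_of_suppressing = repair_urgency     # risk of leaving damage unattended
    if cost_of_persisting > cost_of_suppressing:
        return "suppress signal and wait"
    return "keep trying to repair"

print(choose_action(repair_urgency=0.3, blockage_severity=0.9))  # suppress and wait
print(choose_action(repair_urgency=0.9, blockage_severity=0.3))  # keep trying
```

      The interesting cases are the ones where neither option makes the underlying signal go away. That’s where something like frustration would show up.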

      I don’t feel it’s productive to hope AGI is impossible. General intelligence happens in nature. Eventually there will be engineered versions, and we’ll have to deal with whatever philosophical and moral issues they bring.

      Cute cartoon. Nice of the AI to make the human comfortable.

      1. “I don’t feel it’s productive to hope AGI is impossible.”

        Well, I was being wry. 😉

        “Cute cartoon.”

        Abstruse Goose is my second favorite web comic. Some are special favorites. He was gone for many years and his site didn’t work quite right, but he’s apparently back in action. (Based purely on certain comics he did, I think he found love and got married.)

        1. Cool. Just subscribed to the RSS feed, although it doesn’t look like there have been any updates for a while.

          Yeah, if any real person actually went through the cumulative ordeals of a typical action TV show protagonist, they’d be a total PTSD disaster.

          1. No, it’s been a while. I emailed him once, but never got a response. I was just thrilled when, after a few years of nothing, there were new cartoons and the site worked properly.

  5. That’s a good point about the ethical implications of emotion-feeling AIs.

    But I do think emotions are an obvious way, maybe even the only way, to get past “frame problems”. Problems of the AI getting hung up on some trees and missing the forest. To see the forest, to see what’s important about the context, humans definitely rely on emotional significance. And not just any emotion will do. To see what a human would consider important, it seems necessary to have roughly human-like emotions.

    1. You might be right, but it seems like we then run the risk of creating artificial humans. We might want artificial humans for some purposes: caregiving, companionship, etc., but it also seems like it opens the door to Westworld-type scenarios.

    2. Actually, I’m all for giving AIs certain emotions, especially disgust. As I’ve said before, I think that’s how to implement Asimov’s laws of robotics. To an extent this would open a door to suffering, but I think that kind of suffering would only occur if intentionally inflicted, which gets weird.

      *

  6. Great post, and I read the original article. A few things come to mind.

    1. We don’t know how to program a robot to feel; we only know how to program a robot to behave as if it felt. Whether this happens to give them feelings is beyond me; we still don’t understand consciousness. For the record, I doubt it will give them feelings.

    2. Programming a robot as per (1) makes sense, because feelings are what give us the motivation to act, while intelligence gives us the plan. For instance, my intelligence allows me to play a game of chess, but my feelings are what make me play that game in the first place.

    3. This will backfire because most of the software that we write has bugs or unexpected interaction effects. The “feeling” software will be no exception.

    1. Thanks BIAR!

      On 1, we do have theories on how affects work, so I’m not sure that it’s as stark as you envision. This may be a chance to try those theories out. But we really need to start small and work our way up. The feelings of a mouse are going to be easier than the feelings of a human.

      Good point on 3. All the more reason to start small.

  7. Ok. This question cannot even be taken seriously (pardon me who may have engaged it as such), in that ‘feeling’ is not a program in artificial space, but a blanket of organic affinities and animosities irreducible to such non-sense as AI. There are exactly and only two forms of Intelligence: Artificial and Organic. They are both incommensurate in scope and incompatible of design. Mankind can well reconstruct its animosities. But it will never justify what Nature has performed and still performs so much better, more efficiently, in all respects of Cognition and Intelligence. AI is a pipe dream to include affection, feeling, and the instinct directing all organic processes. Now…shall we broach Instinct?

    A fine Topic, if too slightly treated. Thanks!

  8. “AI is a pipe dream to include affection, feeling, and the instinct directing all organic processes.”

    BQ, what brings you to this conclusion? What specifically about organic processes, feelings, and intelligence do you see putting them forever out of reach of technology?

    “if too slightly treated”

    If you follow the link to the paper from the Science News article (don’t try to follow the quoted version in my post), you should be able to get into it and all the details, which cover the topic in more depth.

  9. Kinda spooky at first glance. This will bear closer examination, but I’m tinged with skepticism of the theory on the one hand, as a Cart before the Horse scenario seems in the offing. It’s the outcomes that make the AI good or ill, methinks. Can anyone objectify important social and ethical outcomes on the strict basis of survival alone, for instance? (a simplification) Dog eat dog ensues. Rather, the cooperative Instinct rests qua homeostasis (a good drink of water when desperately thirsty, or a long awaited tryst among lovers), rejuvenation and such things our dreams are made of (unconscious Dream Engine?), stemming from and resolving themselves into this: Instinct. Compute that? Why not practice and perfect it first, ourselves (as we seem to be doing in the worst possible ways), or die trying?

  10. Aspasia (at the last) got it right, as to the ‘motive’ of Intelligence, and the attempt to apply it to an AI environment or ‘system’ of what can only allegorically be called Conscious. What makes Intelligence is a Motive. There’s no way to do anything but quantum compute chaos into order, if anything like a simulation of what WE do daily could be accomplished technologically. It’s just Far Fetched…and worse: Dangerous in the extreme.

    1. Thanks for elaborating!

      I’m more optimistic than you that this is eventually doable, but I agree that it ultimately comes back to motivations. As I often say, all cognition is an elaboration of reflexes. But I also agree adding feelings to AI is playing with fire.

  11. I totally agree! Intelligence doesn’t depend on homeostasis at all. And this was a very well written post! Also, quite enjoyable to read 😊 I was wondering if you could check out my new piece on ARTIFICIAL INTELLIGENCE & JOBS OF THE PAST! And I would really appreciate it if you could comment some feedback to improve the writing style. Looking forward to hearing from you. – Kiran

    https://kiranninprogress.wordpress.com/2019/12/04/artificial-intelligence-jobs-of-the-past/
