Add feelings to AI to achieve general intelligence?

Neuroscientists Kingson Man and Antonio Damasio have a paper out arguing that the way to get artificial intelligence (AI) to the next level is to add in feelings.

“Today’s robots lack feelings,” Man and Damasio write in a new paper (subscription required) in Nature Machine Intelligence. “They are not designed to represent the internal state of their operations in a way that would permit them to experience that state in a mental space.”

So Man and Damasio propose a strategy for imbuing machines (such as robots or humanlike androids) with the “artificial equivalent of feeling.” At its core, this proposal calls for machines designed to observe the biological principle of homeostasis. That’s the idea that life must regulate itself to remain within a narrow range of suitable conditions — like keeping temperature and chemical balances within the limits of viability. An intelligent machine’s awareness of analogous features of its internal state would amount to the robotic version of feelings.
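To make that idea a bit more concrete, here is a minimal sketch (in Python) of what such a homeostatic loop might look like: the machine monitors a few internal variables, measures how far each strays outside its viable range, and collapses that into a single valence signal that could steer behavior.  This is purely my own illustration; the variable names, ranges, and the simple summed-deviation “feeling” are assumptions, not anything taken from Man and Damasio’s paper.

```python
# Illustrative sketch only: a toy homeostatic loop in which a machine tracks
# internal variables, compares them to viable ranges, and derives a scalar
# "valence" signal from the total deviation. Names, ranges, and the valence
# formula are assumptions, not drawn from Man and Damasio's paper.

from dataclasses import dataclass

@dataclass
class InternalVariable:
    name: str
    value: float
    low: float    # lower bound of the viable range
    high: float   # upper bound of the viable range

    def deviation(self) -> float:
        """Distance outside the viable range (0.0 when within it)."""
        if self.value < self.low:
            return self.low - self.value
        if self.value > self.high:
            return self.value - self.high
        return 0.0

def valence(state: list[InternalVariable]) -> float:
    """Crude 'feeling' signal: 0.0 means fully within viable limits,
    more negative means further from homeostasis."""
    return -sum(v.deviation() for v in state)

# Example internal state: battery charge and motor temperature.
state = [
    InternalVariable("battery_pct", value=12.0, low=20.0, high=100.0),
    InternalVariable("motor_temp_c", value=85.0, low=0.0, high=70.0),
]

feeling = valence(state)
if feeling < 0:
    # A real controller would select behavior that restores homeostasis,
    # e.g. seeking a charger or pausing to cool down.
    print(f"valence={feeling:.1f}: prioritize self-maintenance")
else:
    print(f"valence={feeling:.1f}: free to pursue external tasks")
```

The point of the toy example is only to show the shape of the proposal: the “feeling” is not bolted on as a label but derived from the machine’s own internal state, and behavior is chosen to keep that state within viable limits.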

Such feelings would not only motivate self-preserving behavior, Man and Damasio believe, but also inspire artificial intelligence to more closely emulate the real thing.

One of the biggest challenges in AI is figuring out how to generalize the lessons learned in specialized neural networks for use in other tasks.  Humans and animals do it all the time.  In that sense, Man and Damasio’s proposition is interesting.  Maybe having the system start with its own homeostasis would provide a foundation for that generalization.

On the other hand, I’ve often said I don’t worry too much about the dangers of AI because such systems wouldn’t have their own survival instinct.  Giving them one seems like it would open the door to those dangers.  Man and Damasio have a response to that: give them empathy.

“Stories about robots often end poorly for their human creators,” Man and Damasio acknowledge. But would a supersmart robot (with feelings) really pose Terminator-type dangers? “We suggest not,” they say, “provided, for example, that in addition to having access to its own feelings, it would be able to know about the feelings of others — that is, if it would be endowed with empathy.”

And so Man and Damasio suggest their own rules for robots: 1. Feel good. 2. Feel empathy.

Well, maybe, but as the Science News author notes, that seems optimistic.  It also raises the danger that rather than building a set of tools motivated to do what we want them to do, we might be creating a race of slaves, survival machines forced to do our bidding.  Both that danger and the prospect of slavery make me uneasy.

I’m also not entirely sure I buy the logic that putting feelings in will necessarily lead to general intelligence.  It seems more likely that it will just lead these systems to behave like animals.  Untold numbers of animal species evolved on Earth before one capable of complex abstract thought came along, and the emergence of a species like us seems far from inevitable.

Still, exploring in this direction might provide insights into human and animal intelligence and consciousness.  But it also makes John Basl and Eric Schwitzgebel’s concern about AI welfare seem more relevant and prescient.