IBM’s Watson: Cognitive or Sentient?

I’ve heard of Watson of course, the supercomputer system that won at Jeopardy, but I think I still picked up some interesting bits from this video.

Iwata is clear that Watson isn’t sentient or conscious, but I’m sure many people listening to him will be creeped out by its learning abilities.

I don’t share Stephen Hawking’s worry about AIs

This essay by three physicists, Stephen Hawking, Max Tegmark, and Frank Wilczek, along with computer scientist Stuart Russell, seems to be getting a lot of attention.  It keeps popping up in my feeds and showing up in a variety of venues.

With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

via Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’ – Science – News – The Independent.

As I indicated in one of the comment threads a few weeks ago, when physicists start talking about consciousness or artificial intelligence, I often find it cringeworthy, a good example of the fact that when brilliant scientists speak about matters outside of their specialty, they often have little more insight than the rest of us.  I’d feel a little better about this essay if I got the impression that these guys had talked with a lot of working AI researchers and perhaps some neuroscientists.

I’ve already written my own essay about why I’m not particularly worried about an AI revolt.  We tend to equate intelligence with a self-valuing agenda, a survival instinct.  But that doesn’t come automatically.  It only happened for us through billions of years of evolutionary programming.  We’re as unlikely to accidentally develop malevolent AIs as your local garbage dump is to spontaneously evolve into Godzilla.

No, an AI’s instincts will have to be painstakingly programmed by us.  Any accidents will be more likely to make it nonfunctional than malevolent.  I do think there is a real danger of unintended consequences from machines ardently trying to follow their programming, but that danger exists already with today’s computers and machines, and we haven’t destroyed ourselves yet.  Actually, AI could arguably lessen that risk, since it would give machines better judgment in the implementation of their directives.

My concern about essays like this, aside from the anxiety they cause, is that they might lead politicians to decide they need to legislate AI research, to put limits and restrictions on it.  All that would do is cause the US and Europe to cede such research to countries without those restrictions.  Legislation might eventually be needed to protect artificial life forms, but we’re still a ways off from that right now.

Am I completely sanguine about the dangers of AI?  No, but I’m not completely sanguine about the dangers of any new technology.  I’m personally a lot more worried about what we’re doing to the environment and our runaway population growth than I am about AIs turning on us.

Artificial intelligence is what we can do that computers can’t…yet

I think I’ve mentioned before that I listen to a number of different podcasts.  One of them is Writing Excuses, a podcast about writing science fiction.  One of the recent episodes featured Nancy Fulda, discussing how to write about AI realistically.  In the discussion, she made an observation that I thought was insightful.  What we call “artificial intelligence” is basically what computers can’t do yet.  Once computers can do something, it ceases to be something we label artificial intelligence.

Consider that in 1930, a machine that could make decisions based on inputs would have been considered a thinking machine.  By the 1950s, when we had such machines, they were no longer considered thinking entities, but simply ones that followed detailed instructions.

In the 1960s, the idea of a computer that could beat an expert chess player would have been considered artificial intelligence.  Then in 1997, the computer Deep Blue beat Garry Kasparov, and the idea of a computer beating a human being at chess quickly got reclassified as just brute force processing.
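To give a rough sense of what “brute force processing” means here, the sketch below is a bare-bones minimax search over a game tree.  It’s a toy, not Deep Blue’s actual algorithm (which added specialized hardware, alpha-beta pruning, and a hand-tuned evaluation function), and the successors and evaluate hooks are hypothetical placeholders for whatever game is being searched.

    # A minimal minimax sketch: exhaustively search `depth` plies ahead and
    # return the score that best play leads to.  successors(state) and
    # evaluate(state) are hypothetical callbacks supplied by the caller.

    def minimax(state, depth, maximizing, successors, evaluate):
        children = successors(state)
        if depth == 0 or not children:
            return evaluate(state)        # leaf: fall back to a static score
        scores = (minimax(c, depth - 1, not maximizing, successors, evaluate)
                  for c in children)
        return max(scores) if maximizing else min(scores)

The point is that nothing in this loop looks like thinking; it’s just trying possibilities and scoring them, which is why the achievement got reclassified so quickly once it happened.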

Likewise, the idea of a computer winning at something like Jeopardy would have been considered AI a few years ago, but no more.  With each accomplishment, each development that allowed a computer to do something only we could do, we simply stopped thinking of that accomplishment as any kind of hallmark of true artificial intelligence.

So what are some of the things we currently consider to fall under artificial intelligence that might eventually make the transition?  Pattern recognition comes to mind, although computers are constantly improving at it; the increasing difficulty of Captcha tests is testament to that.

One of the things people often assert today is that computers can’t really understand anything, and until they do, there won’t be any true intelligence there.  But what do we mean when we say someone “understands” something?  The word “understand”, taken literally, means to stand under something.  As it’s customarily used, it means to have a thorough knowledge about something, perhaps to have knowledge of how it works in various contexts, or perhaps of its constituent parts.

In other words, to understand something is to have extensive knowledge, that is, extensive accurate data, about it.  It’s not clear to me why a sufficiently powerful computer can’t do this.  Indeed, I suspect you could already say that my laptop “understands” how to interact with the WordPress site.
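To make that notion concrete, here’s a deliberately toy sketch of “understanding as extensive, accurate data”: a program that “understands” a topic only in the sense that it holds structured facts about it and can apply them on request.  The topic, questions, and answers are made up for illustration; nothing here is meant to model real software.

    # A toy model of "understanding as accurate data": structured facts about
    # one topic, plus the ability to apply them when asked.

    class TopicKnowledge:
        def __init__(self, facts):
            self.facts = dict(facts)      # question -> known answer

        def understands(self, question):
            return question in self.facts

        def answer(self, question):
            return self.facts.get(question, "no relevant knowledge")

    # Hypothetical example: "understanding" how to interact with a blogging site.
    blog = TopicKnowledge({
        "how do I publish a post?": "send an authenticated request to the publishing endpoint",
        "what does a post contain?": "a title, a body, tags, and a publication status",
    })

    print(blog.understands("how do I publish a post?"))   # True
    print(blog.answer("what does a post contain?"))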

Another thing I often hear is that computers aren’t conscious, that they don’t have an inner experience.  I generally have to agree that currently they aren’t and they don’t.  But I also strongly suspect that this will eventually be a matter of programming.  There are a number of theories about consciousness, the strongest that I currently know of being Michael Graziano’s Attention Schema Theory.  If something like Graziano’s theory is correct, it will only be a matter of time before someone is able to program it into a computer.
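For what it’s worth, here is a very loose sketch of the core idea behind the Attention Schema Theory as I understand it: alongside actually attending to things, the system maintains a simplified internal model (a “schema”) of its own attention, and it describes itself using that schema rather than the raw process.  This is my own toy rendering of the idea, not Graziano’s model, and certainly nothing claiming to be conscious.

    # A toy agent that keeps a crude model (schema) of its own attention and
    # reports on itself using that schema rather than the underlying process.

    class Agent:
        def __init__(self):
            self.attention_target = None      # what is actually being processed
            self.attention_schema = {}        # the agent's simplified self-model

        def attend(self, stimulus, salience):
            self.attention_target = stimulus
            self.attention_schema = {         # a cartoon summary of the real process
                "object": stimulus,
                "intensity": "strong" if salience > 0.5 else "weak",
            }

        def report(self):
            s = self.attention_schema
            return f"my awareness of {s['object']} is {s['intensity']}"

    agent = Agent()
    agent.attend("the red apple", salience=0.9)
    print(agent.report())                     # my awareness of the red apple is strong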

In an attempt to find an objective line between mindless computing and intelligence, Alan Turing, decades ago, proposed what is now commonly called the Turing Test, in which a human judge tries to tell the difference between a human correspondent and a machine one.  When the judge can’t, according to Turing, the machine should be judged intelligent.  Many people have found the idea of this test unsatisfactory, and there have been many objections.  The strongest of these, I think, is that the test really measures human-like intelligence, rather than raw intelligence.
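As a protocol, the test is simple enough to sketch in a few lines.  In the sketch below, the ask, identify, human_reply, and machine_reply callables are all hypothetical placeholders supplied by whoever runs the session; the machine “passes” when the judge misidentifies it.

    import random

    # A bare-bones Turing Test session: the judge questions two hidden
    # correspondents, "A" and "B", then guesses which one is the machine.

    def turing_test(ask, identify, human_reply, machine_reply, rounds=5):
        correspondents = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:                     # hide which label is which
            correspondents = {"A": machine_reply, "B": human_reply}

        transcript = {"A": [], "B": []}
        for _ in range(rounds):
            question = ask()
            for label, reply in correspondents.items():
                transcript[label].append((question, reply(question)))

        guess = identify(transcript)                  # judge returns "A" or "B"
        machine_label = "A" if correspondents["A"] is machine_reply else "B"
        return guess != machine_label                 # True if the machine passes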

But I think this gets at the fact that our evaluation of intelligence is intimately tangled up with how human, how much like us, we perceive an entity to be.  It may well be that we won’t consider a computer intelligent until we can sense in it a common experience, a common set of motivations, desires, and impulses.  Until it has programming similar to ours.

Getting back to Nancy’s observation, I think she’s right.  With each new development, we will recalibrate our conception of artificial intelligence, until we run out of room to separate them and us.  Actually, in some respects, I suspect we won’t let it get to that point.  Already programmers, when designing user interfaces, are urged not to make their applications too independent in their actions, as it tends to make users anxious.

Aside from some research projects, I think that same principle, along with perhaps some aspects of the uncanny valley effect, will work to keep artificial minds from ever being too much like us.  There’s unlikely to be much of a market for a navigation system that worries about whether it will be replaced by a newer model, or for self-driving cars that find it fun to drag race.

IBM to make Jeopardy-winning Watson available for commercial use

Remember IBM’s Watson, the computer system that won at Jeopardy?  Well, IBM is getting ready to make the technology available on cloud platforms.  I’d imagine we can expect web applications to suddenly get much more intelligent.

Artificial Intelligence – What You Really Need to Know – Forbes

For those who started their careers in AI and left in disillusionment (Andrew Ng confessed to this, yet jumped back in) or data scientists today, the consensus is often that artificial intelligence is just a new fancy marketing term for good old predictive analytics.  They point to the reality of Apple’s Siri to listen and respond to requests as adequate but more often frustrating.  Or, IBM Watson’s win on Jeopardy as data loading and brute force programming.  Their perspective, real value is the pragmatic logic of the predictive analytics we have.

But, is this fair?  No.

via Artificial Intelligence – What You Really Need to Know – Forbes.

I think this article makes an important point.  A lot of old-school AI people are unhappy with the direction commercial AI has taken.  Commercial AI has largely abandoned trying to create an artificial mind, and is instead focusing on where it can make computing systems more intelligent.

They are, of course, driven by the profit motive.  They’ll take what results they can use now, and worry less about pie-in-the-sky aspirations.  I think that’s totally reasonable for them.  Getting upset that they aren’t working toward building a human-like mind is just misguided.

I also think it’s a strategy that can eventually lead to breakthroughs.  Often, when you can’t figure out how to solve the big problem, focusing on smaller problems will shake things up enough that a solution to the big problem becomes apparent.