IBM’s Watson: Cognitive or Sentient?

I’ve heard of Watson, of course, the supercomputer system that won at Jeopardy, but I think I still picked up some interesting bits from this video.

Iwata is clear that Watson isn’t sentient or conscious, but listening to him, I’m sure many people will be creeped out by its learning abilities.

6 thoughts on “IBM’s Watson: Cognitive or Sentient?”

  1. Michael
I was already afraid that you didn’t like this text (eth). Of course, it isn’t possible to write something very revealing on this topic… but for me the most important thing is that you put in a superhuman intellectual effort. You can always write something interesting, which usually makes a lot of sense. But please do not rush! Most important… fully grasp the meaning of this text!


  2. Mr. Iwata sounds like a Senior Vice President in charge of marketing and communications (i.e. smooth and polished). I thought a couple of things he said were worth commenting on:

    He suggests that Watson was the first computer to go beyond its programming. It was merely “trained” by humans. But another way to look at Watson is that, although creating it was an amazing accomplishment, it was in fact programmed to acquire and recover information that would be relevant to winning Jeopardy (which covers a lot of random information). It didn’t program itself; it’s doing what it was programmed to do. At least, Mr. Iwata doesn’t offer any evidence that Watson has begun to branch out. Watson probably didn’t decide to study medicine, for example, and isn’t secretly designing Terminators. (I’m offering this observation as a biased ex-programmer.)

    The other thing that struck me is that the only language most of us have to describe what Watson does is that of “folk psychology”. Watson “thinks”, “learns” and “makes sense” of the data. Some philosophers have argued that these are misleading and inaccurate terms that may eventually fade away (just like we don’t talk about “demons” anymore, except metaphorically). From the Stanford Encyclopedia article on “Eliminative Materialism”: “Eliminative materialists argue that the central tenets of folk psychology radically misdescribe cognitive processes; consequently, the posits of folk psychology pick out nothing that is real.” I wonder if we will one day have a new vocabulary to describe what an entity like Watson does.


    1. You could be right. But I’m always curious about what people see as the distinction between “thinking” and “information processing”. Other than one going on inside the head of an organic being and the other going on inside a computer processor, what’s the difference? Certainly at this stage, the organic being has a greater store of information to integrate that information processing into, but I haven’t seen anything to convince me that’s more than a matter of degree.

      This is separate from whether or not a machine is conscious. I now think consciousness requires a certain architecture that we’re only just beginning to maybe get an outline of. Of course, if your definition of “thinking” is what conscious beings do, then you’re on pretty solid ground to say that Watson, or any machine, isn’t thinking yet.


      1. I’m not sure exactly what I had in mind. But it struck me that Iwata wasn’t really describing what Watson does. He was merely using our standard language for mental activity, a language that some philosophers, in particular Paul and Patricia Churchland, seem to think will one day wither away and be replaced by something more accurate and more scientific. That sounds highly unlikely, but it does raise a question regarding the adequacy of our standard language. A robin can “fly” and so can a 747. A horse “runs” and so does a train. I guess I’m wondering whether the underlying processing that goes on in our brains and in Watson is so different that it’s a little misleading to use the same terminology. That’s not to say that we really “think” and Watson doesn’t, or that we are “smarter” than Watson. Just that we’re very different entities that can sometimes perform the same ultimate tasks. (Or perhaps Watson’s internal processing is, or Watson 99’s will be, so similar to ours that the same terminology will be completely satisfactory.)


        1. Fascinating reasoning. It seems like a lot depends on how liberal we’re willing to be with the definition of what a mind is. How close to an organic brain does an engineered information processor have to be before we regard it as a fellow mind?


Your thoughts?
