SMBC: Chinese room

I love this SMBC on the Chinese room thought experiment.

Click through for the full-sized version and the red caption button.

Source: Saturday Morning Breakfast Cereal

My regular readers know I’m not a big fan of the Chinese room thought experiment.  I think it only confirms whatever intuitions you already have.  If you think intelligence can’t come from the processing of symbolic information, then it seems to self-evidently confirm that intuition.  If you think intelligence can come from that, then you intuitively conclude that the entire Chinese room is intelligent.

But my main beef with this thought experiment is that it’s ridiculous, and Weiner does a good job pointing that out.  In the real world, a person in a Chinese room, as described, would need to be in a room the size of a warehouse and, depending on the question, might take days, months, or years to provide a response.  It becomes more plausible if you actually put the person in there with a computer, but then the intuitive aspects start to disappear.

Consciousness is in the eye of the beholder

Alan Turing was a pioneer in the field of computer science.  One of the things he is famous for is the Turing test.  At its core, this is a test of whether a machine, a computer, can convince a human that the machine is another human.  The details of the specific test that Turing himself described aren’t that important.  What is important is the idea.  At the time, there were a lot of articles being published debating whether or not computers would ever be able to think.  While not denying the mystery of consciousness, Turing’s proposal was that, rather than having endless philosophical arguments about when machines would be conscious, we should have a test that could be empirically measured.  You’ve almost certainly taken the Turing test yourself, in reverse, when you filled out CAPTCHA fields on web forms to prove you were human.

While it has many practical uses, as a bid to end the philosophical bickering about machine thinking and consciousness overall, the Turing test was a failure.  Too many people simply couldn’t accept it as a suitable test to make that determination.  A number of objections have surfaced over the years, the most notable being John Searle’s Chinese room thought experiment.  In it, an English-speaking man sits in a room with a Chinese to English dictionary.  Someone slips in questions in Chinese on a piece of paper, the man uses the dictionary to write answers in Chinese, and he slips them back out.  The takeaway from this thought experiment is supposed to be that, while the man in the Chinese room can mimic an understanding of Chinese, he doesn’t really have that understanding.  In other words, mimicking human intelligence isn’t the same as having it, and the Turing test tells us nothing.

The most common counter to the Chinese room is to point out that the man-room system itself knows Chinese.  Searle’s reply was to ask what happens if the man memorizes the Chinese dictionary, without actually understanding Chinese.  Though it would appear the man knew Chinese, he would only know the dictionary.  This raises the question of what the difference is between ‘memorization’ and ‘understanding’.  If you’ve memorized something thoroughly enough to sound like you understand it, don’t you understand it?  If not, what exactly is missing?

Reactions to the validity of the Turing test tend to be an indicator of people’s attitudes toward consciousness.  What counts as being conscious?  We know that we ourselves are conscious, and we generally accept without question that other humans are.  There used to be some debate about it, but most people now accept that animals are also conscious, although the degree of their consciousness probably depends on their intelligence.  Chimpanzees, elephants, dolphins, and whales are probably more conscious than dogs and cats, which are more conscious than mice and bees.  To what degree are ants and worms conscious?  Most people would say they have at least some glimmers of it.

But if ants and worms are conscious, then is the laptop that I’m typing this on also conscious?  It has more memory and processing power than the brain of a worm or an ant, and maybe even a bee.  If my laptop isn’t conscious, then why not?  What separates a conscious entity from a merely robotic one?

I think the difference is a recognition of shared experiences, shared instincts.  We see a bee trying to find nectar, or an ant scouting for its colony, even a worm trying to make its way in the world, and we recognize something of a shared experience with them.  They try to stay alive.  They procreate.  They have many of the same drives that we do.  So, they seem more conscious than my laptop.

This isn’t set in stone.  It changes over time.  I remember when I first played on a computer back in the late 70s, and felt just a bit creeped out that maybe there was a mind or entity of some type in there, processing my requests.  As I learned to program and became aware of just how dependent computers are on being given instructions, that feeling faded.  But it’s worth remembering that we once thought the sun, the moon, the earth, rivers, and many other natural phenomena had minds.  Spirits and gods that had to be propitiated.  Today, we don’t regard them as conscious primarily because we understand the rules that govern what they do, which raises interesting questions about how we’ll react as neuroscience progresses.

Ultimately, I think Turing was right.  What is conscious is that which can convince us of its consciousness.  The usual philosophical responses to that assertion are things like philosophical zombies, beings that seem conscious but aren’t.  It’s difficult to posit a difference between a conscious entity and a philosophical zombie without getting into arguments about dualism.  But I think I’ll save that for another post.