SMBC: Chinese room

I love this SMBC on the Chinese room thought experiment.

Click through for the full-sized version and the red caption button.

Source: Saturday Morning Breakfast Cereal

My regular readers know I’m not a big fan of the Chinese room thought experiment. I think it only confirms whatever intuitions you already have. If you think intelligence can’t come from the processing of symbolic information, then it seems to self-evidently confirm that intuition. If you think intelligence can come from that, then you intuitively conclude that the entire Chinese room is intelligent.

But my main beef with this thought experiment is that it’s ridiculous, and Weiner does a good job pointing that out.  In the real world, a person in a Chinese room, as described, would need to be in a room the size of a warehouse and, depending on the question, might take days, months, or years to provide a response.  It becomes more plausible if you actually put the person in there with a computer, but then the intuitive aspects start to disappear.
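
To make the rule-following concrete, here’s a minimal sketch of the kind of blind lookup the man performs. It’s purely illustrative; the tiny rule table and the names in it are my own invention, not anything from Searle’s paper:

```python
# A toy "Chinese room": the operator mechanically matches incoming
# symbols against a rule book and copies out the listed response.
# No step requires him to know what any symbol means.

RULE_BOOK = {
    "你好": "你好！",                 # "Hello" -> "Hello!"
    "你会说中文吗？": "会，说得很好。",  # "Do you speak Chinese?" -> "Yes, quite well."
}

DEFAULT = "请再说一遍。"  # "Please say that again."

def operator(symbols: str) -> str:
    """Blindly look up the input; no interpretation happens here."""
    return RULE_BOOK.get(symbols, DEFAULT)

print(operator("你好"))  # -> 你好！
```

The catch, of course, is coverage: to hold up its end of an open-ended conversation, the rule book can’t stay this small, which is why the literal version needs the warehouse and the years, and the plausible version needs the computer.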

15 thoughts on “SMBC: Chinese room”

  1. “It becomes more plausible if you actually put the person in there with a computer …”

    … and every time the man enters a rude question, the computer looks up the rude answer and gives a knowing little chuckle to itself.

  2. Huh. I actually always liked the Chinese room experiment, but with those two paragraphs of criticism I’m completely reconsidering everything I once thought.
    I mean, I just read all those symbols you just put there, and then I cross referenced them with the big book in my head (known as my memories of every idea I’ve ever encountered), and came up with “Huh. I actually always liked the Chinese room experiment, but with those two paragraphs of criticism I’m completely reconsidering everything I once thought.” Do I really understand everything about computer science and its intersections with theory of mind? No, just the parts that line up with all these symbols I’ve encountered and remembered. So maybe my head is just a big Chinese room, and I’m not really conscious, by that argument.

  3. Well, that’s exactly the point, isn’t it? Meaning can’t be de-contextualized. Representations don’t stand by themselves and make meaning. They require interpretation.

      1. It isn’t simply an algorithm, is all. Global context? It isn’t entirely clear, especially since we represent things in speech and adjust their meaning to fit (think ‘love’, ‘querer’, and ‘amar’ in English and Spanish).

        1. What would you say is there beyond the algorithm?

          Not sure if I’m following the rest. I do think we map language phrases to either sensory impressions or symbolic concepts. (Try thinking of the terms you mention without thinking of some sensory experience.) The symbolic concepts map either to other symbolic concepts or sensory impressions. Eventually, it’s all sensory impressions. Even memories are recorded in terms of sensory impressions. When we say a language element “means” something, I think we are linking it to a symbolic concept or sensory impression.
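
          A rough way to picture that (a toy model of my own, nothing formal): concepts form a graph whose edges point either to other concepts or to sensory primitives, and a term’s “meaning” is whatever sensory impressions the chain eventually bottoms out in.

          ```python
          # Toy symbol-grounding sketch: each concept maps to other concepts
          # or to sensory primitives; resolving a term follows the chain
          # until only sensory impressions remain.
          CONCEPTS = {
              "love": ["warmth", "attachment"],
              "warmth": ["sense:touch-heat"],
              "attachment": ["sense:comfort", "memory"],
              "memory": ["sense:imagery"],
          }

          def ground(term, seen=None):
              """Return the sensory impressions a term ultimately maps to."""
              seen = set() if seen is None else seen
              if term in seen:               # guard against circular definitions
                  return set()
              seen.add(term)
              if term.startswith("sense:"):  # sensory primitives end the chain
                  return {term}
              result = set()
              for ref in CONCEPTS.get(term, []):
                  result |= ground(ref, seen)
              return result

          print(ground("love"))
          # e.g. {'sense:touch-heat', 'sense:comfort', 'sense:imagery'}
          ```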

          1. I don’t disagree with you, if you mean some phenomenon when you say ‘sensory experience’. I mean, I experience self-awareness as a sensation just like I experience the sound of rain on the window-pane as a sensation.
            But even the sound of rain on the window-pane isn’t composed of just the associated set of frequencies transmitted by the hair cells in my cochlea. It’s the sum of my previous auditory experiences, associated sights, associated touch sensations, awareness of who, where and when I am – off down the rabbit hole that the cartoon lecturer follows.
            Amar and querer both translate to ‘to love’ in English, but mean somewhat different things to the native Spanish speaker. Yet the non-native speaker can still get it eventually, both through exposure to usage and because he has the gestalt that language slightly misrepresents any individual’s experience. He isn’t stuck, therefore, with the rule: to love = querer = amar.

  4. I agree the CR isn’t much more than a way to frame The Question. It doesn’t prove anything. Really, it’s just looking at the Turing Test from another angle — it’s about a man-powered machine that passes the Turing Test… in Chinese. The guy inside who doesn’t speak it just replaces a mechanical or electronic processor doing the lookup (in that none of them speak Chinese).

    But SMBC is always good for a chuckle! 😀

    (Weiner exaggerates for the sake of the humor, but the CR wouldn’t need to answer questions like “P=NP?” other than to say what most Chinese speakers would say: “Huh?” The CR isn’t meant to be a repository of perfect knowledge.)

    1. Good point. As I recall, the Chinese room is actually designed to counter the Turing test, among other things. I have to admit I’m not as smitten with the Turing test as I once was. I think it’s still indicative of intelligence, but not necessarily of consciousness, at least not human-like consciousness. Although, until we understand consciousness, we can never dismiss the possibility that a machine that gives every indication of consciousness does have the architecture of consciousness (whatever it is) embedded in it. Of course, without that understanding, we also can’t dismiss the possibility that we’re dealing with a behavioral zombie.

      Totally agreed that Weiner exaggerated for comic effect. Which is a little unfortunate, since I think it weakens the satire a bit. Maybe it’s just his way of avoiding looking too preachy.

      1. Yeah, Searle is not a fan of the possibility of algorithmic consciousness. (In some of his talks he’s been quite dismissive of the idea.)

        The Chinese people using the CR, and especially anyone applying a Turing Test, are viewing consciousness from the outside (as we do when interacting with others), and you’re right: all we can do is decide whether something (or someone) acts like it’s conscious.

        And then you really have to ask what’s the difference between being conscious and functionally acting that way — the zombie thing you mentioned. (From a research point of view, the distinction does matter, of course.)

        I suppose that’s the thing I do like about the CR thought experiment: It puts more focus on what’s happening inside the machine — the TT is entirely external.

  5. Can’t say I get the cartoon at all. “Into the room are handed Chinese symbols?” Um. Why not just say the man in the room is given Chinese symbols? I’m missing something here for sure.

    Then he gives back “appropriate symbols,” but how does he choose what’s appropriate? If there are an infinite number of possible exchanges, then he has to choose which appropriate symbol to return, which would require interpretation. Then his responses to those symbols would have to be logically coherent and consistent throughout the conversation. If he responds to one question, “I hate anything with chocolate,” and then responds to another question, “I love chocolate ice cream,” he might have to explain himself.

    One way out is to have the person handing him the Chinese symbols point out his inconsistency, and he returns a symbol saying, “Whoops, I guess I changed my mind!” 🙂

    Or perhaps there’s only one response per symbol and these are pre-determined to be consistent with all the others? But that wouldn’t be infinite….
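
    To make the worry concrete (a hypothetical sketch, reusing my chocolate example): a stateless one-response-per-symbol table can contradict itself, so the rules would have to key on the whole conversation so far, and the table explodes with the length of the exchange.

    ```python
    # Stateless lookup: one fixed response per input, so the room
    # can contradict itself across a conversation.
    STATELESS = {
        "Do you like chocolate?": "I hate anything with chocolate.",
        "What's your favorite ice cream?": "Chocolate ice cream.",  # whoops
    }

    # To stay coherent, each rule must depend on everything said so far:
    # the lookup key becomes the whole history, and the number of
    # entries grows explosively with the conversation's length.
    STATEFUL = {
        ("Do you like chocolate?",):
            "I hate anything with chocolate.",
        ("Do you like chocolate?", "What's your favorite ice cream?"):
            "Vanilla, since I hate chocolate.",
    }

    history = ("Do you like chocolate?", "What's your favorite ice cream?")
    print(STATEFUL[history])  # -> Vanilla, since I hate chocolate.
    ```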

    1. I think the idea of the Chinese room is that the man doesn’t understand what he’s doing, he’s just blindly following procedures to converse in Chinese. His own judgment is supposed to never come into it. (I have to admit that I missed this point myself the first time I read about it.) The idea is that he’s just like the computer, following exact instructions, giving “the illusion” that there’s something in the room that understands and can converse in Chinese.

      The problem with this thought experiment is that it hinges on the assumption that something just following processes is different from what’s happening in our brains. What it purports to show depends on the intuition that our understanding can’t just be those processes in motion. Maybe Searle defended that intuition in his original treatment, but the write-ups I’ve seen don’t mention it, except when discussing criticisms.

      The comic exaggerates how difficult it would be for such a room to function in practice, but the point stands: the procedures would take up an enormous amount of space and require an enormous amount of time for the man to work through. To respond in any reasonable time, to preserve the illusion the idea posits, he’d need a computer, but then we’re right back to the original question of whether a computer can actually understand something.

      1. Ah, okay. I missed the point about the man in the room not understanding what he’s doing. (Although I find it implausible that he’d be able to follow a procedure like this without being aware of his choices.)

        I think your point about the experiment confirming what you already believe is right, at least as far as I can tell.
