In 1950, Alan Turing published a seminal paper on machine intelligence (which is available online). Turing ponders whether machines can think. However, he pretty much immediately abandons this initial question as hopelessly metaphysical and replaces it with another question that can be approached scientifically: can a machine ever convince us that it’s thinking?
Turing posits a test, a variation of something called the Imitation Game. The idea is that people interact with a system through a chat interface. (Teletypes in Turing’s day; chat windows in modern systems.) If people can’t tell whether they are talking with a machine or another person, then that machine passes the test.
Turing doesn’t stipulate a time limit for the test or any qualifications for the people participating in the conversation, although in a throwaway remark he predicts that by the year 2000 there will be systems able to fool 30% of participants after five minutes of conversation, a standard many have fixated on. This is a pretty weak version of the test, yet no system has managed to pass it.
(There was a claim a few years ago that a chatbot had passed, but it turned out to depend on a clever framing of the persona supposedly on the other end, a foreign teenager with a shaky grasp of English, which most people think invalidated the claim.)
We’re nowhere near being able to build a system that can pass a robust version of the test: at least an hour of conversation, fooling at least 50% of a large sample of human participants.
I think Turing’s overall point is the philosophical problem of other minds. We only ever have access to our own consciousness. Although physical or systematic similarities may give us clues, we can ultimately only infer the existence of other minds by the behavior of the systems in question. The Turing test is essentially a recognition of this fact.
The argument most commonly cited in opposition to the idea of the Turing test is a philosophical thought experiment put forth by John Searle in 1980: the Chinese room argument. (The original paper is also available online).
Searle imagines himself sealed in a room with a slit for questions in Chinese to be submitted on paper. Searle, who does not know Chinese, has a set of instructions for taking the symbols he receives and, using pencil and paper, producing answers in Chinese. He can follow the instructions to the letter and produce the answers, which he slides back out the slit.
Searle’s point is that the Chinese room may pass the Turing test. It appears to understand and can respond to Chinese questions. But Searle himself doesn’t understand a word of Chinese, and, he argues, neither does anything else in the room. It appears to be a case where we have an entity that can pass the Turing test, but which doesn’t have any real understanding.
The takeaway from this argument is supposed to be that Searle is doing the same thing a computer processor does, receiving and manipulating symbols, but with no understanding of what is happening. Therefore the Turing test is not valid, and computationalism overall is wrong.
There are a number of common criticisms of this argument, which Searle responds to in the paper, one of which I’ll get to in a bit. But what I consider the most damaging criticism is rarely discussed: that the scenario described is, if not impossible in principle, utterly infeasible in practice.
We’re asked to suppose that Searle will do everything a computational system that can pass the Turing test will do. But no one really imagines him doing that. Generally we end up imagining some procedure with maybe a few dozen, or perhaps even a few hundred steps. It might take Searle a while to respond, but there’s nothing too out of bounds about it.
Except that we need to consider what a system that can pass a robust Turing test needs to be able to do. A brain has tens of billions of neurons that can spike dozens to hundreds of times per second, and communication throughout the brain tends to be recurrent and ongoing. Which is to say that a brain receiving a question, parsing it, considering it, and responding to it will engage in at least hundreds of billions of events, that is, hundreds of billions of instructions. A machine passing the Turing test may not do it exactly this way, but we should expect similar sophistication.
And Searle is going to do this by hand? Let’s suppose that he’s particularly productive and can manually perform one instruction per second. If he takes no bathroom, meal, or sleep breaks, he would complete his first billion instructions in around 30 years. Responding to any kind of reasonably complex question would take centuries, if not millennia.
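To make the scale concrete, here’s a minimal back-of-the-envelope sketch of that arithmetic. The specific figures (how many neurons are engaged, the firing rate, the thinking time) are illustrative assumptions chosen on the conservative side, not measurements:

```python
# Rough arithmetic for the hand-simulation scenario.
# All figures are conservative, illustrative assumptions.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # ~3.15e7

neurons_engaged = 20e9      # a fraction of the brain's ~86 billion neurons
firing_rate_hz = 5          # a modest average spike rate
seconds_of_thought = 2      # parsing and answering one short question

events = neurons_engaged * firing_rate_hz * seconds_of_thought
print(f"events per answer: {events:.1e}")        # 2.0e+11, hundreds of billions

# Searle by hand: one instruction per second, no breaks.
years_by_hand = events / SECONDS_PER_YEAR
print(f"years by hand: {years_by_hand:,.0f}")    # ~6,342 years
```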
Maybe, since Searle is a fairly complex system in his own right, we can provide higher-level instructions? Doing so, we might be able to reduce the number of steps by a factor of 10 or maybe even 100. But even with that move, the response would be decades, if not centuries, in coming. And making this move increases the amount of human cognition involved, which I think compromises the intuition of the thought experiment.
We can make the thought experiment more practical by the expedient of giving Searle…a computer. Even a mobile phone today operates at tens of thousands of MIPS, that is, tens of billions of instructions per second. But of course, then we’re right back to where we started, and the intuitive appeal of the thought experiment is gone.
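Continuing the same illustrative arithmetic, here’s how the higher-level-instruction move and the phone compare against hand execution; the event count carries over from the sketch above and the phone speed is the tens-of-thousands-of-MIPS figure, both assumptions for illustration:

```python
# Continuing the illustrative arithmetic from the previous sketch.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
events = 2e11                                  # hundreds of billions, as before

for speedup in (10, 100):                      # higher-level instructions
    years = events / speedup / SECONDS_PER_YEAR
    print(f"{speedup}x fewer steps: ~{years:,.0f} years")
# 10x  -> ~634 years
# 100x -> ~63 years

phone_ips = 20_000 * 1e6                       # tens of thousands of MIPS
print(f"phone: ~{events / phone_ips:.0f} seconds")   # ~10 seconds
```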
Okay, you might be thinking, but by introducing all this practicality, am I not failing to take this philosophical thought experiment seriously? I’d argue that I am taking it seriously, more seriously in fact than its proponents. But, in the spirit of philosophical argument, I’ll bracket those practicalities for a moment.
The other response to the argument I think remains strong is the first one Searle addresses in the paper, the systems reply. The idea is that while Searle may not understand Chinese, the overall system of the room, including him and the instructions, does. If the room can respond intelligently in Chinese, including to unplanned questions about the house and village where it grew up in China, which sports teams it was a fan of, which schools it went to, which restaurants it ate at, and so on, then at some point we should consider that buried in that system is an entity that actually thinks it did grow up in China, or at least one that can conceptualize itself doing so.
Searle’s response (done with disdain, but then the whole paper has a polemical feel to it) is to simply posit that he memorizes all the instructions and performs them mentally. With this modification of the scenario, Searle still doesn’t understand Chinese and, he argues, the systems reply is invalidated.
Okay, I know I said I would bracket the practicalities, but if the initial scenario was infeasible, this one is simply ridiculous enough that it should be self-refuting. Searle’s going to memorize the hundreds of billions of instructions necessary to provide convincing answers?
But, bracketing that issue again, nothing has changed. The system still understands Chinese even if the part of Searle following the instructions doesn’t. If Searle is somehow superhuman enough to memorize and mentally follow the code of the Chinese system, then he’s arguably superhuman enough to hold another thinking entity in his head.
And a counterargument here is to consider how I, as a native English speaker, understand English. If someone were to sufficiently damage Wernicke’s area in my brain, it would destroy my ability to comprehend English (or any other language). In other words, the rest of my brain doesn’t understand English any more than the non-instruction part of Searle understands Chinese. It’s only with the whole system, with all the necessary functional components, that I can understand English. What’s true for me is also true for the room, and for Searle’s memorized version of it.
Searle addresses a number of other responses, which I’m not going to get into, because this post is already too long, and I think the points above are sufficient to dismiss the argument.
If Searle had restricted himself to addressing the possibility of a simple system passing the weak version of the Turing test, pointing out that such a system would be missing the mental representations necessary for true understanding, and the utter inability of computer technology c. 1980 to hold those representations, he might have been on somewhat firmer ground. But he pushes a much stronger thesis, that mental content in computation is impossible, even in principle. How does he know this? He feels it’s the obvious takeaway from the thought experiment.
Any meaning in the computer system, Searle argues, comes from human users and designers. What he either doesn’t understand or can’t accept is that the same thing is true for a brain. There’s nothing meaningful, in and of itself, in the firing of individual neurons, or even in subsystems like the amygdala or visual cortex. The signals in these systems only get their meaning from evolution and from the relation of that content to the environment, which for a brain includes its body.
A computer system gets its meaning from its designers and environment, including its human users, but the principle is the same, particularly if we set that computer up as the control system in a robotic body. Yes, human brains include representations about themselves, but so do most computer systems. All you have to do is pull up Task Manager on a Windows system to see representations in the system about itself.
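To illustrate how ordinary this is, here’s a minimal Python sketch (standard library only) of a process maintaining a crude model of its own state, a very thin analogue of what Task Manager displays:

```python
# A trivial example of a program holding representations about itself.
# Nothing here is "meaningful" to the interpreter; the meaning comes from
# the designers and users who read the output.
import os
import sys
import threading
import time

self_model = {
    "pid": os.getpid(),                      # which process I am
    "interpreter": sys.version.split()[0],   # what is running me
    "threads": threading.active_count(),     # how many threads I'm using
    "cpu_seconds": time.process_time(),      # how much work I've done so far
}

for key, value in self_model.items():
    print(f"{key}: {value}")
```

None of this implies understanding, of course; the point is only that self-referential representations are routine in software.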
So I don’t think the Chinese room makes its case. It attempts to demonstrate the impossibility of computationalism with a contrived example that is itself far more obviously infeasible, and responds to one of the strongest criticisms against it by ramping that infeasibility up to absurd levels. The best that might be said for it is that it clarifies the intuitions of some anti-computationalists. The worst is that by demonstrating the need to resort to such absurd counter-examples, it arguably strengthens what it attacks.
Unless of course I’m missing something? Are there weaknesses in Turing’s argument or strengths in Searle’s that I’m missing?