Consciousness is in the eye of the beholder

[Image: Kismet robot at the MIT Museum]

Alan Turing was a pioneer in the field of computer science.  One of the things he is famous for is the Turing test.  At its core, this is a test of whether a machine, a computer, can convince a human that the machine is another human.  The details of the specific test that Turing himself described aren’t that important.  What is important is the idea.  At the time, a lot of articles were being published debating whether or not computers would ever be able to think.  While not denying the mystery of consciousness, Turing’s proposal was that, rather than having endless philosophical arguments about when machines would be conscious, we should have a test that could be empirically measured.  You’ve almost certainly taken the Turing test yourself, in reverse, when you filled out CAPTCHA fields on web forms to prove you were human.

While it has many practical uses, as a bid to end the philosophical bickering about machine thinking and consciousness, the Turing test was a failure.  Too many people simply couldn’t accept it as a suitable test to make that determination.  A number of objections have surfaced over the years, the most notable being John Searle’s Chinese room thought experiment.  In it, an English-speaking man sits in a room with a book of rules for manipulating Chinese symbols.  Someone slips in questions written in Chinese on a piece of paper, and the man, by following the rules, writes out answers in Chinese, which he slips back out.  The takeaway from this thought experiment is supposed to be that, while the man in the Chinese room can mimic an understanding of Chinese, he doesn’t really have that understanding.  In other words, mimicking human intelligence isn’t the same as having it, and the Turing test tells us nothing.
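
To make the mechanics concrete, here’s a toy sketch in Python.  It’s a minimal illustration, not Searle’s actual setup: the rule book entries are invented, and a real rule book would have to be astronomically larger.  The point is that the program produces fluent-looking answers by pure lookup, with nothing anywhere in it that understands Chinese.

```python
# A toy sketch of the Chinese room: replies are produced by pure symbol
# lookup, with nothing anywhere in the program that understands Chinese.
# The rule book entries are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你住在哪里？": "我住在一个房间里。",  # "Where do you live?" -> "I live in a room."
}

def chinese_room(question: str) -> str:
    """Return a canned reply by lookup alone, mimicking fluency."""
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output from mindless lookup
```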

The most common counter to the Chinese room is to point out that the man-room system itself knows Chinese.  Searle’s reply was to ask what would happen if the man memorized the rule book, without ever actually understanding Chinese.  Though it would appear the man knew Chinese, he would only know the rules.  This gets into the question of what the difference is between ‘memorization’ and ‘understanding’.  If you’ve memorized something thoroughly enough to sound like you understand it, don’t you understand it?  If not, what exactly is missing?

Reactions to the validity of the Turing test tend to be an indicator of people’s attitudes toward consciousness.  What counts as being conscious?  We know that we ourselves are conscious, and we generally accept without question that other humans are.  There used to be some debate about it, but most people now accept that animals are also conscious, although the degree of their consciousness probably depends on their intelligence.  Chimpanzees, elephants, dolphins, and whales are probably more conscious than dogs and cats, which are more conscious than mice and bees.  To what degree are ants and worms conscious?  Most people would say they have at least some glimmers of it.

But if ants and worms are conscious, then is the laptop that I’m typing this on also conscious?  It has more memory and processing power than the brain of a worm or an ant, and maybe even more than that of a bee.  If my laptop isn’t conscious, then why not?  What separates a conscious entity from a merely robotic one?

I think the difference is a recognition of shared experiences, shared instincts.  We see a bee trying to find nectar, or an ant scouting for its colony, even a worm trying to make its way in the world, and we recognize something of a shared experience with them.  They try to stay alive.  They procreate.  They have many of the same drives that we do.  So, they seem more conscious than my laptop.

This isn’t set in stone.  It changes over time.  I remember when I first played on a computer back in the late 70s, and felt just a bit creeped out that maybe there was a mind or entity of some type in there, processing my requests.  As I learned to program and became aware of just how dependent computers are on being given instructions, that feeling faded.  But it’s worth remembering that we once thought the sun, the moon, the earth, rivers, and many other natural phenomena had minds.  Spirits and gods that had to be propitiated.  Today, we don’t regard them as conscious primarily because we understand the rules that govern what they do, which raises interesting questions about how we’ll react as neuroscience progresses.

Ultimately, I think Turing was right.  What is conscious is that which can convince us of its consciousness.  The usual philosophical responses to that assertion involve things like philosophical zombies, beings that seem conscious but aren’t.  It’s difficult to posit a difference between a conscious entity and a philosophical zombie without getting into arguments about dualism.  But I think I’ll save that for another post.

28 thoughts on “Consciousness is in the eye of the beholder”

  1. Great article and thanks for linking to mine!

    I dislike the Turing test because it’s subjective. One person may be convinced by a program’s responses while another isn’t. An AI measurement should be based on objectively measurable criteria, not on the judge’s experience or possible naivety.

    Searle’s Chinese Room is thought-provoking and confusing. One of the problems is that it sounded like it was about intelligence, but he later claimed it was about consciousness. For example, the first time I read it, I thought he was simply talking about AI, and I dismissed the experiment. The second time (when I read that he was talking about consciousness) the thought experiment took on a whole new dimension and became a lot more compelling.

    Regarding other creatures being conscious… what is it about purposeful behavior that would be an indicator of consciousness? For instance, if a program exhibited that same behavior (arguably some do, in games), would it be conscious, and if not, why not?

    Sorry about the long comment. Consciousness is one of those fascinating, confusing and frustrating subjects. I’m still trying to figure out just what I think about it all 🙂

    1. Thanks, and I very much appreciate you stopping by!

      I understand your concern about the Turing test and subjectivity. It could probably be made less subjective with a large sample of judges, but I’m not sure determinations about consciousness can ever escape being inherently subjective.

      I’m one of those people who doesn’t think the Chinese Room shows what it purports to show. As with the Turing test, I suspect our reactions to it indicate our overall feelings about consciousness.

      Great question about program purposefulness and consciousness! My hypothesis is that, for us to intuitively feel like it was conscious, we’d have to see purposes and motives that we can relate to, at least to some degree. For example, a program concerned about its own survival might make us feel like it was conscious. Of course, the programmer who coded instructions to make it be concerned about its own survival might disagree. But computer scientists will probably be the last ones to accept a machine as conscious, just as many neuroscientists are starting to conclude that consciousness may be an illusion.

      No worries at all on comment length. I love discussing these topics, and lengthy comments are only a sign of engaged participants.

      1. No problem; I’m enjoying your articles.

        I read a scientist who claimed consciousness was an illusion, and thought the claim was self-refuting. How can there be an illusion unless there is a consciousness to perceive it?

        I understand those who claim the illusion is in how we think of consciousness (which is how I understand Daniel Dennett). I can even understand those who argue against the idea of a stable stream of consciousness, and possibly even those who argue that unified consciousness is an illusion.

        But consciousness itself an illusion?

        If you have any references that go into detail about this view, I’d love to read them!

        1. Checked out your blog this morning. It looks extremely interesting. Enough for me to subscribe. I’m looking forward to reading your thoughts on this.

          I think you got it right. The phrase “consciousness is an illusion” is usually just a statement that consciousness isn’t what we commonly think it is, and that our intuitions about it are not to be trusted. (You mentioned Dennett. Another person to check out is Susan Blackmore and her book, ‘Consciousness: A Very Short Introduction’.) It’s a bit like saying my desk is an illusion because it’s mostly empty space (between subatomic particles). Obviously, my desk exists at an emergent functional level, and so does consciousness, just not as the Cartesian theater many people imagine.

          1. Thank you!

            What you write makes perfect sense and I agree.

            Kudos on mentioning Susan Blackmore! I found her thoughts on Zen, memes, consciousness, and OBEs interesting. I don’t know if I’ve read the book you mentioned, so I’m off to check it out.

  2. SelfAwarePatterns, your appraisal of Dennett’s view is right on the mark. It’s often known in the literature as ‘eliminative materialism’; searching for that term might aid you in your researches, bloggingisaresponsibility. Paul and Patricia Churchland have advanced influential versions of the view. The essential idea is that, as neurophysiological advances are made, it will turn out that our “folk” psychological concepts (such as anger, thirst, tiredness) do not accurately map onto the sorts of brain states that actually obtain. Thus, the terms we use for mental state concepts don’t refer to really existing entities (brain states), and we will eliminate them. Sorry for butting in, but I found this conversation too interesting. Thanks for sharing.

  3. I concede – 2 straight days of philosophy of mind is my limit for this week, maybe even the rest of the month, and I still don’t have much more than a feeling that computers aren’t, and will never be, conscious. I don’t know, maybe it’s just biocentric hubris, but:

    “… according to my version of panpsychism, it feels like something to be the internet …”

    Christof Koch, ‘A Neuroscientist’s Radical Theory of How Networks Become Conscious’

    … “hmmm, really?”, (*arching a single eyebrow*), it’s been an informative, educational 2 days though – thanks!

        1. I’m not overly familiar with AI, and I’m still somewhat unclear on intentionality in the philosophical sense, but if Tecumseh Fitch is correct, there’s a fundamental difference between the living and the non-living:

          “Abstract – I suggest that most discussions of intentional systems have overlooked an important aspect of living organisms: the intrinsic goal-directedness inherent in the behaviour of living eukaryotic cells. …

          … I suggest that an appreciation of this aspect of living matter provides a potential route out of what may otherwise appear to be a hopeless philosophical quagmire confronting information-processing models of the mind.”

          ‘Nano-Intentionality – A Defense of Intrinsic Intentionality’, Fitch 2008

          (PDF: Fitch2008NanointentionalityCorrect.pdf)

          … the difference being between intrinsic and derived intentionality. The reason I say I have not much more than a feeling is that, though I understand derived intentionality well enough, I’m still a little fuzzy on the intrinsic variety.

          Panpsychism – no fan here either, in fact to my mind to say that it feels like something to be the internet … well, I’d politely say that it’s a unique perspective 🙂

          1. Interesting. Scanning that paper, his argument seems to be that, because organic minds are made up of cells that themselves respond to their environment, they have an intrinsic intentionality that sets them apart from the derived intentionality of machines.

            As usual, I find myself siding with Dennett here. I think intrinsic intentionality is a failure to bite the bullet that all intentionality is ultimately derived. Certainly, the complexity of life has not yet been equalled by any machine, and if it ever is, we may not even call it a machine anymore but engineered life. But I don’t see how that leads to his assertion. And his claim that, even if it were all simulated successfully, the simulation still wouldn’t have the intrinsic intentionality of the original seems to me to verge on vitalism (despite his assertion that he doesn’t hold to vitalism).

            A key question is to what extent consciousness requires the underlying molecular machinery. How far down in resolution do we need to go? Is that molecular machinery simply building blocks that could, theoretically, be replaced with a different hardware layer (which happens all the time in computer technology), or an essential component of the information processing that we call consciousness? Only time will tell.

          2. I don’t mind being a biological machine, but the equivalent of a thermostat – ouch! I did just find that Dennett refers to Fitch’s paper in ‘Intuition Pumps And Other Tools for Thinking’ – maybe someday 🙂

          3. amanimal, I can assure you that you are orders of magnitude more sophisticated than any thermostat I’ve ever met 🙂

            I have ‘Intuition Pumps’, and have read sections of it. I recall it being a pretty good summary of Dennett’s views.

        2. Another oops! Maybe I shouldn’t be trying to think at 4am – “… and will never be …” is too strong, maybe more appropriately “highly unlikely to become”.

        3. I think Elliott Jaques is saying pretty much the same thing, though a bit more colloquially:

          “Certainly to hold that mechanical systems …”, 3rd paragraph, page 127

          ‘The Life and Behavior of Living Organisms: A General Theory’, Jaques 2002
          http://goo.gl/VK4EFC

          … can possess only a derived intentionality?

          1. It might pay to unpack Jaques’ words a bit. What do we mean when we say we “know” something? Or when we have something to say “on our own”? Or that we are disappointed or worried?

            Knowledge is justified true belief. It is a model in the mind of a portion of reality. It is essentially data. Consider your knowledge of the address of this site. Why can’t we say that your browser “knows” that address?
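
            To make that concrete, here’s a trivial sketch (the site name and URL below are placeholders, not real entries): by the knowledge-as-data standard, the browser “knows” the address simply by holding a retrievable record of it.

            ```python
            # A trivial sketch of knowledge-as-data: the browser "knows" an address
            # in the sense that it holds a retrievable record of it. The site name
            # and URL are placeholders, not real entries.

            bookmarks = {
                "this site": "https://example.com",  # a stored record: data standing in for knowledge
            }

            def browser_knows(site: str) -> bool:
                # By the data-holding standard, having a retrievable record
                # is all the "knowing" there is.
                return site in bookmarks

            print(browser_knows("this site"))  # True
            ```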

            Ultimately, what can we really say “on our own”? That is, what can we really say that isn’t a result of perceptions (inputs) from our environment, synthesized perhaps over a lifetime, or derived from those inputs?

            And why are we ever disappointed or worried? What are disappointment and worry? Aren’t they emotions, instincts, our basic programming provided by evolution? Other than complexity, how is that different from the urgency my laptop is currently showing about needing to be charged soon?
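
            As a minimal sketch of what I mean (the threshold and messages here are invented for illustration), my laptop’s “worry” amounts to something like a built-in rule:

            ```python
            # A minimal sketch of the laptop's "urgency": a fixed, built-in rule
            # that fires as the charge drops. The threshold and messages are
            # invented; a real power manager is more elaborate, but arguably not
            # different in kind.

            LOW_BATTERY = 0.15  # assumed cutoff

            def battery_status(charge: float) -> str:
                """Map a charge level to an increasingly 'worried' message."""
                if charge < LOW_BATTERY:
                    return "Battery critically low: connect charger now!"
                return "Battery OK."

            print(battery_status(0.10))  # the machine's version of worry
            ```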

            There are certainly differences in degree here. My laptop’s experience is not (yet) anything like my experience. But I can’t see a sharp distinction, except perhaps, as I noted in my post, about differences in basic programming, instincts, which may hinder our ability to feel empathy with a machine.

          2. “It is hard to imagine how free will can operate if our behavior is determined by physical law, so it seems that we are no more than biological machines and that free will is just an illusion.”

            I’ve read the paragraph that comes from, in ‘The Grand Design’ (Hawking/Mlodinow 2010), many times, and it seems I’ve not quite taken it to heart. I don’t particularly care for the “no more than”, though, because I think it’s pretty incredible to be a biological machine.

            From your post – “model”, “inputs”, “programming” … and I just looked at:

            http://manbynature.blogspot.com/2008/02/human-hardwired-firmwired-and-softwired.html

            … after reading:

            http://www.huffingtonpost.com/fr-richard-rohr/getting-back-to-our-first_b_4298903.html?utm_hp_ref=religion

            … where it seems to me Rohr presents a somewhat demonized and one-sided view of the unconscious. Anyway, thanks for all 5 paragraphs!

          3. The manbynature post definitely fits with my thoughts. Thanks for the link!

            I agree on the “no more than” phrase. I think a lot of the resistance to these ideas often comes from the way they are presented. It reminds me of Alex Rosenberg’s views, many of which I agree with*, but I hate his presentation of them, which I think is far bleaker than necessary.

            * I do think his views on history and the social sciences are blinkered.

          4. Sorry? No need at all – in fact they’re much appreciated! I think you’ve brought to light a rather major inconsistency in my thinking!
