Why we’ll know AI is conscious before it will

At Nautilus, Joel Frohlich explores how we’ll know when an AI is conscious.  He starts off by accepting David Chalmers’ concept of a philosophical zombie, but then makes this statement.

But I have a slight problem with Chalmers’ zombies. Zombies are supposed to be capable of asking any question about the nature of experience. It’s worth wondering, though, how a person or machine devoid of experience could reflect on experience it doesn’t have.

He then goes on to describe what I’d call a Turing test for consciousness.

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know if it’s ethical to unplug such an AI without knowing if it’s conscious, we better start listening for such questions now.

This seems to include a couple of major assumptions.

The first is the idea that we’ll accidentally make an AI conscious.  I think that is profoundly unlikely.  We’re having a hard enough time making AIs that can successfully navigate around houses or road systems, not to mention ones that can simulate the consequences of real-world physical actions.  None of these capabilities will come without a lot of deliberate engineering.

The second assumption is that consciousness, like some kind of soul, is a quality a system either has or doesn’t have.  We already have systems that, to some degree, take in information about the world and navigate around in it (self-driving cars, Mars rovers, etc.).  This amounts to a basic form of exteroceptive awareness.  To the extent such systems have internal sensors, they have a primitive form of interoceptive awareness.  In the language of the previous post, these systems already have a sensorium more sophisticated than that of many organisms.

But their motorium, their ability to perform actions, remains largely rule-based, that is, reflexive.  They don’t yet have the capability to simulate multiple courses of action (imagination) and assess the desirability of those courses, although the DeepMind people are working on this capability.

The abilities above provide a level of functionality that some might consider conscious, although it’s still missing aspects that others will insist are crucial.  So it might be better described as “proto-conscious.”

For a system to be conscious in the way animals are, it would also have to have a model of self, and care about that self.  This self concern comes naturally to us because having such a concern increases our chances of survival and reproduction.  Organisms that don’t have that instinctive concern tend to quickly be selected out of the gene pool.

But for the AI to ask about its own consciousness, its model of self would need to include another model to monitor aspects of its own internal processing.  In other words, it would need metacognition, introspection, self-reflection.  Only once that is in place will it be capable of pondering its own consciousness, and be motivated to do so.

These are not capabilities that are going to come easily or by accident.  There will likely be numerous prototype failures that are near but not quite there.  This means that we’re likely to see more and more sophisticated systems over time that increasingly trigger our intuition of consciousness.  We’ll suspect these systems of being conscious long before they have the capability to wonder about their own consciousness, and we’ll be watching for signs of this kind of self awareness as we try to instill it, like a parent watching for their child’s first successful utterance of a word (or depending on your attitude, Frankenstein looking for the first signs of life in his creation).

Although it’s also worth wondering how prevalent systems with a sense of self will be.  Certainly they will be created in labs, but most of us won’t want cars or robots that care about themselves, at least beyond their usefulness to their owners.  And given all the ethical concerns with full consciousness and the difficulties in accomplishing it, I think the proto-conscious stage is as far as we’ll bring common everyday AI systems, a stage that makes them powerful tools, but keeps them as tools, rather than slaves.

Unless of course I’m missing something?

36 thoughts on “Why we’ll know AI is conscious before it will”

  1. Hey Mike. You said

    For a system to be conscious in the way animals are, it would also have to have a model of self, and care about that self.

    What would you say counts as a “model of self”? Say there is a reflex response to getting moderately poked in the ribs, a response that includes smiling, flinching, and moving the arms toward the place of poking. Now suppose there is another reflex control mechanism such that when the timing of pressure on the fingertips corresponds to the poking of the ribs, the response to rib-poking is suppressed. Does that count as a model of self?
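
    To make that concrete, here is a toy sketch of the kind of mechanism I have in mind; the sensor names, the timing window, and the canned response are invented purely for illustration:

    ```python
    # Toy illustration: a rib-poke reflex that is suppressed whenever fingertip
    # pressure coincides in time with the rib pressure, i.e. the poke appears to
    # be self-generated.  All names and the 50 ms window are made-up assumptions.

    COINCIDENCE_WINDOW = 0.05  # seconds

    def rib_poke_reflex():
        # Stand-in for the reflex response described above.
        print("smile, flinch, move arms toward ribs")

    def on_rib_pressure(rib_time, recent_fingertip_times):
        """Fire the reflex unless a fingertip press coincides with the rib press."""
        self_generated = any(abs(rib_time - t) < COINCIDENCE_WINDOW
                             for t in recent_fingertip_times)
        if not self_generated:
            rib_poke_reflex()

    # Someone else pokes the ribs: the reflex fires.
    on_rib_pressure(rib_time=10.00, recent_fingertip_times=[3.20])

    # The system pokes its own ribs: fingertip and rib pressure coincide,
    # so the reflex is suppressed.
    on_rib_pressure(rib_time=12.00, recent_fingertip_times=[11.98])
    ```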

    1. Hey James,
      There are multiple levels of self models. For the initial version, I think it includes understanding that your body is separate from the rest of the world, both in terms of what you will sense and what you can do.

      For reflex responses, I like Antonio Damasio’s protoself, core self, and autobiographical self. A galaxy of self-centered reflexes could make up a protoself. But the core and autobiographical selves provide an ego-centric model around which to simulate various courses of action.

      To Damasio’s layers, I’d add the metacognitive self, essentially knowledge of one’s own thoughts. That’s the second model I refer to in the post.

      My current thinking is that a dog has the protoself, core self, and a limited version (by our standards) of an autobiographical self, but probably no metacognitive self, or if it does, a very limited one.

  2. If Frohlich thinks “philosophical zombies” a la Chalmers would ever behave differently than ordinary people, he doesn’t understand Chalmers. Which isn’t entirely his fault, since Chalmers’s position is incoherent. Speculating about what “philosophical zombies” might do is like asking “if water were impossible to freeze, then would it be possible to boil?” Sorry, but water is necessarily possible to freeze, so the “if” part is empty nonsense, and there is no call to assess the “then”. Similarly, a molecule-for-molecule duplicate of you necessarily has all your intrinsically supported properties, including consciousness. Just because a philosopher strings some words together without immediately obvious insanity, does not mean they have named a genuine possibility.

    1. I agree. Philosophical zombies are incoherent unless you presuppose substance dualism. In other words, they require as an axiom what they purport to demonstrate. And even then, they inherently require consciousness to be epiphenomenal.

  3. I don’t know how to tell if something’s conscious or not. Does it matter? As for me, I am perfectly happy to be an automaton. Now that you have read this, you can say that too without risking being called “conscious”.

    1. In the end, there is no sharp distinction. The idea that there is seems like a holdover from classic dualism, the idea that humans, and possibly animals, have some immaterial animating force that mechanisms lack. The idea that we are mechanisms is one a lot of people seem to hate, finding every rationalization they can for it not to be true.

      1. Exactly. People need to let go of this idea of dualism. I think the only difference between humans and machines is the level of complexity and organization. Rather than dualism, I think there is a continuum between order and chaos, a level of entropy. I think a phone switching to power-saving mode when the battery level is low is perfectly “self-aware,” and I don’t have a problem being an automaton.

        As you may know, in some meditation techniques, you focus on the physical experience first, then on the mental experience of having the physical experience, then on the experience of having that mental experience, and so on ad infinitum, until the exercise loses all sense, which is the whole point. This exercise actually creates certain neural paths in the brain and promotes its capacity for self-regulation and self-control. A neural network could be trained to do that, I suppose. I think a sufficiently complicated neural network can be trained to be conscious to a degree limited by its own complexity. Once a system understands its limitations and is able to overcome them, that will be interesting.

    1. Certainly no Turing-type test could ever give us absolute confidence that another system is conscious. Of course, no test can ever give me absolute confidence that you’re conscious, or you that I am. It’s the classic problem of other minds. But we use human behavior all the time to assess whether or not a human is conscious.

      But if there’s a better way than behavior to make this determination, I don’t know what it is. No physical understanding of the brain will ever give us certitude that we truly understand how our subjective experience arises from it. The best we could do is correlate those experiences with certain neural states. But given that many arthropods and cephalopods appear to be at least somewhat conscious with radically different neural states, I don’t think this helps us much in assessing an AI.

      Looking at the abstract for the Searle paper, as usual I find his attitude confused and dogmatic. He seems to have long ago concluded that computers or software simply can’t provide cognition, and has been beating the same drum for decades.

      1. Hmm. As I am sure you noticed, I posted a link to Searle’s original 1980 CRA paper. It is a paper I have been asked to lecture on over the years, and one that a lot of folk seem to like to dismiss, albeit I have also noticed that, when pressed, few of those who dismiss it as “confused and dogmatic” (not you, I am sure) have actually read it in toto themselves. As someone with a little background in Philosophy and Cog Sci who is reasonably familiar with the CRA, and as a Cybernetician and Computer Scientist whose ‘day job’ for nearly 40 years has entailed leading teams that actually build AI systems, I find myself persuaded by Searle’s position (which I believe has even deeper, and broader, implications regarding computation than many of his critics realise), though of course I know many who are not.

          1. No worries. Sorry WordPress is chewing up your stuff. Unfortunately, it doesn’t look like the URL made it into the comment text, so I can’t put it back in. Usually with comments, I find it best to just paste the URL in and let WP convert it to a link.

        1. Sorry, I missed that the paper was Searle’s original one on this subject. The article itself at the link is paywalled, and the abstract showed it as published online in 2010; I missed the online part, and so thought this was another iteration of his old position. (He has reiterated it in other articles, interviews, and talks over the years.)

          I did read that original paper, and some of the responses, and skimmed Searle’s response to those responses, although it’s now been several years. From what I remember, a lot depends on your definition of “understand”. The issue is that understanding is always relative. So if a piece of software is processing information about, say, the Empire State Building, it doesn’t know the Empire State Building in the way we do.

          My issue though is, what do we really know about the Empire State Building? We hold mental images of it (sensory image maps), and perhaps some of us have been in it and have a mental map of it. But do we understand all the atomic relationships in it? Do most of us know its history? The materials used in its construction? The structural stresses it’s under?

          Our knowledge, our understanding of the Empire State Building is at a certain level, and is in terms of certain affordances, in terms of what it means for us. Currently our model of it exists at more levels than what a computer system can hold about it. But to regard our limited models of the Empire State Building as “understanding” while the computer system’s isn’t, as though there’s some sharp ontological difference between our models and a computer’s rather than just a matter of depth, is, I think, an unwarranted privileging of the way our brains process information.

          I didn’t perceive that Searle successfully justified this distinction. As I recall, many of the responses called him on it, and I didn’t find that his response adequately addressed the issues. But maybe I’m missing something?

  4. As for the comment above asking whether AIs might be irrational: if we presume all AI systems are at least initially programmed and not an accidental consequence, I believe it would be easier to program an AI rationally than irrationally, due to Occam’s Razor; besides, irrational systems are inconsistent or otherwise flawed.
    I wanted to take issue with your assumption that AI consciousness couldn’t happen by accident. First off, it probably happened that way with biological systems like us. Given even a remote possibility of consciousness accidentally arising, it is inevitable that it would happen at least once from here to eternity, and probably an infinite number of times. Secondly, in my first sci-fi novel, “Why Is Unit 142857 Sad?” (https://uncollectedworks.wordpress.com/why-is-unit-142857-sad-or-the-tin-mans-heart/), I wrote about a scenario in which an AI robot was programmed for a proto-consciousness (complete with sensorium and motorium) with built-in evolutionary routines. The proto-consciousness consisted of machine instructions plus a routine that systematically moves an arbitrary instruction sequence in persistent memory, called a sliding window, into a temporary memory buffer, inserts jump instructions into and out of the buffer, and flips a single bit at a random offset in the buffer. If the AI robot fails as a result of the mutation, or otherwise, it reboots to its previous state. If it survives the mutation, the mutated code is saved in persistent memory and the sliding window slides over the next instruction sequence, moving it into the temporary buffer and randomly mutating it, rebooting or storing the result, on and on, until it reaches the end of the code and goes back to the beginning, ad infinitum. Each evolutionary generation would take only a few seconds.
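
    In rough Python, with a bytearray standing in for persistent memory, the core of that sliding-window loop might look something like the sketch below; the window size, the survival check, and the generation count are placeholders, and the jump-instruction plumbing is left out:

    ```python
    import random

    WINDOW_SIZE = 16  # bytes per sliding window (arbitrary choice)

    def survives(code: bytes) -> bool:
        # Placeholder for "the robot keeps functioning after running the mutated
        # code"; in the story the robot would actually execute it.
        return random.random() > 0.5

    def evolve(persistent_memory: bytearray, generations: int) -> bytearray:
        window_start = 0
        for _ in range(generations):
            window_end = min(window_start + WINDOW_SIZE, len(persistent_memory))
            buffer = bytearray(persistent_memory[window_start:window_end])

            # Flip a single bit at a random offset within the temporary buffer.
            offset = random.randrange(len(buffer))
            buffer[offset] ^= 1 << random.randrange(8)

            candidate = bytearray(persistent_memory)
            candidate[window_start:window_end] = buffer

            if survives(bytes(candidate)):
                persistent_memory = candidate   # store the successful mutation
            # else: "reboot" by keeping the previous state untouched

            # Slide the window, wrapping back to the start at the end of the code.
            window_start = window_end % len(persistent_memory)
        return persistent_memory

    code = bytearray(random.randbytes(64))      # stand-in for the robot's code
    code = evolve(code, generations=1000)
    ```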

    1. I don’t think I’d characterize consciousness evolving as an accident. If we’re talking about the most basic primary consciousness, it appears to have evolved independently at least three times: in vertebrates, arthropods, and cephalopods. If we’re talking about a more expanded version, that also seems to have evolved at least twice, in birds and mammals.

      Only human-level sapient consciousness appears to be singularly unique, and it may be so unique that if we reran evolution repeatedly, it probably wouldn’t arise again except in a minuscule fraction of the reruns.

      So primary consciousness appears to be an example of convergent evolution, although even it took billions of years to evolve. But it’s far from established that sapient-level consciousness is inevitable. It took hundreds of millions of years for it to evolve on top of primary consciousness and, at least so far, it’s only happened once.

      So having a robot reboot repeatedly in an attempt to evolve consciousness may work for the primary variety, but it seems unlikely to work for the sapient level version. And once you set evolution in motion, you never know what you’re going to get. The thing to remember is that evolution doesn’t progress so much as diversify.

      Not that it doesn’t sound like a cool story!

    2. Since you have some IT background, have you ever tried to program even a mini version of your robot?

      You could have a process that spawns another process from bytes saved to disk, modifying the bytes each time by flipping bits. But then there would be practical issues.

      I am not sure how you would define survival or death. If the spawned process crashes, that could be called death. But how long do you let it run before you know it will not crash? And what if it is just stuck in a loop?

      I am not sure how it would turn out, but my gut feeling is that the robot would never evolve: it would either get stuck with a garbage buffer that changes its behavior without ever developing anything sophisticated, or gravitate toward looping algorithms that do nothing.
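
      Concretely, I am imagining a crude harness something like the one below, where the “organism” is a tiny script rather than raw machine code; the seed program, the timeout, and the survival rule are arbitrary choices just for illustration:

      ```python
      import os
      import random
      import subprocess
      import sys
      import tempfile

      SEED = b"print('still alive')\n"   # stand-in for the robot's code
      TIMEOUT_SECONDS = 2                # crude answer to "how long do we let it run?"

      def mutate(code: bytes) -> bytes:
          # Flip a single bit at a random offset.
          buf = bytearray(code)
          offset = random.randrange(len(buf))
          buf[offset] ^= 1 << random.randrange(8)
          return bytes(buf)

      def survives(code: bytes) -> bool:
          # Run the mutated code in a child process: a crash or non-zero exit
          # counts as death, and a timeout is treated as a runaway loop (also death).
          with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as f:
              f.write(code)
              path = f.name
          try:
              result = subprocess.run([sys.executable, path],
                                      capture_output=True, timeout=TIMEOUT_SECONDS)
              return result.returncode == 0
          except subprocess.TimeoutExpired:
              return False
          finally:
              os.unlink(path)

      current = SEED
      for _ in range(20):
          candidate = mutate(current)
          if survives(candidate):
              current = candidate     # keep the mutation
          # else: keep the previous state (the "reboot")
      ```

      Most single-bit flips of a script like that just produce a syntax error, so I would expect a run of this kind mostly to confirm the gut feeling above.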

      Also, I assume the rebooting mechanism is exempted from the evolution, but it might be that the rebooting process itself is key to the evolution of more sophisticated behavior.

      Interesting idea nevertheless.

      1. Hi James. Thanks for your astute comments. No, I never actually programmed such a robot, although I had the skills to do so. It was more of a thought experiment. Unfortunately, I didn’t have anyone of your caliber to bounce my ideas off of at the time of writing, but if and when I decide to do a revised version of the story, with your permission, I’ll work some of your ideas into it to address the issues you raised. Basically, the sliding-window buffer is a “sandbox,” similar to Finjan’s Java firewall. My concept needs some reworking and embellishment, but my gut feeling is that that is the direction to go. After all, the evolution of all living things is based on mutation, survival, and replication, which are pretty straightforward operations, and parameters for them can be easily set. Thank you for your comment!

  5. As you said, “there are many levels of consciousness”, but the AI’s is so far from ours, and so different from ours too.
    By the way, the Turing test is quite inappropriate for proving an AI’s consciousness. But that’s a different topic.

    1. Dimitar, that kinda depends on your understanding of what consciousness is. Some of us would say the Turing test is a perfect test of consciousness. It’s just not necessarily a test of human-level consciousness, but it would probably be close.

  6. I quite agree that accidental consciousness isn’t likely to be an issue!

    (There was a cute SF short story way back when about the phone system waking up. The last line was something like, “Everywhere, all at once, the phones began to ring.” Wrong kind of network, of course, but a cute idea.)
