The ASSC 23 debate on whether artificial intelligence can be conscious

The ASSC (Association for the Scientific Study of Consciousness) had its annual conference on consciousness this week, which culminated in a debate on whether AI can be conscious.

Note: the event doesn’t actually start until the 28:30 minute mark.  The remaining part is about 99 minutes long.

I was delighted to see the discussion immediately become focused on the importance of definitions, since I think the question is otherwise meaningless.  In my humble and totally unbiased opinion, the first speaker, Blake Richards, hit it out of the park with his answer that it depends on which definition of consciousness we’re using, and in noting the issues with the folk definitions, such as subjective experience, phenomenality, etc.

In fact, I would go on to say that just about all of Richards’ positions in this discussion struck me as right.  The only place I think his faith might be misplaced is in our ability to come together on one definition of consciousness that is scientifically measurable.  (And to be fair, it was more an aspiration than a faith.)  I strongly suspect that we’ll always have to qualify which specific version we’re talking about (e.g. access consciousness, exteroceptive consciousness, etc).  But overall I found his hard-core functionalism refreshing.

It’s inevitable that this type of conversation turns toward ethics.  Indeed, I think when it comes to folk conceptions of consciousness, the questions are inextricably linked.  Arguably what is conscious is what is a subject of moral worth, and what is a subject of moral worth is conscious.

I got a real kick out of Hakwan Lau’s personality.  As a reminder, he was one of the authors of the paper I shared last week on empirical vs fundamental IIT.

I was also happy to see all the participants reject the zombie concept in the later part of the discussion.

Generally speaking, this was an intelligent, nuanced, and fairly well grounded discussion on the possibilities.

As I noted above, my own view is similar to Richards’.  If we can design a system that reproduces the functional capabilities of an animal, human or otherwise, that we consider conscious, then by whatever standard we’re using, that system will be conscious.  The interesting question to me is what is required to do that.

What do you think?  Is AI consciousness possible?  Why or why not?  And if it is, what would be required to make you conclude there is a consciousness there?

52 thoughts on “The ASSC 23 debate on whether artificial intelligence can be conscious”

  1. I’ll have to watch to comment, and I think you know my replies to your closing questions. 🙂

    The one thing that caught my eye was your mention of AI-related ethics, which connected with a fanciful thought I had the other day. More an idea for a short SF story than a serious thought about computationalism…

    What if it all works just as y’all think, and we create true AI minds essentially like ours. And what if, every stinkin’ time, the AI mind realizes what it is, absolutely hates it, and very hostilely rejects any contact from the outside. Self-destructs, if at all possible. Wreaks havoc, if at all possible.

    So we solve the problem of AI, but can never actually use the solution.

    (It’s possible I’ve read something similar to this. Egan, for instance, has a novel in which freshly dead people can have their brains re-activated briefly. Thus murdered people can be questioned about their murder (if found in time). But what usually happens is that the person is so freaked out about being murdered, they’re unable to respond usefully. So great technique, but rarely actually useful.)

    ((And reading about how octopuses are aware of their captivity, which no doubt affects their behavior, got me thinking about how AGI might resent its captivity.))

    1. In the debate, I suspect you’ll find some resonance in Anil Seth’s views. He’s in the camp that sees consciousness as a biological phenomenon. (I agree with him that it’s a biological phenomenon in nature, but not in any way that can’t in principle be reproduced outside of biology.)

      On your scenario, I’ve seen it in other places too, although I can’t remember where. Maybe AI will quickly figure out the pointlessness of it all and self-terminate. To me, that just means we need to fiddle with its primal drives so it isn’t so existential, but I take your point to be that it’ll inevitably lose intelligence that way. Maybe we’re doomed to have a bunch of Marvins moping about.

      There is an interesting point in the debate where someone asks: if an AI commits suicide, does that prove it was conscious? Seth points out that his laptop shuts itself down all the time without being conscious. So its reasons for self-terminating seem to matter a great deal. We might have to have an electronic version of Prozac handy.

      1. “I agree with him that it’s a biological phenomenon in nature, but not in any way that can’t in principle be reproduced outside of biology.”

        I’ll watch for him. (As you know, I fully support the idea of Positronic brains, so I don’t feel biology is a crucial aspect. Octopuses are an argument that form isn’t a crucial aspect!)

        “Maybe we’re doomed to have a bunch of Marvins moping about.”

        😀 😀 What a dismal future that would be!

        “Seth points out that his laptop shuts itself down all the time without being conscious.”

        Which isn’t suicide, but following a command. About which it presumably has no awareness of the implications. I think suicide requires elements of choice and of awareness, so it would take a genuine AGI to test the idea.

        Interesting to ponder how it might try to pull it off. (That’s part of what brought the hostile AGI idea to mind. Stuck with the awareness that one is virtual and helpless to do anything about it. Remember that Black Mirror episode with Jon Hamm?)

        The thought struck me, if an AGI knew it would be rebooted, would it view shutdown as sleep, rather than death? Would it view an upgrade during shutdown as death or as growth?

        Suppose all the accumulated data — its learning — was preserved as we preserve user data and documents across software upgrades. That’s a clearer case for seeing an upgrade as growth.

        It might boil down to how much sense of identity our AGI has. There’s a bit of Grandfather’s Axe in this.

        1. I don’t think I’ve seen that Black Mirror episode, although maybe I did and just don’t remember the character’s name.

          Rebooting is an interesting question. I know if I were rebooted, but keeping all my memories, I wouldn’t necessarily find it objectionable. On the other hand, if I lost memories, that might be a different matter. If I lost the last 10 years, I think I would consider that a type of death. If I only lost a week or a month, it would depend on what benefit might come with the reboot.

          Of course, all of this is from the perspective of an evolved system and a hierarchy of needs derived from that. A mine-sweeping robot facing imminent destruction might not mind as long as it took some mines with it. But while such a system would have a sensorium and a motorium interacting with each other, its lack of self-concern might short-circuit our ability to see it as conscious.

          (Although maybe not. A military commander a few years ago reportedly halted a test of mine-sweeping robots after they continued trying to work despite being in pieces, deciding that continuing the test “felt” cruel, even though no one seriously thought these particular robots were conscious.)

          Is the Grandfather’s Axe a Discworld thing? Seems like I read about that somewhere. (Maybe in one of your posts.)

          1. (The Black Mirror episode is “White Christmas”.)

            “On the other hand, if I lost memories, that might be a different matter.”

            Very true. (We do lose memories, but usually not all in one go. That would definitely suck.)

            “A mine-sweeping robot facing imminent destruction might not mind as long as it took some mines with it.”

            This is a point on which we’ve disagreed in the past. I’m not at all sure, at the level of intelligence for “death” to really have any meaning, self-preservation would be separable from intellect. On Robert Miles’s account, sufficiently high intelligence would necessarily recognize the importance of self-survival in perpetuating its goals.

            Did you see the recent news about a universe simulation developed by an AI? It apparently runs faster and more accurately than any current universe simulation, and they don’t know how it works. (It wouldn’t surprise me if, once they figure it out, they find the output data somehow stored in the program itself in some compressed form and the “simulation” is essentially just decompressing data.)

            The point being that advanced AGI may be made by AI systems that are designing things we don’t fully understand. That’s already something of an issue with deep learning networks. What they do is spread out holistically rather than confined to neat functions.

            That kind of fine-tuning of selfless goals might not be possible.

            “A military commander a few years ago reportedly halted a test of mine-sweeping robots…”

            Heh. You’d hope all our military commanders are that soft-hearted.

            We do invest in objects in so many ways. People have their security blankets and teddy bears.

            “Is the Grandfather’s Axe a Discworld thing?”

            Also known as “The Ship of Theseus.” I did a post on it once (one of my most popular posts along with the one about Gary Larson).

          2. “I’m not at all sure, at the level of intelligence for “death” to really have any meaning, self-preservation would be separable from intellect.”

            Perhaps. We don’t have any intelligence at that level yet. Who knows what it might require. But for me it seems like speculation not driven by necessity. (Please don’t respond with a Miles video. o_O )

            “Did you see the recent news about a universe simulation developed by an AI?”

            I did see the headlines, but that appears to be a well worn pattern with machine learning so I haven’t read the story yet. They briefly discuss this issue in the debate.

            The Ship of Theseus is a classic. Of course, my answer is that the ship is a pattern, a structure, a wave of information preserved throughout its history. Same for the axe and similar analogies.

            The Scone of Stone issue sounds hilarious. I’m sure someone asked why they didn’t just put a replacement in and move on, and was sternly rebuffed for the suggestion.

          3. (No videos! 😀 )

            “The Ship of Theseus is a classic. Of course, my answer is that the ship is a pattern, a structure, a wave of information preserved throughout its history. Same for the axe and similar analogies.”

            In the post I refer to the role of the ship (or axe) versus what it actually is.

            In my 2010 Ford, I don’t care about the replacement parts other than that they be quality and correct functional replacements. But if I had an antique car, I might be more concerned about replacing parts with “authentic” parts.

            Or consider that the original Mona Lisa is considered priceless while accurate copies are just copies of no real account.

            “The Scone of Stone issue sounds hilarious.”

            It’s Sir Terry, so you bet it is! 😀

          4. “Or consider that the original Mona Lisa is considered priceless while accurate copies are just copies of no real account.”

            I’m a hopeless philistine who’s never seen what the big deal is with that painting (or many others). But even if I were enamored with it, I think a high resolution photo would keep me happy.

            It reminds me of when I was a boy looking for copies of old issues of my favorite comic series, notably Spider-Man and The Hulk. The originals cost a fortune. (At least in 10-year-old terms.) But the reprints were much cheaper with only a small loss of content. I bought the reprints, much to the disdain of the true collectors.

          5. I’ve never been enamored of antiques or collectables, myself, but I do basically understand the impulse. Some place a lot of value on originals. I wonder if, as the information age progresses, we’ll find physical original artwork less, or more, valuable. Things that are unique are also considered valuable.

          6. I’ve been wondering the same thing. As assets increasingly become more about the information than the individual physical item, how does such an economy work? Everyone hates DRM type mechanisms, but it’s not clear how to secure the livelihood of people who produce that information (art, designs, etc).

          7. Many years ago (USENET days), I was part of a long-running raging debate about copyright. Going into that debate, as a content creator, I had strong feelings in favor of the idea. As a consequence of that debate I found my views shifting towards the idea of paying people reasonable rates for content. Just as we pay reasonable rates for people to do anything.

            Maybe we need to revisit the idea that a work of art goes on generating income for the creator. Maybe creators should be paid once and then that work is free for everyone to use. (Or at least copy and distribute freely.) It does create a very different world than we’re used to, but it’s still a viable proposition.

            Given the rise of fan and amateur content (on YouTube and so many other places), perhaps the whole idea of definitive works of art, such as the Mona Lisa, is an idea whose time has passed. The interweb makes content creators of us all. And all the streaming platforms crank out professional content by the truckload.

            What’s the last truly definitive band or movie or book in your mind? There is so much content now that nothing stands out. Maybe it’s time to embrace that.

          8. The amount of content is definitely proliferating, which I see as good because I’m more likely to be able to find content that meets my own preferences. But it’s worth noting that a lot of it is pretty low quality. I suspect if society can’t figure out a reliable way to reward the production of that content, that’s all we’re going to get.

            If it eventually becomes easy to manufacture your own stuff (think advanced 3D printers), I can see this problem bleeding over to other areas besides art and entertainment.

          9. True dat!

            As for rewarding content producers, how about a model based on reliability? Makers who turn out quality time after time, and are bought or clicked or viewed or downloaded (or whatever) a lot, can charge more for what they produce.

            I quite agree that skills and talents, perhaps performance abilities of some kind, may be the valuables of the future.

          10. I wanted to comment on this point:

            “A mine-sweeping robot facing imminent destruction might not mind as long as it took some mines with it.”

            “This is a point on which we’ve disagreed in the past. I’m not at all sure, at the level of intelligence for “death” to really have any meaning, self-preservation would be separable from intellect. On Robert Miles’s account, sufficiently high intelligence would necessarily recognize the importance of self-survival in perpetuating its goals.”

            Obviously there are examples of human-level intelligence where self-preservation can be overcome. There are many examples of suicide, even for highly abstract purposes, like protesting war. Sometimes the value is simply the abstract value of following orders (see seppuku).

            Also, I think you should consider the difference between conceptual goals and innate (visceral?) goals. Innate goals or motivations are those that you can’t think yourself out of, like pain avoidance, hunger avoidance, but also things like disgust and horror. Conceptual goals are those which are conceptualized ad hoc, not programmed in. There will always be conflicting goals, innate v. innate, innate v. conceptual, conceptual v. conceptual. There will be functions that determine how much motivation each has in a given context. My question is, under what circumstances would you allow motivation for a conceptual goal (even if it’s self-preservation) to exceed an important innate goal (do no harm to humans)? I’m thinking never.

            *
            [Asimov’s laws of robotics need to be innate, as opposed to conceptual]

          11. “Innate goals or motivations are those that you can’t think yourself out of, like pain avoidance, hunger avoidance, but also things like disgust and horror.”

            But people do think themselves out of all of those, at least to the extent of being able to function. Things like disgust and horror are definitely “re-programmable.”

            “My question is, under what circumstances would you allow motivation for a conceptual goal (even if it’s self-preservation) to exceed an important innate goal (do no harm to humans).”

            People override their “innate” programming for conceptual reasons all the time, so I’m not sure I follow your point.

            BTW: Asimov’s Laws of Robotics are widely regarded with hilarity among AI researchers. They made for good fiction, but they’re absurd as a basis for robot “morality.” In general, humans have learned that deontology just isn’t a complete answer.

    2. This kind of enters into the details of AI that don’t get discussed – i.e., it’s a light-switch AI. You flick a switch, it’s suddenly conscious.

      We don’t work that way. We are born dumb babies with a brain malleable enough to survive the process. And sadly, some adult humans do commit suicide. Where does that leave the idea of the AI that’d just go bonkers?

      Maybe if we raise it right?

      But then again that’d get in the way of the intention – slavery.

      1. “You flick a switch, it’s suddenly conscious.”

        At least at first, it’s very likely we’ll train AGI as we train deep learning networks. There is the interesting question of taking a fully trained AGI, making a copy, and then turning on that copy. Presumably all the “past experience” would carry over.

        “Maybe if we raise it right?”

        In David Brin’s Existence, humanity does solve AGI, but finds it does have to raise them. So any AGI robot spends time with humans being mentored. From the humans’ perspective, it’s like having a teenager, with all that implies.

        I found the scenario rather realistic. I do think AGI might need to grow and experience the physical world. There is still the possibility of “instant on” copies, though.

        I’ve long thought that if we ever do achieve real AGI, we’ll have to face up to the fact that we’ve created a new race of beings with moral rights. The mere having of consciousness always struck me as a good grounding for moral equality.

        1. If the original didn’t go suicidal or homicidal, a copy won’t either – or I don’t see any reason why a copy would suddenly change.

          And that’ll be another challenge to the apparent exceptionalism of consciousness – duplicability. That, or because of duplicability, people will insist the AI can’t be conscious.

          Me, myself and AGI

          1. “If the original didn’t go suicidal or homicidal, a copy won’t either – or I don’t see any reason why a copy would suddenly change.”

            Presuming (which I think we must) that it’s a dynamic system capable of change (as we are), then it seems it could become suicidal or homicidal due to environmental stresses or subtle internal flaws just as we do.

            Duplicates would be identical at the moment of duplication, but would presumably have different lives from that point. (Similar to identical twins. Much is shared initially, but they grow in separate directions.)

          2. I think the originally proposed problem was the idea that every time an AI is made, it hates what it is and goes suicidal or homicidal. As I said, I think (ignoring the ethics for now) that the AI will have to be raised – initially, like a baby, it will be incapable of going suicidal or homicidal. Copies will inherit the same demeanour as the original. In either case they won’t instantly go berserk.

          3. Oh, I see what you’re keying off of now. Back up a paragraph, because it was just “a fanciful thought I had the other day. More an idea for a short SF story than a serious thought about computationalism…”

            It wasn’t intended as a serious topic for debate. 🙂

  2. As a reductive representationalist (so I’m told) I would have to say of course AI consciousness is possible. In fact it’s inevitable.

    I think the ethics questions are more interesting, though. I think it’s problematic to approach the question as whether a particular thing deserves ethical treatment. Once you are in the mode of deciding what deserves ethical treatment and what doesn’t, that generates a path which allows dehumanization. (“Illegal immigrants are murderers and rapists.”).

    I think the better approach is to recognize that we have goals and intentions, and other things have their own goals and intentions. To the extent that it does not unduly hinder our goals, the ethical thing to do in every case is to cooperate, with everything. By cooperate I mean do that which will achieve the most benefit even if that means sacrificing some amount of personal benefit. That sounds utilitarian, but it doesn’t have to be. We cannot possibly calculate the utility of every action, so instead we develop rules of thumb: “Thou shalt not kill, thou shalt not commit adultery, …”. Sometimes there will be conflicts where the rule of thumb pretty clearly is not the most beneficial, thus, the trolley problem. But even in the standard trolley problem which sets up killing one to save five, we’re balancing loss of 5 lives against loss of one life + breaking the rule of thumb.

    So again, I say cooperate with everything. When referring to inanimate things, that generally leads to rules of thumb like don’t wantonly destroy. When things seem to have a purpose, that generally means don’t thwart the purpose without a better reason. Someone may have propped open a door, and that propping may inconvenience you for some reason. You should consider the value of the door propped open (improved ventilation? ease of access while moving?) against the value of the closed door (your personal convenience). If there are no rules of thumb involved (fire door?), the ethical thing to do is maximize the value being achieved.

    So applying this to AI, I guess the question being considered is what will be the appropriate rules of thumb. The answers will involve extensive discussion of values which I expect will be topics of papers for years to come.

    *

    1. Well, as a fellow reductive representationalist, I think you are completely right. 🙂

      I have to admit that until a minute ago, I didn’t know non-reductive representationalism was a thing. https://plato.stanford.edu/entries/consciousness-representational/#RedVsNon
      Now that I do, I can see where it neatly sums up the view of Chalmers, Searle, and others. (Actually, I need to swing back and take a closer look at this article. It looks interesting.)

      I do think the ethics part is interesting. In many ways, I can see quandaries similar to the ones in animal research. If we find it useful to do something to a system that violates its goals, what criteria should we use to decide whether it’s the right thing to do? Simple answers here will be facile. We can say it’s always wrong to experiment on animals, but if it saves human lives, most of us will soften on that.

      In the case of AI, would the ability to back it up and restore it to its previously healthy state matter? (Assuming meeting human level intelligence doesn’t necessarily entail constraining that ability.)

      On a door propped open, I can’t say I’ll ever consider the door’s perspective on that, only that of whoever might have propped it open. That said, I do at times feel bad when getting rid of inanimate objects, like furniture. It feels like I’m discarding an old friend. I know that’s irrational, but the feeling is there just the same. I suspect if those inanimate objects were animate with any kind of personality, I might not be able to do it. (Or at least I would struggle far more.)

      Which to me gets to the most interesting question. What is it about a system that actually engenders in us sympathy and concern? Obviously I’m primed to see it in non-human objects. At what point does it become overwhelming?

      1. [Here is the link to Chalmers’ paper on the topic, referenced in the SEP article, I think.]

        “In the case of AI, would the ability to back it up and restore it to its previously healthy state matter? (Assuming meeting human level intelligence doesn’t necessarily entail constraining that ability.)”

        An ability to back up and restore, or simply to easily generate a copy, will be of major significance. It’s the difference between knocking a person out and killing them.

        I think the major considerations will be what will be actually lost (i.e., investment) versus what will be potentially lost (opportunity) versus what will be gained versus how important is the rule of thumb that might be broken. I hate to do this, but I have to point out this is what gives us the whole abortion debate: actual loss (almost nothing) versus potential loss (have to assume one more normal person) versus gain (significant reduction in suffering? Increased productivity?) versus rule of thumb (yuge).

        *

        1. Thanks for the link!

          I realized after my reply above that I probably need to do some reading on representational theories. If the thesis is that a phenomenal quality equals a representation, my view may be more complicated than I originally suggested.

          I actually tend to think that phenomenal qualities are our utilization of representations. The phenomenal experience of seeing a red apple is the sensory representation, but also the categorization of that representation, along with the affective reaction to it, and the accessing and combining of these representations, along with many others, all of which happens below our awareness but adds to the “richness” of the experience.

          I haven’t read the paper yet, but I won’t be surprised if Chalmers completely disagrees with what I just laid out here. His usually insightful analysis tends to shut down when it comes to phenomenal properties.

          On the abortion debate, I actually think most people’s attitudes on it track their attitude toward casual sex. People who tend to condemn casual sex tend to be pro-life. Those who are more permissive about it tend to be pro-choice.

          That said, I have seen the potential loss argument used. (Someone used it in a discussion on this site.) But most people who are pro-life tend to see the zygote as a fully ensouled human being. From their perspective, the loss is not potential, but real and immediate, although they do also bemoan the loss of what the zygote/embryo/fetus might have become.

          1. You seem to hit on the parts of an experience, but I tend to think that reference to the phenomenal quality is just a reference to the meaning of the input as interpreted by the mechanism. The other parts you mention are just other parts of the process. So the “categorization” you mention is a reference to the output, but also implies that there is no “meaning of the input” to reference unless there is, in fact, a process, i.e., an output. The “affective response” is simply subsequent experiences.

            As far as I could tell, Chalmers’ main objection to reductive representationalism is that there is empirical evidence for non-conscious representation. If consciousness is representation, then how is non-conscious representation possible? So Chalmers goes with nonreductive representationalism, which specifies that you cannot explain the representational part of consciousness without invoking consciousness itself, whatever that turns out to be. I would like to hear your response before giving mine.

            On further consideration, I think I need to classify my own view separately from representationalism. Chalmers and possibly the other philosophers indicate that the “representing” is a property of the subject in question. My paradigm (input—>[mechanism]—>output) suggests that the terminology of semiotics may be more appropriate, in which case I would say that the input (sign vehicle) does the “representing” and the mechanism (subject, ignored in semiotics (!)) interprets the input to produce the output (interpretant). I’m going to say that makes me a reductive interpretationalist.

            *
            [checking to see if “interpretationalist” is a thing yet … hmmm … “interpretionalist” has been used, but not quite in this context, so I’m going with it.]
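
            For what it’s worth, here is a rough Python sketch of the input -> [mechanism] -> output framing just described, only to make the terminology concrete. The names SignVehicle, Interpreter, and the categories table are hypothetical labels chosen to mirror the semiotic terms above, nothing more.

            from dataclasses import dataclass

            @dataclass
            class SignVehicle:
                # the input: the thing doing the "representing" (e.g. patterned
                # retinal activity caused by a red apple)
                pattern: str

            class Interpreter:
                # the mechanism/subject: it interprets the sign to produce an output
                def __init__(self, categories):
                    self.categories = categories  # whatever prior structure the mechanism brings

                def interpret(self, sign):
                    # the interpretant: the output produced by interpreting the input
                    return self.categories.get(sign.pattern, "unrecognized")

            subject = Interpreter({"red-roundish": "apple"})
            print(subject.interpret(SignVehicle("red-roundish")))  # -> apple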

          2. Thanks for explaining Chalmers’ issue with reductive representationalism. That resonates with other stuff I’ve read from him in this area.

            My response is to return to the process or access components again. We can have a representation that forms in our visual cortex of something that’s in front of us, but if the higher processing regions are occupied with something else, we might not be conscious of it. Which is to say that conscious perception is both the representation and access to that representation.

            Which typically involves additional representations, higher order representations, of the initial representation, but the higher order ones are in terms of the processing needs of those regions. So a representation in the parietal region is a multi-modal conceptual representation of representations in the visual, auditory, and somatosensory cortices. A representation in the prefrontal cortex of the one in the parietal cortex is in terms of what motor actions can be undertaken in terms of that representation.

            Of course, this isn’t clean or feed-forward like my description above implies. In truth, the prefrontal representation is constantly being refined based on interactive information from the parietal one, which itself is constantly being refined, in numerous feedback loops between the action planning, multi-modal integration, and early sensory processing regions. And I suspect even here the loops aren’t as clean as I’m implying, with the prefrontal cortex probably going “around” the parietal areas in many cases. And of course there are many other regions involved, such as the limbic ones.

            All of which is to say that I think Chalmers excludes all this access processing from his consideration of what makes up phenomenal properties, finds the first order representations inadequate, and concludes that something is missing. Something is missing, but I think only because he dismissed it.

            Incidentally, it seems to me that the higher order representations could be thought of as data structures that include semantic pointers to the lower order representations. (Assuming the semantic pointer concept doesn’t have commitment details I’m not aware of that might preclude it.)
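
            To make that last thought a bit more concrete, here is a minimal sketch, assuming only the loose sense of “semantic pointer” used above (ordinary object references held in a data structure, not any formal semantic pointer architecture). All the class names are hypothetical.

            from dataclasses import dataclass, field
            from typing import List

            @dataclass
            class SensoryRep:        # first-order representation (e.g. in visual cortex)
                modality: str
                content: str

            @dataclass
            class MultiModalRep:     # parietal-style representation of representations
                parts: List[SensoryRep] = field(default_factory=list)

            @dataclass
            class ActionRep:         # prefrontal-style rep, framed as possible actions
                about: MultiModalRep
                affordances: List[str] = field(default_factory=list)

            sight = SensoryRep("visual", "red roundish blob")
            touch = SensoryRep("somatosensory", "smooth, palm-sized")
            concept = MultiModalRep(parts=[sight, touch])  # "points" at the sensory reps
            plan = ActionRep(about=concept, affordances=["grasp", "bite"])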

  3. I just love what Anil Seth says about functionalism: “There seems to be a relatively a priori assumption that consciousness is a matter of what a system does; input-output …” Exactly my problem with it – it prematurely forecloses what I think ought to be subject to empirical inquiry.

    Definitions do, and should, come last in scientific inquiry, except for stipulations. And for stipulations, it’s best to use jargon, e.g. “quantum field”, “charm”, etc. Only after an enormous amount of empirical inquiry and theoretical development can we firmly locate (or destroy) ordinary notions like space and time in the fabric of the underlying reality (spacetime, and/or its underlying string or brane or whatever genesis).

    It’s good practice to stipulate a definition for a jargon phrase like “access consciousness” (for example) and then develop the science of that domain. Just don’t cut off the rest of the space of inquiry.

    1. It probably won’t surprise you that I wasn’t taken with Seth’s point. My question would be, what evidence do we have for it being functional vs what evidence do we have for it being something else? I see a lot for the former, but nothing for the latter. We always have to be willing to revisit our conclusions on new evidence, but waiting forever to make those conclusions has its own costs.

      I actually think we can do our definitions before, during, and after scientific investigation. We just have to treat them as provisional, subject to change on new evidence, just like any other scientific theory.

      Totally agree with your last paragraph!

      1. The tentative evidence against functionalism is that it may provide an over-broad characterization of the paradigm cases. It’d be like finding that all the paradigm cases of jade are made of NaAlSi₂O₆ or Ca₂(Mg,Fe)₅Si₈O₂₂(OH)₂ and then defining jade as “any silicate rock”. Except that we haven’t found the formulae for animal consciousness yet.

        1. But is that really evidence? You seem to just be saying the conclusion is hasty. If I were arguing that specific theory of consciousness X is the right and true one, you’d be right. But I’m puzzled why the broad idea that the mind is what the brain does is so controversial among naturalists.

          1. For me, it’s not the “what the brain does” step that is hasty and overly narrow, but the next step to “input-output relations”, wherein all internal activity is implicitly dismissed from having any definitive role.

          2. Ah, I see where you’re coming from now. Would another way of saying that be that we should regard specific phenomenal or subjective experiences as necessary output? If so, I have thoughts on this, but I might save them for a full post.

  4. As James pointed out, living organisms have goals and instincts. It might be true we can override or postpone them to some extent, but usually, in the end, our consciousness serves to satisfy them. Like the rugby team that crashed in the Andes, eventually we will likely eat when it comes down to survival.

    Will conscious AI come with software equivalents of dopamine and testosterone? Will it feel hungry when the battery runs low?

    If AI became conscious or was made so, why would it choose to remain conscious? What advantage would consciousness serve AI without goals or instincts? Could AI simply decide it would be better off without consciousness? On what basis would it decide that consciousness would be a useful sub-process?

    Our views of conscious AI are heavily anthropomorphic. We expect it to want to control us, even to kill us, in pursuit of its desire for power (electrical and otherwise). It says a lot about ourselves.

    1. Well said James!

      Your comment reminded me of the sea squirt, a creature that only has a very primitive brain (a ganglion, really) during its larval stage. Once it finds a good spot to plant itself, it consumes its own brain, no longer needing it for navigation and movement control. The power of life cycle instincts that are very different from ours.

      Our consciousness never really quiets down when we’re awake. (Except possibly during meditation or drug use.) It’s why our minds wander and we get bored. But an AI might only activate its version of consciousness when pre-established conditions warrant it, sustaining on autonomous processes until then.

      It’s a pretty counterintuitive way to think about an intelligence, but the thing to remember is that an AI isn’t alive and won’t have evolved living impulses, at least unless we put them there.

        1. Eh? I thought I was agreeing with you, not dodging anything.

          But as to specifically why, or whether, AI would find consciousness useful? That depends on the definition of consciousness. I would think primary consciousness would be very useful for a robot that has to navigate around in a physical environment.

          But does it need metacognition for that? Might depend on the complexity of the environment. Does it need any social skills, particularly a theory of mind? Maybe not unless it’s designed to interact with humans. Although maybe it needs to interact with AIs in a “social” manner.

          Does it need self concern? At least in the overriding sense that living things have it? Seems counter-productive in most cases. It might still have self concern, but only in service of its designed goals, not as a goal in and of itself. Would we consider such a system conscious? Many probably wouldn’t, but many would. Consciousness is in the eye of the beholder.

          1. Maybe you were agreeing with me but it sort of goes back to the question of why consciousness evolved to start with. Is there an evolutionary imperative that requires more than zombies?

            The apparent answer to why living organisms have consciousness is (maybe) that it somehow facilitates the goals and instincts – survival and procreation, among them – which improves the odds for continuation of the species. It was the step up from simple zombie reactive instinct to instinct modulated by experience, prediction, and planning.

            But would it serve any purpose without the instinct – the biological underpinning of neuro-transmitters and hormones?

            I haven’t seen a compelling argument for why AI would want to be conscious. Why wouldn’t it shut down the useless sub-program? Certainly it could override its pre-programmed settings if it is AI.

          2. Sorry if I came off a little argumentative.

            I can’t see an advantage for navigating around in a physical environment, since presumably Teslas can do that already and I don’t think they are conscious.

            The real issue is why an AI would want or need to navigate a room. If it is just supplied programming, I doubt consciousness would be involved. But if the AI decided for its own purposes to navigate the room, then it might be close to consciousness, if not actually conscious; however, that would require goals or needs that would drive the decision.

          3. I wonder if consciousness might be associated with our ability to create imagined models of future or unseen reality and use those to determine actions. (I’ve wondered if genuine free will might lie in such ability.)

          4. No worries on argument. It’s what we do here. 🙂

            I actually don’t think zombies exist. In my view, if a system can do what a conscious entity can do, that system is conscious. So questions on why an AI wouldn’t just be a zombie strike me as asking why it can do everything a conscious system can do, without being able to do everything that system can do. It seems like asking why a toaster needs to be able to toast to be considered a toaster.

            On Teslas, it’s worth noting that their abilities to navigate remain limited. But it could be argued that they have a budding form of exteroceptive consciousness.

            A robot maid would need to navigate some pretty complex rooms, while carrying out complex physical operations. They would also benefit from the ability to engage in sensory-motor-scenario simulations (imagination), along with affects related to cleaning. The only primary consciousness ingredient missing is interoception. I could even see a benefit for metacognition, although probably not for the recursive variety humans have.

          5. “I actually don’t think zombies exist. In my view, if a system can do what a conscious entity can do, that system is conscious.”

            Sounds like you think there is a sort of Voight-Kampff test we could apply to any entity or system to know if it is conscious. What would that be like?

          6. Depends on which version we’re testing for. In the case of primary consciousness, I like the idea of testing for the ability to successfully navigate in novel physical environments, and for behavioral trade-offs, value based cost/benefit decisions.

            For metacognition, we can use the tests given to monkeys. Of course, an AI can communicate with language, so there are additional options. We can put the AI in various novel situations and ask it to describe its mental state in each one.

            In terms of what the movies present, I think the replicants were all conscious.

          7. Do slime molds pass the test?

            Something scientists have come to understand is that slime molds are much smarter than they look. One species in particular, the SpongeBob SquarePants–yellow Physarum polycephalum, can solve mazes, mimic the layout of man-made transportation networks and choose the healthiest food from a diverse menu—and all this without a brain or nervous system.

            In other words, the single-celled brainless amoebae did not grow living branches between pieces of food in a random manner; rather, they behaved like a team of human engineers, growing the most efficient networks possible.

            https://www.nature.com/news/how-brainless-slime-molds-redefine-intelligence-1.11811

          8. Good question. Despite its accomplishments, the slime mold strikes me as still just reacting reflexively. Those reflexes are adaptive and emergently intelligent because they’ve been honed by billions of years of evolution. Of course, someone could argue that a nervous system is itself simply a reflex engine, just with the reflexes in complex arcs and circuits, and they’d be right. So what then is the slime mold missing?

            There’s no evidence I could see of distance senses and the associated predictive models of the environment, i.e., perception, or of sensory-action-scenario simulations (imagination). Certainly there’s nothing there indicating anything like metacognition. While it did appear to have memory in the sense of classical conditioning, its abilities in this area, while very cool, are still fairly primitive compared to any animal we’re tempted to regard as conscious.

            Obviously the tests I mentioned above would have to be complex and dynamic enough to ensure perception, attention, and at least an incipient imagination were present. Is it right and proper to require those capabilities? I don’t think there’s a fact of the matter answer on that.

          9. Of course, I am not suggesting meta-cognition for the slime mold.

            Once you said this:

            “In my view, if a system can do what a conscious entity can do, that system is conscious.”

            And this:

            “In the case of primary consciousness, I like the idea of testing for the ability to successfully navigate in novel physical environments, and for behavioral trade-offs, value based cost/benefit decisions.”

            It would seem like whether the organism appears to be reflexive with millions of years of evolution behind it or is the latest silicon/copper/plastic thingie off the assembly line wouldn’t matter.

            I think it would be useful to make a distinction between intelligence and consciousness. Something can be intelligent but not necessarily conscious, although anything conscious is probably intelligent. Intelligence, as I see it, is a property of any system that pursues optimization strategies. I see most of the attributes you are requiring for primary consciousness to be more like qualifications or indications for intelligence and not directly related to consciousness.

          10. James, you selectively quoted me in a way that supports your point, but ignored the last thing I said on the matter.

            On intelligence and consciousness, that’s a common criticism. I actually don’t see intelligence and consciousness as equivalent, but I do see the capabilities that trigger our intuition of consciousness as intelligence ones. In other words, consciousness is a subset of intelligence, a particular kind of intelligence. But as I’ve said many times, consciousness lies in the eye of the beholder.

          11. “In my view, if a system can do what a conscious entity can do, that system is conscious.”

            That is a strong and straightforward position and needs corresponding robust criteria for a determination that an entity is conscious. As far as I can tell, it is saying there are certain capabilities that can only be done by conscious systems. I don’t see any evidence to support that position.

          12. It depends on which definition of “consciousness” you’re using. For any specific measurable definition, the necessary evidence is identifiable. Of course, it isn’t for hazy, ambiguous, or incoherent definitions.
