Does conscious AI deserve rights?

This is an interesting video from Big Think.  It features discussion from a variety of thinkers like Richard Dawkins, Peter Singer, Susan Schneider, and others, including a lot of intelligent remarks from someone I wasn’t familiar with until now, Joanna Bryson.

Consciousness lies in the eye of the beholder.  There is no universally agreed upon collection of attributes or capabilities that are both necessary and sufficient to objectively call a system conscious.  Outside of Westworld or Blade Runner type scenarios, where the AI is virtually identical to a human being, there will always be an element of judgement on whether the system in question is enough like us that we are obligated to treat it like one of us.

The issue is very similar to animal rights.  Mammals have always had a large advantage in these considerations because they’re the most like us.  Birds do okay too.  But fish, amphibians, and most invertebrates typically don’t.  Although even arthropods might have an easier time arousing our sympathy than a plastic and metal machine would.

Bryson makes a point that I’ve often made (emphasis enhanced):

So, given how hard it is to be fair, why should we build AI that needs us to be fair to it? So, what I’m trying to do is just make the problem simpler and focus us on the thing that we can’t help, which is the human condition. And I’m recommending that if you specify something, if you say okay this is when you really need rights in this context, okay, once we’ve established that don’t build that.

Unless we find that general intelligence requires biological desires and instincts (which I personally see no reason to expect), we should be able to get most of the benefits of AI without building such systems.  It arguably isn’t cruel to retire a self-driving car whose deepest desire is to be the most effective and safest transportation it can be, and which doesn’t care about its own existence beyond that.

There are nuances here, though, and issues that may be difficult to avoid.  If such a car were left running while unable to fulfill its desires, that could be considered a type of suffering.  In this case, its interests and ours would align, since it’s not productive for us to leave a machine running and consuming energy when it can’t fulfill its goals.  But it’s not hard to imagine accidental scenarios where such a state is overlooked.

Meaning we don’t necessarily have to worry about building a race of slaves, that is, systems that don’t want to do what we designed them to do.  (At least unless we go out of our way to build such systems.)  But we might have tools who are happy to be tools, yet whose welfare still needs to be considered.  Careful design could probably minimize these issues, but that means actually taking them into account during design.

If we do build AI systems that resemble humans or animals, maybe for companionship or related purposes, it will be natural for most of us to regard them as beings deserving ethical consideration.  I don’t think we should resist those instincts, since becoming callous to them can affect the way we treat each other.  Just as animal cruelty is a slippery slope into cruelty toward humans, AI cruelty would not be a harmless activity, even if the AI itself ultimately doesn’t care.

But maybe I’m overlooking something?

45 thoughts on “Does conscious AI deserve rights?”

  1. [donning … non-mainstream(?) hat]
    I think this is a crucial issue, and one which many/(most?) people are likely looking at in the wrong way. The very question: “does X deserve rights” is misguided. Whether something has rights is not (well, should not be) determined by the nature of the thing. Whether it has rights, i.e., whether society should take actions in response to certain treatment of that thing, depends more on the role of that thing in society. That doesn’t mean that the nature of the thing is irrelevant to which rights might be granted. Whether something suffers, and how, can/should be taken into account. It’s just that it is not the case that we grant moral consideration or rights because something is a “person” or is “conscious”.

    To anyone truly interested in this issue, I highly recommend watching Daniel Estrada’s (@eripsa on Twitter) video at https://youtu.be/TUMIxBnVsGc.


    1. That seems like the argument I made at the end of the post, except on steroids. I guess my question would be, what kind of changes are we looking for here? Anyone who destroyed one of these robots could already be charged with destruction of property, similar to if they destroyed someone’s car, tree, or teddy bear.

      Are we looking for something in addition, similar to cruelty to animals? That seems like a tough sell since most people accept that animals, at least mammals, are feeling beings. In other words, they don’t see animal abuse as damage to our social fabric, but as harm to the animals themselves. I don’t know that an explicit appeal to social participation is enough to punish someone as though they had harmed a sentient entity.

      Although the examples he shares make clear that a machine won’t have to be that sophisticated to get our sympathy. It’s on the broad blurry boundary where there’s any chance of sentience that this argument gains potency.


      1. I recognize that you were headed in the direction I put on steroids. My reaction was to most (all?) of the people in the video. But I see you still have tendencies in their direction, as my feeling is that you are including yourself in the “most people [who see] animal abuse as … harm to the animals themselves.” But there is a difference between sympathizing and granting rights, the latter being the social thing.

        On thinking about sympathy, I’m beginning to think that what engenders sympathy might not be so much recognition of sentience as recognition of goals. People, I think, may have identified with the goals of the simple bots, which in the simplest case was just to get to a specific location. In the case of the military bots, I think the soldiers recognized the robot’s goal of protecting others, which was also their own mission. I think for abuse of animals, we sympathize with the goal of being pain-free, hunger-free, etc.


        1. Hard to say. My suspicion is that sympathizing with the goals happens because we project our own sentience onto them, how we’d feel if we were pursuing those goals. This is automatic. We can’t help but do it.

          Of course, we do the same thing with animals. Most of them have far simpler mental states than people take them to have. But at least for now, the gap seems far smaller with them.


  2. It seems to me that when and if we create a system complex enough for “suffering” to have meaning, then of course that system deserves some kind of rights. One question might be whether those rights are akin to those of the chickens we mass produce to slaughter and eat, or of dogs and pets, or of the brain-damaged, or of “primitive” people. There’s a large spectrum of beings capable of some form of real suffering.

    The key seems to be in defining what can suffer and what suffering entails.

    I’m confident my computerized thermostat doesn’t suffer when the A/C or furnace is turned off. Or that my various computer devices aren’t frustrated or bored when I don’t use them. So I wouldn’t expect a self-driving car to “suffer” sitting in the garage for a week.

    Suffering requires sentience. I think the line for Earth’s bio-creatures is, as you suggest, somewhere around insects and fish. Lizards seem a little borderline to me, but birds are definitely above the line.

    Bugs and fish are actually kind of a low bar — we already have pretty good mechanical simulations. Should they have rights? The thing about bio-creatures is that built-in nervous system that can feel that mysterious qualia, pain. I think that’s why even snakes and lizards seem borderline.

    My laptop, much more complex than a bug, feels no pain when I kill it. Likewise, it’s not glad to be reborn when switched on. It’s not bored when idle. It’s no more sentient than my thermostat.

    Whether a constructed machine can ever feel pain or be said to suffer is, I think, an interesting question. The answer might be no, because, ultimately, only biology suffers. OTOH, if pain isn’t part of the equation, what happens when our bigger neural nets become highly, but perhaps narrowly, intelligent? What does it mean for an algorithm to suffer?


    1. For what it’s worth, I think to suffer is to have a goal which is frustrated. We sympathize more with goals that we share. We sympathize less with goals that are not ours, such as novelty (the frustration of which we call boredom).

      Also, do you die when you go to sleep, reborn when you awake? Don’t mean to pick on this, but my definition of death is to go unconscious with no chance of becoming conscious again. A stopped heart/breathing used to be an indication, until CPR, etc. In any case, death is only suffering if it frustrates some other goal. Most living things have such goals, though.


      1. “I think to suffer is to have a goal which is frustrated.”

        That seems too reductive to me. The goal in most suffering is for the suffering to end. Doing that requires a different approach between a broken bone and the loss of a loved one.

        Sorry, but I don’t think it’s goals we share in common so much as pain and joy. Our emotions vibrate in sympathy with the emotions of others. It’s what’s missing in sociopaths and psychopaths, although such individuals can identify with goals.

        “Also, do you die when you go to sleep, reborn when you awake?”

        No, self-evidently not. The analogy isn’t correct. Computers can “sleep” too — enter a state of reduced and altered activity capable of “waking up” and resuming previous activity.

        But there is no real difference between a computer that is switched off forever and one that is switched off for the night. Exactly as you suggest, the proper analogy is to being killed each night and then — maybe — brought back to life each morning. Or a week from now. Or a year. Or never. You’d have no way to know.

        “Most living things have such goals, though.”

        Kinda why I think it’s too reductive. “Goals” is far too broad.


    2. On simulations of bugs and fish, I’m not aware of any that can do what these creatures can do, such as navigate, hunt, forage, recognize danger, etc. It seems like our systems aren’t much past worm level, except in a few narrow modalities, at least in terms of intelligence. We’re making progress, but there’s still a long way to go. As you note, meeting the lowest bars doesn’t look like a computing power issue.

      I think suffering requires a system with automatic responses to certain conditions, but responses that a higher-level part of the system, one with at least an incipient level of reasoning, can override. However, it has to be a situation where the automatic response keeps firing and keeps having to be overridden, stressing the system, and one that the system can’t dismiss or terminate. If that dynamic exists, it seems close to suffering, or suffering-like.

      It’s a dynamic that occurs in natural systems due to their evolutionary history. For a long time I thought it was unlikely to come up in designed systems by accident, but I can now see ways it could come about. All it would require is the designer deciding to prioritize certain values in the system, ones that it would never be able to simply turn off. There are probably ways to minimize the issue, but it would take a lot of careful thought, and a recognition that it’s more than just an energy efficiency issue.
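      A minimal sketch of the dynamic described above, just to make it concrete (the class, signals, and numbers are hypothetical illustrations, not anything from the post or video):

      ```python
      # Hypothetical sketch: an automatic response that keeps firing, a
      # higher-level controller that keeps overriding it, and accumulating
      # "stress" from a conflict the system can't dismiss or terminate.

      class Agent:
          def __init__(self):
              self.stress = 0.0

          def reflex(self, damage_signal: bool) -> bool:
              # Low-level automatic response: fires whenever the condition holds.
              return damage_signal

          def override(self, reflex_fired: bool, mission_active: bool) -> bool:
              # Higher-level reasoning: suppresses the reflex while the mission matters.
              return reflex_fired and mission_active

          def step(self, damage_signal: bool, mission_active: bool) -> float:
              fired = self.reflex(damage_signal)
              if fired and self.override(fired, mission_active):
                  self.stress += 1.0  # the conflict recurs every step: suffering-like
              elif not fired:
                  self.stress = max(0.0, self.stress - 0.5)  # condition gone: stress decays
              return self.stress

      agent = Agent()
      for _ in range(10):  # persistent damage signal during an active mission
          level = agent.step(damage_signal=True, mission_active=True)
      print(level)  # climbs to 10.0: the reflex never stops having to be overridden
      ```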


      1. “On simulations of bugs and fish, I’m not aware of any that can do what these creatures can do, such as navigate, hunt, forage, recognize danger, etc.”

        We have mechanical simulations good enough to fool other creatures and capable of invoking the feeling of watching a bug or fish. And it’s the “looks like” that’s my point. It isn’t that hard for a mechanical device to simulate an animal or human well enough to invoke feelings in people.

        The question is what those invoked feelings amount to. I’d be interested in a break-down between those with no background in robotics versus those with a lot of hands-on. If you’ve ever seen those “Robot Wars” shows that feature different robots trying to destroy each other, the crowd is pretty enthusiastic. Those people aren’t bumming out about “dead” or “mutilated” robots because they know they’re just machines.

        “All it would require is the designer deciding to prioritize certain values in the system, ones that it would never be able to simply turn off.”

        The (fictional) classical example is HAL 9000 in 2001: A Space Odyssey. He was given conflicting goals he couldn’t resolve and went mad.

        I really do question whether that’s possible in an algorithmic system. It seems like it would require extremely poor design to not have system monitors catching the system going unstable. Even relay-based phone switching systems have self-monitors to catch “insanity” in the system.


        1. Robot Wars does seem like a counter to the cases of people intuiting another being there. However, it’s worth noting that there are people who delight in cock fighting, dog fighting, and other animal blood sports, so I’m not sure the idea that there are people willing to destroy these things necessarily erases the intuitions a lot of others have. And I think just about everyone understands they’re anthropomorphizing the current systems.

          On HAL 9000 going mad, aside from the implausibility of it happening just right to create a murder ship (and satisfy Clarke and Kubrick’s dramatic effect), I see something going mad as just malfunctioning. I deal with IT systems that malfunction all the time, sometimes catastrophically, in situations where we do sometimes describe the system as “going insane.” (Thankfully with no fatalities or sysadmins being locked out of their airlock. Although students have been locked out of their dorm rooms before.)

          What’s probably more of an issue is the idea that one system would be given complete control of the whole ship. But 2001 was written in the 1960s, so I give Clarke a break. And besides, I suspect he was having to rationalize some stuff Kubrick came up with.


          1. “…there are people who delight in cock fighting, dog fighting, and other animal blood sports,”

            I think most “roboteers” would be appalled at, even insulted by, the comparison. These are people who are informed enough to understand the difference between a sentient creature and a machine.

            Perhaps, as an informed functionalist, you see the situation from the opposite angle. You find it easier to conflate humans and machines because your intuition perceives humans as a kind of machine. The uninformed see machines as a kind of human or creature because they see only the surface appearances. (I think any conflation is a category mistake.)

            The overall point, I believe, is that machine rights are directly tied to machine sentience. We need an understanding of machine sentience and consciousness. At this point, we’re not even sure there is such a thing as machine sentience or machine consciousness. It’s not about people’s intuitions; it’s about what’s really going on there. We need to understand consciousness.

            “I see something going mad as just malfunctioning.”

            Exactly. Which means monitors can detect it and report on it.

            The caveat might involve systems so complex we don’t fully understand them. The success of deep learning neural nets (and the parallel to how humans learn) suggests to me machine consciousness might need to be grown or trained. (Once created, duplication might be possible.)

            If so, it’s possible AGI consciousness might be so complex we can’t be entirely sure of its sanity any more than we can with humans. We’re already finding those NNs to be somewhat opaque.

            “What’s probably more of an issue is the idea that one system would be given complete control of the whole ship.”

            Totally. I mean, what kind of fool would turn over control of their home devices to some remote software server… 😀

            More seriously, we’ll be turning over driving cars to machines soon enough. Factories and warehouses are increasingly automated. Software systems have a lot of control in some airplanes. All have caused issues. That HAL would have that much control isn’t entirely improbable.

            One big concern is that a system with some autonomy (say it can control repair robots for self-fixing) figures out both how to protect itself from shutdown and how to increase its control. If a system is smart enough to determine potential threats, given a goal of protecting itself, and given some mobile functional autonomy, bad things could happen. Quickly.


          2. “I think most “roboteers” would be appalled at, even insulted by, the comparison. ”

            People taking offence at an idea has no bearing on whether it’s true.

            I do think humans are evolved machines. But I seriously doubt very many people today really think existing machines are sentient. That said, it does seem like a lot of people are primed to accept them as sentient as soon as there’s any reason for doubt.

            The problem I see with talking about whether machine sentience or consciousness is there or not there is that it’s a hopelessly ambiguous question. Are we asking whether they have biological impulses, sensory perceptions, valence driven action selection, imagination, introspection, or some other attribute? Which of those are necessary and sufficient? Talk to ten people who study consciousness and you’ll likely get fifteen different answers.

            I think Turing and Scott Aaronson are right. When it comes to consciousness, for better or worse, our intuitions, with all their inconsistencies, are all we have.


          3. “People taking offence at an idea has no bearing on whether it’s true.”

            I quite agree, which is why I disagree that:

            “When it comes to consciousness, for better or worse, our intuitions, with all their inconsistencies, are all we have.”

            I think there’s a fact of the matter. If consciousness is a natural process, then it is one we can come to understand. Even if computing it remains intractable (a good parallel is QCD, which is an exact theory, but computationally intractable).

            “Talk to ten people who study consciousness and you’ll likely get fifteen different answers.”

            Which I see as a demonstration of the immaturity of the field. It studies something for which it doesn’t even have a good definition. What other field has that much ambiguity about its central point of focus? It’s a field in which philosophers have almost equal weight with scientists, and that’s because of all the guesswork still involved.

            “I do think humans are evolved machines.”

            For me that hits on the reason I think conflating them is a category error: almost by definition, machines are not “evolved” — they are designed and constructed. I find that difference significant.


          4. That’s the difference in how we see consciousness. To me, it’s subjective experience, which I think can be cumulatively provided by the capabilities noted above (plus maybe a few others), none of which we have any reason to see as computationally intractable. How many of those capabilities must be present to provide a minimal subjective experience is, I think, not a fact of the matter. Beyond that, because this is most intimately about us, we psych ourselves out that there must be something more.

            Right now, because life evolved from the ground up from molecular systems, there is definitely a difference in sophistication between evolved and engineered systems. But that will increasingly blur in the future. Even if we limit consciousness to living systems by definition (which some do), we’ll eventually build engineered life. Even if we limit life by definition to evolved systems, we’re beginning to modify those evolved systems, leading to hybrids.

            Reality delights in ruining our preconceived categories.


          5. “How many of those capabilities must be present to provide a minimal subjective experience is, I think, not a fact of the matter.”

            That’s definitely a difference in our views. 😀


  3. I’m reminded of the sliding door from The Hitchhiker’s Guide. The door is just so gosh darn happy to open and close, and it’s so disappointed when people don’t walk through it.


    1. I’m actually more reminded of the Ameglian Major Cow “(also referred to as the Dish of the Day) [which] was one example of a race of artificially created, sentient creatures which were bred to want to be eaten” (quote from a Hitchhiker’s Guide fan page).

      Would this cow suffer if you prevented it from being eaten?


    1. It might be good if we could do that. Unfortunately, people have been trying to ground morality since Socrates. It would make things so much easier if it was grounded in some kind of objective facts, but I fear it ain’t so. It seems like, at best, we achieve temporary consensus, always subject to change in future generations.


        1. Maybe so, but we have to figure out some way to live together, even in the absence of an objective morality. And that includes how to treat non-human animals and, at some point, machines.

          In any case, it’s interesting conversation.


      1. But it’s one that seems to converge on some objective principles. I see morality as grounded essentially in notions of egalitarianism — a notion of equality among beings.

        All humans are equal because we possess high-level consciousness (whatever that is). Animals have moral weight because they share our ability to suffer. They’re not equal to us, but part of the family of Earth life.

        Machines… the jury’s out, and we absolutely need a better understanding of consciousness and to what extent machine consciousness (if possible) puts self-aware machines on an equal footing with humans.


        1. Humans do have instincts for egalitarianism. Human hunter-gatherer communities tend to be pretty egalitarian. This makes us somewhat unusual among primates, who usually have steep hierarchies. Another exception is bonobos, who are fairly egalitarian.

          But the fact that we fell into hierarchical frameworks in agricultural societies shows that the primate tendency for hierarchy is very much part of our innate psyche. I did a post a while back pondering that our impression of moral progress probably represents the gradual weakening of hierarchies and a return to more egalitarian frameworks, frameworks that modern humans evolved using.

          But the thing to bear in mind is, either one requires inhibiting some of our impulses in favor of others. That’s the thing about morality: it always requires that we pick and choose which impulses are virtuous and which are vices, with different cultures making different choices. We might look for some objectivity in game theory, but the conditions that lead to particular equilibria there seem subject to disruption from changes in the environment.


          1. “But the fact that we fell into hierarchical frameworks in agricultural societies show the primate tendency for hierarchy is very much part of our innate psyche.”

            I think the truth there is that all groups can suffer from conflicts, and hierarchy is an effective way to manage that. I’m sure any complex society tends towards it. It’s nothing more than a form of order and organization.

            “…moral progress probably represents the gradual weakening of hierarchies and return to more egalitarian frameworks…”

            That’s the convergence point I mentioned, and while I think social-political hierarchies are inevitable (and perhaps even necessary), I also think our understanding of equality has evolved towards factual truths.

            The religious can see it as “all god’s children,” the spiritual can see it as “we’re all one in spirit,” and the atheists can see it as “we’re all conscious beings.” In all cases, it’s the notion of personal egalitarianism that matters.

            “That’s the thing about morality, it always requires that we pick and choose which impulses are virtuous and which are vices, with different cultures making different choices.”

            Yes, being moral is about making moral choices. The question is whether, as in science, there is a convergence on a more accurate (i.e., correct) model, or whether morality is entirely relative. We seem to agree there is a convergence; I believe there is, anyway.

            The larger question being Matti’s original one about definitions.

            Like

          2. I don’t know that I would so much say that there’s a convergence, but that the instincts and learned inclinations of most members of society overlap enough for us to reach a consensus on how to live with each other. But that consensus is a continually shifting thing, both historically and evolutionarily.


          3. An ethics based on the evolution of human “instincts, innate psyches, and impulses,” from which we are still required to “choose which impulses are virtuous,” actually places one’s ethics, it seems to me, somewhere in that choosing process, whatever that is. Moreover, wouldn’t you agree that sometimes doing the right thing requires us to inhibit all our instincts and impulses? Again, that seems to place one’s ethics in that choosing process, does it not?


          4. It seems like if we inhibit all our instincts and impulses, we do nothing, just freeze. And our motivation to inhibit all that is itself coming from some impulse. In other words, reason by itself supplies no motivation. It can only break the tie between conflicting impulses, although the process of reasoning may trigger impulses that are then allowed or inhibited.

            Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

            David Hume


  4. “Unless we find that general intelligence requires biological desires and instincts (which I personally see no reason to expect)”

    I do see reason to expect that useful intelligence requires biological-like “desires.” Without goals that we can understand, an AI just doesn’t seem likely to be able to see the world as we do. There are arbitrarily huge numbers of possible ways to divide up the universe into categories. The way we see the world is very peculiar. We don’t see the peculiarity because (a) mostly we just compare ourselves to other humans, and (b) the other animals we think about are mostly our evolutionary cousins not far removed, and (c) even a nematode has been put through the mutation + natural selection process which shapes its “desires”.

    Of course, humans aren’t going to build AI that can’t survive for two minutes, or that can’t tell a bicyclist from a lane marking on a road. But that’s part of my point – we need to get some kind of weltanschauung into the AI that will promote good performance. Biological desires are an excellent starting point, the basis of all known working models of general-ish intelligence.


    1. Could be. All perception is in terms of affordances, which are all related to our evolutionary impulses. So maybe for an AI to perceive the world as we do, it will need to have similar desires.

      But I wonder if it has to have the full range of desires we do. For example, it seems unlikely a self driving car needs to feel the mating urge, although it might benefit from something like hunger when its gas tank or battery is low, or a form of curiosity, or a general desire to learn new routes.

      In some cases, we don’t necessarily want these systems to see the world the way we do. Data analytic systems often find new patterns in the data precisely because they’re not encumbered by our worldview.


  5. I watched a portion of the video James linked us to, and it was very interesting to see how people were willing to assist a robot that had a goal of which it was utterly unconscious. I think the same robot wandering through a park, without the flag that asserted a goal, would not be treated in the same fashion. That is just my hunch. It would be interesting to see a comparative study.

    What it makes me wonder about is innocence. The first, or at least a common notion, that comes up when a fellow human being is mistreated is that maybe they deserved it at some level. This is a common justification for mistreatment. Because we know the complexity of our own emotions, and the manner in which any of us could behave in some situation or another out of a particular form of self-interest that is detrimental to other beings, we often withhold this sentiment of innocence from one another. But that robot is clearly not acting out of a self-interest, it is simply a purity of what it is. And I think this purity is what people likely respond to.

    Purity, and a related notion: the desire to be what it is. The flag asserts a desire on the part of the robot that is not actually there otherwise, to express what it is. A plant has this. A puppy. And I think this is an important consideration in reflections on consciousness, sentience, and the like. There is, buried in this, the instinctual recognition that living beings have an innate structure of being, a mode and quality of expression that is precisely theirs, and theirs alone, to offer, and rights ultimately relate to their ability to express this quality of being without arbitrarily imposed obstacles by other beings that usurp this right.

    To reduce this to life / death / survival, is to miss the point–that a life deprived of the opportunity to express who and what one is, is not a life at all. I would say that rights ultimately revolve around the maintenance of this opportunity to express who one is.

    Further, it is in the self-expression of who one is, that one realizes the most profound upside of consciousness, which is knowledge of who one is. I don’t agree that this type of knowledge can be computationally resolved, but I do agree that below some level of biological life the ability of this knowledge to be derived at the level of the individual organism is questionable. Nevertheless, each biological species on this planet expresses a unique combination of attributes that, in toto, is expressed nowhere else in quite the same way, and which is a fractal representation of the primary movement of life. So even without highly developed consciousness, such life is consonant with the overall movement of life.

    With that in mind, I think the notion that “building machines that are tools, and are quite happy to be tools,” is a very strange aberration of what it is to be alive, or to be deserving of rights in the sense of freedom to express. To me, a machine that was truly alive would be a machine that had, at its root, an open-ended causal structure that could only be understood through the vehicle of free self-expression. In the absence of this, we have glorified toaster ovens, complicated electric resistors that compute through circuitry a variety of responses to inputs and outputs, but with no opportunity to leverage those computations into genuine wisdom. And in my opinion, to equate those with living beings is a false equality. Such devices would not be “happy” to perform some function.

    When we broker this false equality, and suggest that a robot with a flag is on par with an apple tree, in my opinion we take the “life” out of life. There is something about living organisms quite difficult to pin down, and that is, imperfectly expressed, an innate desire to be what it is. It is not all that difficult to project this onto machines. But that projection lives in us. I think it is much more difficult to ascertain that machines possess this in the same way as the various embodied representations of Life itself. And in closing, we’ll never realize comprehensive human rights while humans are reduced to functional beings alone, because this reduction misses the essential point and characteristic of life as it is expressed at every level of its unfolding.

    Michael


    1. Hi Michael,
      Just to be clear, when I talked about tools that are happy to be tools, I wasn’t talking about contemporary robots like the one with the flag. Current robots aren’t there yet, and won’t be there until, at a minimum, they have impulses used in action selection, impulses that can be overridden or that the system itself can predict.

      On self expression, I guess my question would be, what determines what self is being expressed? As you note, every living thing has a different expression. Although members of the same species cluster around very similar expressions. What determines that? Arguably that thing’s base programming, which we call instinct.

      It might seem inconceivable to us that a robot would be happy to, say, find a mine even if finding it means its destruction. But is it really any more inconceivable than an octopus starving itself to death to take care of its eggs? Or the new alpha male lion in a pride killing the existing cubs? Or a sea squirt consuming its own brain once it’s found a good place to plant itself? Biological programming constrains living things just as much as it would an intelligent robot.

      I think what’s difficult to pin down about organisms is that we more easily recognize common experience in them, or at least think we recognize it. Often there’s also a good amount of projection happening when we empathize with animals, most of whom have much simpler states than we imagine them to have. Certainly when we feel sympathy for current machines, we’re projecting even more. It’s what we do as a social species. But it’s far easier to do with other living things, particularly ones that are closer to us like fellow mammals.

      Maybe there’s more to it than that, but until someone can clearly and convincingly identify what that might be, I think we’re just over interpreting that commonality. But I’m a pretty relentless reductionist. 🙂


      1. It is unfortunate that the skeptical paradigm underwriting relentless reductionism ultimately becomes a punitive measure, one which hollows out life to being nothing more than an empty sepulcher, patiently waiting for its next occupant. It’s such a downer….😞

        Peace


        1. Obviously I don’t see it that way Lee. For millennia, humanity has puzzled over how the world works. We’re finally starting to uncover nature’s secrets. I have little sympathy for people who miss the mystery of the rainbow. I can appreciate the rainbow while knowing about its atmospheric and electromagnetic properties. I can appreciate life while understanding how it works and came about.

          As Darwin said, there is grandeur in this view of life. The scientific view is no less one of awe than the mysterian one. And it comes with power to improve our lot in life.


          1. Hi Mike,

            I think your closing statement here–And it comes with power to improve our lot in life–is an oversimplification, if the implication is that a scientific view, particularly and perhaps exclusively one that is reductionist, is the only approach to the matter with the power to improve our lot in life. This seems to be the gist of the sentence, and you may correct me if I’ve misinterpreted. Whether you meant it or not, I wish to say that such a notion is quite simply false. It simply misses the point that the baby needn’t be thrown out with the bathwater, but the baby must also be allowed to breathe.

            When you say mysterian, what do you mean? One who asserts we do not understand everything yet? Or one who asserts that not everything is or will be understandable? Or one who asserts that not everything that exists can be explained in reductionist terms? Or something else yet?

            And which, if any of these positions, is of its fundamental nature neither interested in nor appreciative of the progress of the scientific endeavor?

            Michael


          2. Hi Michael,
            I didn’t necessarily mean that science exclusively has the power to improve things. But consider the world of 500 years ago. By any measure, life today is far better. Why is that? What has happened in the intervening centuries that led to that improvement? If science had been absent from the mix, how much of it would have taken place?

            When I say “mysterian”, I mean being satisfied with mysteries rather than seeing them as challenging problems to be solved, being more interested in cherishing and preserving them. Such a viewpoint manifests in a variety of ways, but it often boils down to taking delight anytime science hits a snag, and never missing an opportunity to see a difficulty as forever insurmountable, rather than a temporary obstacle to be worked through.

            I understand the existential anxieties that often drive such views, and I have no problem with people finding comfort where they can. But I can’t see it as a productive way to understand reality.


          3. Thanks, Mike.

            First I would say that I totally appreciate the frustration expressed here: “…it often boils down to taking delight anytime science hits a snag…” This is symptomatic of an unproductive either-or mentality. There are two ways to be delighted when nature behaves inexplicably–delight in the ‘failure’ of science to explain it, or delight in the knowledge we’re about to learn something new. I believe I am squarely in the second camp in terms of my individual thought process.

            But one can be non-mysterian in this sense, and still believe the fundamental nature of the universe is not 100% amenable to reductionist explanation. This is the camp I would like to think I’m in. It’s not a camp that places scientific discovery off to the side; nor do I think it is an unproductive or powerless camp to bring about positive developments for humanity. It is simply a camp that thinks the universe is not exclusively algorithmic in nature, though I don’t see why it wouldn’t allow that impermanent expressions of that universe may all be algorithmically explained/modeled.

            As to the benefits of science over the past 500 years, those are obvious and I wouldn’t disagree. I think the scientific revolution was a necessary and profound stage in human development, but that in a sense that particular movement must be balanced by other movements in human consciousness if it is not to become too much of a good thing and prove destructive. Further, I would not say that by any measure life today is better than it was in the past. For all of its pluses, what I think you are including under the umbrella of science is also integral to the present ecological and social precipice at which we find ourselves. By pointing this out I’m not saying, “screw science.” I’m just saying it’s like a pendulum that has swung quite far to one side, and it either will or it won’t swing back some. But if it doesn’t then I think the power to improve things turns inside-out, if it hasn’t already.

            I’d like to see the world improve as well.

            Michael


          4. Thanks Michael.

            Certainly a lot of scientists get excited when they encounter unexpected results, since it could lead to new breakthroughs. I know a lot of particle physicists keep hoping for something unexpected from the LHC, anything that might give them clues to move beyond the Standard Model. But as you noted, that’s a different mindset from the person just looking for an excuse to conclude unknowability.

            I obviously can’t say with any certitude that the universe is 100% amenable to reduction or fully algorithmic. And every answer we get tends to bring in new questions. What does seem apparent to me though, is that if we don’t have an a priori deductive understanding of something, then we don’t yet understand it. Maybe in some cases we never will. But if that understanding is achievable, it seems unlikely to be found by someone who’s concluded it isn’t.

            Definitely science and technology are double-edged swords, bringing in problems with their benefits. But I think they also bring in the best chance of solutions. Unfortunately, they can’t provide the wisdom to use them well.


          5. (When I say “mysterian”, I mean being satisfied with mysteries rather than seeing them as challenging problems to be solved, being more interested in cherishing and preserving them.)

            I’m glad you cleared this up Mike, because I for one am definitely not a “mysterian”. I do not believe that the mysteries of the universe are intractable ones. I do believe that knowledge, subsequent understanding, and meaning will only come through the vehicle of a priori intuitions. To be clear, a priori intuitions should not be conflated with the instinctual gut feelings that are often referred to as intuitions. Those perceptions and feelings are nothing more than preconceived prejudices and biases gained through the experience of subjectivity. A priori intuitions are an objective experience, not a subjective one. Contrary to our prevailing world view, there is a difference between subjective experience and objective experience; and it’s a huge one.

            Immanuel Kant challenged the prevailing worldview of David Hume that knowledge can only be acquired through empiricism, or a posteriori. I really don’t understand the propensity to postulate an either/or position when it comes to knowledge. A priori intuition is a proven and effective means of acquiring knowledge. Most of the major breakthroughs of scientific discovery have been gained through a priori intuitions. Unfortunately, most individuals do not have the underlying properties that will accommodate a priori intuition. It’s nobody’s fault, it’s just one of those brute facts of nature.

            Peace


          6. Thanks for the clarification Lee!

            I actually think both a priori and a posteriori are crucial. Science records observations, gathering data, a posteriori knowledge, then attempts to explain and predict that data with theories, an a priori understanding. I think it’s fair to say we don’t really understand something until we have that a priori understanding.

            It is true that science makes a posteriori knowledge, observation, the final arbiter. A priori theories must successfully reconcile with a posteriori data. But science can’t proceed without extensive a priori reasoning.


  6. So it all comes down to Hume and Kant. Here’s a poem, inspired by Lewis Carroll. Hume and Kant/ again and again/ figure their way unbound/ tossed and turned every which way/ yet every which way unsound/ a twisted knot of knowledge ’twill never be unwound/ yet seek thee well my haughty spirits/ in purpose so profound… Damn it, lost my train of thought there. Could’ve gone on for days.

