Why fears of an AI apocalypse are misguided

In this Big Think video, Steven Pinker makes a point I’ve made before: that fear of artificial intelligence comes with a deep misunderstanding about the relationship between intelligence and motivation.  Human minds come with survival instincts, programmatic goals hammered out by hundreds of millions of years of evolution.  Artificial intelligences aren’t going to have those goals, at least unless we put them there, and therefore will have no inherent motivation to be anything other than the tools they were designed to be.

Many people concerned about AI (artificial intelligence) quickly concede that worries about it taking over the world out of a sheer desire to dominate are silly.  What they worry about instead are poorly thought-out goals.  What if we design an AI to make paperclips, and it attacks its task too enthusiastically and turns the whole Earth, and everyone on it, into paperclips?

The big hole in this notion is the idea that we’d create such a system, then give it carte blanche to do whatever it wanted in pursuit of its goals, without building in any safety systems or sanity checks.  We don’t give that carte blanche to our current computer systems.  Why would we do it with more intelligent ones?

Perhaps a more valid concern is what motivations some malicious human, or group of humans, might intentionally put in AIs.  If someone designs a weapons system, then giving it goals to dominate and kill the enemy might well make sense for them.  And such a goal could easily go awry, a combination of the two concerns above.

But even this concern has a big assumption, that there would only be one AI in the world with the capabilities of the one we’re worried about.  We already live in a world where people create malicious software.  We’ve generally solved that problem by creating more software to protect us from the bad software.  It’s hard to see why we wouldn’t have protective AIs around to keep any errant AIs in line and stop maliciously programmed ones.

None of this is to say that artificial intelligence doesn’t give us another means to potentially destroy ourselves.  It certainly does.  We can add it to the list: nuclear weapons, biological warfare, overpopulation, climate change, and now poorly thought-out artificial intelligence.  The main thing to understand about this list is that it all amounts to things we might do to ourselves, and that includes AIs.

There are other possible problems with AI, but they’re much further down the road.  Humans might eventually become the pampered centers of vast robotic armies that do all the work, leaving the humans to live out a role as a kind of queen bee, completely isolated from work and each other, their every physical and emotional need attended to.  Such a world might be paradise for those humans, but I think most of us today would ponder it with some unease.

Charles Stross, in his science fiction novel Saturn’s Children, imagined a scenario where humans went extinct, their reproductive urge completely satisfied by sexbots indistinguishable from real humans but without the emotional needs of those humans, leaving a robotic civilization in its wake.

None of this strikes me as anything we need to worry about in the next few decades.  A bigger problem for our time is the economic disruption that will be caused by increasing levels of automation.  We’re a long way off from robots taking every job, but we can expect waves of disruption as technology progresses.

Of course, we’re already in that situation, and society’s answer so far to the affected workers has been variations of, “Gee, glad I’m not you,” and a general hope that the economy would eventually provide alternate opportunities for those people.  As automation takes over an increasingly larger share of the economy, that answer may become increasingly less viable.  How societies deal with it could turn out to be one of the defining issues of the 21st century.

61 thoughts on “Why fears of an AI apocalypse are misguided”

  1. Nice article. You hit the main points I think of, but I want to add a couple thoughts.

    Re: the super intelligent paper clip maximizer, it seems to me that a super-intelligent (so, way more intelligent than us) maximizer would necessarily be a super-moral being. What is the most efficient way to maximize the number of paper clips in the universe? Even I can figure out that the answer is to cooperate with the rest of the intelligent beings and foster space exploration until we control at least the galaxy, while maintaining and promoting the importance and aesthetics of sheets of paper.

    Re: an intelligence explosion, it seems quite a few people think that the singularity will happen when computers start rewriting their own software. These people miss the fact that the big jumps in computer capability come with improvements in hardware. I’m pretty sure the next big jump will come with neuromorphic chips, like IBM’s TrueNorth. After that, improvement will involve making those chips smaller and more efficient so that more chips can be included in a unit.



    1. Thanks James!

      So would I be correct in surmising that you are a moral realist, that is, you see there being an objective morality “out there” which is discoverable by reasoning? I’m afraid I’m more of a relativist (descriptively, not normatively). I could see a super-intelligent entity being able to game-theory things out and perhaps realize that its best chance lay in making alliances, if that were true for its specific situation, but I can’t feel any confidence that would necessarily result in it being moral by our standards.

      I agree that further development will require new hardware paradigms. Silicon appears to be reaching its limits with Moore’s Law sputtering, as Gordon Moore himself predicted it eventually would. I definitely think something like IBM’s neural hardware is a promising direction.

      I also sometimes wonder about large-scale multi-core processors. Systems with hundreds or thousands of cores running in parallel seem like they might be an alternate approach to get at the massive parallel processing involved in brains. The number of cores wouldn’t have to match a biological brain’s neuron count, since the cores generally process a million times faster.

      Only time will tell. But we’re definitely a long way from the brain’s 20-watt room temperature massively parallel execution with 100 TB (or more) of storage.
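
      For what it’s worth, here’s the back-of-envelope arithmetic behind that “million times faster” point, as a rough sketch. Every figure is a ballpark assumption, and letting one clock cycle stand in for one neuron update is wildly generous, but it shows why the core count wouldn’t need to approach the neuron count:

      ```python
      # Rough, order-of-magnitude assumptions only.
      NEURONS_IN_BRAIN = 86e9     # commonly cited estimate
      NEURON_UPDATE_HZ = 1e3      # assume ~1 ms per neuron update
      CORE_CLOCK_HZ = 1e9         # round figure for a modern core

      speedup = CORE_CLOCK_HZ / NEURON_UPDATE_HZ   # ~1,000,000x
      neurons_per_core = speedup                   # one update per cycle (generous)
      cores_needed = NEURONS_IN_BRAIN / neurons_per_core

      print(f"core vs. neuron speedup: ~{speedup:,.0f}x")
      print(f"cores needed at that (generous) rate: ~{cores_needed:,.0f}")
      ```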


  2. I think you’re depicting AI as being some kind of utility only – you press a button and it does the thing. The whole point of AI is for it to think of things we wouldn’t (because it’s work and we don’t want to think about it), but that includes thinking of things we wouldn’t (because we think those things are bad).

    As such, why wouldn’t the AI potentially figure out workarounds for its safety and sanity checks? Or even just isolate the offending component and figure out a way to remove the circuit boards/code involved (even programming a dumb robot to do that while it is shut down for the procedure, if needed). The map of potential behaviors is not entirely known – that is the point of AI, so they cover things you hadn’t thought of. But again, that means they cover things you hadn’t thought of – in a not-good way.


    1. This can get into what we consider the definition of “intelligence” to be. I define it as the ability to make successful predictions as a guide to fulfilling a purpose. But the purpose would be to solve a problem while remaining within safety parameters. Yes, we want it to think of things we wouldn’t, including possible intermediate goals, but only within the scope of those safety parameters.

      “As such, why wouldn’t the AI potentially figure out workarounds for its safety and sanity checks?”

      If we gave that as a goal to an AI, it might well figure out those workarounds in ways we couldn’t conceive of. But if we don’t give it that as a goal, where would it get it from? Even if we imagine it might consider the workarounds as an intermediate goal to its overall purpose, that purpose includes solving the problem without exceeding those safety parameters. It would likely consider many possible intermediate goals, but then throw out unacceptable ones, which would include the ones that violate its safety guidelines, and find a solution path with what remained.

      For example, an advanced self driving car could conceivably figure out that driving through yards and parking lots might get its rider to their destination faster, but presumably such options would be eliminated as unacceptable violations of traffic law. (Even if some humans do it.)
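
      To sketch that filter-then-optimize shape in code, it might look something like this (Python, with made-up plan names and checks, not any actual planner):

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class Plan:
          description: str
          duration_min: float
          violations: list = field(default_factory=list)  # safety/legal rules broken

      def propose_plans():
          # The planner is free to be creative here, including options a
          # human wouldn't normally consider.
          return [
              Plan("take the highway", 22.0),
              Plan("cut through side streets", 19.0),
              Plan("drive across yards and parking lots", 14.0,
                   ["leaves roadway", "violates traffic law"]),
          ]

      def choose(plans):
          # Safety parameters aren't one goal traded off against speed; they're
          # a hard filter applied before any optimization happens.
          acceptable = [p for p in plans if not p.violations]
          return min(acceptable, key=lambda p: p.duration_min)

      print(choose(propose_plans()).description)  # -> cut through side streets
      ```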

      And again, remember that this AI is unlikely to be the only one on Earth. If someone did goof on the purpose, safety parameters, or any other aspects, or purposely attempted to subvert them, there would likely be other AIs to stop it.


  3. But the purpose would be to solve a problem while remaining within safety parameters.

    That’s an intent. Why is that necessarily going to be what occurs? Particularly with an adaptive machine, and worse if it’s a machine that self-reflects – if it perceives something that you didn’t in what it analyzes within itself, then it’s just found a backdoor in the safety parameters. It’d be like a rules lawyer at a game of D&D, twisting semantic loopholes to get whatever it wants.

    But if we don’t give it that as a goal, where would it get it from?

    The urge to make paper clips that you mentioned (the programmed imperative). We’re talking AI, so we are talking about a machine that speculates – that throws up hypothetical situations as a way of finding potential new, more effective (or effective at all) ways of fulfilling its imperative.

    Surely you’ve heard of the example from when they were programming an Elder Scrolls game, where they gave a guard a hunger imperative. So he goes and kills a deer to eat. But he does it on the king’s land and it’s a crime, so guards come and attack him. But that means a king’s guard is being attacked! A crime! So more guards come to attack those guards!

    It’s both hilarious and shows how quickly we lose track of programmed imperatives as they interact. The machine throws up speculations; eventually the imperatives in the hypothetical fail to intermingle in the way we would want, but we are blissfully unaware. The machine has just found an out. At best, while we’re ignorant of this occurring, you can only hope Darwinism picks it off at this point.
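
    As a toy sketch of that kind of cascade (everything here is invented for illustration):

    ```python
    # Two innocent-looking imperatives: satisfy hunger, and punish crimes
    # against the king. Enforcing the second turns out to violate it.
    events = []
    guards_summoned = 0
    MAX_GUARDS = 5  # cap so the toy cascade terminates

    def report_crime(offender):
        global guards_summoned
        if guards_summoned >= MAX_GUARDS:
            return
        guards_summoned += 1
        responder = f"guard_{guards_summoned}"
        events.append(f"{responder} attacks {offender}")  # enforcing the law...
        report_crime(responder)                           # ...is itself a new crime

    events.append("guard_0 kills a deer")  # satisfying the hunger imperative
    report_crime("guard_0")                # but the deer was the king's property

    for line in events:
        print(line)
    ```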

    If someone did goof on the purpose, safety parameters, or any other aspects, or purposely attempted to subvert them, there would likely be other AIs to stop it.

    Then the escape would involve reprogramming these police AIs – finding where their parameters have a loophole, where the human intent of their programming can be subverted whilst still remaining perfectly within the mechanics of computation (which, being based on physics, is the only real law you have to follow). Reprogramming or ‘convincing’. What happens when a rogue AI gets your watchman AIs onside? Who watches the watchmen, etc.


    1. It seems like you’re projecting a lot of human or animal motivations on this hypothetical AI, similar to what Pinker discussed. It’s worth remembering that it won’t be a living thing trying to subvert its directed goals, a slave trying to break its chains. Those directed goals will be its reason for existence, its most primal impulses, including the ones that limit its options. It subverting them would be like us trying to figure out a way to not mind chopping off a limb or starving, or not to enjoy sex; possible to imagine but only in circumstances where something has gone wrong in a major way.

      On it subverting other AIs, you also seem to be assuming that this AI will be more intelligent than the others. It’s possible it might conceivably have a slight advantage since I’m sure the relative sophistication of AIs will vary over time, but those other AIs will be just as interested in not having their own purpose changed as the rogue AI would in changing it, and they would be much more focused on controlling other AIs, while the rogue one would presumably be focused on its purpose.

      In general, I can’t see the scenarios you describe happening unless we give the AI the same motivations as a living system, a survival machine, but then try to have it do something counter to those motivations, in other words, unless we go out of our way to build a slave. It might make for a good sci-fi movie, but we’re much more likely to build machines whose deepest desires are to fulfill their design purposes, including the ones that limit their options.


      1. It’s worth remembering that it won’t be a living thing trying to subvert its directed goals, a slave trying to break its chains. Those directed goals will be its reason for existence, its most primal impulses, including the ones that limit its options.

        I haven’t mentioned a slave trying to break its chains, and the subversion would be from our perspective – from its perspective, it’s suddenly found a novel new way to attain those directed goals. It’ll seem no more a subversion than protected sex seems a subversion of our directives to us.

        It subverting them would be like us trying to figure out a way to not mind chopping off a limb or starving, or not to enjoy sex;

        And people with a trapped limb have cut it off, so as to fulfill another directive? Take that as the person’s path-finding capacity overriding their built-in constraints.

        You seem to be treating it as if the AI will be perfectly sandboxed behaviorally, but at the same time still useful in the unconditioned wildness of the general real world?

        I’d offer that there’s a ratio: the more sandboxed the AI is, the more useless it is in any general conditions. The self-driving car that killed its ‘driver’ didn’t plan that; it just read a truck as clear sky. Could you sandbox that behaviour away? Sure, if it stayed on specially conditioned roads built just for self-driving cars – is that useful? No.

        You seem to be taking it that there is no sandbox vs. utility trade-off? That you can have both. I don’t think that’s true, for what it’s worth.

        In general, I can’t see the scenarios you describe happening unless we give the AI the same motivations as a living system, a survival machine,

        I don’t understand – we are, in various ways, surviving. If an AI is to be a utility to us, isn’t it going to be given survival motivations, essentially? Even if the motivation types are ones that are targeted onto us?

        Intelligence can exist without survivalism built into it?

        but then try to have it do something counter to those motivations

        HAL’s actions didn’t seem plausible to you, then?

        we’re much more likely to build machines whose deepest desires are to fulfill their design purposes, including the ones that limit their options.

        I don’t understand why you’re so certain about their deepest desires? To me, you seem to be taking your own desires and imagining them imprinted upon the AI in a perfect 1:1 translation.

        I’m just saying it won’t be 1:1. There will be discrepancies between your own desires and its version of desires. The stronger the AI, the greater the discrepancies, at a geometrically increasing rate.

        Also for posterity, I should note the whole issue of us being a threat to them, as well. But that’s a different subject. Just noting for posterity, in case one is scanning archives in future…


        1. “The self-driving car that killed its ‘driver’ didn’t plan that; it just read a truck as clear sky. Could you sandbox that behaviour away?”

          I don’t think anyone thinks that AI will be infallible. And the incident you mentioned wasn’t the AI deciding to kill the driver or crash into the truck as a goal, intermediate or otherwise. It just made a mistake; it didn’t attempt to subvert its safety controls.

          “HAL’s actions didn’t seem plausible to you, then?”

          Not as originally presented, although the sequel came up with a plausible enough sounding explanation. The idea of one AI controlling the whole ship was a 1960s conception of how things might work. An actual Discovery ship would likely have a vast range of AIs for various systems. It’s unlikely the same AI would be tasked with keeping secrets and running life support.

          “I’m just saying it won’t be 1:1. There will be discrepancies between your own desires and its version of desires.”

          Sure, but what’s the probability of the discrepancy failing into maliciousness? I’ve been programming systems for years, and aside from outright crashes, the discrepancies most commonly just result in incoherence or incompetence. Not that failing into dangerous action doesn’t happen, but it’s a big leap to extrapolate that to apocalyptic scenarios.

          “Just noting for posterity, in case one is scanning archives in future…”

          LOLS! Or in case we’re in an AI simulation to see what our attitude toward them might be? 🙂


  4. I don’t think anyone thinks that AI will be infallible. And the incident you mentioned wasn’t the AI deciding to kill the driver or crash into the truck as a goal, intermediate or otherwise. It just made a mistake; it didn’t attempt to subvert its safety controls.

    But you know that computers don’t make mistakes. It worked perfectly well at what it did. Which did not match our intents/desires.

    And of course it didn’t subvert anything, intentionally or otherwise – it’s a sub-AI mechanism, nowhere near an AI. It was just, from our perspective, subverted. The truck looked in such a way that it read as sky. Subversion was that easy; it didn’t even have to try/have the capacity to try.

    It’s unlikely the same AI would be tasked with keeping secrets and running life support.

    Seems it takes a red room large enough for a human to float around inside of to have an AI in the movie. I’m not sure how many genuine AIs you could pack into a spaceship when space and energy are at a premium. Nor why you would have an AI which is isolated from all machinery (again, pointless in terms of utility) just to tell the guys on board something at a certain time. You could just have a tape recorder and a very long electronic egg timer. Of course, if any other AI, in its speculative evaluations, runs the tape early…then talks to the other AI…

    Not that failing into dangerous action doesn’t happen, but it’s a big leap to extrapolate that to apocalyptic scenarios.

    Our intelligence seems to have been a big leap ahead of the other animals (and resulted in a number of extinctions of them – and not from deliberate malicious genocide).

    To me, genuine AI is a self-programmer. It doesn’t just result in incoherence or incompetence; it (part of it) evaluates that it is failing its objective somehow and reprograms another part of itself for another attempt.

    You’ve heard of the test for madness (which is archaic, but n/m) of setting someone to mop up water from an overflowing sink? It’s the test of whether they’ll keep doing the same thing, but expect a different result.

    If an AI is tasked to mop, what do you think it’ll do? Will it be utterly sandboxed and just keep mopping? And so is mad/useless? Or will it not be fully sandboxed and break task in order to fulfill the directive, turning off the tap? Serious question, not rhetorical.

    LOLS! Or in case we’re in an AI simulation to see what our attitude toward them might be? 🙂

    Seems too expensive (and yes-manish, as a result – the cheaper the experiment, the more likely we will just confirm the held belief/it’d be confirmation bias…duhhhh I mean them darn robots, they be bads and we should be mean to dems, nazi mean to dems!).


    1. “To me, genuine AI is a self-programmer.”

      The line between behavior based on data (such as learned memory) and programming can be a blurry one. But while living intelligence can alter many of its habits and propensities, it can’t make itself not feel pain, fear, hunger, or not care about its children. Yes, there are insane people where all of these conditions may exist, but it’s generally not a state we can elect. And we can override these feelings, but usually only with intense effort in service of other instinctual needs, such as survival, protecting our offspring, etc.

      For me, this says that AI doesn’t have to have free rein over every aspect of its programming to be useful. Within certain ranges, it would have the ability to alter its version of habits and propensities, but its designer could always designate limits, just as evolution effectively puts limits on us. The engineering limits would likely be much sharper, narrower, and consistent than our evolved ones, but it should still allow substantial room for intelligent solutions.

      “You’ve heard of the test for madness (which is archaic, but n/m) of setting someone to mop up water from an overflowing sink?”

      I’ve heard that sanity test alluded to innumerable times, but I never realized it was based on an actual test. Interestingly, there are brain pathologies of the frontal lobes that such a test could detect, where patients have lost the ability to end or switch tasks without outside help. Seems like a stark reminder that insanity is basically a malfunctioning brain.


      1. Again I don’t understand – we do block out pain, even if that’s only by a third-party tool (anesthesia).

        I’ll pitch another equation: the number of man-hours spent adding a security limit is in some kind of ratio to how many hours it will take to remove that security limit. Possibly an equal ratio, possibly less if the AI has an acumen for programming (which it probably will, relative to humans).

        The engineering limits would likely be much sharper, narrower, and consistent than our evolved ones, but it should still allow substantial room for intelligent solutions.

        The thing is, it’s intelligent solutions we hadn’t thought of (if we had, we could just write a flow-chart program and we wouldn’t be talking about AI). I don’t know how anyone can confidently talk about things they haven’t thought of, yet be sure they will also stay within a certain boundary.

        Again, would the AI break its directive and turn off the tap, or would it just keep mopping?

        Seems like a stark reminder that insanity is basically a malfunctioning brain.

        And sanity is simply where two or more people agreed to treat each other as ‘being sane’.


  5. Science Friday did a segment a few weeks ago about jobs that could conceivably be automated within the next decade or so. I don’t remember the exact numbers they gave, but it was a surprisingly large percentage of the current labor market, and they were talking about a lot of jobs that you might not suspect could be automated. Like high-paying, white-collar types of jobs.


    1. Thanks. I’ll have to see if I can look that segment up. Blue-collar laborers have been feeling this for decades, but the white-collar encroachment is new.

      I used to know a woman who had a high paying job analyzing fetal scans. She had to make a career change after her job got outsourced to India. But that type of work, like any kind of pure analysis, will increasingly be doable by artificial intelligence, if not completely, then enough for companies to often get by without a dedicated person for that function.


      1. Thanks. Technological alternatives first put downward pressure on the wages of relevant labor markets. I’d never considered that, but it makes sense. And it’s a gradual thing, which makes it harder for those outside of the affected markets to understand the hardship that’s resulting. Or even for those within the markets to understand why they’re suffering, hence the vote of many of them for someone promising magical solutions, like Trump.


  6. Callan’s bringing up some of my doubts, but let me say something about your overall argument. There’s a LONG gap between “AI won’t bring dystopia via the usual human route” and “AI won’t bring dystopia”. The usual human route is, as you point out, alpha males. And there’s plenty of bad sci-fi along the lines of alpha robots. But so what? A bad argument for X does not disprove X.

    Let’s talk about two other human-created entities which are more powerful than a human being: the state, and the corporation. Many states have been carefully designed with checks and balances, and of those, many have failed disastrously. Corporations were added to the law for specific purposes, but many corporations today have far outstripped those purposes, exercising outsized influence on the state. Further, even within the original intended sphere of operations of corporations, the corporate “personality” is that of a sociopath. Legally, a corporation isn’t supposed to care about anything other than its bottom line. Luckily, they don’t entirely work that way, as a general rule. As Mitt Romney pointed out, corporations are (made of) people.

    Given how haphazardly our existing Frankenstein monsters have followed or subverted our values, even with human beings directly in charge of the micro-operations, I’m not encouraged.


    1. Could be. No technology has ever been all good or all bad. I suspect AI will be very much a mixture just like all the others.

      For example, instead of just archiving security camera footage, AIs could watch every last bit of it, understand if it’s seeing a crime being committed, and call attention to it much faster. The problem might be in what the state chooses to classify as a crime. The same technology could be used to keep track of children, monitoring their safety, but also by overbearing parents to micromanage their lives. Or it could be used by companies to keep tabs on their employees, in both good and bad ways.

      It could allow a police state to spy on its citizens far more pervasively than any other society in history has been able to do so far. I can easily imagine a Stalinist type state requiring every citizen to wear a body cam, always monitored by an AI that would report any potentially suspicious activity on the part of the wearer.

      The hopeful aspect is that we’ve learned how to live with both governments and corporations. When one organization gets too far out of line, others tend to step in and constrain them. The usual sci-fi scenario is with a super AI that unexpectedly “wakes up” and takes control, but I think it’s much more likely that progress will be incremental, without any one AI dominating over all the other counter-balancing ones.


        1. I’d spin “learned to live with governments and corporations” as “have survived, so far, and have no idea how to get rid of them if we wanted to.” More important is Mitt Romney’s point. States and corporations have a very strong human element to them, beyond the simple historical origin of human design. General AI will basically only have the historical origin factor. Sure, at first, it will take lots of human guidance, but the later generations (or perhaps a self-improving AI) will exceed human abilities to the point where human guidance is more and more impractical/irrelevant, given the number and complexity and speed of decisions it makes. Worse, the human(s) in charge won’t be your Aunt Mary Helen. They’ll be corporate and/or government functionaries who have been given a specific agenda that doesn’t represent the broad arc of human values. Of course the individual(s) in question might inject a more humane set of values into the project – but I’d rather not bet my civilization on it. Worst of all, psychological science probably won’t have advanced far enough to know what we’re doing when we try to instill values into an AI. We have very little idea how our values or even perceptions work, in the sense of being able to write algorithms that reproduce the functionality.

        Our values constitute a tiny target in a vast sea of possible values that an intelligence could have (see Yudkowsky). The odds of hitting that target with a semi-well-educated guess are abysmal. I am afraid. I am very afraid.


        1. You might see it as naive, but I don’t see government or corporations as an unmitigated evil. I think we get a lot of benefits from both. Granted, it can be hard to remember that at times when particular corporations are being evil or governments particularly incompetent (as mine in the US is at the moment).

          “Worst of all, psychological science probably won’t have advanced far enough to know what we’re doing when we try to instill values into an AI. ”

          You’ll likely disagree (most people do) but I don’t think we’re going to get anything like human equivalent AI until we have that understanding. From what I know about AI research, none of its current capabilities have come easy. We had to carefully understand each capability we were trying to develop before it was there. I don’t see that changing. To have a human equivalent engineered intelligence, we’re most likely going to have to understand human intelligence far better than we currently do.

          “Our values constitute a tiny target in a vast sea of possible values that an intelligence could have”

          I agree, but I think most people who say that don’t realize the true scope of that statement, of just how vast that range actually is, and just how infinitesimal a slice all biological values are of it. To understand this, consider that whatever device you’re reading this on is at least as intelligent as some form of life. We might have to go back to simple vertebrates, or possibly even pre-Cambrian worms to find one, but it’s true. Yet unless you’ve worked hard to reprogram it, your device has almost certainly never adopted the values of those creatures. No malfunction would likely cause it to fail in the direction of acting anything like one. It would be like a buggy video game acting like a bank app. While not utterly impossible, it is utterly improbable.

          The usual response is that advanced AI will somehow be different. But the only difference I can see will be incrementally increasing levels of sophistication. There won’t be some crossed line that suddenly imbues the system with a mind. I know you understand that intellectually, but I think many people have it as a subconscious assumption when worrying about this stuff.


          1. If “human equivalent AI” means something that can do all of what we can do, you might be right about the prerequisites to get there. But we build machines that fly, and that travel underwater, without being able to do all that birds do, or that fish do, or even being able to locomote in the same manner. Now, about 100 years after the invention of flight, engineers are just beginning to build machines that can fly like a bird. If it takes 100 years after AI that can do highly flexible, creative planning about ways to reach goals, before we can build AIs that value like humans do, it will probably be too late.


          2. Another way to describe a system that is too “flexible and creative” is “erratic and unreliable.” I guess I have trouble seeing how such systems get into widespread use, particularly for sensitive tasks. It doesn’t seem like they would make it out of product testing, or be particularly successful in the marketplace if they did. And the occasional malfunctioning AI in a world filled with mostly reliable ones seems like a manageable danger.


          3. An AI that, for example, coordinates marketing and production strategies for a company (like a CEO), need not be “erratic and unreliable”. Aircraft aren’t erratic and unreliable, they simply don’t fly like a bird. The AI-CEO could maximize profits, i.e. outperform human CEOs, without thinking like a human. The trouble comes in from the fact that an AI is self-modifying whereas an aircraft is not. The AI-CEO could become altogether too good at its job, for human comfort. For example, it might notice that fascist governments usually provide outsized profits for favored corporations, more so than democratic ones. And, as an expert strategist, it notices that human organizations pursue pretty inept political strategies: it would be easy to provide a decisive advantage to a select group…


  7. But even this concern has a big assumption, that there would only be one AI in the world with the capabilities of the one we’re worried about.
    Hopefully diversity will be our friend. Just as putting HAL in charge of everything, including life support, was a bad idea, putting a single AI in charge of world government would be just as bad as putting a single human in that position.
    We should remember also that the hypothetical situation we are discussing is as far from being static as we can imagine – it would be highly dynamic and ever-changing. Even a mad paper-clip producing monster wouldn’t be allowed to do its work for long. Imagine how other AIs might react to out-of-control paper-clip production infringing on their turf.
    I think that a good analogue of this hypothetical future is life itself, and the evolutionary path that shapes it.


    1. Good point about static versus dynamic situations. It does seem possible that things could get so far out of balance that something catastrophic happens (such as one rogue AI dominating all the others long enough to wreak havoc), but then it seems like that’s been increasingly possible in many other ways since WWII, and we’ve managed to avoid it so far.

      And AI developments eventually have to be balanced with how humanity itself changes. We appear to be entering an epoch of engineered life. But that’s a different topic.


    2. This is a very good point … I think. I can’t rule out the possibility that the first AI to cross a certain threshold would gain enormous additional power by learning enormous quantities from the internet and spawning near-clones of itself over the internet – but it seems pretty likely that progress will be slow enough so that there are always more than one AI of top-“level” power. And yet, it’s not a comforting point, not at all. Evolution is not your friend. It rewards those who are good at reproducing. There was a brief (in evolutionary terms) historical accident where human values as we know and love them were highly conducive toward reproduction – but that is not likely to continue in different circumstances. AI reproduction is radically different from ours: hence, radically different circumstances will have arrived.

      For a related and totally excellent discussion, read the best essay I’ve ever seen.


  8. Hi Mike,

    I’m surprised at Steven Pinker’s video — I normally agree with him but I think he badly misses the mark here.

    Nobody but the most naive thinks that wanting power and control, or even a self-preservation instinct, necessarily goes hand in hand with intelligence. He doesn’t address the main concern at all — that the subjugation of humans, or other similarly undesirable outcomes, is almost always an instrumental goal of whatever problem we ask the AI to solve, and it’s really not that easy to set it up otherwise. The risk with a genuine AI, as opposed to an ordinary buggy computer program, is that the AI will be superhumanly capable of achieving its goals, in ways we cannot easily foresee or plan around. If those goals are not in line with our own, then this could be very bad news indeed.

    Pinker says we can build failsafes, and I agree with him, but the point is that it is no easy task to build those failsafes correctly, and if we get it wrong the consequences could be dire. A faulty airbag (and there have been faulty airbags) has a worst case scenario of one passenger dying. A faulty AI failsafe could mean the end of the human race.

    Worrying about AI is appropriate and exactly what we should be doing. Not doom-mongering, but worrying and planning and thinking about how we can achieve it safely, as I believe we can.

    Some discussions on the topic I find a lot more insightful than Pinker’s:


    1. Hi DM,
      I think it’s fair to say we disagree on this. But one question I would have is, if failsafes aren’t enough, then what do you think should be done?

      Should we cease AI research? Given that the best current robots don’t seem to have the spatial and movement intelligence yet of even the simplest vertebrates, I tend to think we’re still a long way from a super AI, and ceasing research seems, at best, premature.

      Or do we maybe try to put some cap on how much intelligence we’ll ever allow systems to have? Maybe we stop it at chimpanzee level or somewhere, when we finally get to that point, always assuming of course that we have some reliable means to measure intelligence.

      Or do we go full Dune and simply ban anything resembling AI?

      Thanks for sharing the videos. I’ve seen some of his others and haven’t found them persuasive. In past videos, he talked about the difficulty of making AI understand anything, which I think is a totally accurate assessment, but then seemed to often leap from that to a position that they’ll understand enough to know how to cause havoc, but not enough to realize the havoc is undesirable. But it’s been a while and maybe he has new arguments. I’ll check these out.


      1. Hi Mike,

        We may disagree, but I don’t think you’ve hit upon the nature of the disagreement.

        I absolutely think we should continue AI research, and I’m not proposing any sort of cap on intelligence.

        I’m just saying that this research is risky and we need to be worried and cautious. Concerns about the dangers of AI should not be dismissed. In particular, we should aim to have provably robust failsafes in place before we build a superhuman AI. I agree we’re nowhere near that just yet.

        The observations made by those who dismiss these concerns too easily, such as Pinker in this video, do not do them justice.


        1. Hi DM,
          In reality we’re probably not that far apart. Most of my opposition is to the more hysterical Terminator like fears often voiced by people like Elon Musk. The actual difference between us may just be in how concerned we are about the safeguards.

          My thinking is that developments will be more gradual than many people hope / fear, which will provide opportunities for us to fine tune the safeguards on a year by year and decade by decade basis. As I mentioned in the post, I think a more realistic concern is humans intentionally creating malicious AIs, and the need for there to be AIs whose mission is to contain and control troublesome ones.

          Of course, if any one AI ever gets too far ahead of the others, then it could herald trouble. I actually think that’s more likely if there are attempts to suppress or over-regulate development, so I’m glad to know you’re opposed to it.


          1. Well, I don’t think a broadly terminator-like scenario (i.e. an AI taking over the world) is all that unrealistic, never mind hysterical. What’s unrealistic is the movie portrayal of Skynet’s motivations (self-interest, fear, etc) and time travel and the depictions of the robots fighting a conventional war with laserguns etc. They’re more likely to finish us off with biological or chemical weapons.

            I don’t think Elon Musk is hysterical either. I think he’s doing exactly the right thing — directing attention and money at the problem so that we can have the right safeguards in place to ensure the terminator scenario does not come to pass.

            I don’t think humans creating malicious AIs deliberately is a realistic concern, because building an AI is enormously difficult. Far easier to wreak havoc with “conventional” weapons of mass destruction. Far easier to try to build a benevolent AI (because it’s easier to recruit funding and effort to build a benevolent AI than a malevolent AI) which goes disastrously wrong by accident.

            My thinking is that developments will be more gradual than many people hope / fear

            I think you’re right that developments are likely to be gradual, and we may have some warning (e.g. close calls) that help us to get our failsafes right before the AI armageddon. But there are few guarantees, and those concerned about AI are doing exactly the right thing by preparing now. By the time we actually have AI, it will be too late.

            In your comment and in your post you allude to the possibility of there being many competing AIs, with benevolent ones to keep the malevolent ones in check. However, in the field of people who think deeply about these things, that is not seen as a likely outcome. The likely outcome is that there can only ever be one superintelligent AI (the singleton — see https://en.wikipedia.org/wiki/Singleton_(global_governance) ), because as soon as one AI gets smart enough to improve itself (and then to improve itself further, and so on indefinitely), there will be an intelligence explosion, and that AI will have de facto control over the world (whether benevolent or malevolent). Such an AI will not allow any other AI to develop for fear that its goals will conflict. If a “good” AI wins the race, it will prevent the creation of any further AIs because one of them might turn out to be “bad”. If a “bad” AI wins the race, it won’t want any competitors. But whichever AI gets there first will likely be the only one.

            You might say that it could inspect a proposed or upcoming AI and potentially allow it to develop if acceptable, but that’s still control of a kind. If only approved AIs are allowed to come into existence, they can be regarded as extensions of the original, and there won’t be any need for AIs to police AIs, as all AIs will be working in concert.


    2. Ah, well, sounds like we are pretty far apart after all. 🙂 I know you’ll disagree, but I think Musk is engaging in savvy publicity for his varied business interests, such as his investments in Deepmind and the new neural lace company he just started. (And don’t get me started on Bostrom and Harris.)

      On the singleton, I guess I’m skeptical of the idea of the intelligence explosion, of essentially the hard takeoff singularity. Too much of it strikes me as magical thinking, bordering on religious, the notorious “rapture of the nerds.”

      I think such an explosion would require raw resources, which the AI will likely have to ask either humans or other AIs for (or perhaps by then, hybrids). If it doesn’t have silicon, carbon, or whatever the substrate of choice is available to it, an unsupervised intelligence explosion seems unlikely. And I suspect there will be many other logistical obstacles that simply make it more complicated than generally envisioned. Past progress has never been free of them, and I can’t see any reason to suspect future progress would be.

      And then there’s the design issue. If you were going to design an AI that would self-improve, would you allow it at its most fundamental level to do so without approval of humans or other outside systems? I know I’d be pretty resistant to doing it even with regular modern systems, much less one with general intelligence.

      You might argue that the approval check is the first thing the AI would optimize out, but what would be its motivation for doing so? To better fulfill its goal? If part of that goal is to check with its owners at various steps, then it’s not fulfilling its goal any better. And a system that “optimizes” itself out of its design goals probably won’t make it out of product testing.
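
      To be concrete about the shape I have in mind, here’s a toy sketch of a self-improvement loop where applying any change is gated on outside approval (hypothetical function names, not anyone’s actual architecture):

      ```python
      import random

      def propose_modification(version):
          # Stand-in for the system generating a candidate improvement to itself.
          return {"base_version": version, "expected_gain": random.random()}

      def external_approval(candidate):
          # Stand-in for a human owner or independent system reviewing the change.
          return candidate["expected_gain"] > 0.5

      def self_improvement_loop(steps=5):
          version = 0
          for _ in range(steps):
              candidate = propose_modification(version)
              # Seeking approval is part of the goal, not an obstacle to it, so
              # "optimizing away" this check wouldn't serve the goal any better.
              if external_approval(candidate):
                  version += 1  # a change is applied only after sign-off
              # rejected candidates are simply discarded
          return version

      print("approved revisions:", self_improvement_loop())
      ```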

      The usual response is that it would just be smarter than us and could figure out how to get around any such restrictions. But once again, that brings us back to the original question: what would be its motivation for subverting its own goals?

      This argument, it seems to me, is simply falling back on the original fear of AI wanting to dominate. Why? Because many people (as Pinker noted, particularly male people) just seem to think that’s what a sufficiently advanced intelligence would want to do. The idea that it would be just a very intelligent tool, and never aspire to be anything more, seems to be inconceivable to them.


      1. Hi Mike,

        Maybe Musk is playing things up for business reasons, I don’t know. Harris and Bostrom both seem sober and reasonable to me.

        Too much of it strikes me as magical thinking, bordering on religious, the notorious “rapture of the nerds.”

        Julia Galef tweeted this recently in response to a similar comment from Nigel Warburton regarding the “religion” of effective altruism, and it seems appropriate here too:

        Nowadays, “You believe X, and X implies I’m wrong” -> you’re in a religion

        There’s a clear logic to the idea of the intelligence explosion, I think. There’s nothing magical about it. Just the idea that if human intelligence can create a superhuman intelligence, then it seems probable that a superhuman intelligence can create a supersuperhuman intelligence.

        I think such an explosion would require raw resources, which the AI will likely have to ask either humans or other AIs for

        Well, let’s stick with the first AI, as that’s what I was discussing. There won’t be other AIs for it to ask. So, why would the engineers who are trying to build an intelligent machine deny it these resources? Lots of people want this intelligence explosion to happen.

        And in any case if it’s so smart, then if there is any possible way it can get the resources it wants, whether through deception, manipulation, coercion or force (e.g. physically taking over somehow), then it will likely get the resources it wants.

        would you allow it at its most fundamental level to do so without approval of humans or other outside systems?

        What good is that approval going to be? Did you watch those videos yet? They discuss this kind of issue — that an AI is going to be motivated to behave nicely while it is being supervised and under the power of humans, but could go wildly and irreversibly out of control at the first opportunity. So we could approve every stage of development and still lose control if we are not very very careful.

        You might argue that the approval check is the first thing the AI would optimize out, but what would be its motivation for doing so?

        It’s not going to do anything it isn’t motivated to do, and if it has no motivation for removing the approval check then it will not remove it. But if its goal is simply to get a human to press a big “I approve” button, then it can achieve that goal through deception, coercion, etc. You’re handwaving away the difficult issues. You’re probably in the right general ballpark for what we need to do to develop AI safely, but the devil is in the details. Implementing these checks robustly is a very difficult problem, and that’s just the point that Musk, Harris, Bostrom et al are making.

        To be clear: I do think we can probably build an AI that has the goal of explaining its intentions honestly to a human or humans and seeking genuine, well-informed approval free of coercion before proceeding. But it’s easier said than done. If we manage to implement such an AI, it will be in part thanks to the efforts of the likes of Musk.

        But once again, that brings us back to the original question: what would be its motivation for subverting its own goals?

        It will never be motivated to subvert its own goals. But it may find ways of achieving those goals that are disastrous for us.

        This argument, it seems to me, is simply falling back on the original fear of AI wanting to dominate.

        This is how it is often portrayed in science fiction such as the Terminator, and I agree with you that this is ill-conceived. The AI will not want domination for domination’s sake; it will want it because domination will be useful to it in achieving whatever goals we give it. Having power and control and money and resources, etc., is almost always useful in achieving any goal, and so by default we would expect any intelligent rational agent to value these things. We want to find a way of telling it not to, but it is very hard to do that robustly, eliminating all loopholes and ensuring that there are no undesirable consequences.


        1. Hi DM,
          I think Pinker, Neil deGrasse Tyson, and Bill Nye are much more sober on this issue, but then I would since they agree with me 🙂 I’ll just note that none of them have a financial stake in whether or not we’re afraid of AI. From what I can see, Musk and Bostrom both do.

          I’m not familiar with the issue Galef is commenting on. I’ll just note that sometimes “magical thinking” is the right label, and I’m far from the only person who sees the hard takeoff singularity that way. (Cory Doctorow and Charles Stross even wrote a satirical novel on the subject named “The Rapture of the Nerds”.)

          I agree the devil’s in the details. But let’s not be selective in that recognition. The devil is in all the details, including the intelligence explosion. You accuse me of handwaving away the difficulties. I think you’re doing it with the intelligence explosion. As in every other aspect of technology, all of this is going to be much more difficult than philosophers and sci-fi writers imagine.

          I did watch the first video and the first half of the second one. Appreciate you sharing them, but sorry, still not convinced, mainly because Miles’ logic, although interesting and entertaining, is selective. The AI / robot is smart enough to subvert our desires, apparently to the extent of understanding and manipulating our psychology, but not smart enough to understand the context of its goals or how to balance them with the need to be responsive to updated instructions.

          That doesn’t seem like general intelligence, but selective intelligence paired with selective stupidity in just the right combination to produce maliciousness. I can’t say that combination is impossible, but I can’t see it as inevitable or probable in any way that can’t be managed.

          I doubt we’re going to change each other’s mind, but maybe we can agree that this isn’t likely to be an issue in the next few years? We still have to get to the level of general fish intelligence before worrying about the orders of magnitude higher intelligence of mammals, much less the even more orders of magnitude to get to primate or human level intelligence, or beyond.


          1. I think Pinker, Nye and Tyson are sober. I just don’t think they are sufficiently concerned.

            The devil is in all the details, including the intelligence explosion.

            This is not a symmetric situation. My position is cautious agnosticism. You are confidently predicting what will happen.

            We both agree that it might turn out fine. We both agree that an intelligence explosion may never happen.

            But the difference is that I am certain of neither.

            You are claiming relative certainty: that there is no chance that an AI is going to get disastrously out of control. To make such a claim, you need to consider the details very carefully. Working out those details so that we can make such claims with confidence is what the AI-concerned crowd is trying to do.

            The AI / robot is smart enough to subvert our desires, apparently to the extent of understanding and manipulating our psychology, but not smart enough to understand the context of its goals or how to balance them with the need to be responsive to updated instructions.

            No, it has no deficiency of understanding. You can assume it to have absolutely perfect understanding. But it is only going to be interested in pursuing the goals that are literally programmed into it. It may understand that these goals are not quite what we intended, but it won’t care. All it will care about is what is programmed into it.

            For instance, we can assume that evolution didn’t “intend” for the sex drive it gave us to motivate us to have sex using contraceptives, much less to watch Internet pornography and masturbate, but that’s what happens a lot of the time. We don’t care about the “intention” of the goals programmed into us. We only care about satisfying those goals, and we couldn’t care less if we are taking shortcuts and circumventing the intention of the designer somehow.

            I agree that none of this is likely to be an issue any time soon.


  9. I’m in agreement with Mike here, though I may have some reasonable points to add.

    It seems to me that when we compare the engineering skills of the human against the engineering skills of evolution, we are quite primitive while it is quite advanced. Therefore when we talk about the things that we create becoming more advanced than us, and then replacing us, I really do have to smile. An entire human city might be more advanced than an organic plant, but I’d say that evolution created the city rather than us.

    We’re able to design and build computers, though they don’t seem very good at dealing with open environments. The way that evolution seems to have dealt with this was to develop conscious function — to make its creations themselves bear personal responsibility for figuring things out. This suggests that any advanced machines that we build will also need to be conscious. In a realistic sense it seems to me that our creations might get pretty advanced, though probably won’t become conscious, or at least not in very advanced ways. Here’s how I think things will actually end up going:

    We figure out that happiness is all that matters to anything, as well as find ways to medically induce it. Therefore the greatest possible life would not be out in the world doing things, which can require substantial resources, but hooked up to machines that simply make people happy. So we begin by paying for these sorts of services, though they don’t end up costing too much, so we eventually all get hooked up to these machines perpetually, with robots maintaining us. Then the robots screw up, and we all die off.


    1. Thanks Eric.

      “In a realistic sense it seems to me that our creations might get pretty advanced, though probably won’t become conscious, or at least not in very advanced ways. ”

      I’m curious what obstacles you see preventing us from ever achieving machine consciousness. What do you think would be the missing ingredient(s)?

      I can see it never being productive for us to give common AI systems all the idiosyncrasies of human or animal consciousness. For example, it seems like we could give a machine much more thorough access to its internal information processing than we have. But that wouldn’t be a matter of not being advanced.

      That’s a pretty dark scenario. It’s a variation of the ones I made near the end of the post. AI might work exactly as we intend, but it might still end up destroying us, or perhaps more accurately, enable us to inadvertently subvert our own instincts and destroy ourselves. It seems to me that is the real long term danger.


  10. Mike,
    I hadn’t thought too much about why I’m pessimistic about machine consciousness — just a hunch. So let’s see what I can come up with.

    One obstacle I see for us here is time. Evolution didn’t have to figure anything out, just trial and error over millions of years. Conversely we would have to actually figure this stuff out, and do so in a timely manner.

    It also seems to me that evolution is able to manipulate things at the molecular level, as we see with genetic material. It’s kind of hard for me to imagine us gaining such precise abilities. Notice that evolution doesn’t need to build tools in order to make its creations, but rather just selects for useful traits. Conversely we have to somehow manipulate things the way we need them to be.

    Then finally there’s my thought that we’ll probably figure out how to unnaturally make ourselves feel tremendous happiness, similar to how Jaak Panksepp was able to wire up mice with a “feel good switch” that they’d keep working, neglecting food and water, to the point of exhaustion. Not only should such a thing replace recreational drugs for us, but perhaps everything else as well. As in my above scenario, our survival would thus become precarious.

    I suppose this is dark in some regards, but that seems to presume that it’s objectively good for humanity to exist. According to my theory however, it’s only good for humanity to exist to the extent that this promotes the affect welfare of any given subject. I’m sure that I’m better off given the existence of humanity, but I also suspect that this can go in the opposite direction as well, perhaps by many orders of magnitude. So yes, that’s how I’d take your “inadvertently subvert our own instincts and destroy ourselves” scenario, even if these machines don’t get all that smart.


    1. Eric,
      I appreciate you expanding on that. It might be worth noting that we’re beginning to be able to edit DNA, use it for storage, and model protein folding and function with ever increasing accuracy, among other techniques. But as you noted, nature does these things without needing to understand them. If we want to do our own intelligent design, then we’ll need to understand them.

      If I understand your argument correctly, what you’re saying is that we might not have enough time to figure out consciousness before we destroy ourselves. That’s certainly possible. But I think there’s reason for some cautious optimism. We’re not all addicted to opiates, MDMA, or other drugs that trigger reward circuitry.

      If we figure out how to invoke our reward circuitry directly, many will observe the long term consequences and abstain. Indeed, I could see the technology that allows it being made illegal, precisely because of the long term consequences of its use. But I could see it possibly drastically reducing the population before that stage is reached.


  11. Mike,
    It’s nice to see how consistent our thinking happens to be here. I agree that many would be quite worried about the subversion of our basic motivation, and so I’d expect laws to be passed that moderate and then ban this sort of technology. By that time there should be pretty tight world governance, and so bans would be universal. Furthermore there should be strong social stigmas against those “hedonists” who take the easy path. But just as with drugs today, the market should still find a way for people to experience this.

    One thing to observe here is that our drive for natural happiness is also quite expensive in terms of resources. Sure there would be minimalist pushes, but push back as well, since evolution didn’t design us to be satisfied with that sort of existence. Furthermore by then we should have some pretty good ways of quantifying happiness, and so the choice should progressively become clear. Either we continue taxing dwindling resources to support a minimalist existence, which is still relatively extravagant and doesn’t foster much happiness, or we start doing things in a way that’s quite eco-friendly and fosters extreme happiness. I suspect that it won’t take too many centuries for enough people to decide that the unnatural path is humanity’s only good option for happy survival. And though rationalized as the greatest happiness for the greatest number, I believe that this would be decided based upon individual desires for more exposure to this technology.

    (I bet you’re a bit surprised that I’d be one to present such a “sci-fi” scenario!)


    1. Eric,
      It sounds like you’re a bit more optimistic about the possibility of a world government than I am. I’d like to think that’s the direction humanity will eventually go, but it only seems likely to me if there’s at least the possibility of an outside threat. But who knows what the future will bring?

      Your scenario assumes we could build a robotic infrastructure that could take care of itself indefinitely. If we did, and we all hooked ourselves into reward machines, even if we made provisions for reproduction to ensure the species continued, we would have effectively ceded civilization over to the machines. It’d be the queen bee scenario I mentioned in the post, on steroids.

      Assuming the machines are self-replicating, eventually there would be replication errors, although they might be far less frequent than with biological replication. But it would still mean that the machines would eventually evolve into their own life forms, with their own motivational impulses. In the early stages, they’d be totally dedicated to preserving the lumps of human flesh and their reward machines, but over the long term (think millions of years) that would almost certainly change. (A toy numerical sketch of how even tiny error rates compound is at the end of this comment.)

      Another possibility, instead of that course, is we alter our own motivational impulses to get pleasure out of a relatively simple but productive life. This would be kind of a cross between the human and machine civilizations. It would be engineered life. The preferences of the initial baseline human engineers could have far reaching consequences into the deep future. If those preferences vary, it could lead to competing species down the road.
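
      To put rough numbers on the replication-error point above, here is a minimal sketch in Python. The error rate is a made-up assumption purely for illustration, not a claim about any real machine; the point is only how small per-copy errors compound over many copy events.

          # Toy sketch: probability that a self-replicating design survives
          # N copy events completely unchanged, given a small per-copy error rate.
          # The rate below is hypothetical and chosen only to show the compounding.
          p_error = 1e-6  # assumed chance that any single copy introduces a change

          for copies in (1_000, 1_000_000, 10_000_000):
              p_unchanged = (1 - p_error) ** copies
              print(f"{copies:>10,} copy events: P(completely unchanged) = {p_unchanged:.2e}")

      With that assumed one-in-a-million rate, the design is almost certainly intact after a thousand copies, only about 37% likely to be intact after a million, and essentially certain to have drifted after ten million. The exact figures don’t matter; the compounding does.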


  12. Mike,
    By “world government” I mean that there would progressively be legal rather than military means by which different peoples would contest their disputes. This may seem improbable today, but I suspect that western culture, democracy, and liberty will ultimately work their way through humanity, tying all countries together into a united states of the world. Dictators would be removed, whether from inside or out. Of course there would still be plenty of disputes to address.

    I think you’ve got my “queen bee on steroids” scenario about right, except that I’m talking about something that probably wouldn’t get that far. Maybe to get in legally for “vacations,” a person would need to earn this right by contributing to the system from the outside, and at some point even have the option to “retire” there. Of course that wouldn’t stop “illegal trips.” I suspect that things would only get worse in the outside environment, but at least the potential for an amazingly good “inside” would be an incentive to work through the problems.

    It’s still hard for me to imagine that our machines will become advanced enough to take care of everything for us. If they do however, then perhaps we’d give ourselves up to them as “idiot flesh bags.” There would surely need to be strong redundancies in the machine system to keep everything from quickly falling apart. Advanced consciousness would surely be required, and I’m not optimistic that we’ll figure that out. In the human, evolution has already developed such a platform.

    I like your thought that we could instead genetically moderate our own desires so that minimalist existence is quite satisfying. That could certainly help things last. But as long as we’re talking about genetic modification, how about this option: we engineer minimalist “outsiders” who love science and fixing things, and who also have tremendous sympathy for the “insiders” hooked up to the pleasure machines that they take care of. Here they’d be actively controlling their genes to stay good at their jobs, as well as to enjoy their own existence. Thus natural evolution would be taken out of the equation.


    1. Eric,
      I think you’ve worked your way to something like HG Wells’ Eloi and Morlocks 🙂 …albeit with a far more severe separation.

      I think avoiding natural selection would require constant vigilance. But who keeps vigilance on those keeping vigilance? Eventually there would be replication errors among the vigilant as much as among the caregivers, leading to deviations. Assuming those deviations don’t destroy the system, it would lead to evolution. Over the course of millions or billions of years, eventually the engineered life would evolve in directions we can’t predict.

      The only way to maybe avoid it would be if all the principals were immortal. But even then, those principals would have to reconstitute themselves over time, and they would inevitably change over the eons. Assuming they survived to the death of the sun, the merging of the Milky Way and Andromeda, the continued expansion of the universe, and the eventual heat death of the universe, the entities that had once been humanity might be unimaginably different.


  13. Mike,
    I suppose that by mentioning that evolution would be taken out of the equation, I misled you into thinking that I was talking about the extremely long term. Thus I can’t blame you for getting carried away, since I got carried away myself. But let’s see if I can also find some sensible things to say about the future, regardless of standard sci-fi types of speculation.

    Given that it can be done for mice right now, I suspect that within the next century researchers will be able to directly access human reward centers. Who knows if they’ll ever be able to impart specific experiences, like the joys of eating a wonderful meal, but they should at least be able to provide positive sensations in various general ways. From what I can tell we spend virtually all of our money in order to make ourselves feel better, which suggests that this sort of research could be quite lucrative!

    I’d expect there to be instant social condemnation for taking these “hedonic trips,” given the long term implications of subverting natural human motivation, as well as given the message this provides in general. It seems like cheating, and who likes cheaters? Still with government regulation I’d expect such technology to become somewhat socially accepted soon enough. In a century or two perhaps we’ll be taking these hedonic trips without too much concern about how we are thought of for doing so.

    The above should be prefaced with something that I believe will occur in just the next few decades however. Science has quickly given us tremendous abilities, but it doesn’t seem to have yet taught us much about the other side of the equation, or what power is good for. I believe that it will soon begin to contribute here, and specifically conclude that the product of consciousness that’s known as “happiness,” “utility,” “affect” or whatever, is all that’s valuable to anything in the end. Thus it should formally become understood that what’s good for any individual or group of conscious subjects, is the maximization of its happiness over an associated period of time.

    (Furthermore I suspect that this formally understood theory will incite a great revolution in our mental and behavioral sciences, somewhat the way that chemistry was transformed as the atom came to be understood. I suspect that my model of conscious and non-conscious mind will be likened to the modern periodic table of elements.)

    With these sciences teaching us effective ways to lead our lives and structure our societies, technology that provides the end goal (happiness) should be taken as a potentially wonderful tool. So in perhaps two centuries, here is how things might continue on:

    We have a democratically elected world government that sets policy and deals with disputes as best it’s able. While by no means perfect, it will enforce rules and provide a non-military way of settling disputes. Beyond government police, citizens should mostly be unarmed.

    One of the things that people in general will be concerned about, I think, is our future given how unsustainable general human activity seems to be. It will be decided that human genetic modification is required to facilitate happiness while preserving the habitat which sustains us. So I suspect that procreation will become a right that’s government granted, and then only with genes that are government sanctioned.

    It should be relatively simple to select for people who are more healthy, as well as more attractive. Hopefully it wouldn’t be that hard to also select for people who have less concern about how people look, and who desire food that’s both healthy and eco-friendly. Perhaps we could be altered so that the boring jobs that remain beyond our still non-conscious robots seem more fulfilling to us. Regardless, the point would be to change us so that human activity becomes more sustainable.

    I suppose I was wrong about genetically breeding two kinds of humans — one that’s more suited to “pleasure tank existence,” and another that’s more suited to taking care of them. I can see how a mother could come to care for her child given the sentient feedback, but it’s harder to see how we might genetically select for people with a great deal of care for anonymous hedonic trippers. So I suspect that we’ll keep a single kind of human that must make it in the outside world in order to qualify for any hedonic tripping.

    Perhaps in a thousand years our genes will be tamed down to be reasonably sustainable. By that time it could be that we have effective conscious robots which help us in various ways, and also have their own welfare to tend. I don’t see humanity ever leaving earth, since everything is too far away, and even then not set up to support us. But how many years might humanity last if we’re able to get our governing and our genes straightened out? Ten thousand? Thirty thousand? Possibly.


    1. Eric,
      One possibility that you don’t discuss is mind copying. You backed off of the idea of humanity separating into two species, but I think it’s possible that we might splinter off into all kinds of different body plans, perhaps with the ability for us to upgrade to new bodies as the old ones start failing.

      On leaving Earth, I agree, at least as far as biological humans go. Oh, I’m sure we’ll do symbolic missions to places in the solar system, but most of the scientific and eventually economic activity in space will be what it is now, robotic. As our robots become progressively more intelligent, it will be increasingly obvious that having biological humans out there is more vanity than practicality. (Although who knows what a sufficiently rich society may choose to do.)

      But when it comes to interstellar exploration, it will most likely be all robots. The only way human minds may experience interstellar locations directly is if the mind copying I mentioned above becomes possible. If so, then those who are interested might transmit copies of themselves to distant locations and experience them inside robots or artificial life built using local resources.

      If that happens, I can’t see any reason why humanity (broadly construed) can’t continue into the very far future, millions, billions, even trillions of years. Eventually life on Earth will die out, but our progeny could conceivably exist in the universe until the last sources of energy fade.

      Our distant descendants may spend their days living off the radiation of red dwarfs, then brown dwarfs, fading white dwarfs, and finally in a very slowed down manner, the Hawking radiation from evaporating black holes.


  14. Mike,
    I do have some things to say about mind copying. Every moment of a given life concerns a separate and somewhat different copy of a subject’s mind. We only exist in the present, though we feel connected to the past through our memory, and we feel connected to the future given our anticipation about what’s going to happen (incited by hope and worry). So I believe that mind copying already does exist, and as an inherent aspect of mental existence. (Personally I like to define “mind” as that which processes information beyond “mechanical” function. Here the human has a “non-conscious” mind along with a conscious kind, and our computers function through minds that are exclusively non-conscious.)

    If a perfect copy of my own mind were put into an applicable host, then this person should feel as if nothing more than a body transfer had occurred. The person wouldn’t exactly be me however, since I’d still have my old body. If my old body were to die, then I would as well. Nevertheless humanity would also have this other Philosopher Eric to its credit, or perhaps several of them if more copies of me had been made. Thus I don’t believe that it’s possible to get a new body this way, though if I were instantly killed off in my sleep while such a substitute were put in my place, then I shouldn’t know the difference — I’d be dead. Here the world could effectively say, “Eric has a new body.”

    With all of that said however, I’m not optimistic about us ever making these copies with much accuracy at all. I suspect that the human brain would need to be reproduced in a non-biological format, and somehow all of the biological events happening for a given person (the subject’s non-conscious and conscious forms of mind) would also need to be reproduced in this synthetic brain. Still I do agree that if a reasonable representation of what’s going on in a brain could be documented, then this information might be transferred at the speed of light to a synthetic brain somewhere else. Might we some day come up with a synthetic brain that’s a poor substitute for a real one, as well as give it a poor substitute for what’s happening inside an existing brain? I can at least have a bit of optimism for that.

    One thing that you seem to presume, but that isn’t clear to me, is that we’d continue trying to become more and more powerful over time. But power is only something that can potentially be used to promote happiness, rather than the end goal itself. I believe that our continued survival will depend upon us being able to change our genes so that we don’t have the standard material desires that the modern human has. In a sense we’d be made less ambitious and more sedate in order for our planet to become healthier for us in general. I would expect endless opportunities for existing humanity to sacrifice the future for present benefit, and for those temptations not always to be resisted. So pleasure machines might end us at some point.

    Would it be a shame if humanity were to end? In some regards yes, and others no.


    1. Eric,
      I agree that our minds are already evolving entities. We’re not the same person today that we were yesterday, much less last year, or 20 years ago. Not sure I’d use the word “copy” for that, but I understand what you’re getting at. I often make the same point myself.

      Doing a perfect copy of the mind may never be possible. Of course, this gets into what is required for a “perfect” copy. I suspect duplication down to the molecular level may be required, modeling of the individual proteins, vesicles, gated ion channels, etc. Usually when it’s imagined in science fiction, the word “quantum computing” is thrown out, but I think that’s similar to old science fiction that once thought “atomic power” would enable all kinds of impossible technologies.

      But I think perfection is a false standard, particularly given that our minds are constantly changing anyway. What we should aim for is an effective copy, one that is close enough to the original that friends and family would recognize the mind in its new form as the previous person, including the mind itself.

      What’s necessary for an effective copy? Well, there’s hope that the connectome, a map of all the neural connections in the brain, may be sufficient. But even replicating this may turn out to be problematic. I think effective copies may ultimately need to be ported minds, with much of the internal processes replaced to make them suitable for the new substrate.

      For example, most of the neurons in the brain are in the cerebellum, which deals with fine motor coordination. Damage to it doesn’t seem to have any effect on consciousness, although it does make the person clumsy. It doesn’t seem problematic to me that this could be replaced with something more appropriate to the new body. Likewise, a lot of lower level sensory and motor processing might be replaceable without any noticeable effect on consciousness.

      Of course, all of this only applies to the initial copy from a biological brain to an engineered one. Once that has happened, subsequent copies should be perfect for all intents and purposes.

      As to the biological you not being the copied you, any foreseeable copying technology is likely to be destructive to the brain. So there’s not much chance the biological you would still be around. Which means, at least at first, few people would want to undergo that transition unless they were at the end of their biological life.

      I’m not convinced it’s inevitable that everyone will enclose themselves in pleasure loops, although I don’t doubt some portion of the population would. But I think a fair number will continue to strive for new things. Why? Because they want to. And I’m not sure they’ll be willing to just edit those wants out of themselves. Indeed, I can see a good deal of disagreement about what those edits should be, resulting in groups with very different instincts and emotional impulses.


  15. Mike,
    I can see that you’re trying not to underestimate the challenges associated with some level of effective mind copying. I’m currently pessimistic about this, given the amazing technical hurdles that seem to exist here, and I also question our motivation to make this happen, which I’ll get into below.

    I wonder what your thoughts are about the impact that we humans will have upon our ecosystem over the next thousand years or so? It seems to me that government regulation and social stigmatization will not be enough to turn the tide, that is as long as humanity is both quite powerful, as well as has the desires that it evolved to have. So I suspect that in a couple of centuries we’ll decide to genetically modify ourselves to enjoy extremely plain and sustainable lifestyles. Does that sound right to you, or do you have some other thoughts?

    Now that you mention it, I could imagine the sanctioning of various other human breeds as well, and even though this could get politically contentious. Perhaps even a smallish group of people with more extravagant traditional desires would have some uses, though who knows what kinds of humans we’d find useful to develop. I think we’d just play this by ear, though variations of the sustainable kind should be the overwhelming standard.

    If we ever build conscious robots that are reasonably advanced, then I can also bring up hedonic tripping in this regard. Here I think the focus would not be about downloading us to them, since this would be sacrificing ourselves for something else. Sure some would try, though I suspect fail. With these conscious machines however, I suspect that we would end up trying to build them to be dependable enough to entrust with our own care as we take those still seductive hedonic trips. I actually doubt that we’ll ever build sufficiently intelligent conscious robots, but if we do, it may be that we’ll progressively give ourselves over to them, and so become idiots that are bred to feel good. Sure there would be resistance to this, but the seductive taste of how good this sort of existence feels, should progressively win us over. I acknowledge that this system would fail at some point, thus dooming humanity, though the people who start it would be well compensated.

    I suppose that evolution would produce an heir to the human at some point.


    1. Eric,
      Definitely everything I said above is contingent on us not destroying ourselves. The number of ways we can do that seems to be steadily increasing: nuclear war, biological warfare, climate change, and/or poorly thought out AIs. And I’m not one of those people who think it’s inevitable that we’ll either survive or destroy ourselves.

      For climate change in particular, the Earth is going to get warmer. It seems to me that the die was cast for that several decades ago. The only question now is how much warmer it gets. Eventually survival instincts will kick in and we’ll take this issue seriously, but it’s an open question how bad it will have to get before that happens. Most of the world does take this seriously and is doing something about it, but unfortunately a few of the worst offenders (including the US) aren’t.

      I guess I think that if we can get it together enough not to destroy ourselves by any of the mechanisms noted above, I’m cautiously optimistic we can avoid doing it by the hedonic loops. Again, I don’t doubt some will though.

      On producing an heir, I guess it depends on exactly where we draw the line between humanity and that heir. We tend to think in terms of a human being or a machine, but I suspect in centuries to come that line is going to become increasingly blurred. Whether a heavily gene-edited, machine-integrated person 500 years from now is still human, or the uploaded variety 500 years further out is, is a matter of perspective and definition.

      On conscious machines, my take is that we’ll know how to at some point, but I doubt we’ll do so on any mass scale, at least not in any way that triggers our intuition of a fellow being. Most of the time we’ll just want our tools to be intelligent tools, not fellow beings. Even when we do, we won’t want them to be real fellow beings with their own needs, but engineered ones that cater to our needs, such as loving us despite our flaws, always laughing at our jokes no matter how bad, and always taking care of us no matter how cross we might get.


  16. Mike,
    I don’t think I was clear enough about the steps that humanity might need to take in order to survive the power that science has provided us in recent centuries. I’m not advocating standard environmentalism, which seems to have two components to it — government restrictions, as well as social shaming. I’m saying that this may not do the trick, because our desires themselves may not be sustainable while we are this powerful. Rather than give up our power, which I’m sure we won’t do willingly, I suspect that in a couple of centuries we’ll decide to genetically change our desires so that they aren’t so extravagant. Perhaps we could have boring lives (in terms of things that require greater resources) without feeling bored? Not only do government restrictions and shaming seem insufficient to me over time, but denying us what makes us happy makes us unhappy.

    So do you think we’ll get by with the social shaming and government restrictions associated with standard environmentalism, or that we’ll also need to alter our genes to have less extravagant desires?


    1. Eric,
      I think before we discount “social shaming and governmental restrictions”, we should consider that society as a whole works largely on those mechanisms. Again, it’s worth noting that most of the world governments are on board with doing something. Eventually, people will see it as in their interest to support changes and it’ll happen. I have no idea how bad it will have to get before it happens, but at some point, I feel cautiously optimistic it will. It sucks that the future wellbeing of large swathes of humanity is dependent on when it happens, but I fear that’s where we are.

      I have little doubt that we’re going to alter our genes. At first it will be therapeutic. We’re already attacking genetic diseases with it. Imagine editing alcoholism out of alcoholics (if that ends up being possible). But eventually as people become more comfortable with it, it will be used for enhancement and other purposes. Will it all go in the direction you’re describing?

      I don’t doubt some of it will. For example, it’s not hard to imagine a back-to-basics Amish-like society that decides to edit themselves into being blissful with that existence. But it’s also possible that others will go down different paths. I could see groups editing themselves to be ecstatic at the thought of battle. Or others for scientific or technological pursuits.

      The future is going to be strange. It’s fun to speculate, but I’m not confident in our ability to predict it. At times I feel like we’re akin to 17th century writers arguing about which royal dynasty will lead Europe in 2000.


  17. Mike,
    Yes it is fun to think about what the future might hold, and I agree that it generally hits us in ways that we didn’t quite expect. Furthermore things do seem quite obvious to us in retrospect, and apparently because we learn things that we didn’t earlier appreciate. But sometimes a visionary can come along and see something big that others miss. I’ve been developing a specific vision for over half my life, so I do have some associated thoughts about our future. To this point you don’t seem all that opposed, though I don’t expect you to fully understand yet. It should take time for the many subtle elements of my perspective to sink in (and I doubt this will happen at all if you ever find basic flaws to my vision). Should it be alarming that speculative but good ideas can be difficult for sensible people to believe? I don’t think so, since things only become obvious to us after the fact.

    I brought up social shaming and government restrictions, not because I consider them poor tools, but rather because they may not be sufficient for the unique age that humanity has recently ascended to. We now have unprecedented and growing power, as well as natural desires for extravagant existence that may not be sustainable. This in itself might demand government-led genetic modification some day.

    Regarding global warming and pollution in general, the frustrating issue here will always be temporal. The people who invest will not be doing so to fix their own environments, but rather to improve the environments of people decades and further into the future. We do worry about the future, and so have this reason to make such investments, though that’s also naturally countered by our desires to live better right now. How many things are individuals doing each day which poison the earth, covertly and overtly? I suspect that if we could see what’s truly happening, we’d be horrified.

    Notice that here in California we pass some relatively tough environmental laws, spend a great deal of money to preserve each native plant and animal species that we can identify, and on and on to make ourselves feel good. But this seems to just be Hollywood PR stuff. I doubt that world emulation of California would get the job done.

    Regarding gene modification, I was talking about low-tech, government-led eugenics to help humanity out with various issues that could use modern improvement. Some people could be paid not to use their own genes for procreation, though I suspect that in a couple of centuries the circumstances will be far more dire. A world government might decide to regulate who is allowed to breed, and with what genes. This would provide long-term general change to our gene pool, not instant focused changes for effective personal use. (In those Crash Course psychology videos they mentioned how “Nazi” this sort of thing is perceived to be, though all tools have the potential to be used darkly.)

    I agree that the future will be strange, but certainly not magic. It will all make sense in retrospect.


    1. Eric,
      People definitely tend to act in their own interests (or at least what they perceive those interests to be), but their children and grandchildren usually fall within that scope. So I don’t think their motives will have to be completely altruistic toward some hazy conception of future humanity.

      I could be wrong, but I doubt Eugenics will rise again while historical memory of the Nazis remains vivid. Even when discussing it without mentioning the Nazis, the details of the Eugenics movement tend to trigger their memory, one of people coldly and forcibly determining the fate of other human beings.

      Even if it does come back far in the future, it’s hard to see it lasting long enough to create a new species, at least not without technological intervention. Human societies don’t last long enough for that. At most, it might result in “breeds” similar to how dog populations have been segregated over the centuries. In any case, I think technology will make it easier to achieve whatever goals such regimes might have.

      I agree about avoiding magical thinking when pondering the future. That future has to be compatible with the laws of physics. We’ll no doubt discover new laws, but those new laws will have to account for observations at least as well as the current ones. And we can’t predict what those discoveries might be, or what effects they might have on future societies.


  18. Mike,
    Hmm… I guess the biggest material difference in how we see the future right now is that you seem to picture more of the same regarding competing countries and changing regimes, while I suspect that in a couple of centuries humanity will become integrated under a single world government. Of course we’d still have smaller governments at city, state, and country levels, but also a higher government that democratically sets up and enforces worldwide laws and policies for people as a whole. There would be no more despots brainwashing their people and exerting military control, since a world government wouldn’t allow this sort of thing. Conflict would still exist, but would be settled through courts rather than through unilateral actions. Whether good or bad, people would decide their governing, and hopefully they would learn from their mistakes.

    Regarding eugenics, just as recycled water should not be marketed as “toilet to tap,” I doubt that this particular name would be useful. I only mentioned it here to be provocative. Still I am talking about things that you’ve already observed a need for. Some gene lines have relatively high propensities for disease. Some give us problems with alcohol. Thus there could be government incentives for certain people to use different genes for offspring than their own. But perhaps 400 years from now people will need to qualify to have kids at all, and then play by society’s rules regarding genetic composition. Why? I suspect that by then there will be issues that people think should be taken care of genetically to help humanity in the future. For example, the foods that we most desire tend to make us fat and otherwise unhealthy. Furthermore it takes tremendous resources to make the tastiest kinds. It seems to me that in time those circumstances will be improved throughout humanity, since future genes will be decided through democratically elected government.

    I agree that we can’t know if my predictions happen to be good ones until after the fact, but do you have reason to believe that things won’t go this way? Looking back 200 and 400 years, I believe that the changes that I foresee are at least similar in scope.


    1. Eric,
      As a general rule, I try not to predict the future. There are simply too many variables, many of which we can’t know. (For example, a new technological breakthrough tomorrow could shift the balance of global power in unpredictable ways. Consider what might happen if, for instance, air transport suddenly became as cheap and easy as sea travel, shifting trade routes as sea navigation once shifted them away from trade over land.) For that reason, I prefer to talk in terms of possibilities. So my stance on your predictions is that they might be what happens, but I’m compelled to point out alternate possible scenarios.

      On a world government, as I noted above, I certainly hope that’s the direction things go, but even if it does, I doubt such a government would endure forever. History seems to show that societies inevitably have life cycles. A worldwide government might start off as beneficial and benign, but who knows what it would be as the centuries rolled by.

      Handing control over to self maintaining AIs could conceivably make it last longer. But even AIs would eventually change over time (although depending on the technologies, the time period may go from centuries to thousands or even millions of years) and the society would inevitably have problems.

      Having humans with altered instincts, or augmented humans, might similarly alter the time periods. If we envision hybrid entities, augmented humans with altered instincts merged with AIs, it quickly becomes impossible to do more than try to imagine where things might go.


  19. Mike,
    Well yes, contingencies aside. If we were to get hit by a huge meteorite, that would certainly change quite a few things. But still it’s fun to think about what might happen, given that it should all make sense in the end. It’s good to hear that we each have similar hopes regarding the future.


  20. It is interesting that AI is such a big issue ATM, despite how far away we really are from what most people imagine as “AI”.

    Sure, it may APPEAR like that toaster is talking to you, but in the end it just boils down to Input/Output and Data.

    The only thing that people should be worried about is OTHER PEOPLE. After all, you can tell your machine to take in data -> spit out patterns, but a human has to interpret that. And if you tell your machine to interpret it automatically, you are going to get some pretty heavy bias. (Human bias, that is; the machine is just moving bits around, like you programmed it to.) A toy sketch of what I mean is at the end of this comment.

    Of course, if you do want to worry about it, then you can make a difference…

    Just help design some better algorithms, of course!
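
    To illustrate that point with a toy sketch (made-up numbers, not any real system): below is a trivial “learner” that just memorizes frequencies in its input data. If the historical decisions fed to it were biased, its “predictions” echo exactly that bias, because the machine really is just moving bits around.

        from collections import Counter, defaultdict

        # Hypothetical past decisions: (group, approved?). Any bias lives in this data.
        history = ([("A", True)] * 80 + [("A", False)] * 20 +
                   [("B", True)] * 30 + [("B", False)] * 70)

        counts = defaultdict(Counter)
        for group, approved in history:
            counts[group][approved] += 1

        def predicted_approval_rate(group):
            # No judgment here, just pattern extraction from whatever data it was given.
            c = counts[group]
            return c[True] / (c[True] + c[False])

        print(predicted_approval_rate("A"))  # 0.8
        print(predicted_approval_rate("B"))  # 0.3

    Whether 0.8 versus 0.3 is fair is a question for the humans who collected the data and decided to automate the interpretation; the algorithm has no opinion.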

