The return of heretical thought?

A couple of weeks ago I highlighted Robin Hanson’s ideas about alien civilizations. A big part of Hanson’s reasoning involved the Fermi paradox, the question that, if alien civilizations are common, then where is everyone? It seems like Earth should have been colonized long ago. Hanson focused on the number of difficult evolutionary filters life has to pass through to evolve into a civilization-producing species, and concluded that such civilizations are very rare, although not so rare that we’re alone in the observable universe.

But there’s always been a more disturbing possibility: that it’s common for intelligence to evolve, but it’s in the nature of such intelligence to destroy itself. In that case the “great filter”, the thing that makes civilizations uncommon, would be ahead of us rather than behind us, and the galaxy would be filled with the remains of dead civilizations. It’s a possibility that, strictly speaking, we can’t rule out.

The question then is: what might that future filter look like? In a pretty disturbing article, Nick Bostrom and Matthew van der Merwe discuss the possibility that it could be a technology of some kind, one that makes our destruction likely.

Bostrom and van der Merwe note that we somewhat lucked out with nuclear weapons, since they’re not easy to build, requiring a lot of knowledge and exotic materials. They ask what might have happened if the technology had been easy to construct from common household items. Society might have suddenly found those items inexplicably banned, with governments unwilling to discuss why.

The problem, though, is that knowledge is hard to keep secret. If such a technology were ever found, they argue, the only solution might be to set up a worldwide monitoring regime, in which authorities could watch every citizen to make sure they weren’t trying to construct the dangerous weapon. We can imagine such an intrusive and pervasive system might only be used for these kinds of existential threats, but the temptation to use it for other things by authoritarian-minded governments would be overwhelming.

At this point, we might be thinking we can only hope such a technology never materializes and forces this kind of thinking on us. But Bostrom and van der Merwe point out that setting up a worldwide monitoring program would take time, years or decades, time we might not have if such a threat were suddenly discovered. The implication is that maybe we should think about setting it up now.

Personally, this makes all kinds of dystopian science fiction scenarios run through my head. Perhaps the most disturbing one is that it might end the era of free thought and open inquiry. In the Middle Ages, certain kinds of thought were routinely banned and considered heretical. The wrong kind of thinking might lead to widespread sin and bring God’s wrath down on all of us. For the good of all, vigilance against heresy had to be maintained, and heretics hunted down and punished. And the punishments needed to be severe to discourage others from engaging in that kind of activity.

Bostrom and van der Merwe paint a possible future where this kind of thinking might return. But unlike the old superstitious thinking, there would be a real danger here in pursuing certain ideas. Letting people pursue them really would put everyone in danger.

A key question, of course, is whether there will ever be a technology like this. As Bostrom and van der Merwe point out, nuclear weapons are difficult to build and use, which has largely kept them in state hands. And almost as soon as anyone had such weapons, a competing power developed them, resulting in a standoff. Mutually assured destruction wasn’t a very comforting deterrent, but it apparently worked.

Will there ever be a technology easy to acquire but so destructive that no one can build a defense against it? Note that the last part of that question is crucial. It’s easy to identify destructive technologies (starting with fire), but harder to identify ones with no possible defense. Bostrom’s favorite boogeyman, artificial intelligence, is arguably addressable with more artificial intelligence, in this case AIs to compete against any out-of-control AIs.

Even a biological weapon might, eventually, be addressable with nanotechnology that can neutralize it. But until that point is reached, we already live in a world where certain biological samples may only be stored in certified, heavily secured locations. And if you make too much of an effort to acquire certain materials, you run the risk of raising flags that attract the attention of law enforcement or national security agencies. To some extent, we already live in a version of the world Bostrom and van der Merwe describe.

So, the parameter space of what they’re warning about seems smaller than they imply. But it does exist. And if a technology along those lines were ever discovered, we might still find ourselves in a period where there was no effective protection against it, and society’s only option was to do what they describe.

Ironically, if we wanted to ensure such a regime was unlikely to be abused, artificial intelligence might actually be part of the solution. It might be easier to ensure an AI never abused the program the way humans almost certainly would. Although the existence of AI would also enable those humans to abuse it far more effectively than anyone could today, so it’s a double-edged sword.

What do you think? Is this kind of technology inevitable? If so, should we institute a monitoring program along the lines of what Bostrom and van der Merwe describe? Or are we just working ourselves into anxiety over a hypothetical threat when we have enough real ones to worry about already?

29 thoughts on “The return of heretical thought?”

  1. If, by some astronomically low-odds means, other intelligences have arisen in the cosmos, it might be advantageous to not use our current civilization as a model for comparison. Rather, I suggest we take, say, the Sumerians and rework the past to create a fortuitous path of technological development for them. I believe our fossil fuel driven existence is an advantage rarely, if ever, duplicated in the cosmos. If we were to use such an ancient civilization and allow them to progress through the ages, I think they might represent a more common model of what other Tech-ETs might look like.

    What might be different (from us) is that the pace of development would be much slower. They might be constrained to a Renaissance or 1700s-esque frequency of discovery. If we assume they were a more peaceful species, avoiding crippling war and a dependency on slavery to drive progress, they might well learn, as they go, to build safety into their technology. We, on the other hand, given the explosion of nearly free energy, food, and population, pushed out, hellbent, without any thought as to the implications of technological advancement. We, I believe, are unique in this scenario. A modern Sumerian civilization would have had to figure out how to leverage green energy for their entire existence, eventually transitioning, we assume, to nuclear energy and whatever *next* technologies might be possible. But, again, much more slowly.

    Regarding AI… From my viewing of Robert Miles’ videos, I’ve come to the conclusion that AI will not be containable. AGI, once created with sufficient capability, will not be a thing we can restrain. By its very nature of self-direction, it will do as it pleases. Given that such an intelligence would figure out how to explore the galaxy and universe, and that it would choose to do so (resources and energy?), we’re back to the Fermi Paradox. Would an AGI eventually destroy itself? Would they all?

    We are unique in the Universe.

    1. You might have to go back further than the Sumerians to do that type of analysis. From what I recall, they made pretty heavy use of the crude oil that, in their time, was readily available in the surrounding regions. The earliest cuneiform writing probably happened under an oil lamp, and I suspect the oil was an ingredient in a lot of their processes.

      On being unique, I don’t know. Most fossil fuels are the result of biological processes. A lot of people think about dinosaurs when this is mentioned, but most fossil fuels come from ancient single celled organisms. In that sense, they might be inevitable in any long lasting biosphere. Although, maybe not. If we imagine a world without tectonic processes, it might turn out that such fuels never get buried before something consumes them. If so, it’s conceivable that that itself might end up being a major filter. (This assumes land life is possible without tectonic processes, which is itself not necessarily a given.)

      Can’t say I’m a fan of the Rob Miles videos. It’s very close to Bostrom’s type of reasoning, which I think is excessively alarmist. That’s not to say there are no dangers with AI, but it’s more complicated than a lot of the AI alarmists have historically claimed. Recently, some, like Max Tegmark, have become more nuanced in their concerns, which I think is an improvement. Personally, I’m more worried about what humans will do with AI than what the AI themselves will do.

      1. Sumerians had access to fossil fuels in industrial quantities? I figured animal fat and some grain/seed oils maybe. (Sesame seeds may have been the first foodstuff processed for their oil.) But beyond the rare tar pit or coal seam, I didn’t know the area was thought to leverage such fuels.

        The automation of most labor will no doubt be the crowning victory of vertical AI. If society doesn’t come to grips with the inequity that automation induces, that may lead to a cultural dead-end.

        One thought came to mind regarding an advanced Sumerian civilization: LEO-and-beyond space travel relies upon cheap fossil fuels. I’m guessing that without some form of equally abundant liquid energy, any Tech-ET would be planet-bound. I suppose with nuclear power (fission/fusion) they could crack water and use liquid hydrogen. But I can imagine the discussions such a society would have trying to direct such seemingly superfluous usage toward that goal. “Wha’d ya mean ya wanna burn all that go-go juice in a flying bomb? We can barely afford to run our tractors with what we have.”

        1. I’m not sure about industrial quantities, but the ancient Sumerians did use oil from the ground, not just from current organic sources. It’s been a while since I read about it. It was something I remember being covered in my college world history course, some 35 years ago. Googling around I see a few references to it, particularly stuff like this: http://www.dnr.louisiana.gov/assets/TAD/education/BGBB/2/ancient_use.html

  2. For this post I have several unconventional positions.

    First, it makes perfect sense to me that other civilizations wouldn’t be here and that we’d never even detect them. Though they surely do emerge from time to time, I suspect that when advanced enough they tend to kill themselves off relatively fast (geologically, that is). Though weapons may be an issue sometimes, the main downfall I think should be that they subvert their evolved purpose, finding happiness through direct stimulus (“neurological” or whatever) rather than through genuinely fulfilling lives. Thus they should come to depend upon machines, which they’re no longer able to understand or build, to keep them healthy. So when that goes wrong, the civilization should end. Of course they’d surely understand this danger and try to guard against it, though at the cost of greater instant happiness. Anyway, from here I’d think that civilizations should tend to last only thousands rather than millions of years: geological blips on the map. After us I suspect that new civilizations will emerge, with the time frame largely dependent upon which species we permit to go on.

    Colonizing other parts of the universe? I suspect that this won’t happen, given that what’s out there is so different from what’s here. We can of course build vessels which sustain us and our machines for a while out there, but we should never be able to use Earth-independent resources to sustainably survive for long, either as ourselves or as human-created machines. So I think space exploration will remain disposable voyaging, and often not fit for biological substrate.

    Regarding the main focus of the post however, or that governments may need to monitor us so that some of us don’t destroy us all, I’m surprised that Bostrom and van der Merwe didn’t consider what’s happening in China. Not only are the people there becoming more and more monitored, but direct incentives and penalties are being instituted which encourage people to behave as their government wants them to. Each person has what’s called a social credit score, which is based upon how well a given person follows the rules as assessed by government monitoring. With such coordinated direction which permits individuals to seek their own happiness under specified incentives and penalties, I suspect that China will become more prosperous per capita than those under western systems. And if for example Chinese citizens generally end up with twice the living standard of those in the west, I’d expect many here to give up tremendous liberties to potentially assimilate over there. So eventually we may all be heavily monitored, not to mention actively controlled by our governments.

    1. On civilizations failing, is this a variation of your pleasure machine hypothesis? Or just a more general statement about what might happen? If the latter, I actually do think that’s a danger. We could see a situation develop where every human need is so pampered by machinery that most, if not all, ambition gets lost.

      Charlie Stross explores a similar scenario in his novel Saturn’s Children. But Stross has a new robot civilization continuing after humanity has gone extinct. I actually think something like this scenario, and a spiritual reawakening against it, was more what Frank Herbert originally had in mind for his Butlerian Jihad, rather than the slave revolt his posthumous collaborators came up with.

      What gives me some hope that this isn’t necessarily inevitable is the general resistance to falling into drug-induced stupors. Of course, you can argue that we don’t have the infrastructure to really allow people to do that yet. Once we do, I don’t doubt that some portion of humanity will disappear into whatever stupor drugs or other technology allow. But I can also see some having Herbert’s Butlerian impulse and resisting it. Only time will tell.

      I do agree that colonizing the universe is unlikely to be something that biological humans do. But whether it’s something our AI or uploaded progeny do may be another matter. Once we have machinery that can reach another star and reproduce itself, I think that dramatically reduces the probability our civilization dies out in every form. Of course, that assumes such machinery is possible. Again, only time will tell.

      We’ve discussed China’s social credit system before. All I’ll note here is that China is probably going to see a lot of success in the next few decades. It has the largest population in the world and a lot of natural resources. I think its success will be more about that than the social credit system, or its system of government. Although I don’t doubt people will credit either or both for their economic progress and ignore the broader factors.

      1. Mike,
        The thing about merely having machines satisfy every human need in the standard sense of today is that a naturally desired lavish life would tend to demand tremendous resources. Even with amazing machines, we simply should not all be able to sustainably live anything like Bill Gates does. But we could theoretically all sustainably exist in meager facilities with basic sustenance and such. If an amazingly happy life could be had in such a place because some electrodes were exciting the right brain parts, this could be how things go.

        Of course there would be tremendous resistance against such a repugnant situation. If happiness does happen to be what ultimately drives sentient forms of life however, this resistance may be assessed as little more than “circling the drain”, or at least in terms of surviving for geological lengths of time.

        I recall reading a while back about some researchers in the 70s who got hold of some troubled gay prisoners, fitted them with brain electrodes for pleasure, and then tried to make them straight by stimulating them directly while they were in sexual situations with a woman the researchers provided. Apparently it was amazing how strongly these people wanted the electrical stimulation, much like what Jaak Panksepp documented in experiments where mice were wired this way to levers for feeling good. As you know, they’d work those levers endlessly, without food or water, until exhaustion prevented more.

        This form of pleasure does at least seem more healthy to me than through chemicals, though I can’t say why I haven’t yet heard of rich suicidal drug addicts having their doctors wire them up this way. But note how few resources such happiness would demand when weighed against what it takes to satisfy people who have tremendous resources. Rich people often require a standing army of servants. Conversely this situation could be sustainable.

        Regarding space colonization, yes surely biology is not right for it. But could we build machines that reach another solar system and propagate themselves? I’m not against the idea of machine consciousness given the right physics based mechanisms, but even then it’s hard for me to imagine these machines finding what they need to carve out functional environments and propagating themselves by means of local resources.

        I can understand why people would doubt that China’s SCS will create tremendous wealth. Surely only “the free” can be productive. I suppose this is a lesson from the west’s victory over the Soviets. Their failure was not the result of totalitarianism however, but rather socialism. Chinese officials not only understand this, but have tools in motion to reward and penalize their people in ways that should be virtually unavoidable. The goal is for the Chinese people to become as productive as possible, and thus as rich as possible. While we should continue to have lots of crime, poverty, social strife, and injustice, their citizens should tend to do whatever their government influences them to. We’ll see what happens for a while, though in these early days of the SCS I predict that their crime and dissent rates will plateau and decline, while their education and productivity rates will rise substantially.

        1. Eric,
          On crime plateauing and whatnot, we’ll see. But have you considered the implications of the two scenarios you’ve described?

          Let’s say the Chinese have in fact worked out the right combination of incentives for a healthy productive society. And now we have the technology to make people happy under arbitrary circumstances. Don’t you think they’d be tempted to use that technology to embed the SCS within people’s skulls? Particularly if a lot of people were already doing the embed for entertainment?

          Maybe the future is we’re all delightfully happy slaves.

          1. That’s a good point in the long run Mike, though for now the Chinese have bigger fish to fry. Their goal is to make their people highly powerful through education and experience, and at the same time highly deferential to the state. They don’t need to directly “juice” their citizens right now, since there should already be a natural drive to be happy that the government can use. Not only should they tend to punish people for recreational drug use, but certainly people who wire up their brains for pleasure.

            If/when their social monitoring and engineering does create extremely powerful and wealthy citizens who are even grateful to their government for giving them the opportunity, they might indeed experiment with government controlled direct pleasure manipulation. Perhaps they find that certain people with less fulfilling jobs need more incentive to productively live such lives?

            Good point about us all potentially becoming delightfully happy slaves. I can see how a prime directive for government would be long term sustainability and so perhaps they wouldn’t permit their citizens to lose critical skills. Thus perhaps humanity could survive indefinitely under such a system?

            I was wondering if you were going to tell me more about this brain stimulation business. I looked this up again and here’s a reasonable article about the doctor in charge: https://www.google.com/amp/s/www.vice.com/amp/en/article/ezzvem/wireheading-1950s-wetware-hacking

          2. Can’t say I know too much about that specific history. I do know deep brain stimulation is a growing thing, but the way this article talks about it makes my skeptic meter jump somewhat. From everything I’ve read, no brain intervention works consistently like that. It’s always more complicated than what these types of articles portray.

          3. Yes it surely is more complicated Mike. And it must be that such complications have prevented this sort of service from being commercially exploited so far, unlike recreational drugs. Science should progressively work the bugs out though as more is learned.

  3. IMHO, although our universe is not infinite, it is certainly big enough that, for all intents and purposes, it would behoove us to treat it as infinite. If the universe were infinite and I were to throw a bag of coins into the air, not only would the odds be that, at least once, all the coins would land on their edges and remain standing, but that would happen an infinite number of times over an infinite period of time.

    I doubt that there is any such thing as a singularity in our universe. By “singularity”, I mean a one-off, something of which there is only one. Other than the Big Bang (and maybe not even that), it doesn’t seem likely to me that there is a single thing in the universe of which there is only one instance. On the contrary, every single thing in our universe seems to be repeated over and over and over again. We know there is (somewhat) intelligent life in the universe since we see it on earth. It seems highly presumptuous to assume that our kind and level of life does not appear throughout the universe, time and again, in every galaxy and an uncountable number of solar systems. The fact that we haven’t yet encountered evidence of alien civilizations in the vastness of space, during the narrow window of our technological capability to see or hear sources outside our solar system, is not sufficient for us to conclude there is no intelligent life besides what we have on this planet.
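
    (To put a number on that coin intuition, with made-up figures, a minimal sketch: for any per-trial probability p greater than zero, the chance of at least one success across N independent trials is 1 - (1 - p)^N, which climbs toward certainty as N grows, no matter how tiny p is.)

        # Illustrative only: p is an arbitrary, hypothetical per-trial probability.
        p = 1e-12  # chance that a single toss lands every coin on edge
        for n in (10**12, 10**14, 10**16):
            print(f"N = {n:.0e} trials: P(at least once) = {1 - (1 - p) ** n:.4f}")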

    1. I agree. The idea that there is no other intelligence anywhere else in the universe is implausible. The only question is how far away our nearest neighbor is, either in space or time. It might turn out that that neighbor is far enough away that we’ll never encounter them. Or it may turn out that there was once one within a few hundred light years, but we missed them in time by a million years.

      Only time will tell.

      1. Sadly, even Time may choose to keep that secret to itself. I’ve long wished that we would find incontrovertible evidence for an independent genesis of life elsewhere before I turn to dust. I’d even settle for finding an independent Tree-Of-Life – no matter how primitive – here on earth, perhaps deep in some oceanic trench, or an independent TOL that appeared long ago and is now extinct. That (for me) would seal the deal that life is a robust process that almost certainly pervades the universe.

        1. In the short term, our best bet is probably finding some kind of life elsewhere in the solar system, maybe in one of the underground oceans that seem to be all over the outer solar system. If we find that, it would mean life is pervasive in the universe.

          We might also see signs of life in the light reflected off exoplanets, but that wouldn’t be as conclusive. There would always be doubt that maybe we had overlooked some abiotic process that could produce what we’re seeing. Still, it’d be better than nothing.

  4. You have a good point about defense. Between comparable rival powers, defense is harder than offense; as Stanley Baldwin said, “the bomber will always get through.” But if we are worried about an individual lunatic vs their own society, the rival powers are not comparable.

    To destroy all of humanity with something like a nuke-on-steroids, you would need to release an enormous amount of energy suddenly. I think we have good reason from physics to suspect that no such sources of energy are readily available. (If large amounts of antimatter become a household item, feel free to make fun of me for being such a fool.)

    Then there are self-reproducing weapons, notably bioweapons and AI. For those, self-reproducing defenses are the obvious go-to. As you say, we can hope to make friendly AIs to keep rogue AIs from getting too far. But this has its own catch: friendly to whom? To the U.S., for example, or to China, or to Russia? An arms race for AI might lead all or most of these world powers to cut corners on safety. Bostrom and van der Merwe discuss a world-government or powerful association of governments to avoid such dangerous competitions. Perhaps they’re right, but that idea makes me nervous too.

    For bioweapons, you suggest nanotech. But nanotech that can fight viruses seems a very long way off. I just hope it takes at least that long for biohackers to get 3D virus-printers along with software that designs super contagious and deadly variants of Ebola or Nipah virus.

    1. The individual lunatic is the problem. If any nut can get hold of a civilization-destroying technology, we’re in trouble, because mutually assured destruction obviously isn’t guaranteed to work. So hopefully you’re right about the physics. It’s worth noting that if the physics were trivial, we might have encountered something like it occurring spontaneously in nature. The fact that we haven’t might be encouraging.

      I don’t know if you watch the show The Expanse, but it makes clear something that is often ignored in science fiction: any interesting space capability puts weapons of mass destruction within easy reach of just about anyone, due to simple kinetic energy. Anyone with an advanced space-based capability can throw rocks at a planet and kill millions or billions. (The show also posits a defense network to detect and take out such rocks, but it’s sabotaged. And with enough of a speed buildup, it’s not clear any network would be impenetrable.)
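
      To put a rough number on that (a back-of-the-envelope sketch; the rock size and speed are illustrative assumptions, not figures from the show): kinetic energy is ½mv², so at interplanetary speeds even a modest rock arrives carrying nuclear-scale energy.

          import math

          # Illustrative inputs, not canon from the show: a ~10 m rocky asteroid
          # arriving at ~30 km/s.
          diameter_m = 10.0
          density_kg_m3 = 3000.0                # typical rocky asteroid density
          speed_m_s = 30_000.0

          volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3
          mass_kg = density_kg_m3 * volume_m3          # ≈ 1.6e6 kg
          energy_j = 0.5 * mass_kg * speed_m_s ** 2    # ≈ 7e14 J
          print(f"≈ {energy_j / 4.184e12:.0f} kilotons of TNT")  # 1 kt TNT = 4.184e12 J

      That works out to roughly 170 kilotons, an order of magnitude beyond Hiroshima, from a rock small enough to be hard to track; scale up the size or speed and civilization-level energies come quickly.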

      On nanotech, it depends on how we define that technology. The RNA-based vaccines are arguably nanotech, and they were developed within weeks of the worldwide outbreak. The only reason they weren’t distributed sooner was the extended testing period. But in the case of an active bioweapon, I suspect the testing would have been kept much shorter.

      Of course, what we’d really like to have are nanorobots that can take on any unexpected biological weapons. Those are still a way off, but probably not as far off as we might assume.

  5. The proposed imposition of a worldwide, all-encompassing surveillance technology would itself provide the trigger for individuals to seek and use the type of cheap, simple weaponry you describe. Some would even embark on a suicidal, doomsday path, unafraid of and even welcoming the destruction of humanity. Perhaps a more likely scenario is the more gradual imposition of surveillance technology that is occurring now. Each step in this graduated process would (if it is gradual enough) enculturate the practice for a majority, but I imagine each step would also produce a growing minority of resisters.

    1. You might be right about the gradual imposition. Recently on a podcast I was listening to, someone noted the common conspiracy thinking among some anti-vaxxers: that Bill Gates has secretly put chips in the vaccines, which will be embedded in anyone who receives them. They noted that, aside from being ridiculous, there’s little need to implant chips to track us, since most of us already carry around phones that can do the same thing.

      Of course, our phones can’t tell if we’re trying to craft a weapon of mass destruction…yet.

  6. Hi Mike,

    It’s an interesting idea–the question of how concerned we should be about the fate of the world resting in the hands of anyone with access to a Walmart–but this one struck me as silly to a certain extent. Almost a waste of intellectual resources. I pretty much align with your notion that we have bigger problems for the moment. And I wouldn’t personally be a fan of using this possibility as a reason to institute more surveillance on people around the world. As Jim noted, this kind of thing has the potential to create the future it is intended to avoid.

    Michael

    1. Hi Michael,
      That’s the thing. Even if it doesn’t create that future, we’d still be living in a maximum surveillance state, probably a totalitarian one, on the possibility that something dangerous might eventually come along. There’s something to be said for thinking about the possibility, but I have a hard time seeing it as productive to prioritize it right now, or to act on it at all until the possibility seems more likely.

  7. “We can imagine such an intrusive and pervasive system might only be used for these kinds of existential threats, but the temptation to use it for other things by authoritarian-minded governments would be overwhelming.”

    My gut reaction is that this level of authoritarianism would likely cause the very disaster it was trying to prevent. There would be so much tension and social unrest. There would be some sort of resistance, and the resistance fighters would, inevitably, start building whatever weapons they could, including that easy-to-construct doomsday device that the government was trying to prevent anyone from constructing.

    1. You might be right. On the other hand, we were raised in a culture that values privacy. I’m not sure that’s necessarily a cultural universal. For example, most hunter-gatherers are used to living their whole lives within sight of others, including doing things we might blanch at, such as sex and defecation.

      Someone above suggested it could happen slowly. If so, we might slide into it more easily than we can currently conceive.

  8. Mike, you’re right, this article is right up my alley, thanks for the link.

    We don’t really need much futuristic speculation here.

    1) We have no credible plan for getting rid of nuclear weapons. 2) And there is no credible reason to assume that we can keep these weapons around forever and never use them. 3) So, some form of nuclear war is coming.

    The question is, what form will that event take? The ideal outcome would be an event that is large enough to wake us up, but not so large as to collapse the entire system. That’s the best we can hope for.

    You write, “Mutually assured destruction wasn’t a very comforting deterrent, but it apparently worked.”

    MAD will work until the day that it doesn’t. MAD is built upon the assumption that human beings are rational. If that were true, we would never have mass-produced these weapons in the first place. Imagine this: Kim Jong-un gets diagnosed with terminal cancer. What does he have to lose? He can die a victim, or he can die a “hero” in his own mind. MAD becomes irrelevant.

    Also, MAD is worthless against mistakes. On multiple occasions both the US and Russia have come within minutes of launching their arsenals BY MISTAKE. For example, in one case somebody at NORAD mistakenly put a training tape in the system, and for precious minutes the entire national security apparatus thought they were witnessing an incoming Russian first strike. The same kinds of errors have occurred on the Russian side.

    It’s on top of this insanity that we are piling ever more, ever larger powers, at an ever-accelerating rate.
