A couple of weeks ago I highlighted Robin Hanson’s ideas about alien civilizations. A big part of Hanson’s reasoning involved the Fermi paradox, the question that asks: if alien civilizations are common, then where is everyone? It seems like Earth should have been colonized long ago. Hanson focused on the number of difficult evolutionary filters life has to pass through to evolve into a civilization-producing species, and concluded that such civilizations are very rare, although not so rare that we’re alone in the observable universe.
But there’s always been a more disturbing possibility: that it’s common for intelligence to evolve, but in the nature of such intelligence to destroy itself. That would mean the “great filter”, the thing that makes civilizations uncommon, is ahead of us rather than behind us, and that the galaxy is filled with the remains of dead civilizations. It’s a possibility that, strictly speaking, we can’t rule out.
The question then is, what might that future filter look like? In a pretty disturbing article, Nick Bostrom and Matthew van der Merwe discuss the possibility that it might be a technology of some kind, one that might make our destruction likely.
Bostrom and van der Merwe note that we somewhat lucked out with nuclear weapons, since they’re not easy to build, requiring a lot of specialized knowledge and exotic materials. They ask what might have happened if the technology had been easy to construct using common household items. Society might have suddenly found those items inexplicably banned, with governments unwilling to discuss why.
The problem is that keeping knowledge secret is notoriously difficult. If such a technology were ever found, they argue, the only solution might be to set up a worldwide monitoring regime, where authorities could watch every citizen to make sure they weren’t trying to construct the dangerous weapon. We can imagine such an intrusive and pervasive system being reserved for these kinds of existential threats, but the temptation for authoritarian-minded governments to use it for other things would be overwhelming.
At this point, we might be thinking we can only hope such a technology never materializes and forces this kind of thinking on us. But Bostrom and van der Merwe point out that setting up a worldwide monitoring program would take time, years or decades, time we might not have if such a threat were suddenly discovered. The implication is that maybe we should think about setting it up now.
Personally, this makes all kinds of dystopian science fiction scenarios run through my head. Perhaps the most disturbing one is that it might end the era of free thought and open inquiry. In the Middle Ages, certain kinds of thought were routinely banned and considered heretical. The wrong kind of thinking might lead to widespread sin and bring God’s wrath down on all of us. For the good of all, vigilance against heresy needed to be maintained, and heretics hunted down and punished. And the punishments needed to be severe to discourage others from engaging in that kind of activity.
Bostrom and van der Merwe paint a possible future where this kind of thinking might return. But unlike the old superstitious thinking, there would be a real danger here in pursuing certain ideas. Letting people pursue them really would put everyone in danger.
A key question, of course, is whether there will ever be a technology like this. As Bostrom and van der Merwe point out, nuclear weapons are difficult to build and use, which has largely kept them in state hands. And almost as soon as anyone had such weapons, a competing power developed them, resulting in a standoff. Mutually assured destruction wasn’t a very comforting deterrent, but it apparently worked.
Will there ever be a technology easy to acquire but so destructive that no one can build a defense against it? Note that the last part of that question is crucial. It’s easy to identify destructive technologies (starting with fire), but much harder to identify ones with no possible defense. Bostrom’s favorite boogeyman, artificial intelligence, is arguably addressable with more artificial intelligence, in this case AIs to compete against any out-of-control AIs.
Even a biological weapon might, eventually, be addressable with nanotechnology that can neutralize it. But until that point is reached, we already live in a world where certain biological samples are only allowed to be stored in certified, heavily secured locations. And if you make too much of an effort to acquire certain materials, you run the risk of raising flags that attract the attention of law enforcement or national security agencies. To some extent, we already live in a version of the world Bostrom and van der Merwe describe.
So the parameter space of what they’re warning about seems smaller than they imply. But it does exist. And if a technology along those lines were ever discovered, we might still find ourselves in a period where there was no effective protection against it, and society’s only option was to do what they describe.
Ironically, if we wanted to ensure such a regime was unlikely to be abused, artificial intelligence might actually be part of the solution. It might be easier to ensure an AI never abused the program the way humans almost certainly would. Although the existence of AI would also enable those humans to abuse it in ways far more effective than anything possible today, so it’s a double-edged sword.
What do you think? Is this kind of technology inevitable? If so, should we institute a monitoring program along the lines of what Bostrom and van der Merwe describe? Or are we just working ourselves up into anxiety about a hypothetical threat when we have enough real ones to worry about already?