The problems with ensuring humanity’s survival with space colonies

Artist impression of a Mars settlement with cutaway view (Photo credit: Wikipedia)

Stephen Hawking, as he has done before, expresses a common sentiment: that we need to colonize space in order to survive.

Humanity should go and live in space within the next 1,000 years, or it will die out, Stephen Hawking has warned.

“We must continue to go into space for the future of humanity,” Mr Hawking said. “I don’t think we will survive another 1,000 years without escaping beyond our fragile planet.”

…In February, he said that humans should colonise other planets as “life insurance” for the species, and that doing so could be the only way of ensuring that humanity lives on.

My first reaction to this is that if we’re looking to space colonies to ensure the survival of the human race, we have a long way to go.  It seems to me that the first goal is simply to create a viable, long-term closed ecological system that can support humans.  As I understand it, every experiment attempting to do this so far has failed.  I think we need to succeed convincingly at that before attempting it in habitats millions of miles away, like on Mars.  Until we do, any space colony is going to be crucially dependent on a thin and fragile lifeline from Earth’s biosphere.

It’s also worth noting that, once we can create a closed ecological system, we might be better off creating colonies here on Earth.  A closed, hardened underground habitat would be a lot easier to build and maintain, and would probably do just as much to ensure humanity’s survival.

Anyone who thinks off-world colonies are a substitute for fixing our environmental and social problems doesn’t understand the obstacles involved in any foreseeable colony.  Mars, the best candidate right now, is cold and desolate in a way that makes Antarctica look like the Garden of Eden.  Add an atmosphere with no oxygen and very low pressure, and we have an environment that humans can’t exist in without spacesuits.  Add radiation exposure from Mars’s lack of a magnetic field, which would force humans to stay underground most of the time, and the idea of consigning people to live there for the rest of their lives starts to look a bit sadistic.

(None of this is to say that I think we shouldn’t have researchers and scientists on Mars, just as we currently do in Antarctica.  But no one is really tempted to colonize Antarctica.)

Looking at the longer term, people talk about things like terraforming.  But I strongly suspect that, by the time we have the technology and power to actually have a chance at terraforming an environment, we’re going to find that it’s a lot cheaper and easier to modify ourselves for the environment than to modify the environment for us.  We will likely colonize other worlds, but doing so will probably force us to give up the evolved forms that are fine-tuned for Earth’s biosphere and location.

At the end of the lecture, Hawking encouraged his audience to “look up at the stars and not down at your feet”.

I’ve written before about the immense difficulties in any foreseeable interstellar travel.  In short, FTL (faster than light) travel, a common plot device in science fiction, would most likely require new physics.  But before you let that bother you, consider that even getting to an appreciable percentage of the speed of light will require appalling amounts of energy.  (Think in terms of fuel equivalent to the mass of a planet possibly being necessary to accelerate a decent-sized manned ship to, say, 10% of the speed of light.)
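To get a feel for how quickly those numbers blow up, here’s a rough back-of-the-envelope sketch using the Tsiolkovsky rocket equation.  The ship mass and exhaust velocities are illustrative assumptions on my part, not figures from any study, and the calculation ignores relativity and the need to decelerate at the other end.

```python
import math

C = 299_792_458.0   # speed of light, m/s
DELTA_V = 0.1 * C   # target cruise speed: 10% of c (acceleration only, no deceleration)
DRY_MASS = 1.0e6    # assumed dry mass of a "decent sized" crewed ship: 1,000 tonnes

def mass_ratio_log10(exhaust_velocity):
    """log10 of the Tsiolkovsky mass ratio m0/m1 = exp(delta_v / v_e)."""
    return (DELTA_V / exhaust_velocity) / math.log(10)

# Rough, assumed exhaust velocities for two propulsion concepts.
for name, v_e in [("chemical rocket (~4.5 km/s exhaust)", 4.5e3),
                  ("fusion drive (~5% of c exhaust)", 0.05 * C)]:
    exponent = mass_ratio_log10(v_e)
    print(f"{name}: initial/final mass ratio ~ 10^{exponent:.1f}")
    if exponent < 300:  # only quote the fuel mass when the number fits in a float
        fuel = DRY_MASS * (10 ** exponent - 1)
        print(f"  -> fuel for a {DRY_MASS:.0e} kg ship: ~{fuel:.2e} kg")
```

With chemical propulsion the mass ratio is so absurd that no amount of fuel, planet sized or otherwise, would do; even an optimistic fusion drive needs thousands of tonnes of propellant just to get a 1,000 tonne ship up to speed once, with nothing left over for slowing down.  The exact figure depends entirely on the propulsion assumed, hence the “possibly” above, but the direction is clear: the energy requirements are appalling.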

Our most likely path to the stars will be microscopic probes, with enough intelligence to bootstrap an infrastructure at the destination solar system using local resources, and to transmit their findings back to us.  It’s hard to see human interstellar travel being anything but the most extravagant of vanity projects, unless mind uploading of some type or another becomes possible.

Stephen Hawking has repeatedly warned of the danger that humanity finds itself in, as a result of the rise of artificial intelligence and the dangers of human aggression and barbarity.

I’ve written repeatedly about why I think the dangers of AI, although real to some degree, are vastly overblown.  I won’t reopen that debate here.  The only thing I’ll point out is that if AIs are a danger on Earth, they’d also be a danger in a space colony, or anywhere else we’d go and be tempted to use them.

On the dangers of human aggression and barbarity, if we did solve the problems of closed ecosystems and had colonies around the solar system, and humanity reached a point where it destroyed Earth’s biosphere in a war, it’s not clear to me why such a war would stop there.  It’s extremely difficult to protect yourself from a space-based attack.  The attacker can always go further out to accelerate an asteroid or something similar at you, letting kinetic energy wreak the destruction.  Space colonies might slightly increase the probability that humanity survives such a war, but not nearly as much as people like to think.
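For a rough sense of scale on that kinetic energy point, here’s a quick calculation, using a rock size and impact speed I’ve assumed for illustration rather than anything from the quoted articles.

```python
import math

DENSITY = 3000.0        # kg/m^3, typical rocky asteroid (assumed)
RADIUS = 250.0          # m, i.e. an assumed 500 m diameter body
SPEED = 20_000.0        # m/s, a plausible impact speed (assumed)
MEGATON_TNT = 4.184e15  # joules per megaton of TNT

mass = DENSITY * (4 / 3) * math.pi * RADIUS ** 3
energy = 0.5 * mass * SPEED ** 2

# For scale: the largest nuclear weapon ever tested (Tsar Bomba) was about 50 Mt.
print(f"mass ~ {mass:.2e} kg, impact energy ~ {energy:.2e} J ~ {energy / MEGATON_TNT:,.0f} Mt of TNT")
```

No defensive perimeter is much use against something like that, and the attacker doesn’t even have to build a warhead; they only have to nudge a rock that’s already out there.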

None of this is to say that I think humans shouldn’t colonize space, in the long term.  But thinking that we are doing it to preserve the species is misguided, except in the very broadest of terms and time scales.  (Think human intelligence, in one form or another, surviving the evolution and eventual death of the Sun.)

In the meantime, our best chance of survival, it seems to me, is to address the real issues we have here, because we’re a lot more likely to destroy ourselves than to have nature do it to us.  The threats of nuclear war or terrorism, global warming, biological warfare, or overpopulation worry me a lot more than a species-ending asteroid strike or other mass extinction event, which only happens once every 50-100 million years.  (Not that we shouldn’t do what we can to protect against asteroid strikes.  Even one that doesn’t endanger the whole world can cause a lot of devastation.)

I think the best way to protect against the threats of us destroying ourselves, indeed the only way over the long term, is to give as much of humanity as possible a stake in the success of human civilization.  This involves fighting poverty worldwide, and promoting women’s rights, which will help with the population problem, which in turn helps with just about every other problem.

If we really want to maximize humanity’s long term survivability, that’s where we should start.  The good news is that, when viewed through the broad sweep of history, things are moving in the right direction.  The only question is whether that movement will be fast enough.

Push back against AI alarmism

We’re finally starting to see some pushback against the AI (artificial intelligence) alarmism that has been so prevalent in the media lately.  People like Stephen Hawking, Elon Musk, Max Tegmark, and many others have sounded the alarm.  Given my previous post from last night, I think these alarms are premature at best, and generally misguided.

Now, Rodney Brooks, of Roomba fame, has a post up telling people to chill about AI.

Recently there has been a spate of articles in the mainstream press, and a spate of high profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill.  This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.

…In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch.  It is going to take a lot of deep thought and hard work from thousands of scientists and engineers.  And, most likely, centuries.

The science is in and accepted on the world being round, evolution, climate change, and on the safety of vaccinations. The science on AI has hardly yet been started, and even its time scale is completely an open question.

And this Edge discussion, titled ‘The Myth of AI’, is getting shared around a lot.  I found it a bit long-winded and rambling, but it expresses a lot of important points.

About the only thing I disagree with in these posts is how much they emphasize how far away we currently are from having AGI (artificial general intelligence), as opposed to the specialized AI we have today.  It’s totally true that we are very far away from AGI, but I think comforting people with only that point leaves out the main reason they shouldn’t freak out.

As I’ve written about multiple times, the fear of AI is the fear that it will have its own agenda, similar to how we and other animals typically have our own agendas.  But our agenda is largely influenced by hundreds of millions of years of evolution.  AIs aren’t going to have that history.  The only agenda they will have, the only desires, impulses, etc., will be the ones they are engineered to have.  The chance of them accidentally acquiring the self-actualization agenda that most animals have is infinitesimal.

This is easier to conceive of if we call AIs “engineered intelligences,” whose main agenda will be an engineered one, in contrast with “evolved intelligences,” whose main agenda is typically survival, procreation, and anything that promotes those goals.

Of course, we might eventually have the ability to build an AI to have an agenda similar to ours.  But if we do that, and treat them as anything less than a fellow being, I think we’d deserve whatever happened next.  Luckily, we have no real incentive to design machines that would hate what we want them to do.  We have every incentive to design machines that will love what we want them to do.  As long as we do that, the danger from AI will be minimal.