The Q-Drive and the difficulty of interstellar exploration

I’ve discussed the difficulties of interstellar exploration before.  To get a spacecraft to another star within a human lifetime requires accelerating it to an appreciable percentage of c (the speed of light), say 10-20%.  In general that requires titanic amounts of energy.  (Forget about the common sci-fi scenarios of going into warp drive or jumping through or into hyperspace.  Those are fantasy plot devices with either no science or highly speculative science behind them.)

The mass ratio of fuel-propellant to the rest of the craft, using the most plausible short term option, nuclear pulse propulsion, is something like 10,000 to 1 for a mission that reaches 10% of c and then decelerates at the destination; that is, for every kilogram of spacecraft you want to reach the destination, you’ll need 10,000 kilograms of fuel.  Although multiple stages would help, when we consider everything that would be required to send humans, things start to look pretty bleak.  It’s a little more hopeful with uncrewed probes.

One solution being considered is Breakthrough Starshot.  Use tiny probes with light sails attached, which are accelerated by ground based lasers to 20% of c.  The biggest issues with this plan include the cost and logistics of the ground based lasers, the challenges in successfully miniaturizing the craft, and the fact that there’s no way to slow the probes at the destination, so they’d have to collect what data they could during the few hours they had when flying through the destination system.  And their small size limits their transmitting power, meaning sending back the resulting data would require decades.

Another, older solution, proposed in 1960 by Robert Bussard, is to collect fuel from the interstellar medium.  The Bussard Ramjet (BR) has a tremendous electromagnetic scoop in front of it, which brings in the diffuse hydrogen floating ahead of the craft, compresses it so that it undergoes nuclear fusion, and expels it as propellant.  The idea is that the faster the craft is moving, the more fuel is available to it, and the faster it can accelerate.  The biggest issues with the BR are that the interstellar medium has been found, since Bussard’s proposal, to be far thinner than he believed, and that the drag of the scoop limits its overall effectiveness.

Alex Tolley has a post up at Centauri Dreams discussing a new proposal: the Q-Drive, as put forward by Jeff Greason, chairman of the Tau Zero Foundation.  Like the BR, the Q-Drive uses the interstellar medium, but in a different manner.  Unlike the BR, this craft uses an inert stored propellant: water.  (The water is stored as a giant cone of ice in front of the craft, acting as a shield against interstellar particles.)  The water is ionized and accelerated out the back of the craft, propelling it forward.

What comes from the interstellar medium is the power to accelerate the water.  This involves two large magnets that create a couple of magsails (sails made of magnetic fields), but instead of using them as sails, they function sort of like a wind turbine, in that they collect energy by slowing down (relative to the craft) the passing ionic interstellar matter, transferring the difference in kinetic energy to the drive.  The faster the craft is moving, the more energy collected, the faster it can accelerate the propellant, and the higher the thrust.
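To get a feel for the idea, here’s a rough back-of-the-envelope sketch (my own, not from Greason’s paper) of the power available to a drag-harvesting drive.  The ion density, sail size, and speed below are illustrative guesses, and the formula assumes perfect energy extraction:

```python
import math

# Kinetic-energy flux swept up by an area A moving at speed v through a
# medium of density rho: at most 0.5 * rho * A * v**3 (perfect extraction).

PROTON_MASS_KG = 1.67e-27
C_M_PER_S = 3.0e8

def drag_power_watts(n_ions_per_cm3, area_m2, v_m_per_s):
    """Upper bound on the power harvested from the oncoming ionized medium."""
    rho = n_ions_per_cm3 * PROTON_MASS_KG * 1e6  # ions/cm^3 -> kg/m^3
    return 0.5 * rho * area_m2 * v_m_per_s**3

# Illustrative values: 0.1 ions/cm^3, a 100 km radius magsail, 5% of c.
area = math.pi * (100e3) ** 2
print(f"{drag_power_watts(0.1, area, 0.05 * C_M_PER_S):.1e} W")  # ~8.9e9 W
```

The v-cubed dependence is the whole point: double the speed and the available power goes up eightfold, which is why the drive becomes more effective the faster it goes.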

Tolley’s diagram of the Q-Drive interstellar craft. (Click through for source.)

Notably, the craft has to be initially accelerated to some small percentage of c by other means, such as the nuclear pulse propulsion I mentioned above.  But once that is achieved, that first stage can be jettisoned, allowing the Q-Drive to supposedly then reach speeds around 20% of c.  It can decelerate by pointing the drive in the direction of travel.  (It’s less clear to me how the final stages of deceleration near the destination would work.)

If you’re interested in the details, I recommend reading Tolley’s post.  Tolley is clear that this is something that seems possible in principle, but the practicalities may be another matter.  In particular, there is a question of whether the energy conversion can be efficient enough to make the Q-Drive more effective than just using something like the nuclear pulse rocket for the whole trip.

However, if it does work, or it can be developed into something that works, it may make interstellar exploration far more practical than it currently looks.

If you’re interested in the hard core technical details, check out Jeff Greason’s paper, or his talk on the subject.

Greason’s primary message in the talk is to emphasize the key idea, that drag energy from the surrounding medium (the interstellar medium in interstellar flight, the solar wind in solar system flight) represents an untapped energy source.

It’s been a while since I saw a new idea in this space.  Interstellar travel is a very hard engineering problem.  Hard enough that some scientists think it’s effectively impossible.  It’s nice to see someone make a possible dent in that problem.

Apollo 11 and the lost space age

Buzz Aldrin on the moon.
Image credit: NASA via Wikipedia

I was very young when Neil Armstrong and Buzz Aldrin landed on the moon in 1969, so I have no memory of the landing, and only limited memory of the Apollo program in general.  I think I remember seeing some of Apollo 17 on TV in 1972, the final flight to the moon.  (At the time, my six year old self wondered why a huge rocket left and only a tiny capsule came back.)

For three years, we had men walking on another world.  Shortly after that, we had a space station in orbit (Skylab), and the space shuttle was rumored to be around the corner.  There was a sense that we were on the verge of a new age.  The movie 2001: A Space Odyssey conveyed a future of rotating space stations in orbit with hotels and restaurants and large scale bases on the moon, with regularly scheduled flights between it all.  In the early 1970s, this vision of the future seemed inevitable.

History has obviously not been kind to that vision.  Skylab had a lot of problems, it was several years before the space shuttle was operational, and it was never the economical vehicle, with flights every two weeks, that it was sold to be.  Indeed, the space shuttle is now generally regarded in the space industry as having been a gigantic waste of time and money.

In the early 1970s, the sentiment was that we’d be on Mars by the mid to late 1980s.  Over the years, Mars has steadily been moved back.  In the 1990s, I remember reading that it would be in this decade.  Today we talk about Mars in the 2030s.  It always seems to be 20 years in the future.

From the late 70s until the early 2000s, I had the most common attitude of space enthusiasts, that NASA after Apollo was hopelessly incompetent and simply lacked vision.  As time went on, I came to recognize that their budget, adjusted for inflation, was nothing like it had been in the Apollo years.  I then became frustrated that space was not a priority of the government.

In retrospect, despite being an incredible technological, organizational, and heroic achievement, it’s now clear to me that the Apollo program was also a gigantic cold war public relations project.  We went to the moon to get there before the Russians, primarily because we were upset that they’d gotten to space first.  Everything was orchestrated to put a man on the moon with an American flag behind him.  Apollo 11 was the culmination of that effort that had lasted throughout the 1960s.

This is exemplified by the fact that we went to the moon, but did so with very little thought to building any kind of infrastructure to stay there.  Apollo accomplished its main goal, a demonstration of American technological supremacy, the superiority of capitalism over communism.  From that perspective, once the goal was accomplished, the collapse of funding in the 1970s seems inevitable.

I’ve often wondered what would be needed to spark the space age vision of 2001.  We see some of the answer in the movie itself, which presents companies like Pan Am, Hilton, Howard Johnson, and the original AT&T, titans of the 1960s, operating businesses in space.  The implication is that space is not only economical, but profitable.

However, space remains far from economical.  Getting material into Earth orbit is appallingly expensive.  Relatively new companies like SpaceX are attempting to reduce the costs, but even with those reductions, they remain staggering.  And there’s no foreseeable solution in sight to reduce them by the orders of magnitude necessary for hotels and restaurants in space, at least for the middle class.

What space lacks is a strong economic incentive for governments and industry to make the huge investments necessary to operate in it.  There’s often a lot of talk about the spirit of exploration and comparison with the “Age of Discovery” (more like the age of conquest for non-Europeans).  But what’s often missing from those comparisons is what actually motivated rulers like Henry the Navigator and Isabella of Castile to fund exploration missions: economics, namely the promise of finding a route around the Ottoman Empire to the spice islands and other riches in Asia.

Men risked their lives and governments funded them because of the substantial economic benefits, the riches, that could be attained.  Yes, finding the fabled Prester John’s kingdom, spreading Christianity, and general exploration were also goals, but it’s doubtful anyone would have funded the missions on just those objectives.

Space exploration needs its own version of the spice trade.  Many see possibilities in asteroid mining, but it remains a speculative proposition, and the cost to get out there and know whether it would be profitable is a major obstacle.  Whatever the economic impetus might turn out to be, until it’s found, the large scale space age often envisioned in science fiction will continue to be only an aspiration.

It might be that technological advances, such as more efficient propulsion methods, will eventually make things cheap enough to at least put crewed scientific stations on Mars and other locations around the solar system, although if artificial intelligence continues to advance, the benefits of risking humans in these locations might remain a dubious proposition.

While humans, except for those three brief years, have generally remained in low Earth orbit, robots have explored the solar system, and there are now multiple craft on their way into interstellar space.  Space belongs first and foremost to the robots.  It seems clear they will always be the pioneers.  (Which isn’t decadence on our part.  15th century explorers, rather than risk their lives, would have sent robots in their place if they’d had them.)  The only real question is to what degree humans will follow in their wake.

What do you think?  Are there economic incentives other than mining?  Or some other motivation that might drive humans out into the solar system?

What’s at the edge of the universe?

Image credit: Pablo Carlos Budassi via Wikipedia

Gizmodo has an interesting article that someone asked my thoughts on.  Part of their “Giz asks” series, it asks various physicists what’s at the edge of the universe?  The physicists polled include Sean Carroll, Jo Dunkley, Jessie Shelton, Michael Troxel, Abigail Vieregg, and Arthur B. Kosowsky.

They all give similar answers, that space isn’t known to have any edge.  It may be infinite, or it may curve back on itself in a higher-dimensional sphere or torus (donut) shape, meaning that if you travel in one direction long enough, you might end up where you started.  The best measurements to date imply that space is flat, although within the uncertainty of those measurements we can’t rule out a slight curvature that would eventually lead to one of those other shapes.

Many of the physicists mention the observable universe, and that the actual universe is thought to continue well beyond it, although one pointed out that we can’t rule out major variations just beyond the boundary of our observations.

If you think about the universe as all that we can causally interact with, then the observable universe could be considered our universe, with the “edge” being the edge of what we can observe, although that edge may change in the future.  Currently the furthest thing we can see is the cosmic microwave background radiation.  In terms of electromagnetic radiation, it’s hard to imagine we’ll ever see farther than that.

However, if cosmic inflation is correct, and a lot of physicists are convinced it is, then the causal universe might be far larger than the currently observable universe.  We can currently detect gravitational waves.  If we could ever detect such waves from the period during inflation in the first 10⁻³² seconds after the big bang, when the universe is thought to have expanded to somewhere between 10³⁰ and 10¹⁰⁰ times its previous size, then we might be able to infer things about the universe far beyond the limits of electromagnetic observation.

Of course, that range refers to things from the past that could causally affect us.  If we only think about what we can causally affect from here on, then due to the ongoing expansion of the universe, the cosmic horizon has a radius of about 14-16 billion light years, which would be the limit of what we could ever conceivably have any causal influence on.  As I’ve written about before, this means that most of the universe we can observe is already forever beyond our reach.

But it’s interesting to speculate what might happen if we’re ever able to travel FTL (faster than light).  What might we see beyond the observable universe?  Would it just be the same kind of stuff we can currently see going on into infinity?  Or would we eventually find regions of the universe where things are very different?

I mentioned cosmic inflation.  A variation of that idea is eternal inflation, where inflation is the natural state of spacetime, but that due to a random quantum fluctuation, a bubble of low inflation was created, aka our universe.  There are different conceptions about what the edge of this bubble might look like.  Some see it as a bubble of time as well as space, which we can’t leave because the edges of the bubble are the beginning of our universe in time, the big bang.

Other physicists have speculated that we could travel until we reached regions where the expansion of the universe was faster and faster, until we approached inflationary space.  Unless our method of FTL protected us in some manner, we could never enter inflationary space.  Aside from it perpetually receding from us, the expansion rate would overcome the nuclear forces holding our atomic nuclei together, not to mention the electromagnetic forces, and we’d be instantly ripped apart.

So travel to another bubble, even with an FTL drive, would probably never be a thing.  Even if it was, other bubbles are thought to have different laws of physics.  If we ever made it to another bubble, we might find its physics hostile to our form of life.

There have been measurements lately indicating that dark energy, the force driving the current expansion of the universe, may be increasing in strength.  If so, then within a few tens of billions of years, the universe as we know it might end in a “big rip.”  If we are in a bubble, then that bubble might eventually come to a violent end, perhaps dissolving into inflationary space.

I sometimes wonder if information of any kind, not to mention any form of life, could be preserved in inflationary space.  Based on the description, it doesn’t seem so.  I don’t know which is bleaker, the long slow heat death of the universe under constant dark energy, or a big rip in a few tens of billions of years.

So in pondering the edge of the universe, we have edges in observability (currently 13.8 billion years), in inward causality (depending on cosmic inflation), in outward causality (14-16 billion light years), and edges in time, including the big bang and one of the possible endings (heat death, big rip, and an increasingly unlikely one we didn’t discuss, the big crunch), as well as possible curving loopbacks and infinite expanses.

Did I miss anything?

Why faster than light travel is inevitably also time travel

I’ve always loved space opera, but when I was growing up, as I learned more about science, I discovered that a lot of the tropes in space opera are problematic.  Space operas, to tell adventure stories among the stars, often have to make compromises.  One of the earliest and most pervasive is FTL (faster than light) travel.

Interestingly, the earliest interstellar space opera stories in the late 1920s largely ignored relativity.  E.E. “Doc” Smith and Edmond Hamilton simply had their adventurers accelerate away at thousands of times the speed of light.  If relativity was mentioned, it was just as a superseded or wrong theory.

But by the early 1930s, authors found a way to seemingly avoid outright ignoring Einstein by simply hand waving technologies that bypassed the laws of physics.  One of the earliest and most enduring was hyperspace, a separate realm that a spaceship could enter to either travel faster than light, or where distances were compressed.  Over the decades, hyperspace came in a wide variety of fashions and with a lot of different names: subspace, u-space, slipstream, etc.

One variant, popularized by Isaac Asimov in his Robot and Foundation series, has hyperspace as a realm where ships jump through it to instantly move light years away.  (I’ll be using this version in an example below.)

There are a wide variety of other FTL technologies that often show up in science fiction.  An interesting example is the ansible, a device that allows instant communication across interstellar distances.  Often the ansible shows up in stories where actual FTL travel is impossible, but an interstellar community is enabled by the instant communications.

I’ve written before that there are lots of problems with all of these ideas.  Generally they’re not based on actual science.  They’re just plot gimmicks to enable the type of stories authors want to tell.  And the few that are somewhat based on science, such as wormholes or Alcubierre drives, involve speculative concepts that haven’t been observed in nature.

But FTL has another issue, one that I only started appreciating a few years ago.  FTL, no matter how you accomplish it, opens the door to time travel.  Most FTL concepts are conceptualized within a Newtonian understanding of the universe.  In that universe, there is an absolute now which exists throughout all of space.  If we imagine a two dimensional diagram with space as the horizontal axis and time as the vertical, then now, or the absolute plane of simultaneity, exists as a flat line throughout the universe.

But that’s not the universe we live in.  We live in a universe governed by special and general relativity (or at least one where those theories are much more predictive than Newton’s laws).  In our universe, there is no single plane of simultaneity, no universal version of now.  In this universe, talking about what is happening “right now” for cosmically distant locations is a meaningless exercise.

Most people are aware that, under special relativity, time flows slower for a traveler at speeds approaching the speed of light.  But not everyone is aware that, from the traveler’s perspective, it’s the rest of the universe that is traveling near the speed of light and experiencing slower time.  How can both see the other as having slower time than themselves?  Because simultaneity is relative.

Image credit: Acdx via Wikipedia

As this animation shows (which I grabbed from the Wikipedia article on the relativity of simultaneity), under relativity, whether certain events occur simultaneously is no longer an absolute thing, but a relative one.  If B is stationary, then events A, B, and C all happen simultaneously.  However, if B is moving toward C, B’s plane of simultaneity slopes upward, leaving C in its past.  On the other hand, if B is moving toward A, C is now in its future.  (Note: this never allows information to influence the past because, in normal physics, such information can only travel at the speed of light.)

An important point here is that these effects do not only happen at speeds approaching the speed of light.  They happen with any motion.  However, in normal everyday life, the effect is too small to notice, which is why Newton’s laws work effectively for relatively slow speeds and short distances.

Crucially, the upward or downward slope of simultaneity still happens at slow speeds, but the angle of difference is small, and again we don’t notice.  However, while a small angle of deviation may not be noticeable for everyday distances (say between New York and Sydney), or even for distances within the solar system, when the distances start expanding to thousands, millions, or even billions of light years, then even minute angle deviations grow to significant variances.

So imagine we have a spaceship heading out of the solar system at 1% of c (the speed of light).  Using the Asimovian version of hyperspace, the spaceship jumps to a destination 1000 light years away.

Which plane of simultaneity, which version of now, does the ship’s instant jump happen in?  The plane associated with stationary observers back on Earth?  Or the plane associated with the ship traveling at 1% c?  If it’s the ship’s plane, then when the ship exits hyperspace 1000 light years away, it will do so about 10 years (1% of 1000 years) in the future of the stationary Earth observers.

That is true if the spaceship’s hyperspace jump is in the direction of its 1% c velocity.  But if the 1000 light year jump is in the direction opposite its velocity, it will arrive about 10 years in the stationary observers’ past.
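To check that arithmetic (my own, treating the jump as instantaneous in the ship’s frame), the offset is just the simultaneity term from the Lorentz transformation, Δt = vd/c²:

```python
# Simultaneity offset for an instantaneous jump of distance d by a ship
# moving at velocity v, as seen by stationary observers: delta_t = v*d/c^2.
# In units of years and light years, c = 1, so the formula is just v * d.
# (The Lorentz gamma factor at 1% of c is ~1.00005, so it's ignored here.)

def jump_offset_years(v_fraction_of_c, distance_ly):
    """Positive: the ship arrives in the stationary observers' future."""
    return v_fraction_of_c * distance_ly

print(jump_offset_years(0.01, 1000))   # jump along the velocity: +10.0 years
print(jump_offset_years(0.01, -1000))  # jump against it: -10.0 years
```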

It doesn’t take a whole lot of imagination to see how this technology could be used to travel to arbitrary points in the past or future.  All a ship would need to do is make repeated jumps, either along its direction of travel or against it, to move as far forward or backward in time as it wanted.

We encounter exactly the same issue with other versions of FTL, such as warp drives or versions of hyperspace that take time to travel through; the shift in time is just gradual rather than a sharp jump.

In the case of ansibles, which version of simultaneity are the communications happening over?  The chances that the two correspondents happen to be traveling at the same velocity are nil.  The variances in the speeds of their stars’ movement around the galaxy, the orbits of their planets, etc., will all conspire to ensure that their various planes of simultaneity are out of sync with, and constantly changing in relation to, each other.  An ansible accelerated to relativistic speeds could be used to communicate with the past or future.

Even wormholes would be an issue.  The wormholes in fiction always connect distant points together in the same now, but wormholes are connections between two points in spacetime.  There’s no particular reason one would be limited to some arbitrary version of now.  Indeed, a natural wormhole, like the one in Star Trek: Deep Space Nine, would be more likely to open to some distant point in the future, long after the heat death of the universe, than to somewhere along the Bajoran plane of simultaneity.

We might imagine that if the FTL technology allowed us to choose which plane of simultaneity we moved under, maybe everyone would just agree on some standard, albeit an arbitrary one.  But that only makes the time travel capability more pronounced.  Orson Scott Card made the point years ago that if you’re going to introduce a technology into your fictional universe, you should account for all the ways that technology might be used, or abused.

It’s often said that the absence of tourists from the future probably indicates that time travel is impossible.  Even if future societies have strict taboos against interfering with the past, the idea that such taboos would hold for all societies until the end of time seems unsustainable.  Since FTL is also time travel, the same observation would seem to rule out most forms of it.  (Star gates or wormholes where a destination version has to be built might be the only ones that avoid this issue.)

Unless of course there’s something I’m missing about this?

The difficulty of going to Mars

There’s been a lot of celebration this holiday season of the fiftieth anniversary of the Apollo 8 mission, the first time humans went into (relatively) deep space and orbited another body, the moon.  I’m glad to see Apollo 8 getting some recognition.  It’s usually overshadowed by Apollo 11, the first mission to actually land on the moon, but Apollo 8’s accomplishment should be remembered.  When the S-IVB Saturn stage fired, initiating the translunar injection, it was an incredible leap into the void for those astronauts.

But for some, the celebration has been marred a bit due to comments made by one of the astronauts from that mission, Bill Anders:

According to one of the astronauts aboard NASA’s 1968 Apollo 8 mission, it would be “stupid” and “almost ridiculous” to pursue a crewed mission to Mars.

“What’s the imperative? What’s pushing us to go to Mars? I don’t think the public is that interested,” said Bill Anders, who orbited the Moon before returning to Earth 50 years ago, in a new documentary by BBC Radio 5 Live.

Anders argued that there are plenty of things NASA could be doing that would be a better use of time and money, like the uncrewed InSight lander that recently touched down to study Mars’ interior.

A lot of the reasons behind Anders’ remark seem to focus on criticism of NASA and its focus on crewed missions at the expense of the more scientifically oriented robotic ones.  Those robotic missions have done far more for science than anything from the crewed missions.

I think Anders has a point.  Sending a crewed mission to Mars is problematic.  The difficulties in getting there are enormous.  With present technology, it would require a crew to spend nine months in transit, a year or so exploring, and then nine months coming back: a three year mission.

The problem is keeping astronauts healthy for that long.  No human has spent that much time in space.  We know about some of the health effects from astronauts living in the ISS (International Space Station).  They involve bone loss, vision impairments, and other changes.  Astronauts returning from the ISS typically have to go through a recovery period before they can function again on Earth.  We should expect a three year span to exacerbate these effects.

And the ISS can lead to overconfidence.  The ISS orbits only a few hundred kilometers above the Earth.  It receives regular resupply runs from Earth, so we haven’t really been testing our ability to provision people to live independently for years in isolation.  And with the crews safely ensconced in Earth’s magnetic field, we haven’t had to deal with the effects of long term radiation exposure.

Impatient space enthusiasts who get most of their information from science fiction will often counter that all of these things can be handled.  We just need to spin the spaceship for artificial gravity, have sufficient storage space for supplies, and provide solid shielding for radiation.  The problem is that all of these things add mass, and given the tyranny of the rocket equation, every additional kilogram is extraordinarily expensive.
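The tyranny here is the Tsiolkovsky rocket equation, in which the required propellant grows exponentially with the mission’s total delta-v.  A minimal sketch, using an illustrative chemical-rocket exhaust velocity and delta-v budget (my numbers, purely for illustration):

```python
import math

def mass_ratio(delta_v_m_s, exhaust_velocity_m_s):
    """Tsiolkovsky rocket equation: initial mass over final (dry) mass."""
    return math.exp(delta_v_m_s / exhaust_velocity_m_s)

# Illustrative numbers: ~4.5 km/s chemical exhaust velocity and a 15 km/s
# total delta-v budget.  Every kilogram of extra shielding, supplies, or
# spin structure then costs this many kilograms of propellant at departure.
print(f"{mass_ratio(15e3, 4.5e3):.0f}x")  # ~28x
```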

Myself, I think Mars crewed missions should wait until we’ve had a chance to improve drive technologies, notably the various types of ion thrusters, which may be able to bring the transit times down substantially, say a month or two instead of the nine month time frame.  Such a technology might reduce a round trip mission down to a few months, or a year if we decide it’s best to wait until Earth and Mars are near each other for the return trip.  That still leaves major challenges, but it starts to look more achievable.

This might enable us to get to the point of eventually establishing scientific research stations on Mars.  Personally, that’s as far as I think things are liable to go for a long time.  I doubt humans will ever colonize Mars, at least not in any significant numbers.  The reason is economics.

Living on Mars would be outrageously expensive.  Mars habitats would be dependent on long tenuous supply lines from Earth.  The idea that they could be self sufficient in any meaningful way is mostly fantasy.  We’re vitally dependent on Earth’s biosphere.  Self sufficiency would require generating a replacement biosphere, something we currently don’t know how to do.  And even if we did, maintaining such a biosphere farther out from the sun, where there is far less free energy available, would always be more expensive than doing it on Earth.

We won’t colonize Mars, or space in general, until there’s a major economic interest that drives us to do it.  People often talk about mining being the economic interest, but with the cost of doing anything in space being so high, that interest may not be enough.

Overpopulation sometimes gets mentioned as a motivator, but setting aside the biosphere dependency, before colonizing Mars, we should first look at colonizing Antarctica, the ocean floors, or underground.  Yes, life in these locations would be difficult, dangerous, and expensive, but not nearly as difficult, dangerous, or expensive as living on Mars or anywhere else in the solar system.  And all in all, it’s a lot cheaper to solve the overpopulation issues in other ways.

So I think Anders has a point.  The public may be enthusiastic for crewed space missions, but they’re a lot less enthusiastic for the associated costs.  This is shown in the fact that crewed Mars exploration is always a couple of decades in the future.  When I was a boy in the early 1970s, the Mars mission was supposed to happen in the 1980s or 90s.  By the 1980s, it was supposed to happen in the first years of the 21st century.  For the last decade or so, it’s been supposed to happen by the 2030s.  I won’t be shocked if by 2022 it hasn’t slid to the 2040s.

To me, the question is whether crewed scientific missions into deep space will become viable before they get obviated by advances in artificial intelligence.  From the beginning, the pioneers in space have been the robots.  It may be that we’ll have a solar system populated far more by robots than by humans, at least of the biological sort.

Unless of course I’m missing something?

SETI vs the possibility of interstellar exploration

Science News has a short article discussing a calculation someone has done showing how small the volume of space examined by SETI (Search for Extraterrestrial Intelligence) is relative to the overall size of the galaxy.

With no luck so far in a six-decade search for signals from aliens, you’d be forgiven for thinking, “Where is everyone?”

A new calculation shows that if space is an ocean, we’ve barely dipped in a toe. The volume of observable space combed so far for E.T. is comparable to searching the volume of a large hot tub for evidence of fish in Earth’s oceans, astronomer Jason Wright at Penn State and colleagues say in a paper posted online September 19 at arXiv.org.

“If you looked at a random hot tub’s worth of water in the ocean, you wouldn’t always expect a fish,” Wright says.

I have no doubt that the number of stars SETI has examined so far is a minuscule slice of the population of the Milky Way galaxy.  And if SETI’s chief assumptions are correct, it’s entirely right to say that we shouldn’t be discouraged by the lack of results so far.

But it’s worth noting one of those chief assumptions: that interstellar travel is impossible, or so monstrously difficult that no one bothers.  If true, then we wouldn’t expect the Earth to have ever been visited or colonized.  This fits with the utter lack of evidence for anything like that.  (And there is no evidence, despite what shows like Ancient Aliens or UFO conspiracy theorists claim.)

But to me, the conclusion that interstellar travel is impossible, even for a robotic intelligence, seems excessively pessimistic.  Ronald Bracewell pointed out decades ago that, even if it is only possible to travel at 1% of the speed of light, a fleet of self replicating robot probes (Bracewell probes) could establish a presence in every solar system in the Milky Way within about 100 million years.  That may sound like a long time, but compared to the age of the universe, it’s a fairly brief period.  Earth by itself has existed 45 times longer.
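Bracewell’s figure is easy to sanity check with a toy model.  The hop distance and replication pause below are my own illustrative guesses, not his:

```python
# Toy model of self-replicating probe propagation across the galaxy.
galaxy_diameter_ly = 100_000   # rough diameter of the Milky Way's disk
speed_fraction_c = 0.01        # probes travel at 1% of light speed
hop_ly = 10                    # assumed distance to the next target star
pause_years = 5_000            # assumed time to build daughter probes per stop

hops = galaxy_diameter_ly / hop_ly
travel_years = galaxy_diameter_ly / speed_fraction_c
total_years = travel_years + hops * pause_years
print(f"{total_years:.1e} years")  # ~6e7, within Bracewell's ~100 million
```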

NASA image via Wikipedia

People sometimes respond that the Earth may be in some type of backwater.  The problem here is that if you know where the Earth is in the Milky Way (in the Orion Spur off the Sagittarius Arm, about halfway between the center and the rim of the galaxy), you’ll know that we’re not really in a backwater.  The backwater theory might be plausible if we were thousands of light years off the galactic plane, beyond the rim, or in a cluster far removed from the main galaxy, but we’re not.  Even then, the nature of self replicating probe propagation is pretty relentless and would still eventually reach backwater stars.

Of course, if there is only one or a few other intelligent species in the galaxy, then it’s entirely possible that their Bracewell probe is here, just lying low, observing us, possibly waiting for us to achieve some level of development before it makes contact.  (Or maybe it has been making contact 2001: A Space Odyssey style.)

But if the number of civilizations is in the thousands, as is often predicted by people speculatively playing with the numbers in the Drake equation, then we should have hundreds of those probes lying around.  Given their diverse origins, we shouldn’t expect them to behave with unanimity.  Even if one probe, or coalition of probes, bullied the others, the idea that such an arrangement would endure across billions of years seems implausible.

And the Earth has been sitting here for billions of years, with an interesting biosphere for most of that time.  The idea that none of these self replicating probes would have set up some kind of presence on the planet, a presence we should now be able to find in the geological record, again seems implausible.  Indeed, if they existed, we should expect to have at least some of them in front of us now.

Now, maybe they are in front of us, and we’re just not intelligent enough to realize what we’re seeing.  Monkeys, after all, likely have no understanding of the significance of the buildings and machinery they climb over.  It seems like something we have to keep in mind, but historically it’s never been productive to just assume we can’t understand something, and taking this principle too much to heart seems like it would make it impossible to ever dismiss any dubious notion.

So SETI largely depends on interstellar travel being infeasible.  This is actually the conclusion a lot of radio astronomers have reached.  Could they be right?  I don’t think we know enough to categorically rule out the possibility.  If they are right, then SETI will be our best chance to someday make contact with those other civilizations, even if it’s only composed of messages across centuries or millennia.

As I’ve written here before, my own conclusion is that some form of interstellar exploration is possible, and that life is probably pervasive in the universe, although most of it is microscopic.  Complex life is probably far rarer, although I wouldn’t be surprised if there aren’t thousands of biospheres, or more, in our galaxy that have it.

But intelligent life capable of symbolic thought and building a civilization?  The data seems to be telling us that this is profoundly rare, so rare that the nearest other intelligent species is probably cosmically distant.  If we’re lucky, they might be close enough that we can encounter them before the expansion of the universe separates us forever.  If we’re not lucky, we’ll never have a chance for that encounter.

Unless of course, I’m missing something?

The extraordinary low probability of intelligent life

Marc Defant gave a TEDx talk on the improbable events that had to happen in our planet’s history for us to eventually evolve, along with the implications for other intelligent life in the galaxy.

I find a lot to agree with in Defant’s remarks, although there are a couple points I’d quibble with.  The first, and I’m sure a lot of SETI (Search for Extraterrestrial Intelligence) enthusiasts will quickly point this out, is that we shouldn’t necessarily use the current lack of results from SETI as a data point.  It’s a big galaxy, and within the conceptual space where SETI could ever pay off, we shouldn’t necessarily expect it to have done so yet.

My other quibble is that Defant seems to present the formation of our solar system as a low probability event, or maybe he means a solar system with our current metallicity.  I can’t really see the case for either being unlikely.  There are hundreds of billions of stars in our galaxy, most with some sort of attending solar system.  So I’m not sure where he’s coming from on that one.

My own starting point for this isn’t SETI, but the fact that we have zero evidence for Earth having ever been colonized.  If the higher estimated numbers of civilizations in the galaxy are correct, the older ones should be billions of years older than we are.  They’ve had plenty of time to have colonized the entire galaxy many times over, even if 1% of lightspeed is the best propagation rate.

The usual response is that maybe they’re not interested in colonizing the galaxy, not even with their robotic progeny.  That might hold if there is one other civilization, but if there are thousands, hundreds, even a few dozen?  Across billions of years?  The idea that every other civilization wouldn’t be interested in sending its probes out throughout the galaxy seems remote, at least to me.

But to Defant’s broader point about the probability of intelligent life evolving, there are many events in our own evolutionary history that, if we were to rewind things, might never happen again.

Life seems to have gotten an early start on Earth.  Earth is roughly 4.54 billion years old, and the earliest fossils date to 3.7 billion years ago.  With the caveat that we’re unavoidably drawing conclusions from a sample of one planet’s history, the early start of life here seems promising for its likelihood under the right conditions.

But there are many other developments that seem far less certain.

One crucial step was the evolution of photosynthesis, at least 2.5 billion years ago.  The development of photosynthesis gave life a much more reliable energy source than what was available before, converting sunlight, water, and carbon dioxide into sugars.

And its waste product, oxygen, started the process of oxygenation, increasing the levels of oxygen in Earth’s atmosphere, which would be very important as time went on.  The early atmosphere didn’t have much oxygen.  Indeed, the rise of oxygen levels may have originally been a serious problem for the life that existed at the time.  But life adapted and eventually used oxygen as a catalyst for quicker access to free energy.

The good news with photosynthesis is that there are multiple chemical pathways for it, and it’s possible it evolved multiple times, making it an example of convergent evolution.  That means photosynthesis might be a reasonably probable development.  Still, oxygen producing photosynthesis doesn’t seem to have arisen until the Earth was more than halfway through its current history, which doesn’t make it seem very inevitable.

The rise of eukaryotes may be a more remote probability.  The earliest life forms were simple prokaryotes.  Eukaryotes, cells with organelles (complex, specialized compartments), arose 1.6-2.1 billion years ago.  All animal and plant cells are eukaryotes, making this development a crucial building block for later complex life.

Eukaryotes are thought to have been the result of one organism attempting to consume another, but somehow instead of consuming it, the consuming organism entered into a symbiotic relationship with the consumed organism.  This low probability accident may have happened only once, although no one knows for sure.

Yet another crucial development was sexual reproduction, arising 1-1.2 billion years ago, or when Earth was 73% of its current age.  Sexual reproduction tremendously increased the amount of variation in offspring, which arguably accelerated evolution.  Who knows how long subsequent developments might have taken without it?

Oxygen had been introduced with the rise of certain types of photosynthesis, but due to geological factors, oxygen levels remained relatively low by current standards until 800 million or so years ago, when it began to rise substantially, just in time for the development of complex life.  The Cambrian explosion, the sudden appearance of a wide variety of animal life 540-500 million years ago, would not have been possible without these higher oxygen levels.

Complex life (animals and plants) arose in the last 600-700 million years, after the Earth had reached 84% of its current age.  When you consider how contingent complex life is on all the milestones above, its development looks far from certain.  Life may be pervasive in the universe, but complex life is probably relatively rare.

Okay, but once complex life developed, how likely is intelligent life?  There are many more low probability events even within the history of animal life.

Earth’s environment just so happens to be mostly aquatic, providing a place for life to begin, but with enough exposed land to allow the development of land animals.  In general, land animals are more intelligent than marine ones.  (Land animals can see much further than marine ones, increasing the adaptive benefits of being able to plan ahead.)  A 100% water planet may have limited opportunities for intelligence to develop.  For example, mastering fire requires being in the atmosphere, not underwater.

Defant mentions the asteroid that took out the dinosaurs and gave mammals a chance to expand their ecological niche.  Without an asteroid strike of just the right size, mammals might not have ascended to their current role in the biosphere.  We might still be small scurrying animals hiding from the dinosaurs if that asteroid had never struck.

Of course, there have been a number of intelligent species that have evolved, not just among mammals but also among some bird species, the surviving descendants of dinosaurs.  Does this mean that, given the rise of complex life, human level intelligence is inevitable?  Not really.  While there are many intelligent species (dolphins, whales, elephants, crows, etc), the number of intelligent species that can manipulate the environment is much smaller, pretty much limited to the primates.

(Cephalopods, including octopuses, can manipulate their environment, but their short lives and marine environment appear to be obstacles for developing a civilization.)

Had our early primate ancestors not evolved to live in trees, developing a body plan to climb and swing among branches, we wouldn’t have the dexterity that we have, nor the 3D vision, nor the metacognitive ability to assess our confidence in making a particular jump or other move.  And had environmental changes not driven our later great ape ancestors to live in grasslands, forcing them to walk upright, and freeing their hands to carry things or manipulate the environment, a civilization building species may never have developed.

None of this is to say that another civilization producing species can’t develop using an utterly different chain of evolutionary events.  The point is that our own chain is a series of many low probability events.  In the 4.54 billion years of Earth’s history, only one species, among the billions that evolved, ever developed the capability of symbolic thought, the ability to have language, art, mathematics, and all the other tools necessary for civilization.

Considering all of this, it seems like we can reach the following conclusions.  Microscopic single celled life is likely fairly pervasive in the universe.  A substantial subset of this life probably uses some form of photosynthesis.  But complex life is probably rare.  How rare we can’t really say with our sample of one, but much rarer than photosynthesis.

And intelligent life capable of symbolic thought, of building civilizations?  I think the data is telling us that this type of life is probably profoundly rare.  So rare that there’s likely not another example in our galaxy, possibly not even in the local group, or conceivably not even in the local Laniakea supercluster.  The nearest other civilization may be hundreds of millions of light years away.

Alternatively, it’s possible that our sample size of one is utterly misleading us and there actually are hundreds or even thousands of civilizations in the galaxy.  If so, then given the fact that they’re not here, interstellar exploration, even using robots, may be impossible, or so monstrously difficult that hardly anyone bothers.  This is actually the scenario that SETI is banking on to a large extent.  If true, our best bet is to continue searching with SETI, since electromagnetic communication may be the only method we’ll ever have to interact with them.

What do you think?  Is there another scenario I’m missing here?

The difficulty of interstellar travel for humans

Futurism.com has an article reviewing the results of a survey they conducted with their readers asking when the first human might leave the solar system.  The leading answer was after the year 2100, which makes sense given our current level of progress just getting humans back out of low Earth orbit.  But I think the prospects for human interstellar exploration are far more difficult than most people realize.

First, when pondering this question, we can forget about the traditional sci-fi answers of hyperspace, warp drive, or similar notions.  These concepts are fictional plot devices with little or no basis in science.  Even concepts that have been considered by scientists, such as wormholes and Alcubierre warp drives, are extremely speculative, requiring that certain speculative and unproven aspects of physics be true.

Getting anything like these types of technology will require a new physics.  Undoubtedly, we will learn new things about physics in the decades and centuries to come, but the probability that what we learn will enable interstellar travel to function like sea travel (the preoccupation of most space opera stories) is an infinitesimal slice of the possibilities.

The only way to speculate scientifically on this kind of thing is to take the science that we currently have and try to extrapolate from it.  When we do that, the obstacles to human interstellar flight seem pretty daunting.

Worrying about the speed of light limit, which current physics tells us is the ultimate speed limit in the universe, is sour grapes to a large extent.  Even getting to an appreciable percentage of the speed of light turns out to require astounding amounts of energy.  Doing it with our current chemical rocket technology is a lost cause.  According to Paul Gilster, in his book Centauri Dreams: Imagining and Planning Interstellar Exploration (which I recommend for anyone interested in this subject, as well as his blog), it would take more mass than exists in the visible universe to propel a chemical rocket to a substantial fraction of the speed of light.

An artist’s concept of nuclear pulse propulsion.
Image credit: NASA

Of course, there are better, more efficient propulsion options that might eventually be available.  For purposes of this post, let’s leap to the most efficient and plausible near term option, nuclear pulse propulsion.  This is a refined version of an original idea that involved lobbing nuclear bombs behind a spacecraft to push it forward.

Gilster, in his book, notes that a nuclear pulse propulsion spacecraft, to reach 10% of light speed, would need a mass ratio of 100:1.  This means that for every kilogram of the spacecraft with only its payload, you’d need 100 kilograms of fuel.  Initially, that doesn’t sound too bad since the Apollo missions had an overall mass ratio of 600:1.  But that was for the entire mission, and all we’ve considered so far is the mass ratio to accelerate to 10% of light speed.  We haven’t talked about slowing down at the destination.

In space, given inertia and no air friction, slowing down takes just as much energy as speeding up.  And the kicker is that you have to accelerate the fuel you’ll later need to decelerate.  So slowing down doesn’t just double the mass ratio from 100:1 to 200:1.  The deceleration fuel has to be on the “1” side in the initial acceleration ratio.  That means the ratio for the overall mission (and we’re only talking about a one way mission here), has to be squared, taking it from 100:1 to 10,000:1.
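The squaring falls straight out of the rocket equation: because the required mass ratio is exponential in delta-v, successive burns multiply rather than add.  Here’s a quick sketch (my own, non-relativistic) using the exhaust velocity implied by Gilster’s 100:1 figure:

```python
import math

def mass_ratio(delta_v_c, exhaust_velocity_c):
    """Rocket equation mass ratio, with speeds in units of c."""
    return math.exp(delta_v_c / exhaust_velocity_c)

# Exhaust velocity implied by a 100:1 ratio for reaching 0.1c.
ve = 0.1 / math.log(100)  # ~0.022c

print(f"{mass_ratio(0.1, ve):,.0f}")  # 100: accelerate to 0.1c
print(f"{mass_ratio(0.2, ve):,.0f}")  # 10,000: accelerate, then stop again
print(f"{mass_ratio(0.4, ve):,.0f}")  # 100,000,000: the same mission at 0.2c
```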

Traveling at 10% lightspeed gets us to Proxima Centauri, the nearest star to the sun, in about 43 years.  When you consider what kind of living space a human crew would need for that time span, and multiply it out by 10,000, an interstellar mission starts to look like the most expensive thing human civilization might ever attempt.  It gets worse if we try to shorten the time span.  Increasing the speed to 20% of light speed raises the ratio to 100,000,000:1.

Imagining antimatter technology might improve the mass ratio substantially.  But it adds new difficulties.  Producing antimatter itself takes tremendous amounts of energy.  It would have to be manufactured and stored in deep space, since any containment failure of any substantial amount would likely result in a gigaton level explosion.  We might save on the mass ratio of the spacecraft, but only at the expense of vast resources dedicated to generating and storing the fuel.  And human crews would likely have to be heavily shielded from the gamma rays generated by antimatter reactions, increasing mass throughout the overall craft.

No discussion of this type is complete without at least mentioning the Bussard ramjet, the idea of a spacecraft with an immense ram scoop to take in interstellar dust to use as fuel.  There was a lot of excitement for this concept in the 60s and 70s, but further study has shown that the interstellar medium isn’t nearly as thick as the initial design hoped for, and many think the ram scoop would generate as much friction as thrust.

Other options are to forego rocketry altogether and go for something like light sails.  Robert Forward, decades ago, put forth a design where a gigantic laser on Mercury would send out a beam to an interstellar light sail spacecraft, steadily accelerating it.  At some point, the craft would separate its sail into two components, one of which would be hit by the laser and reflect it back to the remaining component attached to the craft, decelerating it.  Forward’s design is ingenious, but it would still require titanic amounts of energy, and precise coordination across centuries and light years to work.

Things get a lot easier if we just think about sending uncrewed probes.  That’s the current direction of the Breakthrough Starshot initiative.  The idea is to propel a small, perhaps gram sized probe, to 20% of light speed using Earth based lasers.  The probes would reach Proxima Centauri in about 22 years, taking pictures and readings as they fly through the system, and transmitting the information back to Earth.  There are still major technological hurdles to overcome with this idea, but they all seem achievable within reasonable time periods and with reasonable amounts of energy.

The big drawback to the Starshot design is that it doesn’t have any way to slow the probe down, so everything would have to be learned in the few hours available as it sped through the target system.  An alternate design has been proposed, using the unique topology of the Alpha Centauri / Proxima Centauri system to slow down the probe, but at the cost of increasing the travel time to over a century.

But once we give up the idea of crewed missions, the rocket solutions actually become more plausible.  A 10,000:1 ratio doesn’t seem problematic if the ultimate payload is a one gram probe.  Even the 100,000,000:1 ratio associated with a 20% light speed mission starts to look conceivably manageable.

And when we consider the ongoing improvements in artificial intelligence and the idea of probes building their own daughter probes to explore the destination system, and perhaps even to eventually launch toward systems further out, the possibilities start to look endless.

All of which is to say, that it’s much easier to conduct interstellar exploration with robots, particularly very small ones, than with humans.  It seems likely that we’re going to be exploring the stars with robots for a long time before humans get there, if they ever do.

Unless of course I’m missing something?

97% of the observable universe is forever unreachable

Artist’s logarithmic scale conception of the observable universe with the Solar System at the center, inner and outer planets, Kuiper belt, Oort cloud, Alpha Centauri, Perseus Arm, Milky Way galaxy, Andromeda galaxy, nearby galaxies, Cosmic Web, Cosmic microwave radiation and Big Bang’s invisible plasma on the edge. By Pablo Carlos Budassi

The other day, I was reading a post by Ethan Siegel on his excellent blog, Starts With a Bang, about whether it makes sense to consider the universe to be a giant brain.  (The short answer is no, but read his post for the details.)  Something he mentioned in the post caught my attention.

But these individual large groups will accelerate away from one another thanks to dark energy, and so will never have the opportunity to encounter one another or communicate with one another for very long. For example, if we were to send out signals today, from our location, at the speed of light, we’d only be able to reach 3% of the galaxies in our observable Universe today; the rest are already forever beyond our reach.

My first reaction when reading this was, really?  3%.  That seems awfully small.

What Siegel is talking about is an effect that is due to the expansion of the universe.  Just to be clear, “expansion of the universe” doesn’t mean that galaxies are expanding into space from some central point, but that space itself is expanding everywhere in the universe proportionally.  In other words, space is growing, causing distant galaxies to become more distant, and with space growing in the intervening space, the more distant a galaxy is from us, the faster it is moving away from us.

This means that as we get further and further away, the movement of those galaxies relative to us, gets closer and closer to the speed of light.  Beyond a certain distance, galaxies are moving away from us faster than the speed of light.  (This doesn’t violate relativity because those galaxies, relative to their local frame, aren’t moving anywhere near the speed of light.)  That means they are outside of our light cone, outside of our ability to have any causal influence on them, outside of what’s called our Hubble sphere (sometimes called the Hubble volume).  Note that we may still see galaxies outside of our Hubble volume if they were once within the Hubble sphere.

How big is the Hubble sphere?  We can calculate its radius by dividing the speed of light by the Hubble constant, H0, the rate at which space is expanding.  It is usually measured to be around 70 kilometers per second per megaparsec, or about 21 kilometers per second per million light years.  In other words, for every million light years a galaxy is from us, on average, the space between that galaxy and us will be increasing by 21 km/s (kilometers per second).  So, a galaxy 100 million light years away is moving away from us at 2100 km/s (21 x 100), and a galaxy 200 million light years away will be receding at 4200 km/s (21 x 200), plus or minus any motion the galaxies might have relative to their local environment.  The speed of light is about 300,000 km/s.  If we take 300,000 and divide by 21, we get a bit over 14,000.  That’s 14,000 million light years, or a Hubble sphere radius of around 14 billion light years.
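The same arithmetic in code form, with a slightly more precise value for the speed of light:

```python
C_KM_S = 299_792       # speed of light in km/s
H0 = 70                # Hubble constant, km/s per megaparsec
MPC_IN_MLY = 3.262     # one megaparsec in millions of light years

h0_per_mly = H0 / MPC_IN_MLY             # ~21.5 km/s per million light years
radius_gly = C_KM_S / h0_per_mly / 1000  # millions of ly -> billions of ly
print(f"{radius_gly:.1f} billion light years")  # ~14.0
```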

(If you’re like me, you’ll immediately notice the similarity between the radius of the Hubble sphere and the age of the universe.  When I first noticed this a few years ago, it seemed like too much of a coincidence, but I haven’t been able to find any relationship described in the literature.  It appears to be a coincidence, although admittedly a freaky suspicious one.)

Okay, so the Hubble sphere is 14 billion light years in radius.  According to popular science news articles, the farthest galaxies we can see are about 13.2 billion light years away, and the cosmic microwave background is 13.8 billion light years away, so everything we can see is safely within the Hubble sphere, right?

Wrong.  Astronomy news articles almost universally report cosmological distances using light travel time, the amount of time that the light with which we’re seeing an object took to travel from the object to us.  For a relatively nearby galaxy, say 20-30 million light years away, that’s fine.  In those cases, the light travel time is close enough to the co-moving or “proper” distance, the distance between us and the remote galaxy “right now”, that it doesn’t make a real difference.  But when we look at objects that are billions of light years away, there starts to be an increasingly significant difference between the proper distance and the light travel time.

Those farthest viewable galaxies that are 13.2 billion light years away in light travel time are over 30 billion light years away in proper distance.  The cosmic microwave background, the most distant thing we can see, is 46 billion light years away.  So, in “proper” distances, the radius of the observable universe is 46 billion light years.

Crucially, the Hubble sphere radius calculated above is also in proper distance units.  (The radius in light travel time would be around 9 billion light years per Ned Wright’s handy Cosmological Calculator.)

We can use the radius of each sphere to calculate their volumes.  The volume of the Hubble sphere is about 11.5 × 10³⁰ cubic light years.  The volume of the observable universe is about 408 × 10³⁰ cubic light years.  11.5 divided by 408 is .0282, or around 3%.  Siegel knew exactly what he was talking about.  (Not that I had any doubt about it.)
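Spelled out as a quick check, using 14 and 46 billion light years for the two radii:

```python
import math

def sphere_volume(radius_ly):
    return 4 / 3 * math.pi * radius_ly ** 3

hubble = sphere_volume(14.0e9)       # Hubble sphere, proper distance
observable = sphere_volume(46.0e9)   # observable universe, proper distance
print(f"{hubble:.2e} cubic ly")      # ~1.15e+31
print(f"{observable:.2e} cubic ly")  # ~4.08e+32
print(f"{hubble / observable:.1%}")  # ~2.8%
```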

In other words, 97% of the observable universe is already forever out of our reach.  (At least unless someone invents a faster than light drive.)

It’s worth noting that, as the universe continues expanding, all galactic clusters will become isolated from each other.  In our case, in 100-150 billion years, the local group of galaxies will become isolated from the rest of the universe.  (By then, the local group will have collapsed into a single elliptical galaxy.)  We’ll still be able to see the rest of the universe, but it will increasingly, over the span of trillions of years, become more red shifted, and bizarrely, more time dilated, until it is no longer detectable.  By that time, there will only be red dwarfs and white dwarfs generating light, so the universe will already be a pretty strange place, at least by our current standards.

If our distant descendants manage to colonize galaxies in other galactic clusters, they will eventually become cut off from one another.  If any information of the surrounding universe survives into those distant ages, it may eventually come to be regarded as mythology, something unverifiable by those civilizations living trillions of years from now.

Why alien life will probably be engineered life

Martin Rees has an interesting article at Nautilus: When We Find Aliens, We Might Find Something Like the Borg

This September, a team of astronomers noticed that the light from a distant star is flickering in a highly irregular pattern.  They considered the possibility that comets, debris, and impacts could account for their observations, but each of these explanations was unlikely to varying degrees.  What their paper didn’t explore, but they and others are beginning to speculate, is that the flickering might be caused by enormous structures built by an advanced civilization—whether the light might be evidence of ET.

In thinking about this possibility, or other similarly suggestive evidence of extraterrestrial life, an image of an alien creature might come to mind—something green, perhaps, or with tentacles or eye stalks. But in this we are probably mistaken. I would argue that any positive identification of ET will very likely not originate from organic or biological life (as Paul Davies has also argued), but from machines.

Few doubt that machines will gradually surpass more and more of our distinctively human capabilities—or enhance them via cyborg technology. Disagreements are basically about the timescale: the rate of travel, not the direction of travel. The cautious amongst us envisage timescales of centuries rather than decades for these transformations.

A few thoughts.

First, I haven’t commented yet here about KIC 8462852, the star Rees mentions in the first paragraph.  It would be beyond cool if this turned out to be something like a partial Dyson swarm or some other megastructure.  But with these types of speculation, it pays to be extra skeptical of propositions we want to be true.  Possibility is not probability.  I think the chances that this is an alien civilization are remote, but I can’t say I’m not hoping.

On the rest of Rees’s article, I largely agree.  (I’m sure my regular readers aren’t shocked by this.)  I do have one quibble though.  Rees uses the terms “robotic” or “machine life”.  In cases where it would make sense to have a body of metal and silicon, such as operating in space or some other airless environment, I think it’s likely that’s what would be used (or its very advanced equivalent).

But when operating inside of a biosphere, I suspect “machine life” might be more accurately labelled as “engineered life”.  In such an environment, an organic body, designed and grown by an advanced civilization for the local biosphere, might be far more useful and efficient than a machine one.  An organic body could get its energy from the biosphere using biological functions such as eating and breathing.  This might be substantially more efficient than carrying a power pack or whatever.

If we met such life, they might well resemble classic sci-fi aliens in some broad fashion.  Nor do I think we should dismiss the possibility that the forms of such aliens would stay fairly close to their original evolved shapes.  Even the advanced machine versions might well resemble those original shapes, at least in some contexts.

Of course, that original shape might still be radically different than anything in our experience, such as Rees’s speculation about something that starts as an evolved integrated intelligence.  And after billions of years, engineered life may inevitably become an integrated intelligence, at least on the scope of a planet.  (The speed of light barrier would constrain the level of integration across interstellar distances.)