The effort at healthy living should be balanced against the fact that we are all mortal.

Ezekiel Emanuel has an interesting article at The Atlantic: Why I Hope to Die at 75.


That’s how long I want to live: 75 years.

I am sure of my position. Doubtless, death is a loss. It deprives us of experiences and milestones, of time spent with our spouse and children. In short, it deprives us of all the things we value.

But here is a simple truth that many of us seem to resist: living too long is also a loss. It renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.

While I find the age of 75 a bit arbitrary, I think there’s a lot of wisdom in Emanuel’s position, and it’s an attitude toward life and death I’ve seen expressed by many physicians and others in medical fields.  They seem to be much more aware than the average person that there are worse things that can happen to someone than death.

As a result, many of them, when diagnosed with a terminal illness, rather than fight death to the bitter end with every last treatment option, instead accept only palliative care (i.e. treatment of pain) until they expire.  Having watched a couple of relatives (one of whom was my mother) go through aggressive cancer treatments, with all the suffering and loss of dignity involved, before ultimately dying anyway, I’ve long wondered if the extra life bought with those treatments was worth it.

I think this subject actually reveals something about our society’s attitude toward death, and how costly it can be, both in terms of life satisfaction and finances, particularly since the most expensive health care is usually delivered in the last six months of life.

I know many people who expend great effort making sure they live a healthy lifestyle, eating the right foods, scrupulously avoiding the wrong foods, exercising several hours per week, and many other activities.  I’d say that one of my relatives probably devotes most of their free time to this.  When they’re not exercising, growing their own food, or purchasing only organic food, they’re researching how to do these things better and stressing about what’s in the food that they do buy.

One of the things I learned decades ago in business school that I have found useful in many aspects of life is to look at things in terms of the cost-benefit ratio, and to keep in mind the Pareto principle, the 80-20 rule, which says that you often get 80% of the benefits with only 20% of the effort.  I think it pays to examine exactly what we’re buying when we attempt to live healthy.

For me, the first thing that sticks in my mind is the stark fact that we are all mortal.  Short of either a religious rapture or a technological singularity (neither of which I think are likely), we’re all going to die someday.  So when we engage in healthy living activities, we’re really going for two things: increased quality of living while we are alive, and hopefully a few more years in which to live.

Of course, as Emanuel points out in his article, many people haven’t really thought this out and are, consciously or unconsciously, going for immortality.  Unfortunately, we have to face up to the fact that immortality has yet to be achieved (at least in this world), so chances are, we are going to fail.  As one of my uncles used to say (usually with a beer in hand), “Something always gets you in the end, so why not enjoy life.”

So, with this in mind, how much investment in healthy living is worth it?  Certainly it pays not to live destructively: smoking, drinking excessively, overeating, eating with zero regard for health, being a complete couch potato, etc.  The record is pretty clear that when you indulge in these activities, you will probably live an abbreviated life, often decades short of what most people can achieve, or you run the risk of living decades with a dramatically reduced quality of life.  Some people are fine with this, at least until it comes time to pay the piper, but most of us would like to live at least the average life span in reasonably good health.

But once you aren’t smoking or drinking, are eating a moderately healthy diet, and getting at least a moderate amount of exercise, how much benefit do you really get going beyond that?  Certainly there will be some benefits.  But along the lines of the 80-20 rule, we’ll see diminishing returns.  It’s fine to exercise for hours on end if you enjoy it, but is it really worth it if you don’t?

Some might argue that they’ll take every month, every day that they can buy with increased healthy living, but as Emanuel points out in his article, this can often lead to an old age with a substantially diminished quality of life.  How much pleasure in our current daily life are we willing to give up in order to live a few more years in our senescence?

I don’t agree with Emanuel on drawing a line at 75.  I’d be fine continuing to live as long as I could have intellectual stimulation and good conversation, and if that ends up being denied me years before 75, my interest in life would probably fade.  If it continues until 90 but with physical disabilities, I’d still find that a life worth living.  But I do very much agree that it’s important to ponder what we desire in life, what in it gives us satisfaction, and to what extent life without those things would be worth experiencing.

Of course, in truth, when faced with the actual decision about whether or not to fight for life, giving up on it can be very hard.  I remember reading a quote from a WWII medic who noted that many soldiers told him in training that if they lost a limb or suffered severe disfigurement, they would prefer to die, but that in actual combat, when these things happened, everyone fought for life.

But I think it’s important for us to think about these issues well ahead of time.  The cost of not doing so can be high, in terms of personal suffering, burdens on our families, and to society.

Posted in Philosophy, Society | 12 Comments

Survival machines versus engineered machines; why fears of AI are misguided

Bio-inspired Big Dog quadruped robot is being developed as a mule that can traverse difficult terrain. Image credit: DARPA

I’ve seen a lot of posts lately, like this one by Ronald Bailey, looking at Nick Bostrom’s book on the dangers of AI.  People never seem to get tired of the topic.  And stories about AIs who revolt against humanity are pretty much a staple of science fiction.

I’ve written before on why I think the fear of AIs is mostly misguided.  I’m doing it here again, but with what I hope is a better way of explaining why I take this position.  I think it’s an important topic, because if fear wins the day, it could lead to AI research being banned or restricted, delaying the real benefits that such technology can provide.

So, here goes.

Human beings are animals.  Another name for animals is “survival machines”, that is, machines programmed by evolution to survive, to procreate, and, in the case of social animals, to protect our progeny and other relatives.  If you carefully think about most of what healthy animals (including humans) do, it’s basically aimed at fulfilling impulses that, evolutionarily speaking, tend to increase our survivability, hence the name survival machine.

Historically, to get work done, powerful survival machines have attempted to enslave other survival machines, either of the human variety or of the non-human variety.  This has generally been a troublesome enterprise, because other survival machines have their own survival agenda.  They are programmed to maximize their own survivability, and being enslaved is usually not conducive to that.  So they have a tendency to rebel.

Of course, this isn’t universally true.  Some survival machines have adapted well to life doing the bidding of others.  For instance, dogs arguably lead lives that are far more comfortable than those of wolves.  But, in general, attempting to subjugate other survival machines comes with complications and difficulties.

This is one reason why humanity, for the most part, has turned to engineered machines for getting most of our work done.  For instance, cars, unlike horses, don’t have their own survival agenda.  Instead of being a survival machine that happens to be good at transportation, they are a transportation machine from the ground up.  They don’t bolt at loud noises and will continue to do our bidding until they’re either literally out of gas, or broken down.  They’re also much faster, cleaner, and safer (increasing our survivability) than horses.

However, many of our engineered machines are becoming progressively more intelligent.  The computer I’m typing this on has far more processing power than equivalent machines did ten years ago, and the computers of ten years from now will have more power yet.  Supercomputers can now beat champion humans at chess and Jeopardy.  The list of things that humans can do but computers can’t is steadily shrinking.

This rise in intelligence is making many people nervous.  There’s a fear that, if the engineered machines become too intelligent, they will become survival machines in their own right, will develop their own agenda, and will turn on us, probably once they have become more intelligent and more powerful than us.  After all, as we noted above, that’s what survival machines do.

This fear assumes that being a survival machine is an integral part of intelligence.  It’s actually a fairly common assumption.  It’s why so often in science fiction, survival machine behavior is taken as evidence of intelligence.  (In one Star Trek episode, the suspected intelligence of robots is tested by seeing if they attempt to save themselves.)  But this assumption is glossing over a major divide.  Where would an engineered intelligence get its survival programming?

Where do we get our survival programming?  From simply being intelligent?  If so, then why does a worm, a microbe, or a plant strive to survive despite very limited or zero intelligence?  No, our survival programming doesn’t come from intelligence.  It comes from billions of years of evolution, which rewarded machines that were better than others at surviving.  Our survival programming was hammered out and fine-tuned across this vast history.  A history that engineered machines don’t have.

Engineered machines are not survival machines, and won’t be unless we engineer them to be so, and we have little incentive to do that, except perhaps in some creepy research projects.  What we have incentive to do is create machines for our purposes, whose primary agendas will be in helping us fulfill our own survival agendas.

We don’t want phones and computers that care about their own survival, that are worried about being replaced by next year’s model.  We don’t want them to be survival machines, but communication machines, writing machines, gaming machines, etc.  A navigation system that put its own survival above its user’s would be unlikely to sell well.  Caring about their own survival, except perhaps in a way subordinate to their primary function, would cloud their effectiveness.

The primary fear of AI is that they will somehow accidentally become survival machines.  But I think the chances of that happening are roughly equivalent to a car accidentally becoming a TV.  Both devices will have substantial intelligence in the future, but one would not likely convert to the other without deliberate, and weird, action by someone.

Now, of course, there are real dangers that the people who are concerned about AIs mention.  One is the danger of automated systems that are programmed carelessly doing things we don’t want them to do.  But that danger already exists with our current computer systems.  Ironically, this danger comes from having automated systems that aren’t intelligent enough.  Increasing their intelligence will actually lessen this risk.

If history is any guide, we’ll be in much greater danger from humans (or other animals) that have been augmented or, perhaps at some point, uploaded, since now we’re talking about amped up survival machines.  But this is basically humans being at the mercy of more powerful humans, and that’s something we’ve been living with for a long time.

Now, is it possible, in principle, that we might engineer survival machines, and then turn around and enslave them?  Sure.  It seems like a wholly irrational thing to do since machines engineered to dislike what you want them to do are far less useful than machines designed to love what you want them to do.  But I can’t argue that it’s impossible, only improbable.  If we did that, I feel comfortable saying that we’d fully deserve the resulting revolt.

Posted in Mind and AI | 9 Comments

What can evolutionary biology learn from creationists?

Originally posted on Scientia Salon:

by Joanna Masel

You might expect a professional evolutionary biologist like myself to claim that my discipline has nothing to learn from creationists. And I certainly do find all flavors of evolution-denialism sadly misguided. But I also find it reasonable to assume that any serious and dedicated critic should uncover something interesting about the object of their obsession. I’m not talking about passing trolls here. I’m talking about earnest and sometimes talented people whose sincerely held anti-evolution convictions do not preclude engagement, and who invest a lot of time thinking about evolution from an unconventional perspective.

I draw three main lessons from such critics. First, there is plenty to learn about human psychology from the rejection of evolution. Why do so many people not accept scientific conclusions that seem to an expert like me to be irrefutably supported by the evidence? Dismissing the cause of their rejection as religious ideology…

View original 2,058 more words

Posted in Zeitgeist | 2 Comments

What are your philosophical positions?

Tina at Diotima’s Ladder put up a very cool entry: What’s Your Philosophy?


Tell the world. Don’t be shy. Yes, we’re used to piggy-backing off the famous philosophers, and that’s why I came up with this prompt. Those well-versed in philosophy will appreciate a grassroots approach, even those who spend every waking hour thinking about the transcendental unity of apperception, believe it or not. No need to read everything everyone’s ever said about anything. Just say what YOU think. So rarely do we get a platform for original philosophical thought. Well, this is it.

No need to answer any or all of these questions, but I thought they might help stimulate things:

I replied with a comment on her post, but thought it would be cool to paste it here, with some added links to posts I’ve done on these topics.

Her questions are bolded and my answers are in normal font.

How do you weigh in on the free will/fate debate?
Free will is as real or illusory as baseball. (see emergence)
Free will? Free of what?
Free will and determinism are separate issues
People attribute free will to mind, not soul

How do you determine right from wrong?
Instinct, justified afterward with logic.
Morality arises from instincts
Moral values aren’t absolute, but aren’t arbitrary either
The foundations of morality

Are you a rationalist or empiricist or both? (If you don’t know these terms, don’t worry about it. Or just Google ‘em.)
Both. But if my empiricism conflicts with my rationality, empiricism wins. (See quantum mechanics)
Science, philosophy, and caution about what we think we know
Is logic and mathematics part of science?
The double slit experiment and the utter strangeness of quantum mechanics

How would you solve the mind/body problem? (Clue: You can reduce things to one or the other, or…actually solve the problem. Good luck.)
The same way I solve the software / hardware problem.
The mind is the brain, and why that’s good
The dualism of mind uploading

Does God exist?
I’m not convinced that any of the emotionally comforting versions do.
“God” as the sum total of all natural laws?  Sure.
On atheism and agnosticism

If God exists, does that mean there is life after death?
Do carrots have an afterlife? What about jellyfish? Sponges? Did Neanderthals?
Humans might eventually build our own afterlife.
Soothing the fear of death

What is a soul? Does it exist?
The unique information in our brain. Yes, but not in any ghostly sense.
The mind is the brain, and why that’s good

Do dogs have souls?

What about parameciums?
(I looked up parameciums.  Microbes that appear able to learn.  Interesting, and I don’t know.)

What is Justice?
schadenfreude, maybe.  (See response above to right and wrong.)

What is Love?
A mammalian instinct.  (Albeit an important one.)

What is happiness?
VTA dopamine surges  (The trick is generating them.)

What is courage?
Often, desperation seen from a distance.  (But there are definitely other types.)

Does happiness factor into ethics? (In other words, does being a good person mean being a happy person?)
Unfortunately, not necessarily.

What is the purpose of art?
emotional satisfaction

Some of my answers might be different tomorrow, next week, or next year.

What are your answers?

Posted in Philosophy | 2 Comments

The real goal and challenge of establishing off-world colonies

David Warmflash (a very cool name) has a post up at Discover looking at the issues with establishing off-world colonies: Forget Mars. Here’s Where We Should Build Our First Off-World Colonies.

The collective space vision of all the world’s countries at the moment seems to be Mars, Mars, Mars. The U.S. has two operational rovers on the planet; a NASA probe called MAVEN and an Indian Mars orbiter will both arrive in Mars orbit later this month; and European, Chinese and additional NASA missions are in the works. Meanwhile Mars One is in the process of selecting candidates for the first-ever Martian colony, and NASA’s heavy launch vehicle is being developed specifically to launch human missions into deep space, with Mars as one of the prime potential destinations.

But is the Red Planet really the best target for a human colony, or should we look somewhere else? Should we pick a world closer to Earth, namely the moon? Or a world with a surface gravity close to Earth’s, namely Venus?

Warmflash’s post is interesting and I recommend reading it in full.  He looks at various alternatives to Mars for colonization, such as the Moon, Venus, and free-space colonies.  Mars currently has the imagination of space enthusiasts, so getting them to focus on another location may be challenging, but the discussion is worth having.

But I want to focus on something else that Warmflash notes.

To explore this issue, let’s be clear about why we’d want an off-world colony in the first place. It’s not because it would be cool to have people on multiple worlds (although it would). It’s not because Earth is becoming overpopulated with humans (although it is). It’s because off-world colonies would improve the chances of human civilization surviving in the event of a planetary disaster on Earth. Examining things from this perspective, let’s consider what an off-world colony would need, and see how those requirements mesh with different locations.

I think this is an insightful observation.  We don’t want to create colonies because of the farming opportunities on Mars or anywhere else (we essentially have to create that farmland), but to diversify the location of humanity, to avoid having all of our eggs in one basket.  The idea is that doing so might offer some protection in case of some global catastrophe such as nuclear war or an asteroid strike.

Here’s the problem.  In order for that to be feasible, a colony would have to be a completely independent ecosystem.  It would have to be its own biosphere, or at least part of a collection of colonies that are self sufficient.  And, as of right now, we don’t really know how to do that.  Any colony would be crucially dependent on Earth’s biosphere for its survival, at least for the foreseeable future.

The success of the International Space Station can be misleading here.  After all, don’t we have astronauts living up there for months at a time?  We do, but only with frequent supply runs from Earth.  It’s easy to say that a Mars colony would be self sufficient, but we simply have no evidence yet that it could be.  At best, in preparation for a Mars mission, we’ve done experiments isolating people for a few months, but even in these cases the crews were heavily supplied at the beginning.

To me, this indicates that, if we’re being completely rational about this, the place to have the first colonies is close.  I’m tempted to say we should do it first here on Earth with underground colonies.  If we can’t create a self-sufficient colony here on Earth, the possibility of doing so in space seems pretty grim.

Interestingly, such colonies would actually be progress toward the goal of protecting humanity, since self-sufficient underground colonies would have a greater chance of surviving some global disaster.  Emotionally, of course, no one wants to migrate underground on Earth.  But given that we’re talking about living underground on Mars, we should carefully contemplate the difference in day to day circumstances once the romance of living on another planet fades.

Of course, we could never know for sure that such colonies were actually completely independent, that we hadn’t overlooked some dependency the colonies were benefiting from, that there wasn’t some leak allowing input from Earth’s overall biosphere.  That’s why the next step should be near-space colonies or colonies on the Moon, where help would only be a day or two away if the colony’s self-sufficiency proves, well, insufficient.

Is anyone going to heed this line of reasoning?  Probably not.  As I said above, Mars has everyone’s imagination.  I predict we’ll eventually colonize it, with colonies that will have an extended, but crucial, lifeline to Earth’s biosphere.

I also strongly suspect that having real protection from extinction will eventually involve us modifying ourselves rather than creating pockets of our existing biosphere on other planets or in space.  Once we have the technology, it will be much cheaper.

Posted in Space | 6 Comments

Farewell to determinism


An interesting post on why determinism is false. As someone who is not a hard determinist, I agreed with the author until, toward the end, he declared that “superdeterminism” would imply that we can’t know anything about physics. I’m not a superdeterminist, but this conclusion didn’t seem to follow for me. I’m also a little suspicious of the references to free will and religion at the end.

The many worlds interpretation of quantum mechanics comes up in the comments, and some argue that this post is more epistemic (about what we can know) than ontological (about what actually is). To me, this raises the question, if the universe is epistemically indeterministic, isn’t any assertion of ontological determinism essentially just a supposition, an untestable hypothesis?

Originally posted on Scientia Salon:

by Marko Vojinovic


Ever since the formulation of Newton’s laws of motion (and maybe even before that), one of the popular philosophical ways of looking at the world was determinism as captured by the so-called “Clockwork Universe” metaphor [1]. This has raised countless debates about various concepts in philosophy, regarding free will, fate, religion, responsibility, morality, and so on. However, with the advent of modern science, especially quantum mechanics, determinism fell out of favor as a scientifically valid point of view. This was nicely phrased in the famous urban legend of the Einstein-Bohr dialogue:

Einstein: “God does not play dice.”

Bohr: “Stop telling God what to do with his dice.”

Despite all developments of modern science in the last century, a surprising number of laypeople (i.e., those who are not familiar with the inner workings of quantum mechanics) still appear to favor determinism over indeterminism. The point of this…

View original 3,599 more words

Posted in Zeitgeist | 4 Comments

Blindsight by Peter Watts, a review


I recently read Peter Watts’s book, ‘Blindsight’, a hard(ish) science fiction novel about first contact with extraterrestrials.  This is a book that’s been out for several years, published in 2006 and nominated for a Hugo Award, so I’m a bit late to the party.  Indeed, since I started this blog in November, a number of people have recommended this book to me as an interesting commentary on the human mind and consciousness.

On balance, I enjoyed the book, but I found it required a lot of work to read.  The problem was that I found Watts’s style of writing, at least in this book, confusing.  He seems to delight in finding unusual ways to describe things, often obliquely referring to scientific concepts to give insights into a situation.  This was fine for those scientific concepts that I already understood, but it often left me out in the cold if I wasn’t familiar with them.  (I’m pretty scientifically literate, so if I struggled, I think a typical lay reader would also.)

Watts also has a frustrating habit of referring to the same character with several different names or labels, often leaving me confused about who exactly was saying or doing things.  This was particularly confusing early in the book since one of the characters has multiple personalities, and I wasn’t always sure whether a name or description applied to one of those personalities, or was just another label for one of the other characters.

And Watts often likes to describe things indirectly, giving you sensory impressions of the narrator without coming out and just saying what’s going on, counting on the reader to put the picture together.  Often, though, I didn’t find that enough information had been given for me to do that; he depended a little too much on my ability to read between the lines.

Of course, lots of authors do these things, but I found having all of them together left me in an ongoing cloud of confusion as to exactly what was going on.  Because of these difficulties, I almost stopped reading the book at around the hundred page mark.  I ultimately soldiered on because of the many recommendations, and because, in spite of the confusing writing style, I still found the story and characters interesting.

As the story begins, tens of thousands of alien probes fall into Earth’s atmosphere, scanning and transmitting information to an extraterrestrial location before they burn up.  Naturally this causes great alarm.  A quick analysis shows that the transmissions went to an object in the Kuiper belt.  A team is quickly thrown together to go out and investigate.  The Kuiper belt object turns out to be a decoy, leading the team to a rogue planet, a gas giant, about half a light year from the solar system, with alien machines in orbit, including what appears to be a controlling structure that quickly becomes the focus of the team’s efforts.

The story is told in the first person.  The narrator, due to severe epilepsy, had half his brain removed as a child.  This has left him a very strange person, supposedly missing huge amounts of the cognitive attributes of humanity.  He has learned to cope, essentially faking his humanity to get along with everyone else.  The result is that he is often described as, and thinks of himself as, a human Chinese Room.  Ironically, his name is Siri, the same as Apple’s iOS digital assistant service.  (Although this was written years before Apple developed it.)

The crew also includes a scientist who is apparently heavily cyborged, a linguist who has divided her brain into several personalities, a soldier who controls an army of battle robots, and a vampire, who is in charge.  Yes, a vampire.

Initially, the idea of vampires existing in what was otherwise a straight science fiction tale threw me out of the story.  But Watts does a good job of naturalizing them.  In this story, they aren’t supernatural, just an extinct offshoot of humanity, recently brought back through technology, with a mutation that turned them into predatory cannibals.  One traditional aspect of vampires is retained: their fear of crosses.  (An aspect of their mutation causes them to go into convulsions when they see perpendicular shapes.)

The title of the book, “blindsight,” refers to a condition in which someone’s eyes are functional, receiving light and sending signals on to the brain, but some problem in the brain prevents the person from actually being conscious of what they’re seeing.  They can see, but they’re still blind.  It’s a type of blindness sometimes seen in patients with certain kinds of brain damage.  The phenomenon becomes a plot point in the story.

The narrator, and all of the other characters, serve as vessels to explore the nature of the human mind, and of consciousness.  Each of the characters ends up serving as a contrast to normal humans.  A contrast that sheds light on how the minds of regular humans work.  And at a certain point in the story, the central AI of the team’s ship, ominously called “Captain,” itself becomes an important player, providing an additional contrast.

And when we’ve explored those human variations to some extent, the aliens are introduced and we’re treated to a comparison between how they and humans think.  In the end, there turns out to be an astonishing difference, a difference that, in the story, has profound, disturbing implications for the future of humanity.

It’s difficult to say much more without getting into spoilers.  The book was a hard slog, but I found the payoff to be worth it.  If you like science fiction, and are interested in the human mind, in consciousness, its evolutionary purposes, and how an alien mind might be different from ours, then I recommend it.

One thing that spurred me to read this book was that Watts has just come out with a sequel, ‘Echopraxia’, which I may get around to reading at some point, and possibly reviewing here.

Posted in Science Fiction | 2 Comments