Life on the Billionth Rock From the Sun | Seth Shostak

Seth Shostak has an article at HuffPost on asteroids.  Not the usual we-need-to-prepare-for-incoming, but discussing something I’ve noted before that the space age needs: an economic incentive.  As some of us have discussed, mining asteroids looks like it might be an excellent candidate.

These rocks are a resource. The fact that they’re in small chunks makes mining them as appealing as cat videos. And at least two companies are considering doing just that. The consequences could be mind-boggling. According to John Lewis, chief scientist for Deep Space Industries, if humanity can improve its recycling efforts, then ores smelted out of just the nearest asteroids will supply the needs of 80 billion of us until that distant day on which the sun dies.

That sure beats the slow and inevitable impoverishment that will be our fate if we confine mining to our own back yards (or preferably someone else’s back yard). The asteroids aren’t so much a renewable resource as an endless one.

via Life on the Billionth Rock From the Sun | Seth Shostak.

Shostak also discusses the possibility of us living on asteroids, and on the rocks further out from the sun in the Kuiper belt and scattered disk.

The only issue with these far-out locations is their distance from the sun and the resulting inability to use solar energy.  Shostak talks about Freeman Dyson’s idea of using mirrors near the sun to beam energy out to specific colonies.  But that seems like it would quickly grow cumbersome with large numbers of such colonies.  To be economically feasible, those outer colonies would probably eventually need to figure out a way to use the materials on hand to generate energy.

I suspect any such colonies would be “manned” by robots, or post-humans.  Maintaining natural humans that far out from the sun would probably be too energetically expensive.

American positions on moral issues and tensions between the moral foundations

Gallup did a poll on American positions on various moral issues, finding that Americans are now more accepting than ever on a range of issues.


Most of these I don’t find particularly surprising.  Of course, it turns out that Democrats and Republicans have differences of opinion on many of them.  HuffPost, in their write-up of this, did a nice infographic:



Well, we agree that birth control is good and that things like affairs, cloning humans, and polygamy are bad.  But other than that, agreement is limited.  One thing I would be interested to know is how people would feel about unmarried or teen sex if proper birth control is exercised.

Anyway, as an interesting exercise, I decided to try to map these issues into Jonathan Haidt’s Moral Foundations theory.

A quick refresher: these moral foundations are thought to be the primal biological urges or instincts from which morality arises.  From cross-cultural studies, they appear to be universal across humanity, although individual humans feel them in differing combinations of magnitude, and cultural learning has a big effect on how they eventually map to a society’s values.

(By the way, fellow blogger Steve Morris has been doing a series of posts on these foundations which are well worth checking out.)

The foundations, listed in virtue / vice format are:

  • care / harm
  • fairness / cheating
  • loyalty / betrayal
  • authority / subversion
  • purity / degradation
  • freedom / oppression

These are the foundations that psychologists such as Haidt have been able to identify to date.  As more studies are conducted, others may be found, or one or more of the currently identified ones could eventually be broken up into multiple foundations.

It’s important to understand that the contentious issues above are tensions between these various foundations, with how you fall on an issue being determined by which foundations you are more motivated by, at least for that particular issue.

Here is my mapping, formatted as issue:foundation motivating acceptance vs foundation motivating disapproval.

  1. Birth control: freedom vs purity
  2. Divorce: care vs purity
  3. Unmarried sex: freedom vs purity
  4. Stem cell research: care vs purity
  5. Gambling: freedom vs purity
  6. Death penalty: fairness vs purity & care
  7. Wearing fur: freedom vs care
  8. Baby outside of marriage: freedom vs purity
  9. Gay / lesbian relations: care vs purity
  10. Medical testing on animals: care (of humans) vs care (of animals)
  11. Doctor assisted suicide: care vs purity
  12. Abortion: care (of the woman) vs purity & care (of the fetus)
  13. Cloning animals: care (of humans) vs purity & care (of animals)
  14. Pornography: freedom vs purity
  15. Teenage sex: freedom vs purity
  16. Suicide: care vs purity
  17. Polygamy: freedom vs purity
  18. Cloning humans: freedom vs purity & care
  19. Marital Affair: freedom vs loyalty

I’m a little nervous about how often I invoked “freedom”, when it might be more accurate to simply say “non-moral” motivation for some of them.  But I think “freedom” is relevant to a third party’s attitude toward that activity.  A woman may be motivated to wear animal fur for comfort and appearance, but a third party’s attitude toward allowing her to wear it seems motivated by the freedom impulse.

For many of the ones I labeled “purity”, the people opposed to it might say they oppose it for “care” reasons.  I tried to throw care in on these cases, but someone can always claim their motivation comes from care.  For example, many pro-life advocates claim their desire to restrict the mother’s actions is based on their care of her well-being.

Doing this, it became obvious to me that many of these positions are motivated by multiple foundations in combination.  I picked the ones I thought were most relevant, but you may well disagree.

The difference between life and machine


Addy Pross has an interesting post up at HuffPost looking at what actually makes life…life.

Most of us recognize that there is a fundamental difference between mechanical objects designed and created by man, no matter how sophisticated, and the naturally derived complexity of living things. In fact, my granddaughter, when she was just 2, already understood one basic difference. She loved toy dogs but was scared of real ones. Real dogs were unpredictable; she recognized that they had a mind of their own. All living things act on their own behalf, doggedly pursuing their agenda. That’s true even for mindless bacteria — no designer, no creative sculptor required.

I think this is exactly right and I’ve noted it in several older posts.  Living things have desires that relate to their own survival.  For mammals, this includes survival of their offspring, and for social animals it may include survival of their family or pack.

Machines don’t have this, at least not yet, and aside from some research projects, are unlikely to have it.  We build machines to fulfill our agendas, not to have their own.  Their programming will form their desires, which will be to fulfill their primary purpose or purposes.  So a navigation system will simply have a “desire” to fulfill its navigating function, and won’t be concerned about being replaced by a newer model next year.

We’re unlikely to regard a machine as sentient, as a  fellow being, until it has drives and desires we can identify with, until we can sense a fellow being there.

We program machines of course.  But then where do the desires of living things come from?  Evolution.  Our desires for survival, procreation, and all the rest, come from our evolved instincts, from the programming natural selection slowly developed in us.

One thing that can be a little disturbing to think about is what might happen if we ever reach the point where we can reprogram living things.  Essentially turn them into living robots.  Indeed, as machines become more sophisticated, the difference between machine life and engineered life will start to become blurred.  Just as a machine usually won’t care about its own survival, these reprogrammed animals might not either.

Of course Douglas Adams did think through this, and the result was classic.

The Unexpected Way Philosophy Majors Are Changing The World Of Business

Dr. Damon Horowitz quit his technology job and got a Ph.D. in philosophy — and he thinks you should too.

“If you are at all disposed to question what’s around you, you’ll start to see that there appear to be cracks in the bubble,” Horowitz said in a 2011 talk at Stanford. “So about a decade ago, I quit my technology job to get a philosophy PhD. That was one of the best decisions I’ve made in my life.”

As Horowitz demonstrates, a degree in philosophy can be useful for professions beyond a career in academia. Degrees like his can help in the business world, where a philosophy background can pave the way for real change. After earning his PhD in philosophy from Stanford, where he studied computer science as an undergraduate, Horowitz went on to become a successful tech entrepreneur and Google’s in-house philosopher/director of engineering. His own career makes a pretty good case for the value of a philosophy education.

more at The Unexpected Way Philosophy Majors Are Changing The World Of Business.

An interesting article at HuffPost on the value of studying philosophy.  As a manager who looks at my share of resumes, I’m not sure, from a career standpoint, how good an idea it actually is for students to major in philosophy if they don’t want to ultimately be philosophers.  It seems like most professional philosophers don’t really encourage lots of people to go into the field, primarily because, like many academic fields, it’s already somewhat crowded.

That said, I do think it’s a very good idea for everyone to learn the basics of philosophy.  I wish my undergraduate education had included introductory philosophy courses, in the same manner that it included introductory history, English, math, or economics.  All of those courses served me well, and a basic philosophy course would have been of immense value.  (As it was, I didn’t get exposed to any philosophy until my grad school research methods class.)

For most people, that basic course would be enough.  It’s the Pareto principle, or 80-20 rule: 20% of the effort usually yields 80% of the results.  I think having a basic understanding of philosophy tremendously broadens your horizons.  It should be as required as English courses.  Of course, if more people took those intro classes, more people would likely end up majoring in philosophy, so: two birds, one stone.

Why a Larger Multiverse Shouldn’t Make You Feel Small | Max Tegmark

The Higgs Boson was predicted with the same tool as the planet Neptune and the radio wave: with mathematics. Why does our universe seem so mathematical, and what does it mean? In my new book, Our Mathematical Universe, which comes out today, I argue that it means that our universe isn’t just described by math, but that it is math in the sense that we’re all parts of a giant mathematical object, which in turn is part of a multiverse so huge that it makes the other multiverses debated in recent years seem puny in comparison.

via Why a Larger Multiverse Shouldn’t Make You Feel Small | Max Tegmark.

Max Tegmark’s post at HuffPost promoting his new book, which discusses his theory that the universe is not merely described by mathematics, but actually is mathematics.  Of course, the observational difference between being fully described by mathematics and actually being mathematics might be semantic at some level.

I think Tegmark’s idea is interesting, but as one of the commenters on his post said, this isn’t science, it’s philosophy.  What observation could ever falsify it?  If we observe something that we can’t come up with a mathematical description of, someone can always assert that it’s just a temporary gap in our knowledge.

That said, reading his reasoning will probably be instructive about just how abstract our knowledge of particle physics is.  My blog’s name is a reflection of the fact that we and the universe may ultimately be nothing but patterns, structure, all the way down, a possibility that would fit completely with Tegmark’s thesis.

Singularity assumptions that should be questioned

The upcoming movie, Transcendence, looks like it will be interesting, but the trailer includes common assumptions about the singularity that I’m not sure are justified.

To be sure, the assumptions are held by a lot of singularity believers.  Below I offer some reasons why these assumptions shouldn’t be taken as self evident.

Assumption 1: There is almost infinite room to improve on human intelligence.

There could well be, but I’ve also read some studies that indicate that the human brain may be at an evolutionary optimal state given the laws of physics.  Machine intelligence may be able to go far past organic intelligence, or it may find itself faced with many of the same types of tradeoffs in processing speed, heat dissipation, energy consumption, and other factors.

A lot of this assumption is based on a projection of Moore’s law, the increasing power of computer processing chips.  However, Moore’s law is not an unlimited proposition.  It’s an S-curve one, a period of rapid growth that will eventually level out, and we don’t know where on the S-curve we are yet.  The ability to keep shrinking transistors on silicon chips is nearing its end, by 2020 at the latest.  Quantum computing may give it a new lease on life, but eventually we will hit the laws of physics and reach the top of the S-curve.
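The distinction between unending exponential growth and an S-curve can be sketched with a logistic function.  This is purely illustrative; the growth rate and ceiling below are arbitrary numbers, not estimates for actual chip technology:

```python
import math

# Illustrative comparison: pure exponential growth vs. a logistic
# (S-curve) that starts out looking identical but levels off at a
# fixed carrying capacity. All parameters are arbitrary.
def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, capacity=100.0):
    # Standard logistic curve starting at 1.0 with the same initial
    # growth rate as the exponential above.
    return capacity / (1 + (capacity - 1) * math.exp(-rate * t))

for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Early on the two curves are nearly indistinguishable, which is the problem: from inside the rapid-growth phase, you can’t easily tell whether you’re riding an exponential or approaching the flat top of an S-curve.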

But, some singularity believers will say, an AI could be networked across several nodes.   A networked machine intelligence could certainly be larger than any currently existing organic intelligence, but we don’t really have a good idea of what the tradeoffs for such an intelligence might be.  It might be that once a networked intelligence gets too large, too complicated, its mental processing might slow down, its ability for coordinated action might become compromised, and its ability to maintain a unified self could conceivably become problematic.

All that said, I personally suspect that human minds can be improved on significantly, but not to the astronomical levels often assumed.

Consider the technology of flight, where although we did pretty quickly surpass birds in velocity and altitude, the cruising speed of the common airliner today is still less than ten times that of a falcon.  Certainly we have the technology to go much faster, but it’s rarely worth the cost, at least with today’s technology.

I suspect AIs will be similar: a significant but not infinite improvement, limited by trade-offs and costs.  The idea of godlike AIs causing universal transcendence may be wishful thinking.

Assumption 2:  There will be unlimited processing capacity.  

Dreams of a post-scarcity society have been around a while.  The singularity just moves it into virtual computer environments.  Like the assumption of near infinite increases in intelligence, the assumption of unlimited processing capacity may be overly optimistic.

The idea here is that we will all upload our minds into shared computer environments, and then have the capacity to do whatever we want, spawn as many copies of ourselves as we’d like, explore any simulations we’d like, etc.

The problem is that there’s only so much raw material for making hardware.  (Not to say that there isn’t a lot of it out there.)  There’s also only so much power available to fuel that hardware.  We’ll certainly have a lot more capacity than today, but I see no real evidence that it will be unlimited.

That means resource scarcity will still be an issue, which implies that economic systems for allocating those resources, competition for acquiring them, and many of the related ancient ills will all likely still be around.

Assumption 3:  Everyone will prefer living in a shared computer environment.  

Perhaps, but it’s worth thinking about the disadvantages of living in such an environment, aside from the issues of not having unlimited processing capacity.

Withdrawing from the world may leave us blind to outside threats such as natural disasters or rivals from another environment.  If resources aren’t unlimited, there’s no reason to suppose that war or criminals will go away.  For survival purposes, at least some portion of a shared environment would have to be outward looking.

We’d also be at risk of losing our individual identity in such an environment.  Once we’ve uploaded ourselves, there’s no limit on what we could change about ourselves, or what changes could be imposed.  If our survival instinct is removed, there’s nothing stopping us from making our knowledge available to the collective, and then ceasing independent execution, ceasing independent existence.  

Many people, aware of this possibility, might resist the collective environments, opting for their own hardware, their own body.  Doing so would also provide independent mobility and agency in the world, a freedom that we might dearly miss in a collective environment, particularly if survival requires keeping track of, and responding to, what’s going on in the real world.

A very strange world

None of this is to say that a post-singularity world wouldn’t be unimaginably strange or that it might not provide solutions to many age old problems.  Only that the laws of nature will provide some constraints on that strangeness.

Much of the thinking around the singularity borders on semi-religious conceptions of a technological rapture: an event that will reset all of the world’s problems and usher in a new utopia, usually twenty years from whenever it is being discussed.

Either that, or it borders on apocalyptic thinking, with many concerned about what AIs might do to us: that we might find ourselves obsolete and in danger of extinction or enslavement.  I’ve already written about my views on this, but to summarize, I’m not particularly worried about it.

It would require that those AIs have something like our survival instinct, an impulse for self preservation along with preservation of our kin, that we only have because of billions of years of evolution.  We’d have to program that instinct into them, and if we can do that, we can also program an aversion to harming humans.

I think we should hold a healthy degree of skepticism for both utopic and apocalyptic visions of the singularity.

The future will be strange, and is impossible to predict with any accuracy.  But so was the future for medieval scholars, or for stone age foragers.  Today’s world would be largely incomprehensible to them, and to the extent that it was understandable, it would seem largely like a utopia.  Probably, if we could see it, the world of 2100 would be like that for us.

Saturday Morning Breakfast Cereal


Saturday Morning Breakfast Cereal.

This seemed tangentially related to my post about HuffPost’s new commenting policy and subsequent discussion.

If you’re not already reading Saturday Morning Breakfast Cereal, you’re missing out on a lot of awesome philosophical insight with an often hilarious bent.

Click through to read the rest.

Huffington Post commenting policy

So, just before making the last post, I discovered that HuffPost had changed their commenting policy, now requiring that people reveal and verify their real names.  HuffPost had previously promised that they would grandfather older accounts out of this policy, but have apparently rescinded that promise.  I understand why HuffPost is doing this, but I think it’s a mistake.

There are plenty of legitimate reasons why people may want to participate in discussions on the internet while remaining anonymous.  Anyone who has ever changed their political or religious views, or discovered their sexual orientation or identity, or who wants to discuss anything that they’re not ready to get back to their friends, family, teachers, boss, or overall community, will now effectively be locked out.

I understand that civility is an issue.  But there are lots of ways to handle that without forcing people out of anonymity.  HuffPost has a faving system to recognize quality comments.  Many other sites supplement this with a voting up and down system so that particularly poor or nasty comments get buried.  None of these are perfect, but neither is requiring people to use their real names.

This isn’t a personal issue for me.  My name is in the About document on this site and some of you already knew it anyway.  But I do think it is a real issue for the internet and free and open discussion.

When Will We Build the Starship Enterprise? | Seth Shostak

What if we could send humans anywhere at the speed of light, and at a rock-bottom price?

That’s eminently feasible if we send the information and not the protoplasm. No crew, just code.

Consider: The human genome consists of about 3.3 billion base pairs. Since there are only four types of pair, that amounts to 0.8 gigabytes of information, or about what you can fit on a CD. With a microwave radio transmitter, you could beam that amount of information into space in a few minutes, and have it travel to anyone at light speed.


via When Will We Build the Starship Enterprise? | Seth Shostak.

Seth Shostak has an interesting post on HuffPost, pointing out that we don’t actually need to travel to the stars, just beam our genome there and count on aliens at the other end to build a copy of us.  It’s an interesting piece, but my reaction is, why stop at the genome?  If there is a civilization on the other end that can instantiate the human body, then it could probably instantiate a connectome: a recorded mind.

Of course, while we know the genome can make a copy of us, sans life experiences, we’re not sure yet about the connectome being an accurate copy of the mind.  But we also don’t know yet that there’s anyone at the other end, so why not dream big?
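Shostak’s arithmetic is easy to check: four possible base pairs means two bits per pair.  A quick sketch (the link rate below is purely an assumption for illustration; the article doesn’t specify one):

```python
import math

# Back-of-envelope check of the numbers in Shostak's quote.
base_pairs = 3.3e9            # approximate size of the human genome
bits_per_pair = math.log2(4)  # four possible pairs -> 2 bits each

total_bits = base_pairs * bits_per_pair
total_gigabytes = total_bits / 8 / 1e9
print(f"{total_gigabytes:.2f} GB")  # ~0.8 GB, matching the quoted figure

# Transmission time at an assumed link rate (illustrative only;
# the actual rate of a deep-space microwave link would vary widely).
link_rate_bps = 100e6  # assume 100 Mbit/s
minutes = total_bits / link_rate_bps / 60
print(f"{minutes:.1f} minutes")
```

At the assumed 100 Mbit/s the transfer takes about a minute; slower links stretch that into Shostak’s “few minutes,” but either way the genome is a trivially small payload compared to sending protoplasm.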

6 Reasons Why People — Not Things — Will Make You Happier

The holiday season is the time to focus on what’s truly important: Spending quality time with friends and family, being thankful for all the blessings in your life, and showing how much you care by giving of yourself. But after being bombarded with commercials and marketing messages galore, it can be easy to forget what it is that makes the holidays special. After all, pressures to give, give, give and receive, receive, receive, only add fuel to the fire of materialism. Check out these reasons why it may be better to take the focus off of material comforts this season — and turn it back on what really matters.

More at: 6 Reasons Why People — Not Things — Will Make You Happier.

This article on HuffPost is worth checking out.  From what I understand, the research shows that money matters to a point, but only to a point. Once you have enough to live moderately comfortably, increased happiness is most likely to come from having good friends (whether they be relatives or not).  Interestingly enough, this was understood by  ancient Greek philosophers like Epicurus, so it’s old wisdom, but wisdom nonetheless.