The unavoidable complexity of morality

I’ve written before on why science can’t determine morality.  This isn’t a particularly controversial position (even if many of Sam Harris’s or Michael Shermer’s followers find it so).  No one seems to have found an intellectually rigorous answer to David Hume’s is/ought divide: you can’t derive an ought from an is.  To logically derive values, your premises must include at least one value, which means anyone who doesn’t accept that value will reject your conclusion, no matter how much empirical evidence you have for it.

But I’ve also argued that, while it can be helpful, philosophy cannot determine morality either, which is a more controversial position.  Philosophy can argue for the relative pragmatic usefulness of certain ethical systems, but I haven’t yet seen one that matches all the moral intuitions most of us share.

For example, Jeremy Bentham’s original conception of utilitarianism focused on the value of happiness, and argued that what was moral was that which maximized happiness.  Setting aside the difficulty of defining “happiness”, we’re still left with the question of whether to focus on short-term or long-term happiness.  John Stuart Mill later clarified that we should focus on “higher” forms of happiness, but that just left the question of what is “higher” and what is “lower.”

The problem is that maximizing happiness often violates our moral intuitions.  One of the classic examples is sacrificing one healthy patient to save five patients in need of various organ transplants.  If we do this, we’ve maximized happiness for the population of six patients, but I don’t know too many people who would think it would be an ethical move.

A determined utilitarian might argue that we need to take into account the longer-term implications of such a move on the happiness of society.  But that gets at one of the problems with utilitarianism, or any consequentialist framework: where to draw the line on foreseeable consequences.  Often we stop when we’ve achieved logical justification for our pre-existing intuition, which essentially makes the logic redundant.

Deontology has similar weaknesses.  How many rules can we say are truly categorical, that is, rules we can follow with absolute consistency?  Lying in general might be bad, but lying to a Nazi looking for the Jewish family you have hidden in the basement is something most of us would see as the highest virtue.

Since they must have at least one value in their premises, all of these systems include one (happiness in classic utilitarianism, freedom of choice in preference utilitarianism, consistency in deontology, etc.).  The problem is that, in order to keep things as logical as possible, they try to limit themselves to just that one value, or at least a minimal number of values.

Some naturalists (philosophical, not nudist) seek the one essential value in evolution.  Our moral intuitions evolved because they made us more likely to survive, so why not use that for our overriding value?  Or, perhaps more precisely, why not use the preservation and maximum propagation of genes, since that’s the actual measure of success in evolution?

The problem is that, while our intuitions did evolve because of their usefulness for genetic success, even using that for our overriding value doesn’t work.  Why?  Because evolution isn’t a precise engineer.

Remember that evolution works in two stages.  The first stage is random mutation.  Some of the mutations will aid in genetic success, some will hinder it, while others will be irrelevant.  Natural selection, the second stage, will strain out the mutations that hinder genetic success.  The remaining traits will be those that either promote or are irrelevant to genetic success.

The issue is that, for evolution, what mostly matters is a trait’s effects on us up to and during the reproductive part of our life cycle.  But traits that aid in survival, such as fear of death, pain, etc., continue after we’re done reproducing.  Nor do these traits go away if the original reason for their evolutionary success is absent, such as in a person who is infertile.

Perhaps most strikingly, pain, a trait which ensures that we resist damage to our bodies that may threaten our reproductive success, doesn’t go away even if we know it won’t have any effect on that reproductive success, or even on survival.  This is why waterboarding is torture, even if the prisoner sees the medical practitioner standing by, ready to make sure they don’t die.  Pain and suffering are still pain and suffering, even if the original cause of their evolutionary development is absent.

Put another way, our moral intuitions arise from foundational instincts, instincts that developed because of their ability to aid reproductive success, but the intuitions from those instincts are a superset of what is strictly necessary for reproductive success, or even survival.  That’s why we can’t boil morality down to survival or genetic success.

Okay, so we can’t boil morality down to one value.  Any system that attempts to do so inevitably ignores many moral intuitions.  But could we examine human instincts and maybe boil morality down to a minimal set of values?

The problem here is that our moral intuitions are often inconsistent.  Indeed, they are often in outright opposition to each other.  An example is the never-ending tension between security and freedom, exemplified in the recent fight between Apple and the FBI.  Security and freedom both arise from human instinct.  By what objective measure do we designate any one spot on the spectrum of the tension between these values as the “right” one?

Where does this leave us?  Well, despite the fact that I don’t think consequentialism or deontology can determine morality, that doesn’t mean they can’t be helpful at times, along with game theory frameworks.  When genuinely trying to resolve a moral conundrum, they can be useful.  But their usefulness only helps us apply our pre-existing values to particular situations or notions.  They can’t be the final word.  That has to be left to the actual values we hold and can build a societal consensus on (as imperfect as most of us will find that answer).

I’m occasionally asked, given my skepticism of moral systems, what exactly my morality is.  The answer is a sort of consequentialism that generally aims to minimize suffering and maximize the potential for happiness.  But I make no claim to being rigorously logical about it.  For example, if I have the choice between helping a friend or two strangers, then even though helping the strangers may maximize well-being, all else being equal, I’m going to help my friend, and I won’t feel I did anything immoral, because loyalty to my friends feels like an important value to me.

But maybe I’m missing something?

Rationally Speaking: What virtues, and why?

An example of a tree of virtues. (Photo credit: Wikipedia)

At any rate, what I’d like to do here is to explore a bit more of my own preferred framework for ethics, neo-Aristotelian virtue ethics (the “neo” prefix should alert the reader that I’m not about to defend everything Aristotle said, but rather discuss an updated version of the idea, based of course on his original insights). Specifically, I want to focus on the concept of virtue and the work that it can do in moral philosophy.

via Rationally Speaking: What virtues, and why?

Massimo Pigliucci addresses a common criticism of virtue ethics: how does one identify what is a virtue and what is a vice?

Virtue ethics is one of the three major philosophical ethical frameworks.  The other two are consequentialism and deontology.  I considered myself a consequentialist for a long time, although as time has passed, I’ve drifted more and more toward virtue ethics.

I like virtue ethics because, unlike the other ethical frameworks, it is honest about its goal being to help you live the good life, and about its foundations (virtues and vices) not being objective.  It gets a lot of criticism for that from advocates of utilitarianism, the most popular form of consequentialism.

But, as I covered in a previous post, none of the normative philosophical frameworks can ultimately claim objectivity.  None of them can make the decisions for you.  If you think they can, then think about how you judge what maximizes utility in utilitarianism (or more broadly, how you evaluate consequences in consequentialism), or how you judge which rules are good in deontology.

Ultimately, you judge these systems by your pre-existing values, which arise from a combination of your evolved instincts for cooperation (which vary by person) and social learning (which varies by culture).  We have no choice but to do the hard work of finding rules of conduct that most of us can live with.

That said, I do think moral philosophy can clarify our thinking.  It just can’t make the decisions for us.  In that light, I find it productive sometimes to run a moral dilemma through all three systems, doing a virtue check, a utilitarian check, and a deontological one.

If a proposition promotes the good life (virtue ethics), maximizes utility (consequentialism), and is a rule I’d be comfortable having turned against me (deontology), then I can usually feel good about it being moral.  At least unless it gets vetoed by my revulsion reflex, in other words, by my evolved instincts.
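For the programmers in the audience, here’s a toy sketch of that heuristic in Python.  To be clear, this is just my own playful framing: the function and parameter names are illustrative labels, and each input stands in for a judgment only a person can actually make.

```python
# A toy sketch of the "three checks plus a veto" heuristic described above.
# The names are purely illustrative; each boolean stands in for a human
# judgment, not something a program could actually compute.

def seems_moral(promotes_good_life: bool,
                maximizes_utility: bool,
                rule_survives_reversal: bool,
                triggers_revulsion: bool) -> bool:
    """Pass the virtue, utilitarian, and deontological checks,
    subject to a veto by instinctive revulsion."""
    passes_all_checks = (promotes_good_life
                         and maximizes_utility
                         and rule_survives_reversal)
    return passes_all_checks and not triggers_revulsion

# The organ-transplant case from earlier in the post: it might maximize
# happiness for the six patients, but it fails the other checks and trips
# the revulsion veto.
print(seems_moral(promotes_good_life=False,
                  maximizes_utility=True,
                  rule_survives_reversal=False,
                  triggers_revulsion=True))  # False
```

Of course, the hard part is everything those booleans hide, which is exactly the point: the frameworks structure the reasoning, but the judgments still come from our values.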