The unavoidable complexity of morality

I’ve written before on why science can’t determine morality.  This isn’t a particularly controversial position (even if many of Sam Harris or Michael Shermer’s followers find it so).  No one seems to have found an intellectually rigorous answer to David Hume’s is/ought divide: that you can’t derive an ought from an is.  To logically derive values, your premises must include at least one value, which means anyone who doesn’t accept that value will reject your conclusion, no matter how much empirical evidence you have for it.

But I’ve also argued that, while it can be helpful, philosophy cannot determine morality either, which is a more controversial position.  Philosophy can argue for the relative pragmatic usefulness of certain ethical systems, but I haven’t yet seen one that matches all the moral intuitions most of us have.

For example, Jeremy Bentham’s original conception of utilitarianism focused on the value of happiness, and argued that what was moral was whatever maximized happiness.  Setting aside the difficulty of defining “happiness”, we’re still left with the question of whether to focus on short-term or long-term happiness.  John Stuart Mill later clarified that we should focus on “higher” forms of happiness, but that just left the question of what counts as “higher” and what as “lower.”

The problem is that maximizing happiness often violates our moral intuitions.  One of the classic examples is sacrificing one healthy patient to save five patients in need of various organ transplants.  If we do this, we’ve maximized happiness for the population of six patients, but I don’t know many people who would consider it an ethical move.

A determined utilitarian might argue that we need to take into account the longer term implications of such a move on the happiness of society.  But that gets at one of the problems with utilitarianism, or any consequentialist framework: where to draw the line on foreseeable consequences.  Often we stop when we’ve achieved logical justification for our pre-existing intuition, which essentially makes the logic redundant.

Deontology has similar weaknesses.  How many rules can we say are truly categorical, that is, rules we can follow with absolute consistency?  Lying in general might be bad, but lying to a Nazi looking for the Jewish family you have hidden in the basement is something most of us would see as the highest virtue.

Since they must have at least one value in their premises, all of these systems include one (happiness in classic utilitarianism, freedom of choice in preference utilitarianism, consistency in deontology, etc.).  But in order to keep things as logical as possible, they try to keep it to just one value, or at least a minimal number.

Some naturalists (philosophical, not nudist) seek the one essential value in evolution.  Our moral intuitions evolved because they made us more likely to survive, so why not use that for our overriding value?  Or, perhaps more precisely, why not use the preservation and maximum propagation of genes, since that’s the actual measure of success in evolution?

The problem is that, while our intuitions did evolve because of their usefulness for genetic success, even using that for our overriding value doesn’t work.  Why?  Because evolution isn’t a precise engineer.

Remember that evolution works in two stages.  The first stage is random mutation.  Some of the mutations will aid in genetic success, some will hinder it, while others will be irrelevant.  Natural selection, the second stage, will strain out the mutations that hinder genetic success.  The remaining traits will be those that either promote or are irrelevant to genetic success.
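
To make that two-stage picture concrete, here’s a toy simulation (purely illustrative; the traits, numbers, and fitness function are all made-up assumptions, not a biological model).  Blind mutation proposes variations, selection strains out the ones that hurt “genetic success,” and a trait irrelevant to that success just drifts along untouched:

```python
import random

def mutate(genome):
    # Stage 1: random mutation.  Each trait shifts by a small, blind amount.
    return [g + random.gauss(0, 0.1) for g in genome]

def fitness(genome):
    # Toy "genetic success": only trait 0 is under selection; trait 1 is neutral.
    return genome[0]

population = [[0.0, 0.0] for _ in range(100)]
for generation in range(200):
    # Randomly chosen parents each produce a mutated offspring.
    offspring = [mutate(random.choice(population)) for _ in range(100)]
    # Stage 2: natural selection strains out the less successful variants.
    offspring.sort(key=fitness, reverse=True)
    population = offspring[:50] * 2  # the fitter half repopulates

average = lambda i: sum(g[i] for g in population) / len(population)
print("selected trait:", average(0))  # climbs steadily generation by generation
print("neutral trait: ", average(1))  # merely drifts, neither pruned nor promoted
```

The point of the sketch is the asymmetry: selection only ever filters on what affects reproductive success, so anything orthogonal to it comes along for the ride.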

The issue is that for evolution, what matters, mostly, is a trait’s effects on us up to and during the reproductive part of our life cycle.  Traits that aid in survival, such as fear of death, pain, etc., continue after we’re done reproducing.  The traits also don’t go away if the original reason for their evolutionary success is absent, such as in a person who is infertile.

Perhaps most strikingly, pain, a trait which ensures that we resist damage to our bodies that may threaten our reproductive success, doesn’t go away even if we know it won’t have any effect on that reproductive success, or even on survival.  This is why waterboarding is torture, even if the prisoner sees the medical practitioner standing by ready to make sure they don’t die.  Pain and suffering are still pain and suffering, even if the original cause of their evolutionary development is absent.

Put another way, our moral intuitions arise from foundational instincts, instincts that developed because of their ability to aid in reproductive success, but the intuitions those instincts produce are a superset of what is strictly necessary for reproductive success, or even survival.  That’s why we can’t boil morality down to survival or genetic success.

Okay, so we can’t boil morality down to one value.  Any system that attempts to do so inevitably ignores many moral intuitions.  But could we examine human instincts and maybe boil morality down to a minimal set of values?

The problem here is that our moral intuitions are often inconsistent.  Indeed, they are often in outright opposition to each other.  An example is the never-ending tension between security and freedom, as exemplified in the recent fight between Apple and the FBI.  Security and freedom both arise from human instinct.  By what objective measure do we designate any one spot on the spectrum of tension between these values as the “right” one?

Where does this leave us?  Well, despite the fact that I don’t think consequentialism or deontology can determine morality, that doesn’t mean they can’t be helpful at times, along with game theory frameworks.  When genuinely trying to resolve a moral conundrum, they can be useful.  But their usefulness only helps us apply our pre-existing values to particular situations or notions.  They can’t be the final word.  That has to be left to the actual values we hold and can build a societal consensus on (as imperfect as most of us will find that answer).

I’m occasionally asked, given my skepticism of moral systems, what exactly is my morality?  The answer is a sort of consequentialism that generally aims to minimize suffering and maximize the potential for happiness.  But I make no claim to being rigorously logical about it.  For example, if I have the choice between helping a friend and helping two strangers, even though helping the strangers may maximize well-being, all else being equal, I’m going to help my friend, and not feel like I did anything immoral, because loyalty to my friends feels like an important value to me.

But maybe I’m missing something?

18 thoughts on “The unavoidable complexity of morality”

  1. I do not think you are missing anything. I think the problem comes in trying to generalize a morality. We could just study moralities as they exist and look for common elements (I think this has been done), but this, too, misses the point. A moral system seems to me to be a social compact. It is negotiated by people in a community and spreads culturally. Conflicts occur when peoples from different communities come into contact and opinions are formed about those vile or wonderfully kind people “over there.” There are some inherent samenesses in these systems for obvious reasons, but we, of course, focus on their differences.

    The creation of a universal moral system might just be a fool’s errand. Even if we were to find one, what would happen when we came into contact with aliens from other worlds? I think whoever said that if horses had hands they would draw pictures of their gods to look like horses could very well say the same thing about systems of morals.

    This is a subject where the journey is more important than the destination. I do not think a moral system is truly generalizable, but continuing the quest is vital to our communication with one another about what we think is important in human behavior.

    1. Very well said. I particularly like the observation about social compacts and how it explains culture shock.

      On the creation of a universal moral system, I tend to agree, at least regarding any explicit attempt to do so. Any such attempt could never rise above the suspicion that more powerful or influential societies were tipping the scales.

      But I do tend to think that a more or less global morality will develop, if only because the world is gradually becoming a more interconnected place. That’s not to say that there won’t be sub-cultures with their own unique mores, but just as a general meta-moral consensus usually emerges in most countries, I think there will eventually be such a consensus around the world.

      Of course, even without aliens, if humans expand out into the universe, the vast distances and the speed of light barrier will again create separate cultures, and inevitably differing moral systems.

  2. Superb article. Like you, the measure I seem to inevitably fall back on is the reduction of suffering. Does an action increase or decrease suffering for others? It seems quite robust, and I haven’t yet come across a situation where it cannot be used as an effective measure of moral behaviour. Adding “happiness” just complicates it, for it is thoroughly subjective. An opinion, after all, is all that stands between pragmatism and hostility. A sentiment is all that differentiates entertainment from cruelty. An impression is the only thing that separates the stimulating from the terrifying, and a judgment, truly, is the only thing that disentangles the appalling from the delicious.

    1. Thanks John!

      On happiness, if you’ll note, I actually used the phrase “potential for happiness.” That wording wasn’t casual. We can’t make anyone happy. All we can do is enable the opportunities for happiness.

      Of course, you might equate absence of suffering with potential for happiness, but I’m not sure I’d agree. For example, providing an education doesn’t in and of itself alleviate suffering, but being educated certainly increases the potential for a happy life. That said, whether this is a meaningful distinction depends on what we’re prepared to call “suffering.” Is an uneducated person who doesn’t know what they’re missing “suffering”?

  3. For practical purposes, I act instinctively. After acting, I can look back and rationalise, trying to give reasons or justification for what I did.
    In fact, even the example you give of choosing between a friend and strangers is acted upon instinctively, almost without thought.

    1. Definitely. I do think there is considerable value in studying the logic of our moral positions, but it’s extremely easy to fall into rationalizing and fool ourselves into thinking that we’re being more rational than we actually are.

  4. I agree that there hasn’t been one all-encompassing ethical system that seems to work in all cases. The interesting thing to me is that it’s not necessarily logic that tells us what’s flawed in each system, but a sort of intuition. I don’t mean a willy-nilly feeling about something, but (as in the Kant Nazi example) a nearly universal disgust with carrying out that system to its extreme conclusion (in this case, assuming you have no responsibility over the causal world outside your own actions, not even when the problem is literally at your doorstep). The problem in this case isn’t logical invalidity or unsoundness, it’s that we intuit wrongness about it. Not lying to the Nazi is just icky.

    And yet, we don’t want ethics to be founded on intuition. It seems we ought to have something more solid than that. There’s no guarantee that intuition will lead us to make the right choice, but it’s often the only backup to contradictory reasonable choices. Seeing that intuition is what leads us to conclude a certain system isn’t all-encompassing helps to take the sting out of that a bit. We trust intuition when it criticizes a certain ethical system, but we don’t trust it when we want to create an ethical system. Hm.

    I see what you mean about one philosophical ethical system contradicting others, and so the impulse is to say that none of them are entirely right. I agree. I tend to see them as tools which match specific circumstances. (When a Nazi comes to your door, don’t pick the Kantian tool! That one doesn’t do the job.) On certain issues (abortion, freedom vs. security, etc.) there’s a sense that we all have differing views on right and wrong, and intuition in these areas is bound to produce chaos. If we take out our toolbox and find there is no right tool, we might find that there’s some truth to each side and we simply have to draw an arbitrary line.

    But I’m afraid intuition is sometimes the only judge, flawed though it is, especially in private matters that don’t impact great numbers of people. The occasions when we actually experience moral dilemmas, like those in the thought experiments, are fairly rare. (I can think of one time in my life where I had to really pull my hair out to make a decision.) Most of the time, it’s not about doing the right thing, but bettering ourselves. Helping your friend rather than helping two strangers sounds pretty justified to me. You’ve decided that “maximizing happiness for the greatest number of people” is not the right tool in this situation. You’ve chosen to be a good friend.

    And this is why, out of all the systems I’ve encountered, virtue ethics (I prefer Aristotle, but Stoicism fits here too) seems to me the most applicable in the most situations. Focus on improving yourself rather than saving the world. That said, I don’t buy into the idea that we can’t control anything but our own behavior and attitude toward the world… there are many times when doing the right thing means assuming that you can effect change in someone else. It’s called influence, and we do it all the time, especially with those we care about.

    1. Well said. I agree across the board.

      On virtue ethics, I do see a lot of value in it, particularly since it starts with the interests of the adherent. (Indeed, when it comes to personal morality, I’m more of a virtue ethicist than a consequentialist.) I particularly like that it recognizes that we may have unique obligations to friends, families, and other comrades, that we don’t necessarily have to total strangers. I also like its insight that virtues have to be practiced and cultivated, developed into a habit.

      But like you said, no tool is useful for every situation. When it comes to questions of public policy, where personal relations explicitly aren’t supposed to be significant, I do find consequentialism and deontology more promising.

      Agreed on stoicism. I think it has insights on coping with reality as it is, rather than as we’d like it to be. Some battles are hopeless, others are worth having, and knowing the difference can be tough. But I think it’s worth the effort to try figuring it out. Concluding that they are all hopeless is far too fatalistic for my tastes.

  5. I see I’ve been pondering this piece for a week – thanks!

    Damasio’s “biological value” (‘Self Comes to Mind’, 2010) always comes to mind (sorry) and feels right (i.e., it agrees with what I already believe), but it may be out of context in this discussion. I’m fairly certain that Damasio isn’t saying science can determine morality.

    A couple of things I came across:

    http://peterturchin.com/academic-publications/

    … many titles of interest including:

    ‘Religion and Empire in the Axial Age’, Turchin 2012

    Bellah_RBB.pdf (PDF)

    … which will mean more to you having read Bellah’s book. The following also popped up at some point:

    ‘Deontic introduction: A theory of inference from is to ought’, Elqayam et al 2015, Journal of Experimental Psychology: Learning, Memory, and Cognition

    … but I couldn’t find a pre-print and don’t have access. Also, though unrelated, I read a couple of pieces on a recent paper regarding the “time slice theory of consciousness” – will have to read more.

    (I finish McGilchrist this week 🙂 )

    1. Thank you Mark. Always good to hear that one of my pieces makes someone think.

      The Turchin review is interesting. I definitely agree with him that Bellah’s book is a difficult and exasperating one. It’s striking to me how different our takeaways are from it. My own Bellah-inspired post focused on theoretic culture, but his seems to focus on the evolution of egalitarianism. I don’t recall Bellah’s discussion of the U-curve stuff. (Although given how much of that tome I’ve probably forgotten, or never really processed due to its often tedious nature, I shouldn’t be too surprised.)

      A good summation might be: in the Neolithic, we overrode our preference for egalitarianism to form large-scale societies, but were never really happy about it. As literacy and other forms of communication became more prevalent, along with larger agricultural surpluses, some visionaries saw the chance to return to a more egalitarian framework, which led to universal moralities, stable empires, etc. It’s a narrative of progression, which is appealing, perhaps providing some support for Martin Luther King’s arc of history bending toward justice (i.e. toward a social structure closer to the one we evolved in).

      Turchin’s publication list and blog look interesting. Just started following the blog. Thanks!

      I’d love to hear your impressions of McGilchrist when you’re done.

        1. I’m looking forward to checking out Turchin’s work and am down to a dozen or so more pages 🙂 – thanks for the summation; I think I’ll continue reading about ‘Religion in Human Evolution’ for the time being anyway.
