Why science, philosophy, or religion cannot determine morality

There are some famous thinkers, Sam Harris and Michael Shermer among others, who are currently attempting to sell the idea that we should have a “science of morality”.  They assert that moral propositions reduce to matters of fact about the well-being of conscious creatures.  Many philosophers, such as Massimo Pigliucci, take umbrage at this, seeing morality as the purview of philosophy.  And, of course, many religious believers think that morality can only come from religion, preferably their own.  I think all of them oversell their respective areas.

First, religion.  Different religions, different denominations and sects within the same religion, different scriptures, and often even different sections of the same scripture all contradict each other on many moral commandments.  Believers are forced to choose which commandments they will obey.  Which ones they choose and which ones they ignore comes down to their pre-existing moral values.

Ethics, or moral philosophy, is the endeavor of using logic to decide what our codes of conduct should be.  There are three major moral frameworks: consequentialism (which includes utilitarianism), deontology, and virtue ethics.  Which one of these you favor depends on your pre-existing values.  (I personally lean toward virtue ethics.)  In consequentialism, you judge the consequences of an action according to those pre-existing values.  In deontology, you evaluate proposed categorical imperatives by those values.  In virtue ethics, your stance on what is a virtue or a vice will be based on those pre-existing values.

Now for science.  If someone doubts E=mc² or some other scientific theory, we can conduct experiments to determine whether it’s right or wrong, at least in principle.  What experiment could ever be conducted to determine the morality of corporal punishment?  Or any of the other issues that Pigliucci lists?

It’s important to understand that what people like Harris are arguing for is, first and foremost, a particular type of morality.  Only if and when that morality is accepted would empirical work on it be authoritative.

I do think science can inform morality in a major way.  We can’t scientifically determine whether or not corporal punishment is wrong, but we can scientifically explore its psychological and developmental consequences.  Even then, we will still judge whether those consequences are good or bad according to our pre-existing values.

So where do those pre-existing values I keep mentioning come from?  Another name for these pre-existing values is conscience.  Many religious believers, some of whom have reasoned themselves to this same point, say that our conscience is the voice of God (or the gods) talking to us, and that it should be obeyed.  More scientifically, this conscience, these pre-existing values, arise from our evolved instincts as social animals.  In other words, conscience is a survival tool, an adaptation.

The problem is that these instincts, these intuitions, often conflict with each other.  People feel some of these intuitions more strongly than others, and the relative strength of the different moral intuitions varies from person to person.  Societies devise rules to resolve these conflicts, with different societies resolving them in different ways.  Cultural rules and norms influence how we perceive and process these intuitions.  In other words, moral intuitions vary between people and cultures.

What are these intuitions?  Jonathan Haidt has written a book on his theory of moral foundations, ‘The Righteous Mind’.  If you’re interested in learning more about these foundations, I strongly recommend it.  It details why different people, notably conservatives and liberals, can sincerely disagree with each other about ethics.

Here are the foundations according to Haidt’s theory (he admits that there may be more):

  • Care
  • Fairness
  • Liberty / freedom
  • Loyalty
  • Authority
  • Sanctity / Purity

So, where does this leave us with regard to determining morality?  We have no choice but to do the hard work of finding codes of conduct that the majority of us can live with.  Science and philosophy (and for many, religion) can help, but ultimately they can’t make the decisions for us, as much as we might wish they could.

This entry was posted in Morality, Philosophy, Religion.

31 Responses to Why science, philosophy, or religion cannot determine morality

  1. They are all pieces of a single puzzle.

    Moral intent seeks good for others as well as for ourselves. (“Good” is something that meets a real need that we have as an individual, a society, or a species.)

    The “utility” of rules is to achieve that good.

    The consequentialist would use empirical data to estimate the benefits and harms expected by choosing one rule (all restaurant owners may choose who they will serve) over another (no people will be denied service based on a prejudice against a race, religion, gender, or orientation).
    Based on that analysis of expected consequences, a working rule would be chosen democratically.

    After a rule has been in play long enough to gain general acceptance, the deontologist will sum it up in a short “principle” so it can be easily remembered, and shroud it in rhetoric such as “inherent”, “natural”, or “God given” to stress its importance. And we’ll pretend we knew it all along.

    • I think that’s an excellent summation of the utilitarian viewpoint.  My question is, what’s a “real need”?  How do you judge whether a need is real or not?  You still have to rely on your conscience, your intuition, your evolved instincts.  And the answer that feels right to you may be different from mine (although we would probably agree on a lot).

      I’d be interested to know though, if you see any holes in that reasoning.

      • “Real need” is easy to define at the basic level: food, clothing, shelter, etc. The “real” is to distinguish what we need from what we want. We need food, but want cake. Lots of things we want are actually bad for us. Maslow has his hierarchy of needs, but as you move up, the things you really need become a lot more subjective, as you suggested.

        The fact that we don’t have a “God’s eye view” of the ultimate consequences means we do have to estimate the benefits and harms as best we can. And two persons may disagree as to what is a benefit or a harm. So data gathering, discussion, and a democratic vote usually establish a working rule. Further experience may lead to revising or discarding the rule.

        Rules are a work in progress, and they evolve as our moral judgment evolves.  But I believe that all rules are ultimately derived from moral judgment, which weighs their utility to improve good or reduce harm for everyone.

        • I intuitively agree, but consider that by saying “to improve good or reduce harm”, you are actually arguing for a specific type of morality, one based on the care/harm intuition.  Others might talk about the importance of fairness/proportionality, freedom, loyalty, respect for authority, and sanctity.  The ongoing liberal/conservative debate often boils down to a difference in which intuitions are stronger in each camp.  Liberals tend to favor the care/harm intuition (which, since I’m a liberal, is why I intuitively agree with you).  Conservatives tend to spread more evenly over fairness/proportionality (i.e. no freeloaders), loyalty, authority, and sanctity.  Libertarians tend to favor freedom over all the others.

          All of us tend to be impatient with others whose moral intuition balance is different from ours.  Their position seems hopelessly illogical from our intuitive starting point.  But from their intuitive starting point, it may be completely logical.

          • The “Why?” question eventually brings all values to the same criteria. What is the point of fairness or proportionality? Why should we value liberty, loyalty, or respect?

            Ultimately, we judge their utility for some purpose, and what other purpose than to make things better for everyone?

            Which brings up another moral wisdom from Jesus, “The Sabbath was made for man, not man for the Sabbath”. It seems Jesus was always pissing off the Pharisees by breaking the petty rules. He reminded us that, as my mother used to say, “there is a reason for the rule”.

          • That’s a good point.  But consider if you saw a friend speeding on a highway.  Would you turn him in for speeding?  His speeding may be endangering the lives of others.  But most people would regard anyone who did turn their friend in as a schmuck.  Dispensing with loyalty in favor of care isn’t always as easy as it seems.  (Obviously this gets easier as your friend’s actions scale up to greater degrees of harm.)

          • Interesting scenario.  Since a lot of us, including the police it seems, work on the principle that you won’t be pulled over until you’re 10 mph or more over the limit, the effective speed limit in a 55 mph zone is 65 mph.

            But your question is not really about a speeding violation, but rather a “reckless driving” scenario.  In Virginia I think this starts at 15 mph over the posted limit, or driving “too fast for existing road conditions”, like rain/sleet/snow.  Or perhaps driving while intoxicated.

            Sometimes you just gotta grab gray by the neck and shake it till it’s black and white!

            So, if your friend is actually driving recklessly, you want to convince him to stop, then and there, which is one rule. But if he is only speeding, you’ll probably want to treat him like you’d want him to treat you, which implies another rule.

            Like you suggested, the rule you choose will differ with the context. But there will be empirical data supporting each rule in its context. There will be the potential for different consequences, one getting a ticket and the other having a wreck.

            And how shall we define “loyalty” if it is not to act in our friend’s best interest?  Your loyalty may also be expressed by protecting your friend from actual danger.  If “loyalty” is nothing more than “enabling” self-destructive behavior, then what is its value?

            So, moral judgment of the benefits and harms of applying one rule or another also applies to judging virtues and their correct application.

          • I think one of the weaknesses of utilitarianism is that, if the consequences aren’t intuitively correct, you can always expand the scope of consideration and still claim it’s valid.  But a theory that explains everything, no matter what, essentially explains nothing.  The other problem is that people in quick-thinking situations don’t run through those types of calculations.  They don’t even do it in situations where they have time to ponder things.  And, as you noted above, we don’t have a “God’s eye view” of all the consequences.

            One of the things Haidt’s work shows, and I’ve seen this from other sources, is that people make moral judgements first, then try to justify them.  I’m sure you’re familiar with the trolley car dilemma and the difference in responses between pulling a lever to sacrifice one person to save many versus pushing a large man into the trolley’s path to save those people.  Most people will throw the switch, but not push the man.  The utilitarian calculation should be the same, but pushing the man “feels” worse.

          • I don’t know Haidt, but it is known that a person given a posthypnotic suggestion, and then asked why he did something, will come up with a reason, whether the reason makes sense or not.  And often, just in day-to-day experience, people will come up with an excuse or reason that may or may not make sense.

            I have been in situations where I “sensed” that something was wrong.  For example, at a community action agency the director asked us to fill in travel claims that were true, but for which we hadn’t bothered to do the paperwork, and then to contribute the money to the agency to avoid having to refund unspent funds back to the government.  It sounded reasonable at the time, but when I went back to my desk I had a feeling that something was wrong, and went back and declined.

            I’m guessing that “instinct” is a subconscious moral calculation, one that is simply not worked out in words.

            In the case of the trolley car, there are long term consequences as well as immediate consequences. If it is always okay to kill one person to save five, then a doctor could take any person and harvest five organs. The immediate benefit is five potentially healthy people. The long term harm is that anyone, at anytime, could be taken and killed. Not a very beneficial outcome.

            The rules must achieve the best possible good and least harm for everyone.

  2. Pingback: Are Philosophers Morally Obliged to Engage in Debates about Important Public Policy? | Episyllogism: philosophy and the arts

  3. I suggest approaching the issue of needs from a more correspondentist viewpoint, applying the same standards we use for other kinds of categorization: adopt the mean and reject the outliers, at least initially. I am suggesting that any workable ethical system should provide guidance for most moral choosing. It is a mistake to focus on the most intractable issues first as a litmus test, rather like using abortion to test a moral system. Save the hardest problems for last. At least in the beginning, ignore “needs” that few individuals value and develop the surprisingly short list that is transcultural and timeless and that constitutes a workable starting point.

    • I think you’re right. It’s actually the strategy I usually see used in effective negotiations. Let’s nail down what we agree on first.

      And let’s face it, that’s democracy. People with unusual desires about how society should work rarely get what they want. Society actually ends up working through a series of compromises that the majority of us can live with. The problems begin when too many start thinking that compromise is unacceptable, as often happens on moral matters.

  4. Hello there,

    Your post is quite interesting.
    Here’s a line you wrote: “More scientifically, this conscience, these pre-existing values, arise from our evolved instincts as social animals.”

    I think you should read what C.S. Lewis says about whether conscience is just an instinct in his book Mere Christianity.  Find a snippet here.  Thanks for sharing your views.

    http://ricksonmenezes.wordpress.com/2013/11/04/conscience-is-not-an-instinct/

    • Hello Rick,
      Like much of what C.S. Lewis writes, it resonates with believers, but leaves those of us who aren’t already convinced of God’s existence unconvinced. My response to the assertion that you can’t override an instinct with another instinct is, why not? Often we override an instinct for a short term gain in order to satisfy an instinct for a longer term gain (which we may use reason to comprehend).

  5. SAP,

    Thanks for your reply.  C.S. Lewis doesn’t reason about overriding an instinct with another.  What he reasons is that when you want to save someone, whatever helps you choose between the instinct to save yourself and the instinct to save another cannot itself be an instinct.  What helps you choose between playing two keys of a piano cannot itself be a key.

    What helps you decide between getting out of bed (an instinct to keep an appointment) and slumping back into bed (a laziness instinct) cannot itself be an instinct.  Hence the conscience is not an instinct.

    You don’t have to be a Christian to accept C.S. Lewis’s reasoning.  Reason is open to all who want to discover it with a good disposition and clarity.  That is the whole idea of philosophy: it bases itself not on scripture but on reason.

  6. Rick,
    I guess I don’t see the comparison between piano keys (tools) and instincts as valid.  And I wonder, if conscience isn’t an instinct, then what is it?

    On C.S. Lewis and reasoning, it’s striking how often people can reason about the same thing and come up with contradictory conclusions.  This shouldn’t be surprising, since professional philosophers often don’t agree on these conclusions either.  I mentioned on another thread that I’m becoming increasingly cautious about accepting philosophical conclusions.
    http://www.preposterousuniverse.com/blog/2013/04/29/what-do-philosophers-believe/

    • I’m thinking that “instinctual” would imply actions without training or thought. Conscience, on the other hand, will often be trained by society and may involve weighing right and wrong before deciding a course of action.

      The “well-trained” conscience may feel instinctual, because rightness or wrongness seems immediately apparent.

      Conscience is each person’s own moral judgment, distinct from the judgment of others.  It may make demands that society does not.  It may even put the person in opposition to the demands of society, as with the burning of draft cards and civil disobedience during the Vietnam War.

      • Good points.  I oversimplified.  Conscience is more complicated than just an instinct.  It’s probably more accurate to say it’s a combination of instincts, a balance of competing impulses.  A balance that is informed by our culture and reason.  But culture and reason themselves are invoked by something.  The desire to be reasonable and logical, to conform with culture (or resist it), comes from impulses that arise from a combination of instincts (often conflicting ones).  The “better angels of our nature” is really “the more pro-social combination of our instincts”.

  7. Pingback: Science, philosophy, and caution about what we think we know | SelfAwarePatterns

  8. Pingback: Evolution and altruism | SelfAwarePatterns

  9. Pingback: This View of Life: The Evolution of Fairness | SelfAwarePatterns

  10. Pingback: Rationally Speaking: What virtues, and why? | SelfAwarePatterns

  11. Pingback: Do we all do science? | SelfAwarePatterns

  12. agrudzinsky says:

    Excellent explanation.  Razor-sharp vision.  It’s surprising that people like Harris still harbor the illusion that morality can be a scientific question.  Science can help us get from “A” to “B” if “A” is our current state of affairs and “B” is some desired state of affairs.  But science cannot set the goal for us.  It cannot determine what the desired state “B” is.  Science is the navigation system, and a navigation system can tell us where to go only if we set the destination.  I guess his confusion comes from his reluctance to define the limits of science and rationality, which in turn comes from a religious faith in science that, again, he is reluctant to admit or even see.

  13. amanimal says:

    I finished ‘The Righteous Mind’ yesterday, and it could easily have been the kind of book where I just keep turning the pages were it not for my self-imposed limit of one chapter per day.  I especially liked the layout/presentation: the book is divided into 3 parts, each with several chapters and an ‘In Sum’ review at the end of each, plus the ‘Conclusion’ at the end of the book with its review of key points.

    Note to self – Important lesson: read the notes as they come up – the author put them there for a reason! Haidt put more into them than just citations/references, some good directions for further reading.

    I can’t think of anything that I really disagreed with.  He referenced all the right people to convince me, from Scott Atran and John Bargh to E.O. Wilson and Timothy Wilson (Gazzaniga too).  I may even have to give DS Wilson’s ‘Darwin’s Cathedral’ a read some day.

    “… a small rider on a very large elephant.” – p367

    • Good deal. I thought you would like it. You had more discipline than I did. When I read it, I had intended to just read the opening section and maybe swing back to something else I was reading at the time. Instead I ended up plowing through the whole book over a weekend.

      The book has a lot of important take-aways (many of which I knew going in, but it was good to see the science behind them).  I think the small rider / large elephant is one, but another is that morality can’t be boiled down to one criterion (usually argued to be well-being, which is usually not defined) but involves at least the foundations he identifies.

      Yet another is that rationality won’t always lead people to the same moral conclusions if they’re starting from different intuitive frameworks.

      • amanimal says:

        I’ve found I retain more if I don’t take in too much and give it a chance to soak in. Taking notes helps too, both in remembering and for future reference.

        The “hive switch” concept was good too.  There’s a pair of papers on synchrony in the current ‘Religion, Brain & Behavior’, and I just came across the LEVYNA Project out of Brno, Czech Republic, self-described as “… the only institution in the world exclusively dedicated to the experimental study of religion.”

        http://www.levyna.cz/research/

        They’re also researching ritual synchrony and prosociality.

        And his reference to Gary Marcus on innateness as “pre-wired” rather than “hard-wired”. Anyway, as you said, lots of good stuff.

        Gazzaniga also talks about moral intuitions and Haidt in ‘Chapter 5 – The Social Mind’ of ‘Who’s In Charge?’ 🙂

        • I find the same thing. Luckily, my habit of reading when I go to bed usually spreads my book reading over several 30-60 minute sessions, with the benefit of enhanced retention. Unless I get totally engrossed, then all bets are off.

          ‘Who’s In Charge?’ reminder received 🙂

          • amanimal says:

            I read the prologue to Hood’s ‘The Self Illusion’ last night, but I think I’m going with Damasio’s ‘Self Comes To Mind’ next – going to have a look at it tonight and see.

          • I’m still in Saturn’s Children, although I anticipate finishing it this weekend, and hopefully throwing up a review shortly thereafter.
