Last week, I made a post about the fact that we have desires, urges, impulses, and motivations that are not inherently rooted in reason or logic. While reason and logic are extremely useful, they are tools of these desires, these instincts. Reason can influence how we resolve conflicts between these desires, but it is ultimately subordinate to them.
I went on in another post (and a followup clarification post) to note that morality arises from these instincts, in essence arguing that morality is an evolutionary adaptation of our species. We hold foundational values. We do not have reasons for holding these values. We just intrinsically hold them. They are effectively programmed into us by evolution, that is, arising naturally through mutation and natural selection.
In those posts, I purposely stayed at a general level, because while there are definite theories on the details, they aren’t as universally accepted as the general notions. Now I’m going to go into a little detail on Jonathan Haidt’s theory of moral foundations.
Each of these foundations is essentially an instinct module, a base set of programming that inclines us to see something as moral or immoral. Haidt uses the metaphor of taste buds to explain how these foundations ultimately relate to moral values.
The human mouth has only a limited set of taste bud types. We can fundamentally taste only sweet, sour, salty, bitter, and savory, but from combinations of these basic tastes, we develop food preferences. Any one of these tastes in isolation would not be satisfying. It is only in combination that they produce the rich variety of cuisine we experience.
Likewise, any moral foundation in isolation won’t explain the full range of human morality. And many of the moral paradigms we follow arise from combinations of these foundational impulses. Haidt is careful to say that these are only the foundations that his empirical studies have isolated so far. There are almost certainly more that will be discovered in time. (Personally, honesty comes to mind, and the sanctity foundation seems like too much of a catchall to me.)
He warns against being overly reductionist about these motivations, against oversimplifying and trying to cram all of morality into only one or two. (See, for example, his Edge response this year.)
Each of these foundations has an evolutionary reason, an adaptive challenge that it evolved to solve. Each has situations that trigger it, emotions associated with it, and what are commonly recognized as virtues.
One important thing to notice as you’re reading through these is that some of them contradict each other. That isn’t a failure of the theory. It recognizes the conflicts we wrestle with in resolving moral dilemmas.
The titles are in the format of desirable / undesirable.
1. Care / harm
This foundation’s adaptive challenge was care and protection of children. Its original trigger was suffering or neediness of a child. It’s now often triggered by anyone or anything that is suffering, particularly anything young or cute. The main emotion associated with it is compassion. The main virtues are caring and kindness.
2. Fairness / cheating
The adaptive challenge was reaping the benefits of two-way partnerships. It is typically triggered by any form of cheating or deception; modern examples are marital infidelity, laziness, and free riding. The associated emotions are anger, guilt, and gratitude. The main virtues are fairness, justice, and trustworthiness.
3. Loyalty / betrayal
The adaptive challenge was forming cohesive coalitions, and this foundation is triggered by threats to those coalitions or groups. Today it is often triggered for many people by sports teams and by nations. Emotions include group pride and rage toward traitors. Virtues are loyalty, patriotism, and self-sacrifice.
4. Authority / subversion
The adaptive challenge was forging beneficial relationships within hierarchies. Originally triggered by signs of dominance and submission, it is now often triggered by bosses and respected professionals. The emotions here are respect and fear. The virtues are obedience and deference.
5. Sanctity / degradation
Originally an aversion to contaminants, such as rotting meat or diseased people, although today it can often be triggered by taboo ideas such as communism or racism. The emotion is disgust, and the virtues are temperance, chastity, piety, and cleanliness.
6. Freedom / oppression
A foundation that is often in conflict with 4, this is resistance to dominance and to restrictions on liberty. It appears to be a relatively late adaptation, arising in the last million years or so. Although chimpanzee troops tend to have strict hierarchies, hunter-gatherer human societies tend to be egalitarian. The adaptation may have been for cooperative hunting. With the rise of agriculture 10,000 years ago, the older hierarchy impulses appear to have reemerged, although they are now moderated by this foundation.
It’s important to understand that all normal humans are motivated by all of these foundations. However, while we all have similar instincts, the strength of these foundations in relation to each other varies from human to human.
To someone strongly motivated by one of these foundations over the others, that motivation will seem self-evidently right. To someone else with a different dominant foundation, the first person will seem unreasonable. Working from different intuitional frameworks, they are unlikely to convince each other by reason or logic.
Liberals (a group I include myself in) tend to feel the strongest motivation from the care / harm foundation, and tend to be suspicious of motivations from the other foundations when they conflict with it. Many, such as Sam Harris, insist that this is the only important foundation.
It’s pretty clear that libertarians are strongly motivated by the freedom / oppression foundation, being more willing than average to tolerate violations of the other foundations to ensure that it is satisfied.
Conservatives tend to be more evenly distributed across the various foundations. This isn’t specific to American conservatives. It’s also the makeup of traditionalists in most cultures throughout the world. Of course, conservatives in one culture may have radically different values than conservatives in another culture.
So, where does that leave us? Reason and logic can still be useful in helping to sharpen our understanding of why we hold certain positions. They can help to clarify what our disagreements are really about. But they can’t “prove” moral precepts. Moral propositions can’t be determined; they can only be advocated for.
I’ve said this on other posts, but it still holds. We have no choice but to do the hard work of finding rules of conduct that most of us can live with. Science and philosophy can help, but they can’t make the decisions for us.
If you find this topic interesting, I strongly recommend Jonathan Haidt’s book, ‘The Righteous Mind’. It covers this theory, which I can’t do justice to in a single blog post, in great detail. Haidt is an engaging writer, and you’ll come away with important insights into the human condition.
For other evolutionary psychology theories, I find this site interesting.
- Why science, philosophy, or religion cannot determine morality (selfawarepatterns.wordpress.com)
- Jonathan Haidt: Why Sam Harris is Unlikely to Change his Mind (3quarksdaily.com)
- Moral values aren’t absolute, but aren’t arbitrary either (selfawarepatterns.com)
- A Response to Haidt Regarding Harris’ Moral Landscape Challenge (danielmiessler.com)
18 thoughts on “The foundations of morality”
I seriously like this series of yours on morality. You’ve nailed the fundamentals brilliantly, far better than I could.
Thanks John, although based on what I’ve seen of your writing, I think you would cover it extremely well.
I wouldn’t be so sure about that 🙂 I actually tend to switch off whenever morality is raised. It seems so simple to me, so easily explained through natural mechanisms, that I can’t get excited about it… not like theists do.
Honesty seems a little off to me as an independent value. I can’t really see a pattern to honest people versus not. Plus, groups will automatically consider others to be less honest by virtue of their not being in the group. For example, Democrats see Republicans as less honest, even though they might be just the same.
Good point. Honesty is seen as a virtue, except where it’s pointlessly hurtful, and maybe judging the difference requires the other foundations.
Have you watched this talk by Dan Ariely?
He considers various factors that affect cheating, as well as attitudes toward cheating. Not only do groups presume members of other groups to be less honest, they are also more lenient toward cheating within their own group (watch the “color of sweatshirt” example around 12:00).
That is interesting. It makes me think of how atheists are assumed to be less honest. Where a group of Christians might normally consider Muslims to be less honest, if you throw an atheist into the mix, the groups rearrange so that they aren’t based around flavors of god belief, but around god belief itself. Suddenly, atheists are the least honest in the room.
But it occurs to me that I’m making a distinction between being viewed as honest and actually being honest. One is the moral expectations of yourself, and the other, the moral expectations of others. The latter is related to fairness / cheating, and I suspect the former comes from the interplay between fairness and groupishness.
Do you see these as personality traits that vary from individual to individual, so you for example are hard-wired (by genes? by childhood experiences?) to be liberal?
Then moral values are not fixed, not simply learned from society, but vary from person to person?
Then is there any possibility of consensus within a group to agree desirable group behaviour?
This model of moral foundations suggests that humans would be far better off dividing into groups of like-minded individuals rather than trying to form heterogeneous communities, as this will inevitably lead to conflict over our most deeply-held values.
That’s a good question. Haidt actually points out the benefits of monolithic societies in his book, although I think diverse societies bring a lot of their own benefits as well.
Chris Mooney has been writing a lot lately on the genetics of political orientation. I’m a bit skeptical. I think genetics definitely has a role, but because I’ve changed over the years myself, I tend to think our mix of foundations can be changed.
That said, changing takes a shift in emotional commitment, which doesn’t seem achievable by reasoning alone. And what is an emotionally persuasive message to someone dominated by the care foundation might not seem persuasive to someone dominated by the loyalty foundation, or the freedom one.
Great overview ‘SAP’, I read ‘The Righteous Mind’ recently and strongly 2nd your strong recommendation!
For an introduction see:
‘The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment’, Haidt 2001
… though you have to read the book for the rider/elephant metaphor:
“… that the mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant. The rider is our conscious reasoning – the stream of words and images of which we are fully aware. The elephant is the other 99 percent of mental processes – the ones that occur outside of awareness but that actually govern most of our behavior.” – ‘The Righteous Mind’, Introduction
… referencing ‘Strangers to Ourselves: Discovering the Adaptive Unconscious’, Wilson 2002, which is another good read. Our elephants don’t get near the appreciation and respect they deserve.
One of the main takeaways from ‘The Righteous Mind’ for me was, if not an appreciation for, at least a little bit better understanding of the slower members of our herd.
Thanks amanimal! Excellent URLs.
Ah it’s all coming together for me now (this series of essays and others on a priori knowledge). You’re proposing a theory for our self-evident moral intuitions (ethical intuitionism) that doesn’t rely on synthetic a priori knowledge? Very fascinating, I think it is a marvelous idea.
Thanks! I think that’s right. As I understand it, ‘synthetic a priori’ would imply logical conclusions from premises grounded in experience. Those things do have an effect, particularly in helping us resolve the conflicts between these innate desires, but the innate desires stem from neither logic nor experience.
Thanks for an excellent review, as usual. I like Haidt a lot. I was familiar with his ideas from his TED talks, but I had not read his criticism of Harris, and the link to the criticism of Haidt’s criticism of Harris is also interesting. When you like an opinion, reading criticism of that opinion is important.
Thanks. In the book, I recall Haidt only criticizing Harris in the most indirect of ways, referring more to the idea of oversimplifying morality than Harris by name. Obviously in this latest article, he’s being more direct, generating replies like the one I linked to.
I totally agree that we should read criticisms of cherished ideas. We should always be open to changing our minds. Ironically, I’ve occasionally found myself sold on ideas more by their critics than by their proponents.
Another excellent post. I really have to come back and read your older posts one of these days. (D’oh, so many interesting things to pursue, only so many days in a lifetime!)
Just an aside: there are “flavors” that aren’t flavors at all, but smells. A huge amount of what we perceive as taste is actually based on smell. I think wintergreen, for example, is purely a smell. That’s why food often tastes bland when we have a sinus problem. And while there are only the five tastes, there is a huge range of smells.
Thanks again! I’m reminded of the smell aspect of taste every time I have a bad cold.