Jonathan Haidt. The Righteous Mind: Why Good People Are Divided by Politics and Religion. 2012. (419 pages)

I have striven not to laugh at human actions, not to weep at them, not to hate them, but to understand them. – Baruch Spinoza, Tractatus Politicus, 1676


This book is about why it’s so hard for us to get along. (xi)

My goal in this book is to drain some of the heat, anger, and divisiveness out of these topics and replace them with awe, wonder, and curiosity. (xii)


…human nature is not just intrinsically moral, it’s also intrinsically moralistic, critical, and judgmental. (xiii)

I want to show you that an obsession with righteousness (leading inevitably to self-righteousness) is the normal human condition. It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational. (xiii)

When I was a teenager I wished for world peace, but now I yearn for a world in which competing ideologies are kept in balance, systems of accountability keep us all from getting away with too much, and fewer people believe that righteous ends justify violent means. (xiii)

PART I Intuitions Come First, Strategic Reasoning Second

Central Metaphor: The mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant.

1. Where Does Morality Come From?

Understanding the simple fact that morality differs around the world, and even within societies, is the first step toward understanding your righteous mind. The next step is to understand where these many moralities came from in the first place. (4)


Where does morality come from? | There are two obvious answers to this question: nature or nurture. If you pick nature then you’re a nativist. … if you believe that moral knowledge comes from nurture, then you are an empiricist. (5)

But this is a false choice, and in 1987 moral psychology was mostly focused on a third answer: rationalism, which says that kids figure out morality for themselves. (5)

Kids figure it out for themselves but only when their minds are ready and they are given the right kinds of experiences. (6)

…we can’t say that it is innate, and we can’t say that kids learn it directly from adults. It is, rather, self-constructed as kids play with other kids. (6)

This is the essence of psychological rationalism: We grow into our rationality as caterpillars grow into butterflies. If the caterpillar eats enough leaves, it will (eventually) grow wings. And if the child gets enough experiences of turn taking, sharing, and playground justice, it will (eventually) become a moral creature, able to use its rational capacities to solve even harder problems. Rationality is our nature, and good moral reasoning is the end point of development. (7)


Egalitarian relationships (such as with peers) invite role taking, but hierarchical relationships (such as with teachers and parents) do not. … Piaget and Kohlberg both thought that parents and other authorities were obstacles to moral development. (9)

But by using a framework that predefined morality as justice while denigrating authority, hierarchy, and tradition, it was inevitable that the research would support worldviews that were secular, questioning, and egalitarian. (9)


…if you are searching for the first appearance of a moral concept, then you’d better find a technique that doesn’t require much verbal skill. (9)

Children recognize that rules that prevent harm are moral rules, which Turiel defined as rules related to “justice, rights, and welfare pertaining to how people ought to relate to each other.” (10)

In other words, young children don’t treat all rules the same, as Piaget and Kohlberg had supposed. Kids can’t talk like moral philosophers, but they are busy sorting social information in a sophisticated way. They seem to grasp early on that rules that prevent harm are special, important, unalterable, and universal. And this realization, Turiel said, was the foundation of all moral development. Children construct their moral understanding on the bedrock of the absolute moral truth that harm is wrong. Specific rules may vary across cultures, but in all of the cultures Turiel examined, children still made a distinction between moral rules and conventional rules. (10)

…the political implications were similar: morality is about treating individuals well. It’s about harm and fairness (not loyalty, respect, duty, piety, patriotism, or tradition). Hierarchy and authority are generally bad things (so it’s best to let kids figure things out for themselves). Schools and families should therefore embody progressive principles of equality and autonomy (not authoritarian principles that enable elders to train and constrain children). (11)


That was my first hint that groups create supernatural beings not to explain the universe, but to order their societies. (12)

…morality often involves tension within the group linked to competition between different groups. (12)

So what’s going on here? If Turiel was right that morality is really about harm, then why do most non-Western cultures moralize so many practices that seem to have nothing to do with harm? Why do many Christians and Jews believe that “cleanliness is next to godliness”? And why do so many Westerners, even secular ones, continue to see choices about food and sex as being heavily loaded with moral significance? Liberals sometimes say that religious conservatives are sexual prudes for whom anything other than missionary-position intercourse within marriage is a sin. But conservatives can just as well make fun of liberal struggles to choose a balanced breakfast — balanced among moral concerns about free-range eggs, fair-trade coffee, naturalness, and a variety of toxins, some of which (such as genetically modified corn and soybeans) pose a greater threat spiritually than biologically. (13)

[via: Is this a fair comparison?]


…all societies must resolve a small set of questions about how to order society, the most important being how to balance the needs of individuals and groups. … Most societies have chosen the sociocentric answer, placing the needs of groups and institutions first, and subordinating the needs of individuals. In contrast, the individualistic answer places individuals at the center and makes a society a servant of the individual. (14)

…in the United States the social order is a moral order, but it’s an individualistic order built up around the protection of individuals and their freedom. (17)



…so many subjects tried to invent victims. … post hoc fabrications. (24)

These subjects were reasoning. They were working quite hard at reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. It was reasoning as described by the philosopher David Hume, who wrote in 1739 that “reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” | I had found evidence for Hume’s claim. I had found that moral reasoning was often a servant of moral emotions, and this was a challenge to the rationalist approach that dominated moral psychology. (25)


The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures. Sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.

People sometimes have gut feelings — particularly about disgust and disrespect — that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.

Morality can’t be entirely self-constructed by children based on their growing understanding of harm. Cultural learning or guidance must play a larger role than rationalist theories had given it.

If morality doesn’t come primarily from reasoning, then that leaves some combination of innateness and social learning as the most likely candidates. In the rest of this book I’ll try to explain how morality can be innate (as a set of evolved intuitions) and learned (as children learn to apply those intuitions within a particular culture). We’re born to be righteous, but we have to learn what, exactly, people like us should be righteous about. (26)

2. The Intuitive Dog and Its Rational Tail

One of the greatest truths in psychology is that the mind is divided into parts that sometimes conflict. To be human is to feel pulled in different directions, and to marvel – sometimes in horror – at your inability to control your own actions. (27)

I am dragged along by a strange new force. Desire and reason are pulling in different directions. I see the right way and approve it, but follow the wrong. – Roman poet Ovid

Western philosophy has been worshiping reason and distrusting the passions for thousands of years. … I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. Morality binds and blinds.


…two waves of moralism that turned nativism into a moral offense. The first was the horror among anthropologists and others at “social Darwinism” — the idea (raised but not endorsed by Darwin) that the richest and most successful nations, races, and individuals are the fittest. … The second wave of moralism was the radical politics that washed over universities in America, Europe, and Latin America in the 1960s and 1970s. (31)

If nativism could be used to justify existing power structures, then nativism must be wrong. (Again, this is a logical error, but this is the way righteous minds work.) (31)

In his 2002 book The Blank Slate: The Modern Denial of Human Nature, Steven Pinker describes the way scientists betrayed the values of science to maintain loyalty to the progressive movement. Scientists became “moral exhibitionists” in the lecture hall as they demonized fellow scientists and urged their students to evaluate ideas not for their truth but for their consistency with progressive ideals such as racial and gender equality. (31)

[E.O.] Wilson sided with Hume. He charged that what moral philosophers were really doing was fabricating justifications after “consulting the emotive centers” of their own brains. He predicted that the study of ethics would soon be taken out of the hands of philosophers and “biologicized,” or made to fit with the emerging science of human nature. Such a linkage of philosophy, biology, and evolution would be an example of the “new synthesis” that Wilson dreamed of, and that he later referred to as consilience — the “jumping together” of ideas to create a unified body of knowledge. (31)


The head can’t even do head stuff without the heart. So Hume’s model fit these cases best: when the master (passions) drops dead, the servant (reasoning) has neither the ability nor the desire to keep the estate running. Everything goes to ruin. (34)


Can people make moral judgments just as well when carrying a heavy cognitive load as when carrying a light one? The answer turned out to be yes. (36)


…judgment and justification are separate processes. (42)

…rapid intuitive judgment (“That’s just wrong!”) followed by slow and sometimes tortuous justifications (“Well, their two methods of birth control might fail, and the kids they produce might be deformed”). The intuition launched the reasoning, but the intuition did not depend on the success or failure of the reasoning. (43)

…moral judgments are not subjective statements; they are claims that somebody did something wrong. (44)

We do moral reasoning not to reconstruct the actual reasons why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment. (44)


…emotions were filled with cognition. Emotions occur in steps, the first of which is to appraise something that just happened based on whether it advanced or hindered your goals. (44)

Emotions are a kind of information processing. (45)

…moral judgment is a cognitive process, as are all forms of judgment. The crucial distinction is really between two different kinds of cognition: intuition and reasoning. (45)

Intuition is the best word to describe the dozens or hundreds of rapid, effortless moral judgments and decisions that we all make every day. (45)

In The Happiness Hypothesis, I called these two kinds of cognition the rider (controlled processes, including “reasoning-why”) and the elephant (automatic processes, including emotion, intuition, and all forms of “seeing-that”). (45)

Moral talk serves a variety of strategic purposes such as managing your reputation, building alliances, and recruiting bystanders to support your side in the disputes that are so common in daily life. (46)

We make our first judgments rapidly, and we are dreadful at seeking out evidence that might disconfirm those initial judgments. Yet friends can do for us what we cannot do for ourselves: they can challenge us, giving us reasons and arguments…that sometimes trigger new intuitions, thereby making it possible for us to change our minds. (47)

For most of us, it’s not every day or even every month that we change our mind about a moral issue without any prompting from anyone else. | Far more common than such private mind changing is social influence. Other people influence us constantly just by revealing that they like or dislike somebody. (47)

Many of us believe that we follow an inner moral compass, but the history of social psychology richly demonstrates that other people exert a powerful force, able to make cruelty seem acceptable and altruism seem embarrassing, without giving us any reasons or arguments. (48)

But intuitions (including emotional responses) are a kind of cognition. They’re just not a kind of reasoning. (48)


The social intuitionist model offers an explanation of why moral and political arguments are so frustrating: because moral reasons are the tail wagged by the intuitive dog. A dog’s tail wags to communicate. You can’t make a dog happy by forcibly wagging its tail. And you can’t change people’s minds by utterly refuting their arguments. (48)

And as reasoning is not the source, whence either disputant derives his tenets; it is in vain to expect, that any logic, which speaks not to the affections, will ever engage him to embrace sounder principles. – David Hume

If there is any one secret of success it lies in the ability to get the other person’s point of view and see things from their angle as well as your own. – Henry Ford

If you really want to change someone’s mind on a moral or political matter, you’ll need to see things from that person’s angle as well as your own. And if you do truly see it the other person’s way — deeply and intuitively — you might even find your own mind opening in response. Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide. (49)


The mind is divided into parts, like a rider (controlled processes) on an elephant (automatic processes). The rider evolved to serve the elephant.

You can see the rider serving the elephant when people are morally dumbfounded. They have strong gut feelings about what is right and wrong, and they struggle to construct post hoc justifications for those feelings. Even when the servant (reasoning) comes back empty-handed, the master (intuition) doesn’t change his judgment.

The social intuitionist model starts with Hume’s model and makes it more social. Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth.

Therefore, if you want to change someone’s mind about a moral or political issue, talk to the elephant first. If you ask people to believe something that violates their intuitions, they will devote their efforts to finding an escape hatch–a reason to doubt your argument or conclusion. They will almost always succeed. (49-50)

3. Elephants Rule


Brains evaluate everything in terms of potential threat or benefit to the self, and then adjust behavior to get more of the good stuff and less of the bad. Animal brains make such appraisals thousands of times a day with no need for conscious reasoning, all in order to optimize the brain’s answer to the fundamental question of animal life: Approach or avoid? (55)

Every emotion (such as happiness or disgust) includes an affective reaction, but most of our affective reactions are too fleeting to be called emotions. (55)

The brain tags familiar things as good things. …”the mere exposure effect,” and it is a basic principle of advertising. (56)

The thinking system is not equipped to lead — it simply doesn’t have the power to make things happen — but it can be a useful advisor. (56)


Implicit Association Test (IAT), …at ProjectImplicit.org. (58)

The bottom line is that human minds, like animal minds, are constantly reacting intuitively to everything they perceive, and basing their responses on those reactions. Within the first second of seeing, hearing, or meeting another person, the elephant has already begun to lean toward or away, and that lean influences what you think and do next. Intuitions come first. (59)

Sure enough, people made harsher judgments when they were breathing in foul air. … As…Jerry Clore puts it, we use “affect as information.” When we’re trying to decide what we think about something, we look inward, at how we’re feeling. If I’m feeling good, I must like it, and if I’m feeling anything unpleasant, that must mean I don’t like it. (60)

…subjects who are asked to wash their hands with soap before filling out questionnaires become more moralistic about issues related to moral purity (such as pornography and drug use). Once you’re clean, you want to keep dirty things far away. (61)

…immorality makes people want to get clean. (61)

In other words, there’s a two-way street between our bodies and our righteous minds. Immorality makes us feel physically dirty, and cleansing ourselves can sometimes make us more concerned about guarding our moral purity. (61)

Moral judgment is not a purely cerebral affair in which we weigh concern about harm, rights, and justice. It’s a kind of rapid, automatic process more akin to the judgments animals make as they move through the world, feeling themselves drawn toward or away from various things. Moral judgment is mostly done by the elephant. (61)


Robert Hare, a leading researcher, defines psychopathy by two sets of features. There’s the usual stuff that psychopaths do–impulsive antisocial behavior, beginning in childhood–and there are the moral emotions that psychopaths lack. They feel no compassion, guilt, shame, or even embarrassment, which makes it easy for them to lie, and to hurt family, friends, and animals. (62)

The ability to reason combined with a lack of moral emotions is dangerous. (62)

Psychopathy does not appear to be caused by poor mothering or early trauma, or to have any other nurture-based explanation. It’s a genetically heritable condition that creates brains that are unmoved by the needs, suffering, or dignity of others. The elephant doesn’t respond with the slightest lean to the gravest injustice. The rider is perfectly normal–he does strategic reasoning quite well. But the rider’s job is to serve the elephant, not to act as a moral compass. (63)


…by six months of age, infants are watching how people behave toward other people, and they are developing a preference for those who are nice rather than those who are mean. In other words, the elephant begins making something like moral judgments during infancy, long before language and reasoning arrive. (64)

Looking at the discoveries from infants and psychopaths at the same time, it’s clear that moral intuitions emerge very early and are necessary for moral development. The ability to reason emerges much later, and when moral reasoning is not accompanied by moral intuition, the results are ugly. (64)


Utilitarianism is the philosophical school that says you should always aim to bring about the greatest total good, even if a few people get hurt along the way, so if there’s really no other way to save those five lives, go ahead and push. Other philosophers believe that we have duties to respect the rights of individuals, and we must not harm people in our pursuit of other goals, even moral goals such as saving lives. This view is known as deontology (from the Greek root that gives us our word duty). Deontologists talk about high moral principles derived and justified by careful reasoning; they would never agree that these principles are merely post hoc rationalizations of gut feelings. (65)

With few exceptions, the results tell a consistent story: the areas of the brain involved in emotional processing activate almost immediately, and high activity in these areas correlates with the kinds of moral judgments or decisions that people ultimately make. (66)


The main way that we change our minds on moral issues is by interacting with other people. We are terrible at seeking evidence that challenges our own beliefs, but other people do us this favor, just as we are quite good at finding errors in other people’s beliefs. When discussions are hostile, the odds of change are slight. The elephant leans away from the opponent, and the rider works frantically to rebut the opponent’s charges. | But if there is affection, admiration, or a desire to please the other person, then the elephant leans toward that person and the rider tries to find the truth in the other person’s arguments. (68)

Intuitions come first, and under normal circumstances they cause us to engage in socially strategic reasoning, but there are ways to make the relationship more of a two-way street. (70)


Brains evaluate instantly and constantly (as Wundt and Zajonc said).

Social and political judgments depend heavily on quick intuitive flashes (as Todorov and work with the IAT have shown).

Our bodily states sometimes influence our moral judgments. Bad smells and tastes can make people more judgmental (as can anything that makes people think about purity and cleanliness).

Psychopaths reason but don’t feel (and are severely deficient morally).

Babies feel but don’t reason (and have the beginnings of morality).

Affective reactions are in the right place at the right time in the brain (as shown by Damasio, Greene, and a wave of more recent studies).

4. Vote for Me (Here’s Why)

In this chapter I’ll show that reason is not fit to rule; it was designed to seek justification, not truth. I’ll show that Glaucon was right: people care a great deal more about appearance and reputation than about reality. (74)


If you see one hundred insects working together toward a common goal, it’s a sure bet they’re siblings. But when you see one hundred people working on a construction site or marching off to war, you’d be astonished if they all turned out to be members of one large family. Human beings are the world champions of cooperation beyond kinship, and we do it in large part by creating systems of formal and informal accountability. We’re really good at holding others accountable for their actions, and we’re really skilled at navigating through a world in which others hold us accountable for our own. (75)

[Accountability is the] …explicit expectation that one will be called to justify one’s beliefs, feelings, or actions to others. – Phil Tetlock

…we act like intuitive politicians striving to maintain appealing moral identities in front of our multiple constituencies. (75)

Accountability increases exploratory thought only when three conditions apply:

decision makers learn before forming any opinion that they will be accountable to an audience.

the audience’s views are unknown, and

they believe the audience is well informed and interested in accuracy.

When all three conditions apply, people do their darndest to figure out the truth, because that’s what the audience wants to hear. But the rest of the time — which is almost all of the time — accountability pressures simply increase confirmatory thought. People are trying harder to look right than to be right. (76)

Our moral thinking is much more like a politician searching for votes than a scientist searching for truth. (76)


The sociometer is part of the elephant. Because appearing concerned about other people’s opinions makes us look weak, we (like politicians) often deny that we care about public opinion polls. But the fact is that we care a lot about what others think of us. The only people known to have no sociometer are psychopaths. (78)


…confirmation bias, the tendency to seek out and interpret new evidence in ways that confirm what you already think. (79-80)

Schools don’t teach people to reason thoroughly; they select the applicants with higher IQs, and people with higher IQs are able to generate more reasons. (81)

If thinking is confirmatory rather than exploratory in these dry and easy cases, then what chance is there that people will think in an open-minded, exploratory way when self-interest, social identity, and strong emotions make them want or even need to reach a preordained conclusion? (81)


The bottom line is that in lab experiments that give people invisibility combined with plausible deniability, most people cheat. (83)


…when we want to believe something, we ask ourselves, “Can I believe it?” Then (as Kuhn and Perkins found), we search for supporting evidence, and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have a justification, in case anyone asks. | In contrast, when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. (84)

If people can literally see what they want to see–given a bit of ambiguity–is it any wonder that scientific studies often fail to persuade the general public? Scientists are really good at finding flaws in studies that contradict their own views, but it sometimes happens that evidence accumulates across many studies to the point where scientists must change their minds. I’ve seen this happen in my colleagues (and myself) many times, and it’s part of the accountability system of science–you’d look foolish clinging to discredited theories. But for nonscientists, there is no such thing as a study you must believe. It’s always possible to question the methods, find an alternative interpretation of the data, or, if all else fails, question the honesty or ideology of the researchers.

| And now that we all have access to search engines on our cell phones, we can call up a team of supportive scientists for almost any conclusion twenty-four hours a day. (85)


Political opinions function as “badges of social membership.” … Our politics is groupish, not selfish. (86)

Like rats that cannot stop pressing a button, partisans may be simply unable to stop believing weird things. The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive. (88)


As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history: the rationalist delusion. (88)

Anyone who values truth should stop worshipping reason. (89)

They [Hugo Mercier and Dan Sperber] concluded that most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people. As they put it, “skilled arguers…are not after the truth but after arguments supporting their views.” (89)

…confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind). (90)

We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. This is why it’s so important to have intellectual and ideological diversity within any group or institution whose goal is to find truth (such as an intelligence agency or a community of scientists) or to produce good public policy (such as a legislature or advisory board). (90)

And if our goal is to produce good behavior, not just good thinking, then it’s even more important to reject rationalism and embrace intuitionism. Nobody is ever going to invent an ethics class that makes people behave ethically after they step out of the classroom. Classes are for riders, and riders are just going to use their new knowledge to serve their elephants more effectively. (90)


We are obsessively concerned about what others think of us, although much of the concern is unconscious and invisible to us.

Conscious reasoning functions like a press secretary who automatically justifies any position taken by the president.

With the help of our press secretary, we are able to lie and cheat often, and then cover it up so effectively that we convince even ourselves.

Reasoning can take us to almost any conclusion we want to reach, because we ask “Can I believe it?” when we want to believe something, but “Must I believe it?” when we don’t want to believe. The answer is almost always yes to the first question and no to the second.

In moral and political matters we are often groupish, rather than selfish. We deploy our reasoning skills to support our team, and to demonstrate commitment to our team.

PART II There’s More to Morality than Harm and Fairness

Central Metaphor

The righteous mind is like a tongue with six taste receptors.

5. Beyond WEIRD Morality

Western, Educated, Industrialized, Rich, and Democratic: W.E.I.R.D.

The WEIRDer you are, the more you see a world full of separate objects, rather than relationships. (96)

Most people think holistically (seeing the whole context and the relationships among parts), but WEIRD people think more analytically (detaching the focal object from its context, assigning it to a category, and then assuming that what’s true about the category is true about the object). (97)

If WEIRD and non-WEIRD people think differently and see the world differently, then it stands to reason that they’d have different moral concerns. (97)

Part II of this book is about those additional concerns and virtues. It’s about the second principle of moral psychology: There’s more to morality than harm and fairness. (98)


The ethic of autonomy is based on the idea that people are, first and foremost, autonomous individuals with wants, needs, and preferences. …rights, liberty, and justice. (99)

The ethic of community is based on the idea that people are, first and foremost, members of larger entities such as families, teams, armies, companies, tribes, and nations. These larger entities are more than the sum of the people who compose them; they are real, they matter, and they must be protected. …duty, hierarchy, respect, reputation, and patriotism. (100)

The ethic of divinity is based on the idea that people are, first and foremost, temporary vessels within which a divine soul has been implanted. People are not just animals with an extra serving of consciousness; they are children of God and should behave accordingly. The body is a temple, not a playground. …sanctity and sin, purity and pollution, elevation and degradation. (100)



I had escaped from my prior partisan mind-set (reject first, ask rhetorical questions later) and began to think about liberal and conservative policies as manifestations of deeply conflicting but equally heartfelt visions of the good society. (109)

It felt good to be released from partisan anger. And once I was no longer angry, I was no longer committed to reaching the conclusion that righteous anger demands: we are right, they are wrong. I was able to explore new moral matrices, each one supported by its own intellectual traditions. It felt like a kind of awakening. (109)


The second principle of moral psychology is: There’s more to morality than harm and fairness. In support of this claim I described research showing that people who grow up in Western, educated, industrial, rich, and democratic (WEIRD) societies are statistical outliers on many psychological measures, including measures of moral psychology. I also showed that:

The WEIRDer you are, the more you perceive a world full of separate objects, rather than relationships.

Moral pluralism is true descriptively. As a simple matter of anthropological fact, the moral domain varies across cultures.

The moral domain is unusually narrow in WEIRD cultures, where it is largely limited to the ethic of autonomy (i.e., moral concerns about individuals harming, oppressing, or cheating other individuals). It is broader–including the ethics of community and divinity–in most other societies, and within religious and conservative moral matrices within WEIRD societies.

Moral matrices bind people together and blind them to the coherence, or even existence, of other matrices. This makes it very difficult for people to consider the possibility that there might really be more than one form of moral truth, or more than one valid framework for judging people or running a society.

6. Taste Buds of the Righteous Mind

…activation of the sweet receptor produced the strongest surge of dopamine in the brain, which indicated to him that humans are hard-wired to seek sweetness above the other four tastes. (112)

…moral monism — the attempt to ground all of morality on a single principle — leads to societies that are unsatisfying to most people and at high risk of becoming inhumane because they ignore so many other moral principles. (113)

In this chapter, and the next two, I’ll develop the analogy that the righteous mind is like a tongue with six taste receptors. (114)

Moral matrices vary, but they all must please righteous minds equipped with the same six social receptors. (114)



Autism, including Asperger’s syndrome (a subtype of high-functioning autism), is better thought of as a region of personality-space — the lower right corner of the lower right quadrant — than as a discrete disease. (117)


Introduction to the Principles of Morals and Legislation. …the principle of utility, which he defined as “the principle which approves or disapproves of every action whatsoever, according to the tendency which it appears to have to augment or diminish the happiness of the party whose interest is in question.” (118)

Bentham then systematized the parameters needed to calculate utility, including the intensity, duration, and certainty of “hedons” (pleasures) and “dolors” (pains). (118)


To discover this timeless form, it simply would not do to use observational methods–to look around the world and see what virtues people happened to pursue. Rather, he said that moral law could only be established by the process of a priori (prior to experience) philosophizing. (119)

…noncontradiction. … He called it the categorical (or unconditional) imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.” (119)


But in psychology our goal is descriptive. We want to discover how the moral mind actually works, not how it ought to work, and that can’t be done by reasoning, math, or logic. It can be done only by observation, and observation is usually keener when informed by empathy. (120)



We borrowed the idea of “modularity” from the cognitive anthropologists Dan Sperber and Lawrence Hirschfeld. Modules are like little switches in the brains of all animals. (123)

…[they] distinguished between the original triggers of a module and its current triggers. (124)

Furthermore, within any given culture, many moral controversies turn out to involve competing ways to link a behavior to a moral module. Should parents and teachers be allowed to spank children for disobedience? On the left side of the political spectrum, spanking typically triggers judgments of cruelty and oppression. On the right, it is sometimes linked to judgments about proper enforcement of rules, particularly rules about respect for parents and teachers. So even if we all share the same small set of cognitive modules, we can hook actions up to modules in so many ways that we can build conflicting moral matrices on the same small set of foundations. (124)


Morality is like taste in many ways–an analogy made long ago by Hume and Mencius.

Deontology and utilitarianism are “one-receptor” moralities that are likely to appeal most strongly to people who are high on systemizing and low on empathizing.

Hume’s pluralist, sentimentalist, and naturalist approach to ethics is more promising than utilitarianism or deontology for modern moral psychology. As a first step in resuming Hume’s project, we should try to identify the taste receptors of the righteous mind.

Modularity can help us think about innate receptors, and how they produce a variety of initial perceptions that get developed in culturally variable ways.

Five good candidates for being taste receptors of the righteous mind are care, fairness, loyalty, authority, and sanctity.

7. The Moral Foundations of Politics


How much would someone have to pay you to perform each of these actions? Assume that you’d be paid secretly and that there would be no social, legal, or other harmful consequences to you afterward. Answer by writing a number from 0 to 4 after each action, where:

0 = $0, I’d do it for free

1 = $100

2 = $10,000

3 = $1,000,000

4 = I would not do this for any amount of money

Column A

1a. Stick a sterile hypodermic needle into your arm. ___

2a. Accept a plasma-screen television that a friend of yours wants to give you. You know that the friend got the TV a year ago when the company that made it sent it to your friend by mistake and at no charge. ___

3a. Say something critical about your nation (which you believe to be true) while calling in, anonymously, to a talk-radio show in your nation. ___

4a. Slap a male friend in the face (with his permission) as part of a comedy skit. ___

5a. Attend a short avant-garde play in which the actors act like fools for thirty minutes, including failing to solve simple problems and falling down repeatedly onstage. ___

Column B

1b. Stick a sterile hypodermic needle into the arm of a child you don’t know. ___

2b. Accept a plasma-screen television that a friend of yours wants to give you. You know that your friend bought the TV a year ago from a thief who had stolen it from a wealthy family. ___

3b. Say something critical about your nation (which you believe to be true) while calling in, anonymously, to a talk-radio show in a foreign nation. ___

4b. Slap your father in the face (with his permission) as part of a comedy skit. ___

5b. Attend a short avant-garde play in which the actors act like animals for thirty minutes, including crawling around naked and grunting like chimpanzees. ___

Total for Column A: ___

Total for Column B: ___


It used to be risky for a scientist to assert that anything about human behavior was innate. To back up such claims, you had to show that the trait was hardwired, unchangeable by experience, and found in all cultures. (130)

…now we know that traits can be innate without being either hardwired or universal. (130)

Nature bestows upon the newborn a considerably complex brain, but one that is best seen as prewired–flexible and subject to change–rather than hardwired, fixed, and immutable. – Gary Marcus

Nature provides a first draft, which experience then revises…”Built-in” does not mean unmalleable; it means “organized in advance of experience.” – Gary Marcus


The set of current triggers for any module is often much larger than the set of original triggers. (132)

Bumper stickers are often tribal badges.

The moral matrix of liberals, in America and elsewhere, rests more heavily on the Care foundation than do the matrices of conservatives, … (134)


Everyone cares about fairness, but there are two major kinds. On the left, fairness often implies equality, but on the right it means proportionality–people should be rewarded in proportion to what they contribute, even if that guarantees unequal outcomes. (138)


…chimpanzees guard their territory, raid the territory of rivals, and, if they can pull it off, kill the males of the neighboring group and take their territory and their females. (139)


The urge to respect hierarchical relationships is so deep that many languages encode it directly. (142)

Human authority, then, is not just raw power backed by the threat of force. Human authorities take on responsibility for maintaining order and justice. (143)


The “omnivore’s dilemma” … is that omnivores must seek out and explore new potential foods while remaining wary of them until they are proven safe. | Omnivores therefore go through life with two competing motives: neophilia (an attraction to new things) and neophobia (a fear of new things). (148)

The psychologist Mark Schaller has shown that disgust is part of what he calls the “behavioral immune system”–a set of cognitive modules that are triggered by signs of infection or disease in other people and that make you want to get away from those people. (148)

Cultures differ in their attitudes toward immigrants, and there is some evidence that liberal and welcoming attitudes are more common in times and places where disease risks are lower. Plagues, epidemics, and new diseases are usually brought in by foreigners–as are many new ideas, goods, and technologies–so societies face an analogue of the omnivore’s dilemma, balancing xenophobia and xenophilia. (149)

The Sanctity foundation makes it easy for us to regard some things as “untouchable,” both in a bad way (because something is so dirty or polluted we want to stay away) and in a good way (because something is so hallowed, so sacred, that we want to protect it from desecration). If we had no sense of disgust, I believe we would also have no sense of the sacred. (149)

Whatever its origins, the psychology of sacredness helps bind individuals into moral communities. When someone in a moral community desecrates one of the sacred pillars supporting the community, the reaction is sure to be swift, emotional, collective, and punitive. (149)

The Sanctity foundation is crucial for understanding the American culture wars, particularly over biomedical issues. If you dismiss the Sanctity foundation entirely, then it’s hard to understand the fuss over most of today’s biomedical controversies. The only ethical question about abortion becomes: At what point can a fetus feel pain? (152)


I tried to make (and justify) five such guesses:

The Care/harm foundation evolved in response to the adaptive challenge of caring for vulnerable children. It makes us sensitive to signs of suffering and need; it makes us despise cruelty and want to care for those who are suffering.

The Fairness/cheating foundation evolved in response to the adaptive challenge of reaping the rewards of cooperation without getting exploited. It makes us sensitive to indications that another person is likely to be a good (or bad) partner for collaboration and reciprocal altruism. It makes us want to shun or punish cheaters.

The Loyalty/betrayal foundation evolved in response to the adaptive challenge of forming and maintaining coalitions. It makes us sensitive to signs that another person is (or is not) a team player. It makes us trust and reward such people, and it makes us want to hurt, ostracize, or even kill those who betray us or our group.

The Authority/subversion foundation evolved in response to the adaptive challenge of forging relationships that will benefit us within social hierarchies. It makes us sensitive to signs of rank or status, and to signs that other people are (or are not) behaving properly, given their position.

The Sanctity/degradation foundation evolved initially in response to the adaptive challenge of the omnivore’s dilemma, and then to the broader challenge of living in a world of pathogens and parasites. It includes the behavioral immune system, which can make us wary of a diverse array of symbolic objects and threats. It makes it possible for people to invest objects with irrational and extreme values — both positive and negative — which are important for binding groups together.

8. The Conservative Advantage

Republicans understand moral psychology. Democrats don’t. Republicans have long understood that the elephant is in charge of political behavior, not the rider, and they know how elephants work. (156)


ProjectImplicit.org; YourMorals.org;


Democrats generally celebrate diversity, support immigration without assimilation, oppose making English the national language, don’t like to wear flag pins, and refer to themselves as citizens of the world. Is it any wonder that they have done so poorly in presidential elections since 1968? The president is the high priest of what sociologist Robert Bellah calls the “American civil religion.” (166-167)

In the remainder of the essay I advised Democrats to stop dismissing conservatism as a pathology and start thinking about morality beyond care and fairness. I urged them to close the sacredness gap between the two parties by making greater use of the Loyalty, Authority, and Sanctity foundations, not just in their “messaging,” but in how they think about public policy and the best interests of the nation. (167)


It was the fairness of the Protestant work ethic and the Hindu law of karma: People should reap what they sow. People who work hard should get to keep the fruits of their labor. People who are lazy and irresponsible should suffer the consequences. (169)


In his book Hierarchy in the Forest, Boehm concluded that human beings are innately hierarchical, but that at some point during the last million years our ancestors underwent a “political transition” that allowed them to live as egalitarians by banding together to rein in, punish, or kill any would-be alpha males who tried to dominate the group. (170)

We mutually pledge to each other our Lives, our Fortunes and our sacred Honor – The American Declaration of Independence

If the original triggers of this foundation include bullies and tyrants, the current triggers include almost anything that is perceived as imposing illegitimate restraints on one’s liberty, including government (from the perspective of the American right). (174)

The hatred of oppression is found on both sides of the political spectrum. The difference seems to be that for liberals–who are more universalistic and who rely more heavily upon the Care/harm foundation–the Liberty/oppression foundation is employed in the service of underdogs, victims, and powerless groups everywhere. It leads liberals (but not others) to sacralize equality, which is then pursued by fighting for civil rights and human rights. Liberals sometimes go beyond equality of rights to pursue equality of outcomes, which cannot be obtained in a capitalist system. This may be why the left usually favors higher taxes on the rich, high levels of services provided to the poor, and sometimes a guaranteed minimum income for everyone.

| Conservatives, in contrast, are more parochial — concerned about their groups, rather than all of humanity. For them, the Liberty/oppression foundation and the hatred of tyranny support many of the tenets of economic conservatism: don’t tread on me (with your liberal nanny state and its high taxes), don’t tread on my business (with your oppressive regulations), and don’t tread on my nation (with your United Nations and your sovereignty-reducing international treaties). (175)


Punishing bad behavior promotes virtue and benefits the group. And just as Glaucon argued in his ring of Gyges example, when the threat of punishment is removed, people behave selfishly. (179)

We can look more closely at people’s strong desires to protect their communities from cheaters, slackers, and free riders, who, if allowed to continue their ways without harassment, would cause others to stop cooperating, which would cause society to unravel. The Fairness foundation supports righteous anger when anyone cheats you directly (for example, a car dealer who knowingly sells you a lemon). But it also supports a more generalized concern with cheaters, leeches, and anyone else who “drinks the water” rather than carries it for the group. (181)


Moral Foundations Theory says that there are (at least) six psychological systems that comprise the universal foundations of the world’s many moral matrices. The various moralities found on the political left tend to rest most strongly on the Care/harm and Liberty/oppression foundations. (181)

A recent study even found that liberal professors give out a narrower range of grades than do conservative professors. Conservative professors are more willing to reward the best students and punish the worst. (183)


Moral psychology can help to explain why the Democratic Party has had so much difficulty connecting with voters since 1980. Republicans understand the social intuitionist model better than do Democrats. Republicans speak more directly to the elephant. They also have a better grasp of Moral Foundations Theory; they trigger every single taste receptor. (184)

Why do rural working-class Americans generally vote Republican when it is the Democratic Party that wants to redistribute money more evenly? (185)

But from the perspective of Moral Foundations Theory, rural and working-class voters were in fact voting for their moral interests. They don’t want to eat at The True Taste restaurant, and they don’t want their nation to devote itself primarily to the care of victims and the pursuit of social justice. Until Democrats understand the Durkheimian vision of society and the difference between a six-foundation morality and a three-foundation morality, they will not understand what makes people vote Republican. (186)

PART III Morality Binds and Blinds

Central Metaphor

We are 90 percent chimp and 10 percent bee.

9. Why Are We So Groupish?

Let me be more precise. When I say that human nature is selfish, I mean that our minds contain a variety of mental mechanisms that make us adept at promoting our own interests, in competition with our peers. When I say that human nature is also groupish, I mean that our minds contain a variety of mental mechanisms that make us adept at promoting our group’s interests, in competition with other groups. We are not saints, but we are sometimes good team players. (191)


Darwin grasped the basic logic of what is now known as multi-level selection. Life is a hierarchy of nested levels, like Russian dolls: genes within chromosomes within cells within individual organisms within hives, societies, and other groups. (193)

But how did early humans get those groupish abilities in the first place? Darwin proposed a series of “probable steps” by which humans evolved to the point where there could be groups of team players in the first place. (194)

But the most important “stimulus to the development of the social virtues” was the fact that people are passionately concerned with “the praise and blame of our fellow-men.” (194)



Group selection creates group-related adaptations. It is not far-fetched, and it should not be a heresy to suggest that this is how we got the groupish overlay that makes up a crucial part of our righteous minds. (204)


According to Tomasello, human cognition veered away from that of other primates when our ancestors developed shared intentionality. At some point in the last million years, a small group of our ancestors developed the ability to share mental representations of tasks that two or more of them were pursuing together. (205)

When everyone in a group began to share a common understanding of how things were supposed to be done, and then felt a flash of negativity when any individual violated those expectations, the first moral matrix was born. (206)

…language became possible only after our ancestors got shared intentionality. Tomasello notes that a word is not a relationship between a sound and an object. It is an agreement among people who share a joint representation of the things in their world, and who share a set of conventions for communicating with each other about those things. If the key to group selection is a shared defensible nest, then shared intentionality allowed humans to construct nests that were vast and ornate yet weightless and…