2015-01-05

Joshua Greene, of Harvard University and author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, talks with EconTalk host Russ Roberts about morality and the challenges we face when our morality conflicts with that of others. Topics discussed include the difference between what Greene calls automatic thinking and manual thinking, the moral dilemma known as "the trolley problem," and the difficulties of identifying and solving problems in a society that has a plurality of values. Greene defends utilitarianism as a way of adjudicating moral differences.

Time: 1:10:06


Readings and Links related to this podcast episode

Related Readings


About this week's guest:

Joshua Greene's Home page

About ideas and people mentioned in this podcast episode:

Books:

Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, by Joshua Greene on Amazon.com.

Thinking, Fast and Slow, by Daniel Kahneman on Amazon.com.

Articles:

"The Tragedy of the Commons," by Garrett Hardin. Science, December 1968.

"Cross-Cultural Ultimatum Game Research Group," by Rob Boyd and Joe Henrich. Working paper, Caltech. PDF file.

"'Deep Pragmatism' as a Moral Engine," by Peter Reuell. The Harvard Gazette, November 2013.

"Two Track Mind: How Irrational Policy Endures," by Alexander Combs. Library of Economics and Liberty, June 7, 2004.

"Tragedy of the Commons," by Garrett Hardin. Concise Encyclopedia of Economics.

Jeremy Bentham. Biography. Concise Encyclopedia of Economics.

Daniel Kahneman. Biography. Concise Encyclopedia of Economics.

Adam Smith. Biography. Concise Encyclopedia of Economics.

Web Pages and Resources:

Moral Cognition Lab. Harvard University.

Eudaimonia. Wikipedia.

The Trolley Problem. Wikipedia.

Podcast Episodes, Videos, and Blog Entries:

"A Deeper Look at Uber's Dynamic Pricing Model," by Bill Gurley. Above the Crowd, March 11, 2014.

Boettke on Elinor Ostrom, Vincent Ostrom, and the Bloomington School. EconTalk. November 2009.

Russ Roberts and Mike Munger on How Adam Smith Can Change Your Life. EconTalk. October 2014.

Jonathan Haidt on the Righteous Mind. EconTalk. January 2014.

Arnold Kling on the Three Languages. EconTalk. June 2013.

Nick Bostrom on Superintelligence. EconTalk. December 2014.

Ed Leamer on Macroeconomic Patterns and Stories. EconTalk. May 2009.

Highlights


0:33

Intro. [Recording date: December 23, 2014.] Russ: I want to mention that, as we have done in the past, we'd like to know your top episodes of the year. To participate, go to econtalk.org, where you will find a link in the upper left-hand corner to a survey that will give you a chance to tell us a little bit about yourself, give us some general feedback if you'd like, and vote for your 5 favorite episodes of 2014. That survey will stay up through early February of 2015; and I will announce the results some time in mid- to late February.

1:05

Russ: Now, on to today's guest, Joshua Greene, Professor of Psychology at Harvard U. and the Director of the Moral Cognition Lab there. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, which is our topic for today's episode. So, this is a fascinating, thought-provoking, and very ambitious book. It's got an enormous amount of stuff packed into it--ideas, claims for making the world a better place, some fantastic thought experiments. We'll try to do justice to the book. I want to start with what you call our tribal nature. You argue that we have evolved to be fairly effective cooperators within our tribes, but not such good cooperators with other tribes. Explain what you mean by that--what you mean by 'tribes' and the tragedy of common-sense morality. Guest: Right. So it begins with a question: what is morality, to begin with? And what I think--and a lot of other recent commentators, and in some sense people going all the way back to Charles Darwin, think--is that morality is fundamentally about our social nature. And more specifically about cooperation: that is, what we call morality is really a suite of psychological tendencies and capacities that allow us to live successfully in groups, that allow us to reap the advantages of cooperation. But these tendencies that make up morality come primarily in the form of emotional responses that drive social behavior and that respond to other people's social behavior. I think a natural starting point is a story familiar to an economist: the tragedy of the commons, which I can talk about a little bit, if you want. Russ: Yeah, go ahead. Guest: So, the tragedy of the commons is a parable told by the ecologist Garrett Hardin. He tells the story of a bunch of herders who share a common pasture, and these are rational, self-interested herders who ask themselves, 'Should I add more animals to my herd?' And they think, 'Well, if I add more animals, that's more animals that I have at market, and that's good. That's the upside. What's the downside? Not so much downside: we're all sharing this common pasture.' And so they say the benefits outweigh the costs, and they add more and more animals to their herds. Then, when they all do this, there's not enough grass to support any of the animals; they all die, and everybody is worse off. And that's the tragedy of the commons. It's basically a parable about the problem of cooperation, which is really the problem of how you get people to put collective interest over self-interest. Russ: With the key point that by doing so, they'll be better off. Guest: Correct. Russ: Their self-interests will actually be served. So it's not a literal sacrifice. It's a sacrifice in the short run, for a longer-run benefit. Guest: That's right. If it's a repeated game then it's in everybody's long-term self-interest. I think that that's right. In the short term it's a conflict between self-interest and collective interest, but in the long term, a cooperative system is one that makes everybody better off. Although at any given moment it may be possible for someone, at least in a short-term way, to benefit themselves at the expense of the group. Russ: Absolutely. Guest: And so the idea is that our minds are designed to help us solve this problem. And you can think of us as having psychological carrots and sticks that we apply to ourselves and that we apply to other people.
So, a psychological carrot that we apply to ourselves to be cooperative would be feelings of love and friendship and goodwill that motivate us to say, 'Hey, it's not just my sheep that matter; everybody else's sheep, or at least some other people's sheep, matter too'--and that motivates you to be cooperative. Or you could have negative feelings that act as a stick for yourself, like shame and guilt. I would feel ashamed of myself if everybody else limited the size of their herds for the greater good and then I cheated. And we have positive feelings that reward other people--so you have my gratitude if you keep your sheep in line. And we have negative feelings that punish other people--you'll have my contempt and my anger and my disgust if you grow your herd as much as you feel like without regard for the rest of us who share the pasture. So the idea is that these feelings, these psychological carrots and sticks that we apply to ourselves and other people, are the core of morality, and that's what makes basic cooperation within a group possible. Russ: And just to mention Adam Smith, in The Theory of Moral Sentiments he says, "Man desires not only to be loved, but to be lovely." And so there's some self-regulating impulse to do the right thing, because you want people to respect you. And those carrots and sticks are flying around in all of our social interactions. So, it works pretty well; and we had Pete Boettke on EconTalk talking about the work of Elinor Ostrom--she got the Nobel Prize. She explained that small groups often devise norms and other voluntary, non-coercive ways to limit the tragedy. But the problem you are fascinated by--which I am, too--is when two tribes come along and they don't share the same morality. So, talk about the tragedy of common-sense morality, as you describe it. Guest: Right. So, this is my sequel to Hardin's parable. And one version goes like this. So, imagine that there's this large forest. And all around this large forest are many different tribes. And these different tribes are all cooperative, but they are cooperative on different terms. So, on the one side you might have your communist herders who say, Not only are we going to have a common pasture; we're just going to have a common herd, and that's how everything gets aligned. Everything is about us. And on the other side of the forest you might have the individualist herders who say, Not only are we not going to have common herds; we are not going to have a common pasture. We are going to privatize the pasture, divide it up; and everybody's responsible for their own piece of land. And our cooperation will consist in everybody's respecting each other's property rights, as opposed to sharing a common pasture. And you can imagine any number of arrangements in between. And there are other dimensions along which tribes can vary. So, they vary in what I call their proper nouns, that is: Which leaders or religious texts or traditions have authority to govern daily life in the tribe? And tribes may respond differently to threats and outsiders. Some may be relatively laissez faire about people who break the rules; others may be incredibly harsh. Some tribes will be very hostile to outsiders; others may be more welcoming. These are all different ways tribes can achieve cooperation, on different terms. They are all dotted around this large forest. And then the parable continues: One hot, dry summer, lightning strikes, there's a forest fire, and the forest burns to the ground.
And then the rains come and suddenly there is this lovely green pasture in the middle. And all the tribes look at that pasture and say, 'Hmmm, nice pasture.' And they all move in. So now we have in this common space all of these different tribes that are cooperative in different ways, cooperative on different terms, with different leaders, with different ideals, with different histories, all trying to exist in the same space. And this is the modern tragedy. This is the modern moral problem. That is, it's not a problem of turning a bunch of 'me-s' into an 'us.' That's the basic problem of the tragedy of the commons. It's about having a bunch of different us-es all existing in the same place, all moral in their own way, but with different conceptions of what it means to be moral. And so, if our basic psychology does a pretty good job of solving the me-versus-us problem of having basic cooperation within a group, the modern problem, both philosophically and psychologically, I think, is: What kind of a system and what kind of thinking do we need to regulate life on those new pastures of the modern world, where we have many different tribes with many different terms of cooperation, many different moral systems?
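[The incentive structure Greene describes--defecting pays any one herder in the short run, while universal defection leaves everyone worse off--can be made concrete with a toy payoff function. The sketch below is purely illustrative: the pasture capacity, herd sizes, and the quadratic overgrazing penalty are assumptions for the example, not numbers from the episode or the book.]

```python
# A minimal sketch of Hardin's commons with made-up numbers.
# Overgrazing degrades the pasture nonlinearly, so a lone defector
# profits, but universal defection makes everyone worse off.

CAPACITY = 20    # animals the shared pasture can support
FAIR_SHARE = 5   # restrained herd size (CAPACITY / 4 herders)
GREEDY = 10      # a defector's herd size

def payoff(my_herd, total_herd):
    """Market value of my herd, discounted as the pasture degrades."""
    if total_herd <= CAPACITY:
        return float(my_herd)
    return my_herd * (CAPACITY / total_herd) ** 2  # overgrazed

print(payoff(FAIR_SHARE, 4 * FAIR_SHARE))           # all restrain: 5.0 each
print(payoff(GREEDY, GREEDY + 3 * FAIR_SHARE))      # lone defector: 6.4
print(payoff(FAIR_SHARE, GREEDY + 3 * FAIR_SHARE))  # a restrained neighbor: 3.2
print(payoff(GREEDY, 4 * GREEDY))                   # all defect: 2.5 each
```

[Defecting alone raises a herder's payoff from 5.0 to 6.4, but when all four defect each ends up with 2.5--the me-versus-us tension that, on Greene's account, the psychological carrots and sticks evolved to police.]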

9:07

Russ: Before we go further, I want to just ask you an aside question that I thought about as I was reading the book, which is: You argue that we evolved morality to help us solve these kinds of problems. Why do we have different wants? And in particular--we'll probably come back to this later on--I'm more of a bottom-up guy than you are; you are a top-down guy, more than I am. You concede in places that bottom up is good; and I of course concede in certain places that top down is good. But overall, we have a philosophical difference. And you identify that difference to some extent with the northern and southern tribes--the northern tribes being more individualistic-- Guest: Right: metaphorically northern and southern. Yeah. Russ: And southern tribes being more collectivist. As you point out, there's obviously lots of gray areas in between. Why do you think there are such different ideologies to start with? Why am I a bottom-up guy, and why are you a top-down guy? And you talk a lot about the fact that, of course, we both think that we are right. And we both think we have evidence for why we are right. But, given that the world's a complicated place, how do we get that difference to start with? Why don't we both have the same morality toward how we solve problems? Guest: Well, so I'm not sure exactly what you mean by bottom up and top down, but I think the leading scientific explanations are at least what I would call pretty bottom-uppish. So, a couple of examples here. Joe Henrich and colleagues, for example, have collected evidence from small-scale societies all around the world and found quite a bit of variation in terms of how people cooperate--in the "lab," that is, having them play standardized economic games, and then also in their everyday life. So, take the Lamalera of Indonesia. These are people who make their living by hunting whales in collective hunting parties. So, their livelihood depends very much on cooperation. And sure enough, when you have them do public goods games, prisoner's dilemmas--the kinds of economic games that model the tragedy of the commons--they are exceptionally cooperative. You have other societies where people hunt individually--I hope I'm getting this right, but the Machiguenga of, I believe, Peru, but certainly in South America--they hunt as individuals and individual families; and when they play these economic games, they are much less cooperative. Which is not to say that they are not cooperative people, but they tend to cooperate within families as opposed to across families, at least economically. Now, if you live in a place where there are whales to be hunted, then there are advantages to having a cooperative way of life. If you live in the Amazon, where there aren't whales to be hunted and the way you get food is by just going off on your own and finding what you can, then that lends itself to a more individualistic society. There is a paper that came out a couple of years ago, or actually maybe it was just this year, by Kitayama[?]
and colleagues arguing that there are big differences between cultures that cultivate wheat and cultures that cultivate rice--this goes back to some ideas by Richard Nisbett and colleagues. The argument is that the more collectivist cultures of Asia are ultimately shaped by the original rice-based economies there: rice cultivation can be incredibly productive, but it requires a lot of intense cooperation. Nisbett has also, for example, cited evidence of more individualistic tendencies among people who live in herding cultures, where it's a mountainous region and you are not going to be growing crops on the ground but instead are going to be herding sheep, let's say. That ends up leading toward more individualistic societies. So, I'm not sure if we actually disagree on this. Russ: I don't think we do. At all. I'm trying to get a more nuanced view, which I think is in the book, which is: The tribe we're in is not just a result of evolution. It's also cultural and depends on our situation. Guest: Oh, absolutely. Absolutely. Yes. No, I think that what we're born with is a set of options. It's a lot like language, right? All humans--all healthy humans--are born with the capacity for language. But whether you end up speaking English or Chinese or something else is going to depend on the environment, the linguistic environment, into which you are born.

13:45

Russ: Let's talk about the two Trolley Problems and what you learn about morality from those, because obviously there's lots of variations on the Trolley Problem that you talk about in the book. But talk about the two basic ones and talk about what you mean by automatic mode and manual mode, which I found very interesting. Guest: Okay. So, before I get to trolleys specifically, let me say a little bit about how I think this connects to the first set of questions you asked about the tragedy of the commons and the tragedy of common-sense morality. Because one of the main ideas of the book is that we have two kinds of problems; we also have two kinds of thinking. And that our gut reactions, our intuitions, what I call our automatic settings, which I'll explain in a moment, do a good job of solving the original tragedy of the commons, but they create the tragedy of common-sense morality. That is, our gut reactions about how we ought to live make it harder for us, in many ways, to live in a pluralistic world. So, let me give you my metaphor, which is familiar to people who have read--well, at least the idea is familiar to people who have read Daniel Kahneman's book, Thinking, Fast and Slow, and a lot of the research on dual-process decision-making. My preferred metaphor for this is the digital SLR (single-lens reflex) camera--a camera like the one I got many years ago now; it has automatic settings on it. So just for everyday use, if you are taking a picture of a mountain from a mile away in broad daylight, you put it in landscape mode and click, point and shoot, you've got your shot. Or if you are taking a picture of somebody up close in indoor light, then you put it in portrait mode and click, you've got your shot. And it also has a manual mode where you can adjust the f-stop and everything else by hand. And I ask: why does the camera have these two different ways of taking photos, your automatic settings and your manual mode? And the idea is that this allows you to navigate the tradeoff between flexibility and efficiency. So, the automatic settings are very efficient, point and shoot; and they are good for the kinds of situations that the manufacturer has already anticipated. Like taking a landscape picture or taking a standard portrait picture. But the manufacturer also knows that there are going to be situations that the manufacturer isn't going to specifically anticipate; and so the manufacturer also gives you a manual mode where you can adjust everything yourself. The manual mode is very flexible, but it's not very efficient. So you can do anything with it, but you have to know what you are doing; it takes time; you might make a mistake. And this design of having both makes a lot of sense overall, because most of the time you can get by just pointing and shooting, and that's what you really want. But occasionally you want to have the flexibility to put the camera in manual mode and get exactly what you want, depending on [?] conditions-- Russ: And if you don't, you are going to get a really bad picture sometimes. I think that's the-- Guest: Right. Exactly. So the idea is that the human brain has the same design: we have automatic settings, and we have our manual mode. Our automatic settings are our gut reactions, our largely emotional responses to situations, especially social situations, that tell us: That's good, that's bad, this is what you ought to do, this is what you ought not to do.
We also have a manual mode; we also have the ability to step back and think in an explicit, deliberate, what you might call, in a somewhat loaded sense, rational way about whatever it is that's facing us. And we might override some gut reaction we have because we'd say, well, in this case, even though it feels like we should do this, it actually makes more sense to do that. So, with this idea in mind of the tension between our automatic settings and our manual mode, our gut reactions and our slow, deliberate thinking, I'll introduce, as you said, the Trolley Dilemma. This is the philosophical problem that got me interested--well, really got me started--in my research as a scientist. So, one version of the Trolley case goes like this. You've got a trolley headed towards 5 people; they are going to die if you don't do anything, but you can save them. If you hit a switch you can turn the trolley away from the five and onto another track, but unfortunately there's 1 person there. And if you ask most people, 'Is it okay to turn the trolley away from the 5 and have it run over the 1 person?' depending on who you ask and how you ask it, about 90% of people will say, 'Yes.' Russ: Better that one person dies than five. Guest: That's right. The tradeoff is between 5 lives and 1, and the particular mechanism is hitting the switch that will turn the trolley away from the five and onto the one. Parallel case, which we'll call the Footbridge Case: This time the trolley is again headed towards 5 people, but now you are on a footbridge over the track, in between the oncoming trolley and the 5 people. We stipulate the only way that you can save them now is to end up killing somebody. So, there's this large guy, wearing a large backpack, who is right next to you. And you can push him off of the footbridge and he'll land on the tracks and he'll die--he'll get killed by the trolley--but it will stop the trolley from running over the 5 people. Now, to cut down on the number of angry emails that you get from people, I have to make some stipulations clear. We are stipulating that, a), you cannot jump, yourself. The only way to save the 5 is-- Russ: You're not big enough. Guest: That's right. Not big enough. You cannot jump, yourself. And yes, this will definitely work. And I know you've all been to the movies and sometimes you are able to suspend disbelief, and I ask you to do the same thing here. And we ask our participants, when we do these experiments, to do the same thing; and in general they don't have any problem doing this. Here, one of the questions is: Is it okay to push the guy off the footbridge, use him as a trolley stopper, to save the 5 people? Most people say no. There are some populations where people are more likely to say yes. But in general, take an American sample, somewhere between about 10% and 35% of people will say that it's okay to push the guy off the footbridge; most people will say that it's not okay. So, interesting question: What's going on? Why do we say that it's okay to trade 1 life for 5 when you can hit a switch that will divert the trolley away from 5 and onto 1, but it's not okay to push the guy off the footbridge--even if we assume that this is going to work and that there's no other way to achieve this worthy goal? Most people still say that it's wrong. We're coming up on a decade and a half of research on, or stemming from, this moral dilemma. And we've learned a lot.
It seems that it's primarily an emotional response to that physical action of pushing the guy off the footbridge. And you can see it, for example, in a part of the brain called the amygdala, which you might think of as a mammal's early-warning alarm system that something may be bad, needs attention, maybe not a good idea--you see that alarm bell going off in this basic part of the mammalian emotional brain. And the strength of that signal is correlated with the extent to which people say that it's wrong to push the guy off the footbridge or whatever it is. You also see increased activity in the dorsolateral prefrontal cortex, which is the part of the brain that's most closely associated with explicit reasoning, or anything that really requires a kind of mental effort, like remembering a phone number or resisting an impulse of some kind or explicitly applying a behavioral rule. That's sort of the seat of manual mode. And these two signals from different parts of the brain--one a kind of automatic response to the action and the other reflecting the balance of costs and benefits--duke it out in the brain; and in some people they go one way and in some people they go the other way. And if you give people a distracting secondary task, it slows down their utilitarian judgments--that is, the judgments where they say that it's okay to kill 1 to save 5. If you give people more time, they are more likely to give a utilitarian judgment. People who give more reflective answers to tricky math questions are more likely to say that it's okay to push the guy off the footbridge. If you give people a drug that in the short term heightens certain kinds of emotional responses--the drug used in the experiments is citalopram, which is an SSRI (selective serotonin reuptake inhibitor), kind of like Prozac--people are more likely to say that it's wrong to push the guy off the footbridge. If you give people an anti-anxiety drug--lorazepam is the one used in the study I have in mind--they are more likely to say that it's okay to push the guy off the footbridge. And so there's a lot of evidence, from a lot of different kinds of experiments--brain imaging, behavioral manipulations, pharmacological manipulations, looking at patients with different kinds of brain damage--and they all support this kind of dual-process story. That is, that there's a gut reaction that's saying, 'No, don't push the guy off the footbridge'; and then a more conscious, explicit, calculating response that says, 'Well, but you can save 5 lives; don't you think that makes sense?' And--well, I could go on.

23:13

Russ: Talk about how you might want to exploit or use those differences--and I just have to say as a footnote: There are a lot of experiments in economics that make all kinds of different claims about behavior, and one of the aspects of these experiments, of course--it's really a big one in the footbridge example--is that this is a very alien experience for most people. And I think part of the challenge in interpreting them is the fact that, if it happened every day--if people were constantly shoving people over footbridges--maybe people would have different responses. Guest: Absolutely. Russ: There's an issue of grappling with uncertainty. And even though you say don't be uncertain, I think that's maybe the automatic part that's kicking in, not necessarily the morality. But let's put that to the side. It's definitely true that we have some gut reactions about some things and then some more pensive and thoughtful reactions. But--what's the implication of that for the tragedy of common-sense morality, for these philosophical, ideological moral differences between tribes and groups? Guest: So, there are a few dots, I think, that need to be connected. If you sort of follow the arc of the book, the first part is about the two tragedies and their different structures. And then the next part is about morality fast and slow in general. Initially it's just illustrating the idea that our moral thinking involves a tension between gut reactions to certain types of actions that are generally bad but maybe not always bad, and a kind of cost/benefit thinking that can either be selfish, or can be impartial in the case of the third-party observer saying, 'Well, isn't it better just to save more lives?' What I propose as a solution to the tragedy of common-sense morality is a much-maligned and poorly named philosophy which many of your listeners will be familiar with, known as utilitarianism. Russ: Oooooh. Guest: Boo. [?] Russ: That was 'oooh.' Just suspense. It wasn't necessarily--I have an anti-utilitarian streak, but I have a pro one, also. So, I'm ambivalent. That was just 'oooh.' Go ahead. Guest: Okay. So, I think utilitarianism is very much misunderstood. And this is part of the reason why we shouldn't even call it utilitarianism at all. We should call it what I call 'deep pragmatism', which I think better captures what utilitarianism is really like, if you really apply it in real life, in light of an understanding of human nature. But we can come back to that. The idea, going back to the tragedy of common-sense morality, is: you've got all these different tribes with all of these different values based on their different ways of life. What can they do to get along? And I think that the best answer that we have is--well, let's back up. In order to resolve any kind of tradeoff, you have to have some kind of common metric. You have to have some kind of common currency. And I think that what utilitarianism does, whether it's the moral truth or not, is provide a kind of common currency. So, what is utilitarianism? It's basically the idea that--it's really two ideas put together. One is the idea of impartiality. That is, at least as social decision makers, we should regard everybody's interests as of equal worth. Everybody counts the same. And then you might say, 'Well, but okay, what does it mean to count everybody the same? What is it that really matters for you and for me and for everybody else?'
And there the utilitarian's answer is what is sometimes called, somewhat accurately and somewhat misleadingly, happiness. But it's not really happiness in the sense of cherries on sundaes, things that make you smile. It's really the quality of conscious experience. So, the idea is that if you start with anything that you value, and say, 'Why do you care about that?' and keep asking, 'Why do you care about that?' you ultimately come down to the quality of someone's conscious experience. So if I were to say, 'Why did you go to work today?' you'd say, 'Well, I need to make money; and I also enjoy my work.' 'Well, what do you need your money for?' 'Well, I need to have a place to live; it costs money.' 'Well, why can't you just live outside?' 'Well, I need a place to sleep; it's cold at night.' 'Well, what's wrong with being cold?' 'Well, it's uncomfortable.' 'What's wrong with being uncomfortable?' 'It's just bad.' Right? At some point if you keep asking why, why, why, it's going to come down to the conscious experience--in Bentham's terms, again somewhat misleading, the pleasure and pain--of either you or somebody else that you care about. So the utilitarian idea is to say, Okay, we all have our pleasures and pains, and as a moral philosophy we should all count equally. And so a good standard for resolving public disagreements is to say we should go with whatever option is going to produce the best overall experience for the people who are affected. Which you can think of, as shorthand, as maximizing happiness--although I think that that's somewhat misleading. And the solution has a lot of merit to it. But it also has endured a couple of centuries of legitimate criticism. And one of the biggest criticisms--and now we're getting back to the Trolley cases--is that utilitarianism doesn't adequately account for people's rights. So, take the footbridge case. It seems that it's wrong to push that guy off the footbridge, even if you stipulate that you can save more people's lives. And so anyone who is going to defend utilitarianism as a meta-morality--that is, a solution to the tragedy of common-sense morality, a moral system to adjudicate among competing tribal moral systems--if you are going to defend it in that way, as I do, you have to face up to these philosophical challenges: is it okay to kill one person to save five people in this kind of situation? So I spend a lot of the book trying to understand the psychology of cases like the footbridge case. And you mentioned these being kind of unrealistic and weird cases. That's actually part of my defense. Russ: Yeah, there's some plus to it, I agree. Guest: Right. And the idea is that your amygdala is responding to an act of violence. And most acts of violence are bad. And so it is good for us to have a gut reaction--which is really a reaction in your amygdala that's then sending a signal to your ventromedial prefrontal cortex and so on and so forth, and we can talk about that. It's good to have that reaction that says, 'Don't push people off of footbridges.' But if you construct a case in which you stipulate that committing this act of violence is going to lead to the greater good, and it still feels wrong, I think it's a mistake to interpret that gut reaction as a challenge to the theory that says we should do whatever in general is going to promote the greater good. That is, our gut reactions are somewhat limited. They are good for everyday life.
It's good that you have a gut reaction that says, 'Don't go shoving people off of high places.' But that shouldn't be a veto against a general idea that otherwise makes a lot of sense. Which is that in the modern world, we have a lot of different competing value systems, and that the way to resolve disagreements among those different competing value systems is to say, 'What's going to actually produce the best consequences?' And best consequences measured in terms of the quality of people's experience. So, that's kind of completing or partially completing the circle between the tragedy of the commons, that discussion, and how do we get to the Trolleys.
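[Greene's "common currency" standard--choose whatever option produces the best overall experience for the people affected--has a simple formal shape. The sketch below renders it schematically; the options and the per-person numbers are placeholders, and the hard part Roberts presses on next, actually measuring anyone's experience, is assumed away.]

```python
# Schematic rendering of the utilitarian "common currency" idea:
# score each option by summing its effect on the experience of
# everyone affected, counting each person once and equally, and
# pick the option with the best total. All numbers are placeholders.

def total_welfare(effects):
    """Impartiality: everyone's experience counts once, and equally."""
    return sum(effects)

options = {
    # option -> per-person changes in quality of experience
    "option_a": [+1.0, +1.0, +1.0],
    "option_b": [+6.0, -1.0, -1.0],   # one big winner, two losers
    "option_c": [+2.0, 0.0, 0.0],
}

best = max(options, key=lambda name: total_welfare(options[name]))
print(best, total_welfare(options[best]))  # option_b 4.0
```

[As written, the rule happily picks option_b, which concentrates gains on one person while two others lose--a toy version of the rights-based objection Greene takes up via the footbridge case.]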

31:06

Russ: Yeah. So, there are some things about the utilitarian idea that are deeply appealing, and you do a beautiful job making the case for it. And you spend a lot of time conceding there are problems with it and then giving what you think is the best answer; and I found those very interesting. Not totally persuasive, but provocative. I want to raise a couple of issues and let you respond. So, the first is: I think part of the reason that people have problems with pushing that guy off the bridge is that there's an arrogance involved. Which makes me nervous, as a northern herder in your example. Guest: Right. Russ: So, I like the idea of going around saving lives. And people make lots of claims for--the death penalty saves lives; it doesn't take lives, it saves lives. And there are a lot of different claims that people make. Ultimately most of those claims come down to empirical claims, somewhat supported by evidence but not totally, completely ironclad, about how x leads to y. And one of the main themes of EconTalk is that I'm [?] humble about that connection between x and y. And I'm thinking, you go out there pushing people off of footbridges, you're actually a dangerous person. You are not a moral person. You're going to run amok. Guest: I agree. I think what you are essentially doing is making a good, deep-pragmatist, long-term utilitarian argument against being too quick to implement what might narrowly seem to be a utilitarian solution. Russ: And that's really, by the way--that's a nice way to put it. That's really what economists do, by the way--often what economists say: 'Not so fast.' Right? Guest: Right. So, I think it depends on the case, right? When it comes to--take something like physician-assisted suicide. Right? You might have a kind of footbridge sort of reaction--I think the American Medical Association has one, and a lot of people, too--which says, it's just wrong for you to intentionally and actively end the life of a patient even if they want to. Right? It pushes--I'm willing to bet it pushes that amygdala button. Russ: Yeah, big time. Guest: Right? But, you might say, 'But the greater good is served by not forcing people who are suffering and who have no quality of life and no hope of a better life to go on and suffer and wait for the disease to kill them instead of them dying their own way.' Now, on the one hand, there's something, I think, to that caution that says, 'Well, wait a second. This could go terribly wrong.' If we have doctors who are too quick to say, 'Oh, you want to die? Oh, here you go.' Russ: It's a slippery-slope argument. Guest: Yeah. So, on the one hand you want to be careful and you want to listen to that amygdala signal that says you are playing with fire here. But at the same time, you don't want to give it an absolute veto. And so I think that the kind of skepticism about overly ambitious social policy is a good skepticism. At the same time, I think it is often possible to do things that feel wrong but that actually end up making things better. Russ: For sure.

34:35

Russ: So, let's talk about the basic idea. You actually--in the book you sum it up in three words: maximize happiness impartially. And of course by happiness you don't necessarily mean--although it could include--dancing at a party while drunk or gorging on ice cream. It's a richer concept. Sometimes we call it flourishing here on the program. Or I think the fancy name is eudaimonia. I don't know if I'm pronouncing it correctly. I think that's Aristotelian. And it's about--there's a whole very rich menu of stuff that gives us a feeling of pleasure, of utility, of satisfaction, deep tranquility, serenity, etc. And we're going to be open about it--we're not going to try to narrow down that definition. So I'm with you there. So, for me, as an individual--me, just me--I face tradeoffs all the time about satisfaction and pleasure and happiness. How long should I stay at work? Should I watch the football game instead of helping my kids with their homework? These are all questions that we face every single day as individuals, and we do our best, and sometimes we make mistakes that we regret; and we understand that: life isn't perfect. And morality to some extent, and self-help books, are trying to help us navigate those tradeoffs. The problem I have with your tradeoff is--and I understand the desire for a common currency across these tradeoffs--but they are across different people. And I can't measure happiness. Even if I could, I'm not sure that I can imagine an entity that would come up with the right way to make those tradeoffs. So, we think about this in a political context, which is naturally what you do in the book. So, here we are in the United States. We're in this pasture. We're all here together. We have very different philosophies. Unfortunately--not only do we disagree; even if we agreed, you and I, on, say, the right way to adjudicate our dispute, we don't really have a mechanism for implementing it. We think we do. We call it democracy. But it's a very imperfect mechanism that often exploits our differences for the benefit and gain of individuals. So it's not obvious to me that it's even a good idea to say, Let's pretend we could decide what is the greatest happiness across these 330 million people, let alone the 7 billion, and then hope that somehow it'll get implemented. Is that really a practical solution to our political problems? Guest: No--I don't think that there is any alternative. I think that we are living with someone's attempts to adjudicate these tradeoffs of values, and we can either just accept what the powers that be put in front of us, or we can vote our conscience and try to change them, or vote our conscience and say, yes, I endorse this. I think that what you're objecting to is the difficulty of the problem, not an inherent problem with the solution, if you want to call it that, that I'm proposing. So I think it's easier to think about these things with a concrete example. So, take the case of raising taxes on the wealthiest Americans. Now, I know that this is controversial. But let's suppose that government spending can provide good stimulus to the economy and can increase employment and make things better off for the people who are employed as a result. Okay, so you have to do a tradeoff.
You would have to say: How much do the wealthiest people lose by having their incomes reduced by some amount--someone who is making half a million dollars a year might pay, instead of 30% in taxes, 40% or something like that--versus the benefits that go to people who now have jobs as a result of expansion of the public sector, or children who have a better shot at living the good life because of increased commitment to early childhood education, etc. There are a lot of empirical assumptions and questions here. But if we can at least agree on the empirics, then there's the question of: Okay, is this tradeoff worth it? I don't think there's any way to avoid asking that question, and I think that in a lot of these cases, it's actually pretty clear--that, for example, taking people who are already very wealthy and reducing their income somewhat doesn't really do much to their happiness. Whereas if you provide opportunities to people at the bottom of the scale, that actually can make an enormous difference in their lives. So, you know, I think that the alternative is to just say, let it evolve the way it evolves without consciously thinking about this as a social problem. But I don't think that that's a better alternative. Russ: Well, that's because you're a southerner. I'm a northerner, and as a northerner, I say: if we get the government out of this, things will be done to help poor people through the private sector, charity, and other ways. Charities will take money from rich people--they do give it voluntarily--maybe not so much as we'd like; certainly not as much as they'd give if they were forced to give. But the real issue I have--and this is my meta-meta morality, I guess, and I think it's an interesting thought experiment--the real problem I have is that the empirical assumptions that you need to make for some reason don't appeal to me. And they do tend to appeal to people who are the collectivists. Right? So, you made a lot of--you just gave a couple; we could think of 10 more: better schools, better pre-schools, more training programs, greener this, reduce carbon dioxide emissions, stimulate the economy, reduce unemployment. And most of those things everybody agrees would be good if they happened. But strangely enough--and this is, to me, a different kind of tragedy--the people who are from the north, us individualists, we seem to think that the empirical evidence is very unconvincing. Whereas the people who are in the south seem to find it extremely compelling. Guest: Right. Russ: So, what it comes down to, I fear, is a pretense--a pretense that we are doing something scientific by just looking at the outcomes rather than arguing about our principles. 'We're just going to see what works the best.' But that's kind of a false--that's kind of an illusion, I worry. What do you think? Guest: But why--I see this problem on both sides. I think that both sides-- Russ: I do, too. Guest: interpret the evidence. The evidence in social science is almost always ambiguous. And both sides interpret the evidence so as to support the kind of social policy that they intuitively favor. I think that's a problem on both sides. Russ: I agree. Guest: But, you know--I think it's not an impossible task to sort out the fact from the bias. And the signal-to-noise ratio may be lower than we'd like, but I still think that there is a signal there.
I think one thing that we can do--and this is one of the major practical points in the book--is to not think of these social problems in terms of rights when we are really trying to have an honest discussion about them. Russ: Yeah, I really like that, by the way. Even though I've probably made those rights arguments. I thought this was fantastic. Go ahead. Guest: And I use the language of rights as well; I think it has its place, as I also argued in the book. But if something becomes a matter of rights--take capital punishment: it's the public's right to see justice done, which means having the person killed; or capital punishment is a violation of human rights, as Amnesty International says--if you make something about rights then it essentially leaves the realm of the empirical, because we can use the language of rights as a front for whatever our automatic settings say, for whatever our amygdala says. Right? Russ: Yep. Guest: And so, one way to try to make progress from both sides is to say: Okay, we're not going to discuss these problems in terms of absolute rights, because we have no way of figuring out what rights people really have in some ultimate metaphysical sense. And instead we can ask which kinds of policies actually work. A lot of these things are difficult because we can't do controlled experiments--we're not rats living in a lab. We're people living in a society where it's almost impossible to do controlled experiments with things like the death penalty. Russ: Or a stimulus. Guest: But we can look at other countries that don't have the death penalty and say, well, do they have rampant murder problems? Or, is there something fundamentally different about those societies that's making them relatively murder-free compared to the United States? I think that the empirical battle is winnable, but it's 10 steps forward and 9 steps back.
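[Greene's claim above--that reducing a very wealthy person's income "doesn't really do much to their happiness" while opportunities at the bottom "can make an enormous difference"--is the standard diminishing-marginal-utility argument. A worked sketch, assuming a logarithmic utility curve, which is a common modeling convention and not something specified in the episode:]

```python
import math

# Diminishing marginal utility of income, illustrated with log utility.
# The curve and the dollar figures are illustrative assumptions.

def utility(income):
    return math.log(income)

transfer = 10_000  # move $10,000 from a high earner to a low earner

rich_loss = utility(500_000) - utility(500_000 - transfer)
poor_gain = utility(20_000 + transfer) - utility(20_000)

print(f"rich person's utility loss: {rich_loss:.3f}")  # ~0.020
print(f"poor person's utility gain: {poor_gain:.3f}")  # ~0.405
```

[On this curve the recipient's gain is roughly twenty times the donor's loss. Roberts's pushback lands elsewhere: whether the empirical links hold up--what government actually does with the revenue, and at what incentive cost--which no utility curve settles.]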

43:56

Russ: So, let me phrase the challenge in a different way. You concede at one point in the book--you reject it, but you concede--that people think we're already doing this. We favor the policies that work out the best, or that create the most happiness, or that are good for most people, or the "best policies." And isn't part of the problem really that we're just pretending when we argue about this--that it's all rhetoric? We all have our stories to tell: as Ed Leamer says, we're pattern-seeking, story-telling animals. So we cherry-pick our data. And all this utilitarian stuff--all it's really doing is just giving me a different rhetorical frame. I'm not really going to make progress. So tell me something cheerful. Guest: Uh, so, let's take the case of prison policies and things like solitary confinement and other exceptionally harsh treatments that exist in American prisons. You're seeing--now you're seeing a lot of this in the news. For a long time people on the Left have been saying these practices of exceptionally harsh punishment in prisons are not doing anything to help anyone; they don't deter crime very much because most would-be criminals are not paying attention to this level of detail. Russ: Worse. Could be worse. Guest: It makes things miserable for the prisoners. Russ: Could be worse. Guest: Sorry? Russ: Yeah, it could be worse for society. It reduces their ability to come out and do something productive. Guest: Exactly. Right. And what you're seeing now is people on the Right who are coming around to say, Look, it's not productive; this is not helping. This is a place where, I think, we're actually just beginning to see a consensus on Left and Right, at least on certain flash-point issues like solitary confinement and things like that. And it's really driven by evidence. Russ: That's a good example. And I'd use the drug war as another example. It's hard for--there are a lot of people who see it as a rights-based issue: people should not have the right to harm themselves. And when they see the effect of the drug war, they start--some, not all--but some people do change their minds based on the fact that they actually don't think it's making the world a better place. It's not necessarily even reducing the amount of drugs being taken; it's corrupting the police; etc. So, I don't mean to argue that empirical evidence or reality doesn't come into it. I'm just a little worried about the bigger, overarching claim.

46:48

Russ: Let me ask you a couple of different challenges. This is a little bit like ask-the-doctor; these are hard ones. Uber, the car-sharing, taxi-ish service you can use on your iPhone, recently got in trouble in Sydney, Australia during a crisis situation, and it's happened with other natural disasters: there's an increase in demand somewhere, and the Uber algorithm raises the price. Which draws more drivers into the area. And as an economist--whether I'm a southerner or northerner or not, I mean--I love that. I see more people getting out of town. A lot of people can't see it. They don't care, even. They see that it's just wrong to take advantage of people, and they think Uber is immoral. And to me it's amoral; and in fact, it's good. So, why do you think people have that reaction to so-called price gouging? Guest: So, I actually haven't followed the details of the Uber situation, and I would say, whether or not I think it's a good or bad thing will probably turn on facts that are not much discussed in the case. So, I think the kind of standard [?] response to price gouging is, you know, there's a flood and the people who are selling buckets are suddenly selling them for a thousand dollars each. And the idea is, you are exploiting those people; you are making it harder for people to deal with their emergency, and they could be losing an awful lot. Because you're saying, this is a chance where I could make an extra buck. And so from a utilitarian perspective, you are saying, okay, so you get a little extra money selling your stuff and the other person's house gets flooded--or I should have said fire: in a fire, there's a person selling buckets, and the other person's house is burning down, and you're concerned about making a few extra dollars taking advantage of someone in need. There I think the utilitarian analysis clearly says: Price gouging is terrible. You are taking a little gain for yourself, relatively speaking, because someone is desperate and they are trying to save their house, which is worth much, much more to them. If that's what's going on, then I think price gouging is bad, and it might be good to have regulations. Russ: And that's a world where there's a fixed number of buckets. Guest: Exactly. Russ: And a fixed number of buckets [?]-- Guest: Now, what's going on with [?] Uber is all of these people saying, 'You know, I'm willing to work overtime,' essentially: 'I'm willing to add extra travel capacity; but I'm not willing to do it for my usual price. I'm willing to do it for a little bit more; and fortunately there are people who are willing to pay for it.' I actually think that that is, overall, a better thing. So if it's actually increasing the availability at a time when people need it, that's better. Now, it would be better still if people said, 'You know what? I'm willing to do this as a kind of partial public service, where I will get paid for it but I'm not going to increase my rate even though I could.' That would be even better. But we naturally compare it to Uber at the usual price instead of to someone staying home and not driving at all. [more to come, 50:04]
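[The distinction Greene draws--a fixed stock of buckets, where a higher price only redistributes, versus drivers who add capacity when the price rises--is a difference in supply elasticity. A toy contrast, with an invented supply curve and made-up parameters rather than anything from Uber's actual algorithm:]

```python
# Toy contrast between the two cases discussed: a fixed stock of
# buckets (price changes move money, not quantity) versus an elastic
# supply of drivers (surge pricing draws more of them out). The
# supply curve and its parameters are illustrative assumptions.

def drivers_on_road(multiplier, base_drivers=100, elasticity=0.8):
    """More drivers come out as the surge multiplier rises."""
    return round(base_drivers * multiplier ** elasticity)

def buckets_for_sale(multiplier, stock=100):
    """A fixed stock: the price changes nothing about quantity."""
    return stock

for m in (1.0, 2.0, 4.0):
    print(f"surge x{m}: {drivers_on_road(m)} drivers, "
          f"{buckets_for_sale(m)} buckets available")
# surge x1.0: 100 drivers, 100 buckets available
# surge x2.0: 174 drivers, 100 buckets available
# surge x4.0: 303 drivers, 100 buckets available
```

[At a 4x surge this toy supply curve puts roughly three times as many drivers on the road, while the bucket stock never moves--which is why, as Greene suggests, the utilitarian ledger can come out differently in the two cases.]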
