2016-02-15


Why do so many medical practices that begin with such promise and confidence turn out to be either ineffective at best or harmful at worst? Adam Cifu of the University of Chicago's School of Medicine and co-author (with Vinayak Prasad) of Ending Medical Reversal explores this question with EconTalk host Russ Roberts. Cifu shows that medical reversal--the discovery that prescribed medical practices are ineffective or harmful--is distressingly common. He contrasts the different types of evidence that support or discourage various medical practices and discusses the cultural challenges doctors face in turning away from techniques they have used for many years.

Time: 1:04:49

Download (MP3, 29.7 MB)

Readings and Links related to this podcast episode

Related Readings


This week's guest:

Adam Cifu's Home page

This week's focus:

Ending Medical Reversal: Improving Outcomes, Saving Lives, by Vinayak K. Prasad and Adam Cifu on Amazon.com.

Additional ideas and people mentioned in this podcast episode:

"A Decade of Reversal: An Analysis of 146 Contradicted Medical Practices," by Vinay Prasad, MD; Andrae Vandross, MD; Caitlin Toomey, MD; Michael Cheung, MD; Jason Rho, MD; Steven Quinn, MD; Satish Jacob Chacko, MD; Durga Borkar, MD; Victor Gall, MD; Senthil Selvaraj, MD; Nancy Ho, MD; Adam Cifu, MD. Mayo Clinic Proceedings, August 2013.

Eric Topol on the Power of Patients in a Digital World. EconTalk. May 2015.

Robert Aronowitz on Risky Medicine. EconTalk. November 2015.

Oster on Pregnancy, Causation, and Expecting Better. EconTalk. October 2013.

Subtopic: The FDA, drug testing and regulation

Pharmaceuticals: Economics and Regulation, by Charles L. Hooper. Concise Encyclopedia of Economics.

Drug Lag, by Daniel Henninger. Concise Encyclopedia of Economics.

Sam Peltzman on Regulation. EconTalk. November 2006.

A few more background readings:

Cartels, by Andrew R. Dick. Concise Encyclopedia of Economics.

A few more EconTalk podcast episodes:

Adam Ozimek on the Power of Econometrics and Data. EconTalk. February 2016.

Highlights


0:33

Intro. [Recording date: January 28, 2016.] Russ: I want to say first, this is a spectacular book that will resonate deeply with EconTalk listeners who are interested in health, what is reliable evidence, how do we know what we know. And ultimately I think we are going to get, at the end of this conversation, to issues related to economics and the parallels between health, evidence in health and economics, which we have talked about before. So I just want to start by encouraging people to check the book out. It's also delightfully written. I want to start with, what is medical reversal, which is the subject of your book? What does that term mean? How did you coin it? Guest: Medical reversal, yeah; we actually had a lot of trouble coming up with what to call this. We think about--I like to start with thinking about how medicine is supposed to evolve. Which I think of as replacement: We have a good therapy, a pill, a surgery, a device--whatever; we are happy with that. And then some good evidence comes out to tell us that something is better; and that something better replaces what we used to do; and we're sort of happy about that. And that's kind of how we expect medicine to improve, bit by bit, over time. Reversal is when a new therapy comes along that replaces the old therapy. But usually that new therapy is not based on really foolproof evidence. But we only find that out after the new therapy has been adopted--used by hundreds, thousands, maybe millions of patients. And then we discover, when more robust data comes out that: Huh. This new therapy is not as good as the old therapy. Maybe it's worse than the old therapy. Maybe it's worse than not doing anything else. And that's what we consider medical reversal, where it kind of flip-flops on what we've recommended. Russ: And the first question of course is: Is this a big problem or a small problem? We'd assume, given how phenomenal medical care is and its advances in technology and pharmaceuticals--this must happen, what, every once in a while, of course. But you seem to suggest it's a little more of a problem than one might think. Guest: Yeah. To be completely honest, we don't know how much this happens. Vinay Prasad, my co-author, and I began thinking about this just within our own clinical practice: we realized that some things we'd recommended a couple of years before, we not only no longer recommend but we were sort of apologizing to our patients that we'd recommended it in the first place. And so we sought to figure out: Boy, is this just a rare occurrence that sticks in your head; and certainly sticks in my patients' heads? Or is this something that's more frequent? A lot of people have looked into this, in various different interesting ways which I can talk about. Our approach was actually just to look at one journal, a really important journal, The New England Journal of Medicine, and we looked at all the articles published over a 10-year span. We got a lot of people to help us out with that: it was a pretty big job. And over those 10 years, we identified 146 articles, which concerned about 100 different therapies, which were clearly therapies that had been adopted, used widely, lots of money spent on them--that turned out to absolutely be the wrong thing. Our estimate, sort of taking other people's research with our research, is that maybe as much as 35%, 40% of what we do could be wrong. Russ: It's a really big number. And one of the effects of your book--and my listeners know how skeptical I am about lots of things. 
And one of the things I'm skeptical about is medical treatment and various new--and old--treatments. I'm always wondering: Does this really work? Incredibly, given how skeptical I am, this book made me even more skeptical. Guest: Oh, God, quite [?]. Russ: Which is quite an achievement. Yeah. Things I'm doing now: 'Well, of course, this is a good idea.' I'm starting to think, 'Well, I wonder if there's any evidence for this.' And even if there is some evidence, is it good evidence?

5:12

Russ: Let's start with some prominent examples. What's interesting about the book is the range of things that are not effective. It's not just, 'Well, that pill didn't do what it's supposed to do.' Talk about some of the--pick three or four that come to mind and talk about what happened: why they were reversed, the findings. Guest: Sure. Maybe I'll start with where I started on this, and probably the thing most familiar to listeners, is estrogen replacement therapy. Estrogen replacement therapy was really widely recommended for women after menopause and prescribed all throughout the late 1980s, 1990s, and even into the 2000s. And this was based on observational data, predominantly from the Nurses' Health Study, that showed us that women who used estrogen replacement therapy did better--had fewer cardiovascular events--heart attacks, strokes, things like that--than women who didn't use estrogen replacement therapy. This idea was made even more attractive because there was a good [?]--biophysical, biochemical rationale of why we should use it, estrogen replacement therapy. We know that women develop coronary artery disease about 10 years later than men. We attribute that to the effect of estrogen. And so most doctors did this. I certainly recommended it to my patients. And then when a really good sort of experimental randomized control trial came out, we figured out that, 'Huh. You know, estrogen replacement therapy really doesn't help reduce the risk of cardiovascular events.' And for the first couple of years you use it, it may actually increase the rate of events. So that, for me, was the first time I began thinking about this. And I think I'll speak a little bit for my co-author, Dr. Prasad. I think the thing that got him the most, and certainly was shocking to me, was the story of using stents for stable coronary artery disease. Now, stents are these little expandable metal tubes which are like magic. They can be inserted with a catheter into, pretty much at this point, any artery in the body; but we are speaking here about inserting them into coronary arteries. And they can therefore effectively open up blockages of the coronary arteries. And we know for an absolute fact that those stents are life-saving in people who have had heart attacks, people who have what we call unstable angina, where they are sort of on the verge of having a heart attack. But what happened in the late 2000s is we started using these stents for people who had stable angina--people who were fine, but when they exercised they would get chest pain because they had mild to moderate blockages in their coronary arteries. It turned out that a ton of money was spent on this: by 2009, 80% of the Medicare dollars that were spent on coronary stents were spent on this indication, using them for stable coronary disease. And then this very famous trial, the COURAGE Trial (Clinical Outcomes Utilizing Revascularization and Aggressive druG Evaluation Trial), came out in 2009, which showed that, if you are looking at things like preventing heart attacks and preventing deaths, stents were no better than just the medical therapy we were using at the time. So I think those stick out in my mind as some of the most striking examples. Russ: And you have a mix of things that were just ineffective--had no impact--and others that were perhaps ineffective, but in the case of the stent, you really don't want to put a stent in if it's not effective. Guest: Right. Russ: The treatment itself often comes with risks of infection, side effects from a pill, etc. Correct?
Guest: That's absolutely true. And some of the things that we state in the book--they probably didn't harm anybody; and the harm, you know, probably was the cost of the procedure: in American medicine nothing is cheap. There may be an opportunity cost for some of these interventions, where the person got something that we thought was helping but it wasn't; and maybe at the same time they could have been getting something that was actually effective. And then, certainly another harm is, this really does affect people's faith in medicine. Whether you are a skeptic to begin with or not, once you've spent a year on a medication which your doctor then tells you, you should come off, because it's not doing anything: You are a little bit more slow on the uptake for future recommendations, I think.

9:48

Russ: Just for a baseline: You talked about observational studies as being the original motivator for the estrogen replacement therapy. This is a fantastic parallel in economics. So, talk about: What is the difference--because we are, I hope, going to talk about this all throughout the conversation--what's an observational study, on the one hand, versus a randomized control trial, an RCT, on the other hand? Guest: Sure. Russ: Why is one better than the other? Guest: Yeah. Great. So, an observational study is really a natural experiment. And I think it's--in economics, probably what you have to struggle with all the time. So, in medicine, that's when something has already differentiated people into two populations, two groups. It may be that one group has decided to take a medication while the other one hasn't. It may be that one group has been exposed to something--say, you know, living in a poor neighborhood--versus another group which has not been exposed to that. Or it may be that a doctor has made a decision to do an intervention on one group and decided not to do that intervention in the other group. And then an observational study will report the difference in outcomes for those groups. Did the people who take the pill do better than the people who didn't take the pill? And-- Russ: What's wrong with that? Guest: The obvious is that--yeah, so, what's wrong with that is that it's not just a pill that's different between those two groups of patients. Something motivated those people who took the pill to use it. Or their doctors to prescribe it. And something motivated the other people not to take the pill, or not to prescribe it. And so, usually, in observational studies, when you look at the groups, the groups are very different. To go back to the estrogen replacement example, when we look at that: Women who took estrogen were younger, thinner, had actually better cholesterol levels; I think they actually drank a little bit more than the women who didn't. So it's a very different population. And so it was probably not the estrogen replacement therapy which was benefitting them, but everything else about them that made for the better outcomes. Russ: So, I'm going to play-- Guest: What we call confounding. Russ: I'm going to play the believer for a minute. Which is challenging for me, because this is one of my big skepticisms, in economics: Okay, so there's some confounding factors. You just ran off 4 or 5 of them. Control for those. That's what we have statistics for. That's what multivariate regression and other techniques do to control for those confounding factors. And then we can isolate the effect of the estrogen replacement. Guest: So, you are absolutely right. And these studies have a place. Right? The problem is that we never know if we are completely adjusting for those confounding factors. And, you know, the people who did the Nurses' Health Study--they were smart. They were Harvard School of Public Health kind of people. And they controlled for--without the paper in front of me I would say probably a dozen, probably more things. But these groups were so different that they weren't able to control for all the confounders. There's a wonderful study--and this is using all medical examples--from a couple of years back, where some really brilliant researchers took interventions that had been studied both in observational trials and in randomized control trials--which are really experiments, which get the whole confounding problem out of the picture.
And it turned out that those studies agreed most of the time--I think about 80% is what I recall. Which is good. But it's not perfect. And you don't know when those observational studies are steering you in the right direction and when they are steering you in the wrong direction. Russ: And I guess the--I want to just home in on this issue that you call mechanism, or the underlying science, which of course we have an imperfect knowledge of, also, in both medicine and economics. We have a pretty good idea, given what you mention in the data about the differences between male and female heart attack rates, that estrogen in women probably protects them. Somewhat, perhaps-- Guest: Right. Russ: We don't know for sure. Because we don't really know how the heart exactly is affected by it. But it's presumed that that is the case. So, then, when you give people estrogen, that should reduce the probability of a heart attack. The problem, of course, was that giving people estrogen may not be the same as producing estrogen. That's one problem. And the other problem is we don't really understand the mechanism. But in a lot of these cases, it seems like the mechanism itself is the thing we don't really understand. My question is: Do we ever get at that underlying mechanism? If we did, we'd find a much better way of finding things to help. Guest: That's so true. And I think if there's one thing that I took away from the work that went into this book, it was the humility that was sort of forced upon me, when I saw how many times we are absolutely sure something will work. Because it should work. We understand the mechanism; we think we understand the biology. And boy, we know a ton about how the human body works. And then when everything lines up to say, this intervention should work, and that's part of why it's been adopted; and then real empirical data shows that it doesn't work--you know, it's shocking. I think the fact is the human body is so complicated and there are so many different things that impact on how a medication or a procedure works or doesn't work, that--I don't know. I mean, I hope we'll know it eventually. But right now, we don't. And we need to go with empiricism more than the biochemical rationale that underlies some of these decisions.
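To make the confounding problem concrete, here is a minimal Python sketch of the kind of observational comparison described above. Everything in it is invented for illustration--the variable names, the effect sizes, and the assumption that the therapy does nothing--but it shows how a naive comparison can manufacture a "benefit," and why adjusting only for measured confounders closes part, not all, of the gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

observed_health = rng.normal(size=n)    # confounders the study measures
unobserved_health = rng.normal(size=n)  # confounders nobody recorded

# Healthier people are more likely to take the (hypothetical) therapy...
p_take = 1 / (1 + np.exp(-(observed_health + unobserved_health)))
took_drug = rng.random(n) < p_take

# ...and less likely to have the bad outcome. The therapy itself does NOTHING.
p_event = 1 / (1 + np.exp(-(-2.0 - 0.7 * observed_health - 0.7 * unobserved_health)))
event = rng.random(n) < p_event

naive = event[~took_drug].mean() - event[took_drug].mean()
print(f"naive 'benefit' of the therapy: {naive:.4f}")   # looks protective

# 'Adjust' for the measured confounder by comparing within narrow strata.
cuts = np.quantile(observed_health, np.linspace(0, 1, 21)[1:-1])
strata = np.digitize(observed_health, cuts)
diffs, weights = [], []
for s in np.unique(strata):
    m = strata == s
    if took_drug[m].any() and (~took_drug)[m].any():
        diffs.append(event[m & ~took_drug].mean() - event[m & took_drug].mean())
        weights.append(m.sum())
adjusted = np.average(diffs, weights=weights)
print(f"adjusted 'benefit':             {adjusted:.4f}")
# The adjusted estimate shrinks but does not reach zero: the unmeasured
# confounder is still at work, which is exactly the gap a randomized
# trial can expose.
```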

15:48

Russ: You talk about surrogate endpoints. And that's one of the challenges that this relates to. Explain what those are and why that's a challenge. Guest: Sure. So, you know, I think of the dichotomy as surrogate endpoints and clinical endpoints. And clinical endpoints are the things that you care about: How do I feel? Am I going to live longer? Am I going to avoid having a stroke? Like, really, really key things. Surrogate endpoints are stand-ins for those. So, it might be: How high is your blood pressure? How high is your cholesterol level? How high is your average sugar? Things that you would have absolutely no idea about those numbers or those values unless you see a doctor. They don't bother you at all. And so when you come up with a new therapy, you really want it to improve a clinical endpoint. You know, you want this new pill to decrease the number of heart attacks people have. But, boy, to show that, you need a bunch of people; you need to follow them for a long time. And that's expensive. So it's a lot easier to pick a surrogate marker as stand-in, like blood pressure, and say, 'Well, you know, we've got this new pill, and it lowers blood pressure; and we know that people who have higher blood pressure are at higher risk for heart attacks.' So, it translates that lowering blood pressure should lower heart attacks. And we'll accept this therapy, because we know it affects the surrogate marker in a good way. Russ: So you mention one blood pressure medicine that in clinical trials did not have any clinical endpoint effect. It did affect, of course, the surrogate endpoint. It did lower blood pressure. But it didn't affect-- Guest: Sure-- Russ: strokes, heart attacks, etc. Is that true of all blood pressure medicine? Guest: Um, so that's not true of all blood pressure medication. I mean, most of the blood pressure medications we use, we do actually have hard endpoints on. And we've shown that they've decreased things like heart attacks, strokes, even some of the overall mortality. But that's not true for all of them. The one that we discuss in the book is Atenolol, which was marketed as Tenormin for a long time. And I'm actually attached to that, because back in medical school we had to write this personal pharmacopoeia, which was, you know, 20 of the drugs we were most interested in, and write all about them. And Atenolol was the first drug in my personal pharmacopoeia. Russ: Hard to let it go. Guest: And it turned out that Atenolol does do a really wonderful job of lowering blood pressure. As good as most of the other blood pressure medications we use today. But when you brought together all the studies in which it was compared to a placebo, it turns out that it doesn't improve mortality. Doesn't improve the risk of heart attacks. It may slightly, slightly decrease the risk of strokes. But there are so many other medications which control the blood pressure just as well and actually do improve all of those real clinical endpoints. Russ: But to get back to this mechanism issue: We don't understand why it is that some medications that lower blood pressure seem to actually affect the things we care about, while this one did not. Guest: Absolutely true. Absolutely true. I think there are certainly people who know a lot more than me about that. But in the end, nobody can really predict: Will this have the outcomes that we hope it does and expect it does? Russ: So, let's go back to randomized control trials.
We talked about the challenge of the confounding factors in an observational study, a so-called natural experiment, a so-called statistical analysis of observable behavior and outcomes. Why is a randomized control trial better? What's better about it? Guest: So, a randomized control trial is really an experiment. And so, you take a group of people who you ask--ask nicely--to enroll in your study. You make sure the study is an ethical study, approved by your institutional review board; say that there's equipoise--we don't know which is better; we don't know if the treatment is better than what we are presently doing. And then you randomize them. And half the group gets the treatment; half the group gets the placebo. And so those groups are exactly the same on average in all the risk factors that you know of. But also all the risk factors that you don't know of but that we might know of in 10 years. And so, in a really well-done randomized control trial, we know at the end that whatever difference there is between the groups once the trial has ended is due to your intervention--whether it be a surgery or a pill or a device implant. Russ: You hope you know. Because there are still issues about, as you say--you don't know everything to control for. It could be by chance that the people in the placebo group are different in ways that you don't observe. But the idea, of course, is the larger the sample, the more you hope you've dealt with that problem, because of the law of large numbers. Correct? Guest: That is true. That is true. And I can tell by the way you talk, you are a true skeptic.
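And here, under the same kind of invented setup as the sketch above, is the point about randomization and the law of large numbers in a few lines: a coin flip balances even traits nobody measured, and the balance tightens as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(1)

for n in (100, 1_000, 10_000, 100_000):
    hidden_trait = rng.normal(size=n)   # a risk factor nobody has measured
    treated = rng.random(n) < 0.5       # coin-flip assignment to arms
    gap = abs(hidden_trait[treated].mean() - hidden_trait[~treated].mean())
    print(f"n = {n:>7}: gap in hidden trait between arms = {gap:.4f}")
# The gap shrinks roughly like 1/sqrt(n): in a large enough trial, even
# unmeasured risk factors end up balanced across arms, so any remaining
# difference in outcomes can be credited to the intervention.
```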

20:54

Russ: Yeah. But well, what's fascinating is, you came up--there are a lot of issues that come up in randomized control trials in economics that are problematic, mainly because there's a big difference between physiology and the setting that an experiment takes place in, in economics. It's going to be harder to generalize. There are issues, of course. Medical issues vary by population and geography, etc. But you did bring up a couple of really interesting challenges of randomized control trials (RCTs) that I had not thought of. One positive, one negative. I loved your point--a lot of people say, 'Well, it's unethical to give people a placebo because they've got this condition and you are not helping them.' So, first, talk about--I think if I've got this correct--vertebroplasty. Talk about how--the degree to which they make the placebo as close as possible to the treatment. Guest: Sure. Russ: Which, I love that. And then talk about why it's actually kind of ethical--it is ethical--it's not what you think. Guest: Yeah. Vertebroplasty is a wonderful example. So, to take a step back: what vertebroplasty is. So, people--most commonly postmenopausal women, who we're picking on in today's conversation for some reason--will develop osteoporosis, thinning of the bones, and then may develop what's called a spinal compression fracture. And if you picture the vertebrae in your back, a compression fracture is just when all of a sudden one of those collapses. Very common; the estimates are that there are 700,000 compression fractures in the United States every year. And about 280,000 of those are clinically important--meaning that people, you know, develop back pain; go to the doctor with it. For years, our only real treatment for compression fractures was pain medication and time. And people do get better from these eventually. And in the 1980s, some radiologists came up with this kind of new idea that, 'Well, what if we take some of those with compression fractures and we inject medical cement into that collapsed vertebra?' And so the vertebra puffs up, it's stabilized; the nerves that are coming around that area get a little more room to breathe; and people should get better. And this was an approved therapy, based on some not-perfect trials, which showed that people who got this procedure felt better than people who didn't get this procedure. The real test, though, was to design a trial that had a placebo group as close to the intervention group as possible, other than the vertebroplasty. And it was an amazing study, done by some very brave researchers, where patients were randomized--either to have vertebroplasty or to get sham vertebroplasty. And the sham was that they took people to the procedure room. They prepped their back like they were going to do vertebroplasty. They actually opened the medical cement so the patient could smell the medical cement. And then they just injected saline into their back. And it turned out that over the first month after the procedure, all the endpoints were exactly the same between the sham group and the intervention group. No difference in pain. No difference in quality of life. No difference in activity scores. Nothing. Russ: So, that's fascinating; and I don't want people to miss the opening of the cement, because it's my favorite thing. But the other point is that of course sometimes the procedure harms you. So, the placebo is great for the people who get that luck of the draw. Guest: It's absolutely true.
And the placebo group--in a way is an insurance policy. And people argue this--is it ethical to do this sham procedure on people? To intervene on them in some way that has no chance of helping them? But in fact, in the vertebroplasty case, you know, those people saved--I don't know--thousands, maybe millions of people in the future from getting a procedure which is not helpful. Russ: And as you point out, though, sometimes it's actually harmful-- Guest: Right. Russ: and it's a blessing to get the placebo. I have a lot of interesting things to say about placebo effects. That's for another time. Before I get to the other point about RCTs, I want to ask you a more pointed question about reversal. Which this is a good example of. I suspect there are people listening right now who are either patients or doctors who are either in line to receive this vertebroplasty or they actually do want it. Because one of the depressing aspects of this book is that a lot of these reversed procedures continue. Guest: That's true. Russ: I don't know if that one's totally off: everyone knows it's wrong, nobody does it any more. But there are plenty of things that you talk about in the book that continue. An example is you suggest, you say in the book, that rapid response teams don't work. Don't show any effect. The idea of creating a mobile group of people inside a hospital to respond to crises and emergencies doesn't seem to have any impact. I mentioned that to a doctor friend of mine, who said, 'What? They don't?' Either he missed the study or he doesn't agree with it or--so surely some of these so-called reversals, people say, 'That's not a reversal. Ah, it's one study that didn't work. Look, it's helped my patients. I know it.' Guest: Right. Right. So, I'll say in our defense: We were very careful in what we labeled reversals. And we only labeled something a reversal if the study that overturned the practice was clearly a better study than what had supported the practice in the past. Because you are absolutely right. I mean, are there things that clearly work but then one study says they don't work? Yes. And we know from our statistics that that's going to happen. So when we said something was a reversal, it's that the studies which had actually recommended this procedure or this intervention in the past were less robust than the ones that overturned it. Rapid response teams are, I think, a great example. And I think right now we are not really sure if they are beneficial or not. But they have been adopted far and wide. The data that says they work are generally single-center studies. So, one hospital shows that their rapid response teams work. Rapid response teams--also, boy, they make everybody feel better, because there are more people around to come running and helping. And the idea that this would be beneficial makes total sense. Russ: How could it not? Guest: The person's having a problem; anybody can call the rapid response team. The fact is that for a rapid response team to really clearly be shown to work, it needs to be shown to help patients; and you need to figure out what endpoint that is. Do you want your rapid response teams to save lives? Do you want your rapid response teams to get people out of the hospital faster? Or is your endpoint just that you want your rapid response teams to send more people to the intensive care unit? And to this point, we haven't seen that rapid response teams save lives.
Russ: What are some of the psychological and monetary incentives that make it hard for doctors to admit that there is such a phenomenon for a therapy or practice they are involved in? Guest: Yeah. This is always like the hardest thing for me to talk about, being 20-some-odd years into my practice and being fully acculturated into medicine. I like to think, and I really do believe, that for the most part, when doctors are, you know, shocked by reversals--maybe when actually they argue against a practice that they've recommended being reversed--it's because they truly believe it works. They've not only invested a lot of time and energy into the practice; they've seen people who get the intervention get better. And they think it's the right thing to do for their patients. There is a part of it, though, that you can't deny: That, boy, if you've made a lot of money over the years doing a procedure on people's knees, which you believe works but which is also helping you put your kids through college, when you find out that that doesn't work, you're probably a little bit more apt to argue with it. Russ: Yeah. I don't have any problem making that argument. But it's a very common problem in economics as well. I like to argue that about half of the macroeconomists in America think they are in the top 5% of candidates to head the Federal Reserve, and that affects their willingness to criticize the Federal Reserve even if they aren't aware of that. That subtle bias. But it's there.

30:18

Russ: The other issue about randomized control trials I found so interesting is that in the medical area, sometimes an RCT will be stopped prematurely because the effects seem so dramatic it would be cruel then to keep people on the placebo--or on the treatment. How does that affect the accuracy of the trials? Guest: This is another nice piece of research by another group. I'll step back a little bit. Clearly when a randomized control study is designed, we feel it's basically an ethical necessity that if one treatment is clearly coming out to be superior to the other treatment, that trial needs to be stopped. Because if our new intervention is working really well and we know it, it's unethical to keep giving people the placebo. Right? The issue is that if you are doing a bunch of studies, your studies are going to come up with somewhat different outcomes, just through random chance. And so what we found, looking at multiple studies over time, is that studies that are stopped early tend to overestimate the benefit of a treatment. Which, when you hear it, you say, 'Yeah. Thinking about that makes sense.' It's surprising to me, because I have to say, my reaction is, when I'm listening to the radio in the morning and I hear about a new therapy and it's being released because the trial was stopped early because it was shown to be effective, I'm sort of more convinced by that. I'm like, 'Wow, this must be really good if they have to stop the trial.' But it turns out we probably should not look at it that way; and we should say, 'Well, you know, this may be one trial that was positive; but maybe there are other trials which will come out which will be negative; and maybe this doesn't in fact work.' Russ: One of the problems you talk about of course is, even when we believe--and I think you are right--that randomized control trials are better than observational studies, they are very expensive. Guest: Yes. Russ: How do you deal with that reality? We want to make medicine better. How do we deal with the fact that the tool that we have to bring scientific technique to medicine, a true experiment, is really a problem? Guest: Right. I think not only are they expensive, but you really need some really generous people--the volunteers--to be in the randomized control trial. It may not be cheaper for the individual, but it's a whole lot easier to just take the pill that your doctor gives you rather than enrolling in a trial where there's going to be a lot more follow-up, probably a lot more monitoring. And this is something that we struggle with. We know we need more of these trials, but how do you do them in a cheap, easy way? We offer up some examples. We are big fans of the Nudge Principle. And we like the idea that, for a lot of decisions where we don't know which option is best, and where no patient would conceivably have some sort of predetermined reason to prefer one therapy to another, being in a trial may be something that you have to opt out of. So, if you go to your doctor with sinusitis and she's deciding between two different medications, two different antibiotics, both of which we know work, we just don't know which is the most effective, and you have no reason to prefer ciprofloxacin [?] to azithromycin, why not just have that person randomized? Unless they opt out. And we could get lots of data quickly in that way. Russ: Do you think we are going to make some progress on these questions as we enter the so-called Big Data Era?
One of the things that you seem skeptical about is something we've talked about on the program before with Eric Topol, which is personalized medicine--the innovations that are coming in self-monitoring and other ways of assessing, maybe, effectiveness. You are a little more skeptical on that. Guest: Right. I think that personalized medicine and using people's genetics to tailor therapy to them has enormous, enormous promise. I think the issue, though, is that you still need to prove that therapies work. And there's even more temptation when you talk about personalized medicine to say, 'Hmmm. We know how this drug works on this gene, and so that should fix people.' Well, you know, you still don't know that till you've shown it. And in a way, personalized medicine may make randomized control trials and evidence-based medicine even more important, because we need to test each of these personalized medicine interventions on smaller and smaller groups of people, since our therapies will generalize to smaller and smaller groups. If that makes sense. Russ: Yeah. Sure.
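The early-stopping point from a few minutes back lends itself to a quick simulation. The sketch below uses made-up numbers throughout--a modest true effect, arbitrary interim looks, and an arbitrary stopping threshold--but it illustrates the selection effect Cifu describes: trials stopped at an extreme interim result report inflated benefits.

```python
import numpy as np

rng = np.random.default_rng(2)
true_effect = 0.2        # invented true mean benefit of treatment over placebo
sigma = 1.0              # outcome standard deviation in each arm
n_per_arm = 500
looks = (100, 200, 300, 400)   # interim analyses after this many per arm
z_stop = 2.5             # stop early for efficacy if the interim z exceeds this

early, completed = [], []
for _ in range(5_000):
    treat = rng.normal(true_effect, sigma, n_per_arm)
    placebo = rng.normal(0.0, sigma, n_per_arm)
    stopped = False
    for k in looks:
        diff = treat[:k].mean() - placebo[:k].mean()
        z = diff / (sigma * np.sqrt(2 / k))
        if z > z_stop:
            early.append(diff)   # report the estimate at the moment of stopping
            stopped = True
            break
    if not stopped:
        completed.append(treat.mean() - placebo.mean())

print(f"true effect:                      {true_effect:.3f}")
print(f"mean estimate, stopped early:     {np.mean(early):.3f}  ({len(early)} trials)")
print(f"mean estimate, run to completion: {np.mean(completed):.3f}  ({len(completed)} trials)")
# Conditioning on crossing the threshold selects for lucky interim highs,
# so trials stopped early report a larger benefit than the true one.
```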

35:27

Russ: How do you deal with the criticism that your skepticism about so many received therapies and techniques is a recipe for "doing nothing"? I'm sure one of the things--and I think you write about this, and I get it all the time about economics: so, I'm skeptical that the minimum wage doesn't reduce employment. And I'm skeptical, I have to confess, because I think I understand the mechanism of how incentives work. And I might be wrong. But one of my responses then is when people say, 'Yeah, but look at the data,' I want to say, 'Then, you better be--you've got to accept a different mechanism than mine; and that's going to have a lot of implications outside of just minimum wage policy.' But anyway, when I say stuff like that, people say, 'Oh, so we just do nothing? We've got these people who have terrible lives, they have terrible jobs, they have terrible opportunities in the labor force.' And they're being--some people would say they are being exploited. 'And you just want to do nothing, because you are not convinced it would help.' Doctors are in an even worse position. Here's a patient in pain, maybe at risk of death, and you are saying, 'Well, we just don't know if it's going to work.' And I'm sure many practitioners--you are a practitioner, so you have to deal with this daily--would say, 'So, what am I supposed to do--just go--I'll wait till the RCT comes out that shows me what to do? I've got to do something now.' Guest: That is so well put. I think the issue in medicine is that, 1. You have to consider who you are treating, and 2. You just need to be very open with the patient. So, if you are talking about a healthy person and you are talking about a screening intervention or preventative therapy, I would say, 'Boy, you know, you need to be absolutely sure that's going to help them.' Because you are taking a basically healthy person and you are basically turning them into a patient and potentially making them sick with your intervention. With someone who is sick, who is in pain--well, then I think the bar is actually a little bit lower. And you think about what you have to offer. You think about the likelihood that maybe it will work; and maybe it's based on observational studies; it may be that it's based on surrogate endpoints. And I think what's important is that you have an open discussion with the patient. And you say, 'Look, this is what I have to offer you. Maybe I have a well-proven older therapy and a less proven newer therapy: and these are the reasons it might work; these are the reasons that it might not. This is why maybe I'm a little bit uncomfortable about it.' And you let the patient make the decision. Just like doctors, we as patients I think have quite a breadth in our values. And I have some patients who, you know, never want to take a medication unless it's been on the market for 10 years. I have other people who are knocking on my door the day after it's advertised saying, 'I want that pill.' Russ: Yeah. And so, you mention screening. We had Robert Aronowitz on the program talking about our urges to reduce our risk. And screening is, I think, very appealing to most of us. Catch it early. But you, like he, appear to be somewhat skeptical. Guest: Yeah. I think we are brought up with the 'ounce of prevention is worth a pound of cure,' right? And there's nothing that makes more sense than screening. You find that breast cancer early, that prostate cancer early--it's got to help, right?
The problem is, you know, our tests are not perfect; and the diseases out there--you know, even though we consider them common diseases, they are actually still rare. And so even with a pretty good test, when you are looking for a rare disease you are going to come up with a lot of false positives. And those false positives cause anxiety among the patients--that's probably the least impactful. They probably also lead to procedures that don't need to be done. And often treatment which doesn't need to be done. Our recent data from the world of prostate cancer screening says that to save a life from prostate cancer, we actually need to treat about 30, 35 people for prostate cancer. That's a lot of people being treated just to save one life. And if you are screening, you really need to take that into account. Russ: But if it's my life that you are saving-- Guest: Absolutely. Russ: there's this statistical issue there of what's a statistical life versus, you know, a personal experience. I think the question is--I'm being facetious--not facetious--I'm being, I don't know what the right word is. But the real question is: For me it's 1 out of 36 with lots of unpleasant side effects until I know otherwise. Right? We don't know who the 1 is. We're not saying it's too expensive to save the one. You're saying we don't know who the one is, and that's not a great return. Guest: Absolutely. And you are right. It's one in three for erectile dysfunction or one in three for incontinence after that intervention. So you are very likely to have the side effects. You are less likely to have the benefit. But that's where I think the patient decision-making really has to enter into it. And I feel like as long as people are well informed and you are letting them know what the data is, and there really is some chance of benefit, or a reasonable enough chance of benefit that it's reasonable to suggest it, then people probably should have the freedom to make those decisions. Russ: One of the lessons of the book for me, and we'll talk about it at the end, is educating oneself as a patient or as a potential patient is really important. Guest: Yes. Russ: And I think most Americans, maybe most people generally, we like deferring to some authority. Maybe not in other areas. But in medicine, it's like, 'Look, doctor: you're the expert; I trust you; you carry yourself so well.' One of the things that struck me about your book is you are really emphasizing, as I do, the importance of humility in my field; and you are emphasizing the importance of humility in your field. A lot of people--we don't want a humble doctor: 'I want an arrogant one. I want a doctor who can just say, this is going to work; I've done this thousands of times; there's no side-effects,' blah, blah, blah. So, there is an interesting culture there that you are encouraging a change in. Guest: And I would say, 'You need to find the doctor that you need.' I certainly take care of people who are like that, who want me to be the decider, to be very clear about what I think is the right thing to do; and they will follow my advice. I have other people who want to have an open discussion, maybe an open argument, about just about every decision. And you need to find a doctor who will do it the way you want to do it. Obviously if you are someone who wants to argue and you have a doctor who wants to dictate, you are probably not going to be a very good pair. Russ: Yeah, for sure.
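The screening arithmetic here is just Bayes' rule, and it is worth seeing the numbers. In the sketch below, the prevalence and test-accuracy figures are illustrative round numbers, not the actual characteristics of any real screening test; the number-needed-to-treat line uses the roughly 1-in-33 figure quoted above.

```python
# Positive predictive value of a screen for a rare disease (Bayes' rule).
prevalence = 0.005    # 0.5% of the screened population has the disease (invented)
sensitivity = 0.90    # test catches 90% of true cases (invented)
specificity = 0.90    # test wrongly flags 10% of healthy people (invented)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {ppv:.1%}")  # ~4%: most positives are false

# The number-needed-to-treat point, using the roughly-1-in-33 figure quoted:
nnt = 33
print(f"chance a treated patient is the life saved: {1/nnt:.1%}")  # ~3%
print(f"quoted chance of a major side effect:       {1/3:.0%}")    # one in three
```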

42:39

Russ: Many economists--not all, but many economists really dislike the FDA--the Food and Drug Administration. Guest: Hmmm. Russ: This goes back to work by Sam Peltzman, who argued that the FDA kills people. It's so careful in making sure that drugs are safe, it rules out drugs and therapies that could save lives. And they raise the costs through their tests and their demands, which makes it harder to get any one drug to market. You argue for the other direction. You suggest that the FDA should be more vigilant--not so much in safety but in efficacy. So, defend that position. Guest: Yeah. My co-author Vinay Prasad has said to me, and I really take this as truth, that the people who work at the FDA are the most underappreciated group of people in the world, and do an incredible job. Because on the one hand they are being yelled at by people who are saying: 'You are slowing down progress. You are holding up drugs that could save lives. You are responsible for, you know, mortality, morbidity.' And on the other hand there are people like us who are saying, 'Wait. You should make sure that this absolutely works before you approve it and let people be exposed to this.' So, it's a difficult place to work. And it's a difficult row to hoe. I think what I would say is that we want to make sure that the FDA is assuring that we have data that treatments work, eventually. There will be times when a therapy looks really good in terms of its ability to affect surrogate endpoints, say. And it's a drug that's really necessary, because maybe the treatments for the disease we have out there aren't so good. Now, it's probably completely reasonable that the FDA lets that drug out there and lets it start being used. But it really seems necessary that there should be studies ongoing at the time that the FDA approves that drug that will show us those real endpoints--those clinical endpoints that matter. What often happens is that these drugs are approved based on surrogate endpoints. And then we never get that final data, and we are left with, you know, using therapies that might work but we are not sure they do. And that seems like the wrong way to proceed. Russ: So, let me make another analogy between medicine and economics I hadn't thought of until this conversation. Which is, so in the case of the financial sector, we say: We don't want any banks to go broke, because that can lead to chaos and disaster and people lose their money, and they really don't like losing their money. So what we're going to do is we are going to insure banks' deposits so you won't be at that risk. Of course, we understand that changes the incentives facing banks--that they are going to then tend to want to be riskier because they'll still be able to attract investors and depositors, because their money is insured. So then we have to, of course, keep an eye on the banks and have rules about what they can invest in, and what their safety is, and whether they are approved or not by a ratings agency. And of course, eventually there is an unpleasant symbiotic relationship between the ratings agency and the banks. And they tend to work together--not as independently as they are supposed to. And banks start investing in things that are actually quite risky, but look not so risky; and the ratings agencies go along because that's how they make their money. Etc., etc.
So in the medical area, we've got this lovely thing, on the surface, which is third-party payment, either through health insurance or government programs of various kinds--Medicare, Medicaid. So people don't pay for their medicine. So, my interest in finding out whether this works or not is very small. If it doesn't work, that's life. Negative side-effects--well, I don't want that. So we have the FDA. What they mainly worry about is whether there are negative side effects. Unfortunately, that means there's a natural incentive--and I think economists underestimate this part of the FDA and [?] relationship--to keep the industry somewhat happy. Right? Unfortunately it's true that the high costs of FDA approval mean that drugs take a long time to get approved; and that means a lot of drugs that might have been invested in aren't worth it any more. But, at the same time it kind of creates a cartel for the pharmaceutical industry. So, they don't have a lot of competition. Because there's this huge cost of approving a new drug. And they kind of like that. The first part, the delay, the cost, they don't like. The semi-cartel, monopoly thing: that's really great. So, to me, the FDA--of course the people involved in it day to day have a--you know, a hard job. They are good-hearted people. But the influences they are under must be subtle, in the way that I feel the Federal Reserve Governors are under subtle influences. They are coming into contact every day with people whose interests are not necessarily what the American people want them to be serving. It seems like an unpleasant-- Guest: I think that's an amazing analogy. I mean, two subtle things I would add to it, also: one is that, because of the cost of developing these drugs, there is a very subtle effect--if companies feel like they are going to be held to what might be unreasonable standards, the incentives to spend this money and develop these drugs go down. The other thing, which I thought of as you were talking about the banks: Physicians really rely on the FDA, because the FDA is in a way our insurance company. You know--the FDA takes the heat when a drug doesn't work or causes harm. Not the physician. So the FDA is getting both pressure and possibly influence in multiple directions. Russ: And you talk about that very thoughtfully in your book. In fact, let's turn to that. This isn't the FDA per se, but it's about the subtle influences that we all operate under. And just to close the economics/financial thing for the moment: I think the real key here is that our medical system, the way it is structured through government programs and tax deductibility of health care payments, mostly, just changes the feedback loops that would normally be there. And it does that between patients and doctors, and in the investment world also. And I'll leave it at that. Guest: Absolutely.

49:18

Russ: Talk about thought leaders and what you call super-specialists, because I thought that was extremely interesting, about those incentives that face those folks. Guest: Yeah. I think about this in a way that, you know, when we talk about the amount of medicine that can be wrong--you know, those are to some extent made-up numbers, and they certainly don't apply to any one doctor. You may be seeing, you know, a generalist who is practicing from a very clear evidence base, because the diseases that that doctor takes care of are common things which are treated by therapies that, you know, have been studied very well. When you start to get sick and see more and more specialized physicians, maybe, for the problem you have, the therapies that they recommend may be less well studied. And that's because they affect fewer people. And because you are so in need that you are probably more willing to accept therapies that are not as well studied. The other thing is that very often those specialists in medicine today--and it's one of the reasons why American medicine is terrific--are people who study such minute areas of medicine that they begin to understand just about everything that's known about that. And it makes them more willing, I think, to adopt therapies based on what they know. Because they are experts. And they feel like it's foolproof. Adding to that, those people are often most involved in the studies; they are most involved with the companies that are developing drugs and devices. And so those are probably the people you want to see with some of these problems, but maybe the ones whose therapies might be most prone to reversal. [more to come, 51:18]
