2016-02-23

By KIP SULLIVAN

In Part I of this series I noted that we have almost no useful information on what ACOs do that affects cost and quality. I described two causes of that problem: The amorphous, aspirational “definition” of ACOs, and the happy-go-lucky attitude toward evidence exhibited by ACO proponents and many analysts. I showed how the flabby “definition” of ACO makes it impossible to operationalize this thing – to reduce it to testable components. And I asked why the health policy community let ACO proponents get away with such a vague description of the ACO. I said the answer lies in the permissive culture of the US health policy community. It is a culture that tolerates, even encourages, the promotion of vague concepts and a cavalier attitude toward evidence.

In this installment, I illustrate these problems – the vague definition of “ACO,” and loose standards of evidence – by examining a paper published last month by the Center for Health Care Strategies (CHCS) entitled, “Accountable Care Organizations: Looking back and moving forward.” In the third installment of this series I will describe the emergence of the health policy culture that tolerates intellectually flabby proposals and a devil-may-care attitude toward evidence.

I chose the CHCS paper because the organization that funded it, the Robert Wood Johnson Foundation, and the organization that wrote it are prominent advocates of managed care and its latest iteration, the ACO. The Foundation describes itself as “an early supporter of the idea that later became known as managed care.” The Foundation announced last July it “has been supporting … ACOs for several years now.” CHCS was established two decades ago with support from the Foundation, and receives funding from organizations that promote managed care and ACOs.

Moreover, the paper’s authors and funders made it clear they hoped the paper would provide a useful update on what ACOs have accomplished and how they accomplished it. In its July 2015 announcement of the $20,000 grant that supported this study, the Foundation said the study would “inform stakeholders of progress to date by accountable care organizations.” CHCS’s paper claims it “identifies key lessons from ACO activities across the country to date” (p. 1).

Finally, I chose CHCS’s paper because it was just published.

In short, if there is a paper out there that could refute my claim that we have no useful research on what ACOs do for patients that distinguishes them from non-ACO providers, this paper should be it. Its authors are experienced writers and researchers, the authors and the funder clearly wanted to cast ACOs in a favorable light, and the authors had plenty of money to search the literature and interview ACO experts.

But the paper fails to provide any useful evidence on ACOs. With two exceptions, it doesn’t even resort to anecdotes, a common tactic among ACO proponents and analysts. (Oddly, the two anecdotes deal with the same service — finding housing for homeless people.)

The paper fails to produce useful evidence for the two reasons I discussed in Part I: The authors accept the vacuous, aspirational “definition” of ACO; and the authors are willing to let opinion substitute for evidence.

A wish is not a definition

The paper begins with the usual wishful “definition” of “ACO,” to wit: “ACOs are designed to achieve the Triple Aim by shifting varying degrees of financial responsibility for patient outcomes to the provider level….” (p. 2). There are two problems with this definition: It uses manipulative language, and the language describes a wish, not an entity that can be tested and rationally debated.

The manipulative, wishful language appears in the statement, “ACOs are designed to achieve the Triple Aim….” Note the passive voice (“are designed”) and the wish expressed by this unidentified voice (“to achieve the Triple Aim”). Who “designed” the ACO, and how do we know said “designers” knew how to invent something that achieves three goals at once? What research guided them? How credible was that research? We aren’t told. Rather, we are supposed to skip over those questions and just accept the unarticulated assumption that the Original Designers knew what they were doing, and that they were basing their design decisions on solid evidence, not the latest in conventional wisdom.

Secondly, this “definition” assumes all we need to know about ACOs is that they inflict financial risk on doctors and hospitals. Nothing further need be specified.

By accepting without criticism the conventional aspiration-based definition of “ACO,” CHCS guaranteed it would not be reporting any empirical evidence on ACOs. CHCS reacted to this self-inflicted quandary the same way L&M Research did (see Part I of this series), namely, by asserting that ACOs have a few abstractly defined “features.” Here is how CHCS described these features (note the expression of hope, the high level of abstraction, and the use of labels designed to persuade rather than inform): “To achieve the Triple Aim, ACO models typically involve three … overlapping components”: “Value-based payment methodology,” … “quality improvement strategy,” … [and] “data reporting and analysis infrastructure.” (p. 2)

Obviously, these poorly defined, “overlapping components” (“redundant” might have been more accurate than “overlapping”) predict nothing about what ACO providers will do, or what services they will provide, that would distinguish them from non-ACO providers. Consider just two of the more obvious questions we need to ask about ACOs that CHCS, trapped in its self-inflicted quandary, could not answer: (1) Should ACOs deliver particular services to their sickest “members” or to their entire “population”? (2) If the answer to question 1 is that ACOs should provide extra services only to their sickest members, what services should they provide?

CHCS’s report sheds no light on these questions. Instead, the “lessons” CHCS reports amount to mere bromides. The paper contains dozens of examples. Here are three:

“ACO … efforts are often grounded in analyzing the health needs of their attributed patients” (p. 4);

“Many ACO efforts aim to achieve shared savings by eliminating inefficiencies….” (p. 4); and

“ACOs are beginning to look at ways to engage patients….” (p. 9)

Notice all the hedge words in just those three sentences – “often,” “many,” “efforts,” “are beginning,” and “ways.” When “lessons” are expressed so abstractly and with so many waffle words, it is fair to characterize them as useless bromides.

But aren’t bromides the best we can expect from “research” that starts out defining ACOs in terms of the wishes of their proponents?

Indifference to rigorous evidence

Although CHCS claims it wants to provide useful evidence to ACO leaders and to policy-makers, it is clear from its paper and a subsequent blog post that CHCS’s highest priority is to cast the ACO in a favorable light and to induce policy-makers and funders to continue supporting ACOs even though the early results are not encouraging.

The evidence indicating that CHCS is more interested in promoting ACOs than analyzing them is of two types: The inferior quality of the evidence CHCS cites to support its claims; and CHCS’s willingness to claim ACOs are saving money without asking what it costs insurers and ACO providers to set up and run ACOs.

The evidence CHCS cites on ACOs’ impact on costs consists exclusively of undocumented press releases and an “annual report” from three state agencies.[1] Likewise, with one exception, the evidence CHCS cites with regard to quality consists of documents published outside the peer-reviewed literature.[2]

The only evidence CHCS cites for its claim that ACOs are saving money consists of short blurbs in two press releases and an “annual report” about Medicaid ACOs from three state agencies, to wit:

two sentences from a two-page press release from the Minnesota Department of Human Services;

a single sentence in a sidebar of a glossy “annual report” published by the Colorado Department of Health Care Policy and Financing that looks and reads like an advertisement for a political candidate or a health insurance company; and

two sentences on a stand-alone page at the website of Governor Peter Shumlin of Vermont.

None of these documents refers the reader to studies. We simply must take the word of the government officials who wrote them that they used acceptable methods to attribute patients to ACOs, to create control groups, to risk-adjust cost and quality measures, and to detect gaming strategies such as “teaching to the test” and up-coding.[3]

To make matters worse, CHCS failed to cite credible research (certainly more credible than government press releases) that indicates ACOs do not cut costs. The Physician Group Practice Demonstration, which CMS ran from 2005 to 2010, is widely regarded by ACO proponents and neutral observers alike as a test of the ACO concept. That demonstration showed that the ACOs saved CMS a grand total of three-tenths of a percent after taking into account the bonus payments CMS made to ACOs (but not taking into account the expenditures ACOs and CMS incurred to set up and run ACOs).[4]

To take one more example of research CHCS ignored: Last September Kaiser Health News, using publicly available data that CHCS could have examined, reported that CMS’s two ACO programs slightly raised Medicare costs in 2013. Again, that estimate did not take into account costs to set up and run ACOs.

The second feature of the CHCS paper that illustrates the distant relationship between ACO proponents and the usual rules of science is the paper’s complete lack of interest in determining what it costs providers and insurance companies to set up ACOs and deliver the extra services ACOs allegedly provide. (This same strange habit afflicts research on other managed care fads, including “medical homes” and pay-for-performance.) Claiming that ACOs lower medical costs without asking what it costs ACOs and the insurers they contract with to implement the interventions that lead to those lower costs is like claiming solar panels reduce heating bills without taking into account the cost of buying and maintaining the solar panels. It is difficult to imagine that this omission – may I call it a “sleight-of-hand”? – would be tolerated in any discipline other than health policy.

But CHCS makes no mention of this issue. The documents published by the states of Minnesota, Colorado and Vermont that CHCS cites as evidence that ACOs are saving Medicaid money did not mention this issue either.

Despite CHCS’s inability to produce evidence that ACOs are working as advertised, CHCS concludes its paper with these cheerful remarks: “[P]olicy-makers and funders should not be afraid to forge ahead on innovative ACO model enhancements,” and they should provide “key support” for “work toward ACO arrangements that improve quality [and] reduce costs.” To make sure policy-makers and funders got this message, CHCS went on to write a blog post based on this report entitled, “How funders can support emerging accountable care organizations to maximize their potential.”

This is not research. This is advocacy of conventional wisdom or, to put it more harshly, promotion of groupthink. As long as ACO proponents and analysts think their job is advocacy rather than research, the useless definition of ACO will continue to go unquestioned, and the dearth of useful research on ACOs will persist.

[1] Only three of the 42 endnotes in the CHCS paper cite peer-reviewed papers. None of the three addresses cost, and only one addresses quality. The three papers are: Colla et al., “First national survey of ACOs finds that physicians are playing strong leadership and ownership roles,” Health Affairs, June 2014;33(6); McWilliams et al., “Changes in patients’ experiences in Medicare accountable care organizations,” New England Journal of Medicine, October 30, 2014; and Scheffler, “Accountable care organizations: Integrated care meets market power,” Journal of Health Politics, Policy and Law, August 2015.

[2] To its credit, CHCS concedes we have very little useful research on ACO quality because the existing research uses so few measures. “While quality metrics tend to capture performance on specific outcomes …, they may not accurately measure the overall health of the patient,” the authors observe. “This makes it difficult to assess the true impact and efficacy of ACO arrangements.” (pp. 9-10)
