2016-02-22

Prakash Menon, CEO at BaseHealth

Editor’s Note: Prakash Menon is CEO at BaseHealth, a science and technology leader that adds genomic precision to predictive analytics and population health management.

Let’s not bury the lead. It’s finally possible to effectively model diseases and predict risk, based on an individual’s unique clinical, lifestyle, environmental, and genetic data. True predictive analytics for healthcare have arrived at long last.

Back to The Future

Historical medical information that resides in claims data, electronic medical records (EMRs), and elsewhere has enduring value, but those data remain, by definition, anchored in the past. EMRs, for example, give us a much better view of what happened than what is happening now or what will happen next.

Today, faster diagnostics, machine learning on large data sets, and nascent proteomics, among other things, promise a real-time understanding of health.

What’s trending? What events signal anomalies? Which are good surrogates for something else? Where are there strong and weak correlations population-wide? Just understanding what’s going on in the present tense is a big deal.

But the most exciting frontier in healthcare and population health is the future. This is what we’ve all been talking about, for what seems like an age. I hope articles about this subject have been popping up in your email feeds as frequently as they have in mine. The time is now: 2016 will be the year of predictive analytics.

Why? The excitement is based on how predictive analytics have proven – in small-scale pilots – to decrease costs, more effectively direct care, better inform underwriting, and swiftly re-orient our system towards proactive, preventive disease management.

That famous William Gibson quote is especially appropriate here: “the future is already here, it’s just not evenly distributed.” Soon predictive analytics will be prevalent, not merely promising.

It Starts With Precision

In addition to outlining what will take us from early adoption to an early majority in predictive analytics, I hope to clarify a few buzzwords along the way.

A little over a year ago, in the 2015 State of the Union address, President Barack Obama spoke of funding a National Precision Medicine Initiative, catapulting that new term into the limelight:

“We must gain better insights into the biological, environmental, and behavioral influences on [chronic] diseases to make a difference for the millions of Americans who suffer from them. Precision medicine is an emerging approach for treatment and prevention that takes into account individual variability in genes, environment, and lifestyle for each person.”

While definitions of precision medicine vary, one thing is generally agreed upon: precision medicine is about being able to classify people precisely, based on their susceptibility, their microbiology, and/or their prognosis, at a considerably higher resolution than ever before.

As the National Research Council has explained, “preventive or therapeutic interventions can then be concentrated on those who will benefit” (my emphasis).

So what exactly is the relationship between precision medicine and predictive analytics? How do we get from personalizing care at the individual level to intervening preventively, based on outcomes we understand to be highly likely? This is a distinction that muddies the water for many in our industry.

Allow me this: whole genome sequencing and health data analysis provide perspectives most useful when put to work predicting the future. It’s one thing to see precisely right in front of you. It’s yet another to peer around the corner.

If extrapolating from initial conditions is the game, then higher-resolution initial conditions will yield higher-quality output. Precision medicine makes true predictive analytics possible; the former is a prerequisite for the latter.

But Context Is Ultimately King

Certainly, in narrow use cases, acute care, and rare disease states, precision alone can be life-changing. It’s important to acknowledge that.

Chronic diseases account for seven out of every 10 American deaths each year. The vast majority of America’s healthcare costs are associated with chronic health issues and conditions that we can, in fact, prevent if we give patients and providers advance notice.

To give advance notice, we need the data upon which to compute these insights. We certainly need the level of precision genomics provides, but we also need every bit of surrounding context. To effect broad systemic change, we need a wide purview.

What we’re seeing in the field is that the explosion of information available to us not only from the human genome, classic labs, and family history but also from comprehensive assessment of lifestyle, environmental, dietary, and activity data makes true predictive medicine a “right now” opportunity, and not just the stuff of science fiction.

Which is all to say: prediction requires precision, but precision alone is not enough. Predictiveness comes from a simultaneously wide and narrow aperture. You need a lot of data, but you need high-accuracy tent poles too.
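To make that wide-and-narrow aperture concrete, here is a minimal sketch, in Python, of what a combined patient record might look like once broad contextual data is merged with a few precise genomic signals. Every field name below is hypothetical; this is only an illustration of the kind of feature set being described, not anyone’s actual data model.

```python
# Purely illustrative: merging "wide" context with "narrow" genomic precision
# into a single feature record. All field names are invented.

claims_and_lifestyle = {            # wide aperture: history, behavior, environment
    "age": 54,
    "bmi": 29.4,
    "smoker": False,
    "exercise_minutes_per_week": 90,
    "prior_inpatient_admissions": 1,
    "family_history_cad": True,
}

genomic_markers = {                 # narrow aperture: precise, high-confidence signals
    "apoe_e4_carrier": True,
    "ldlr_pathogenic_variant": False,
}

# One record per person, combining both views of the same patient.
patient_features = {**claims_and_lifestyle, **genomic_markers}
```

The modeling machinery downstream doesn’t care where each field came from; what matters is that the wide and the narrow signals describe the same individual.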

In a recent whitepaper, Optum similarly argued that today is the tipping point for predictive analytics in healthcare because the data we now have at our fingertips makes much higher-reliability outputs possible:

“Predictive analytics uses regression models on underlying data to predict outcomes. This [fact] is not new to healthcare. The challenge, in past years, has been the underlying data.

[For example], claims data alone does not get at a patient’s overall health or disease-specific functioning. This clinical data is often handwritten, dictated, or incomplete. Hence, predictive modeling [historically] relied on relatively small data sets, often of poor quality, and with limited variables. The result was marginally predictive models.”

In other words, the actual mechanics of making predictions aren’t rocket science; what’s been holding us back is that we’re only as good as the data we draw upon. We now finally have the raw materials necessary to project into the future with accuracy.
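To illustrate the point, here is a minimal, hypothetical sketch of the kind of regression Optum describes, written in Python with scikit-learn. The data and column names are invented; the takeaway is simply that once clinical, lifestyle, and genomic signals sit in one table, fitting a risk model is the straightforward part, and the quality of that table determines the quality of the predictions.

```python
# Minimal sketch of a disease-risk regression over a table in which each row is a
# patient and the columns mix clinical, lifestyle, and genomic features.
# All values and column names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "age":                   [54, 61, 47, 70, 58, 66, 39, 73],
    "bmi":                   [29.4, 31.2, 24.8, 27.5, 33.0, 26.1, 22.9, 30.7],
    "smoker":                [0, 1, 0, 1, 0, 1, 0, 1],
    "exercise_min_per_week": [90, 10, 200, 0, 45, 30, 180, 15],
    "apoe_e4_carrier":       [1, 0, 0, 1, 1, 0, 0, 1],
    "developed_chd_5yr":     [0, 1, 0, 1, 1, 0, 0, 1],  # outcome being predicted
})

X = df.drop(columns="developed_chd_5yr")
y = df["developed_chd_5yr"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted probability of the outcome for the held-out patients.
print(model.predict_proba(X_test)[:, 1])
```

With eight rows of toy data the numbers are meaningless, which is exactly Optum’s point: the modeling step is commodity machinery; the breadth and quality of the underlying data are what have changed.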

Why Should I Care?

Here are a few of the business cases. These are the carrots.

These new capabilities can realistically help everyone from administrators modeling hospital admissions (and readmissions), to providers gauging the likelihood that a patient will develop congestive heart failure, to risk-bearing organizations like ACOs proactively identifying gaps in care, not to mention payers who wish to progressively drive down risk by investing in population wellness.

But the market inevitability of healthcare’s Triple Aim (background here), which is supported by the NIH, Medicare, Medicaid, and many others, mandates that we simultaneously pursue better care, better health, and lower costs. Those are the sticks. Those are the structural drivers, and they are emphatic.

New Models Will Therefore Prevail

As MEDai recently framed it, “the question is, how can better care, better health, and lower cost be simultaneously achieved when many view them as mutually exclusive?”

It is my argument that predictive analytics make the Triple Aim viable. Simultaneity is achievable as a result of these new technologies, which make predictive medicine and, more broadly, predictive health, a real possibility. But all this is going to require organizational change too.

The best example of this is how payers are increasingly partnering with ACOs and other new delivery models to create shared offerings that are integrated up and down the value chain.

Specifically, they want to responsibly collide heterogeneous data sets and streams, including but not limited to: historic claims information, cost estimates, patient- and clinician-reported outcomes, and care plans, not to mention electronic medical records and genomics.

Combining data allows these organizations to better align their economic interests. That alignment spurs new, better offerings to individuals and organizations alike. This is already a huge trend and one that will continue.

This has been called “the future of insurance,” and Optum lays out how it should work in detail here. No doubt, “strategic IT plays a central role in setting up successful population health models” like these, says Healthcare Informatics.

Perhaps most importantly, this “one-sided risk model,” which has been discussed since early 2013, is finally coming online.

There is, of course, a question of exactly how much data to share, but the value of sharing is not in dispute. Aetna’s point of view on this issue can be found here. It also goes without saying that the data that is shared ought to be accurate; this case tells of such a collaboration gone wrong.

We’ve had a few stops and starts, but the larger shift has, if anything, sped up, because, as MEDai also says in the same report I referenced above:

“Payers…partner with ACOs because they have the critical claims data that ACOs need to better monitor patients’ health, identify gaps in care, and manage risk. Reimbursement within these models requires a different kind of tracking system for payments than that used for fee-for-service. Rather than paying for individual services, plans will increasingly reimburse for improved member health and wellness, coupled with demonstrated cost savings.

But this type of reimbursement model requires greater transparency and sharing of health information as well as better methods for tracking to improve decision support and better methods for tracking compliance with evidence-based guidelines. Payers who can provide analysis of quality initiatives and care delivery in real-time will become the partners of choice for ACOs who need to meet quality-of-care performance thresholds.”

Infrastructure 2.0

New, aligned models of care won’t find success, however, without the right platform running them. Technological infrastructure is also key. Getting the data is just step one, complicated and a long time coming though it has been. Getting value out of the data is another endeavor entirely. This is where the market stands today.

We have both aligned and contorted ourselves to put all this information together. It had better have been worth it. Is there really a pot of gold at the end of the proverbial rainbow?

We’ve yet to widely adopt the information systems necessary to unify, normalize, decorate, and crunch the data, though several very good software offerings are on the market. The advantages that other industries like energy, finance, logistics, and advertising enjoy today from similar investments remain scant in healthcare across the board. We’re just rounding out the data gathering and plumbing phase, and only just starting to achieve value from all that work. Again, 2016 will be the year all of this comes to a head.

Of course, any mention of a need for increased investment is justifiably met with suspicion these days; that is the easy position for anyone who doubts my analysis. The reason for the suspicion, as PwC’s Health Research Institute has pointed out, is this:

“In most industries, new technology decreases cost. Consumer devices become smaller, more powerful and cheaper over time. Manufacturing equipment is faster, more accurate and lowers the unit cost of production. But even after 10 years of major investments in health technology, the results have largely failed to decrease the cost of health delivery.”

Which is certainly regrettable. But here’s why I think this time is different.

Keep The Faith

Unlike 10 years ago, we can today achieve the interoperable, agile health-IT infrastructure that PwC prescribes. Incentives are now finally aligned to “coordinate data collected across care settings so it can be integrated, analyzed and used to provide rapid feedback to consumers and doctors.”

Players like CrossChx are resolving patient identity and normalizing it across hundreds of sites, which demonstrates real on-the-ground progress. Read more about their pioneering work here.

We can then use data analytics to create strategic insights and actionable results. Per PwC, “healthcare executives view data mining and analytics as having the highest strategic importance during the next five years.”

That’s where IBM Watson, Lumiata, Accordion, and BaseHealth come in. This report from Rock Health is also worth your time if you want to dig deeper on this subject. The best of computer science is finally being combined with the best of the biological sciences and healthcare informatics to create a perfect storm of innovation.

Action is based on what you can, should, will, and even must do. Adaptable systems that incorporate the wide range of new data available, and help create actionable recommendations, are both literally and figuratively the future.

That future is no longer distant. It is finally at hand, and I for one am eager to see the inflection we’ve all been waiting for. If you’re not knee-deep in this stuff already, you need to be soon.
