2016-04-26

When should a young company start looking at its analytics? If Donald Trump is any indication, the answer is: early and often.

Well into 2016, virtually the entire media establishment agreed on the model it was using to understand the GOP race. In that model, Donald Trump was a non-starter: he had no endorsements and no support from the traditional players in the Republican Party. Even 538’s Nate Silver, the whiz kid who had correctly called all 50 states in the 2012 general election, wrote him off.

What went wrong was a classic Bayesian misstep. Instead of updating their priors to account for new data, the pundits and commentators clung to them. They saw the contrary evidence; they just chose not to give it weight. What’s more surprising is that not even Nate Silver, the modern-day king of Bayes, was immune.

“Numbers have no way of speaking for themselves,” he warned in his book The Signal and the Noise. “Data-driven predictions can succeed—and they can fail. It is when we deny our role in the process that the odds of failure rise.”

Analytics is not about having all the data. It’s not about building the right models—or priors—because you’re not going to guess correctly every time. It’s about being flexible with your priors in the face of new evidence.

If we can learn anything from Trump’s rise, it’s that you’re better off getting a grip on your data early, being fluid about what it means, and revising your priors before they have a chance to dig in.

Otherwise, you’ll get blindsided—just like America did.

What an 18th-century English minister can teach you about analytics

Mike Tyson’s famous adage, “Everybody has a plan until they get punched in the mouth,” neatly sums up the ethos of Bayesian inference.

First developed by the Presbyterian minister and statistician Thomas Bayes, Bayesian inference is a method for reasoning under uncertainty. It uses probability theory to estimate the likelihood that a claim is true and to revise that estimate as new evidence arrives, a technique that was revolutionary at the time.

It works by using each new piece of data collected to update your prior—your personal estimation of a certain claim’s likelihood of being true. You might think of the prior as the real-world model that determines how you interpret your data.

First, you take a proposition.

Then, you assign a probability to its being true. This is your prior.

Next, collect data and incorporate what that data implies into your previous estimate of the proposition’s probability. This is your posterior.

Then, your posterior becomes your prior for the next iteration. Collect more data, repeat, and continue.
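To make the loop concrete, here is a minimal sketch in Python. The setup (a possibly biased coin) and every number in it are hypothetical; the point is only the mechanics: each posterior becomes the prior for the next piece of evidence.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: combine a prior with the likelihood of the
    observed evidence under each hypothesis, and return the posterior."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Proposition: "this coin is biased toward heads (75% heads)".
prior = 0.5  # initial estimate before seeing any flips

# Each observed flip updates the estimate; the posterior from one
# flip becomes the prior for the next.
for flip in ["H", "H", "T", "H", "H"]:
    if flip == "H":
        prior = update(prior, likelihood_if_true=0.75, likelihood_if_false=0.5)
    else:
        prior = update(prior, likelihood_if_true=0.25, likelihood_if_false=0.5)
    print(f"After seeing {flip}: P(biased) = {prior:.2f}")
```

Notice that the single contrary flip pulls the estimate back down without erasing it. The prior bends under new evidence; it doesn’t get to sit still.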

Thomas Bayes didn’t invent the idea of updating your assumptions based on new events—he gave us a rigorous, mathematical method for doing so, one that cut out the natural human tendency to put too much weight on comfortable, well-worn models.

When you use Bayesian inference, you have to take responsibility for your priors. You give them a weight, a kind of flexibility rating that determines how much the new evidence you acquire can actually change them. Bayes knew that people tended to be satisfied with their priors, or models of how the world worked, and his inference forces those models to put up or shut up.

The Donald and the Bayesians

Nate Silver and every other pundit who dismissed Trump did so because they trusted a model of American politics summed up in the book The Party Decides. According to this book, there were two big reasons why Trump could not be the nominee:

Historical backing: No nominee, from either party, had ever emerged without a majority of that party’s leadership, action groups and endorsements behind him or her.

Theory: No nominee had ever won without that support because party leaders and groups within the party, according to the authors, exercise their influence in such a way that virtually guarantees their chosen nominee the nod.

The fact that no one like Trump had ever won the GOP nomination gave the prior “he has no chance” significant weight—not just for pundits, but for everyone. At the beginning, it was highly unlikely that he would go on to be the nominee.

But as Donald Trump steadily trounced his opponents and moved to the front of the pack, something curiously un-Bayesian started happening. Nate Silver didn’t seem to be weighing the data very heavily. Article after article on 538 denounced Trump and questioned the polls, arguing, out of what seemed like civic-mindedness and optimism, that these numbers could not be correct.

And why? Because Nate Silver’s prior—that Trump couldn’t win without endorsements—was still holding strong. You’ll notice that 538, as late as February, was using charts like this to understand the race for the nomination:



Silver and other outlets trusted this prior heavily, largely because it had never failed them before. But there was another aspect at play: people didn’t want to believe that what was happening was really happening.

The genius of Bayes, of course, is that your desire for a certain outcome has to be incorporated into your posterior—the new prior you develop after you see the new data. You have to justify it.

Let’s presume that the journalists and commentators dismissing Trump did, to some degree, hold a personal prior that kept them from weighing new data properly. You have to wonder—if they didn’t, would data like this have convinced more of them, early on, that Trump was a serious threat?



That’s a graph of four contenders for the 2016 GOP nomination, showing how their poll numbers looked on the day they officially announced they were running compared to two weeks later. As you can see, the typical “announcement bump” was a modest 1-3%. Trump’s, on the other hand, was around 11%.

The media saw numbers like this, but the numbers didn’t cause them to rethink their priors, because they believed their assumptions were more predictive than the data.
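To see how Bayes would have handled this, here is a toy update in the same spirit. Both numbers are assumptions chosen purely for illustration, not estimates of the actual race: a 5% prior and a likelihood ratio of 4 for an 11-point announcement bump.

```python
# Hypothetical prior: a 5% chance of winning the nomination.
prior = 0.05

# Assumption for illustration only: an 11-point announcement bump is four
# times more likely for a serious contender than for a novelty candidate.
likelihood_ratio = 4.0

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"{posterior:.0%}")  # roughly 17%: still unlikely, but no longer negligible
```

Even starting from a deliberately skeptical prior, one piece of strong evidence more than triples the estimate. That is the discipline the pundits skipped.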

The developer with an ego

Now imagine you’re running a mobile check-in app with about 12,500,000 daily active users. You just launched this app a week ago. You got one glowing review in TechCrunch, and ever since then it’s been smooth sailing.

When you think about the future of your app, you have one prior: people really love checking in, and that’s the #1 reason why your app is growing so fast. To verify this prior, you start collecting data.

You decide to look at the rise of daily active users in your app alongside the number of users “checking in” to different locations. You want to see how closely those two numbers are really tracking:

It seems that your DAU count is skyrocketing, but the number of check-ins actually appears, after a small burst of growth, to be leveling off.
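A minimal sketch of how you might pull those two curves out of raw event data, assuming (purely for illustration) a pandas-friendly event log with user_id, event_type, and timestamp columns:

```python
import pandas as pd

# Assumed schema: one row per event, with user_id, event_type, timestamp.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])
events["day"] = events["timestamp"].dt.date

# Daily active users: unique users who fired any event that day.
dau = events.groupby("day")["user_id"].nunique()

# Daily check-ins: count of check-in events per day.
checkins = events[events["event_type"] == "check_in"].groupby("day").size()

comparison = pd.DataFrame({"dau": dau, "check_ins": checkins}).fillna(0)
print(comparison)
comparison.plot()  # eyeball whether the two curves are actually tracking each other
```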

You think: your TechCrunch profile ran on 10/2. People must have downloaded because of the hype, not because they yet felt the need to check in anywhere. But they will. You smile, write this data off as a PR-induced aberration, and return to cranking away at the check-in feature.

Your whole team is focused, day and night, on optimizing check-ins for speed. There’s a collective prior at play here, one that says check-ins are the way of the future. And the TechCrunch profile and DAU boost just make that prior even stronger—nothing to worry about. What you don’t realize is that your ego is getting in the way of seeing what your data is telling you: to rethink your prior.

Then, a week later, you notice that usage is down—way down. You bite your nails and check the DAU numbers graphed against check-ins again:

It turns out that the leveling off was not harmless, and it wasn’t an aberration. Your DAU count was the aberration. The stagnancy you were seeing in the number of check-ins was actually indicative of feature rot deep within your product.

The reason you’re able to realize this even after the TechCrunch profile and the initial DAU boost is that the data is damningly clear. It’s very hard to deny, looking at this graph, that your initial prior model was wrong. In other words, the feature you thought was your app’s core value—the check-in feature—was not actually valuable to the majority of the people using your product.

You incorporate the data into your posterior and suddenly things are looking very different. You’re not sure what your app is all about anymore. But it’s not too late to turn things around.

There are probably a good number of users who have stuck around despite the check-in feature being virtually 100% dead, and your objective now has to be to identify what’s keeping them active—to collect more data and refine a new prior.

Maybe, on a whim, you decide to compare the 7-day retention numbers of two cohorts: those who used your check-in feature, and those who used your photo-sharing feature:

Your check-in feature loses 80% of your users after one day. Your photo-sharing feature keeps 60% of them around for another day. This is the kind of data that overwhelms bad priors. It’s like, as Mike Tyson would say, your prior just got punched in the mouth.
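A rough sketch of that cohort comparison, using the same hypothetical event log and assumed event names (check_in, photo_share):

```python
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])
events["day"] = events["timestamp"].dt.normalize()

def retention(events, feature, days_later):
    """Share of users who used `feature` and then came back (any event)
    exactly `days_later` days after their first use of it."""
    feature_events = events[events["event_type"] == feature]
    first_use = feature_events.groupby("user_id")["day"].min()

    active_days = events.groupby("user_id")["day"].apply(set)
    returned = [
        (first_use[user] + pd.Timedelta(days=days_later)) in active_days[user]
        for user in first_use.index
    ]
    return sum(returned) / len(first_use)

for feature in ["check_in", "photo_share"]:
    print(feature, f"day-1 retention: {retention(events, feature, 1):.0%}")
```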

You have a new prior to work with now—photo-sharing—and some good data to suggest it’s worth checking out.

For an early-stage startup, the calcification of faulty priors can mean building the wrong product, or building for the wrong users. It can mean running your app into the ground and missing out on an incredible idea—even a billion-dollar one.

How to use analytics as an early-stage startup

In 2010, TechCrunch posted an announcement about the $500,000 seed round raised by a small Foursquare-like app called Burbn that let users check in to different places, talk to friends about making plans, gain points, and post pictures.

“Besides having a great name,” MG Siegler wrote, “the service is apparently in a very hot space right now: location-based services.” But founders Kevin Systrom and Mike Krieger were less confident—according to their data, people just weren’t using it. Their prior about how people would interact with the app just wasn’t panning out.

That’s when they decided to perform a feature-by-feature usage analysis. What they learned from their analysis was that people really, really liked taking and sharing photos. They had a new prior on which to base their business. All that remained was to find evidence and see if that prior held up.
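In spirit, a feature-by-feature usage analysis can start as something very small: count how many distinct users touch each feature. A sketch, again against a hypothetical event log with user_id and event_type columns:

```python
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# Distinct users per feature, as a share of all active users.
usage = events.groupby("event_type")["user_id"].nunique()
usage_share = (usage / events["user_id"].nunique()).sort_values(ascending=False)
print(usage_share)
```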

They looked around at the other apps doing photo-sharing, and identified two main competitors—Hipstamatic and Facebook. The former was good for taking pictures, but not sharing them. The latter was established, but lacked good mobile sharing capabilities. Here was the data—people liked sharing photos on Burbn, and there was no existing elegant solution just for that problem. Their prior, that this could be a good app idea, grew stronger.

Systrom and Krieger stripped Burbn, a bloated version of Foursquare, down to three constituent parts—taking pictures, Liking pictures, and commenting on pictures. They spent a few months putting together the finished version, a native app for iPhones.

They called their new app Instagram.
