Prediction Markets Can Increase Transparency, Engagement & Answer Tough Questions
In today’s conversation Robin Hanson talks about activating and engaging internal stakeholders, while increasing transparency around tough and complex issues.
Prediction markets can increase trust and engagement, and elicit honest “answers” to tough questions. Dr. Hanson often uses the term prediction market, but they are also known as futures markets or betting markets.
Key feature: You can ask an exact question about a complex problem and the market will find the answer
To put prediction markets to work, you need a question that will, in the future, have a clear answer. You also need folks who think they know the answer and are willing to “bet” that they are right. Winners get a financial windfall and often the satisfaction (and peer recognition) of being right!
Click here for a transcript of this conversation
Some implementations of these markets use play money, and only internal or external reputations are at stake. However, when used internally some companies give stakes to everyone involved, and at the end of the market the winners take home their winnings.
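For readers who want to see the moving parts concretely, here is a minimal sketch of one common way to run such a market: an automated market maker using the logarithmic market scoring rule (a mechanism Robin himself devised, though we don’t discuss it in the interview). The question, liquidity setting, and trade sizes below are made-up illustrations, not a recommended implementation; real deployments typically use off-the-shelf prediction market software.

import math

# A hypothetical two-outcome internal market ("Will we make the Q3 ship date?")
# run by a logarithmic market scoring rule (LMSR) market maker. Each YES share
# pays 1 unit of the market currency if the event happens, so the current
# price of YES can be read directly as a probability.

class BinaryMarket:
    def __init__(self, liquidity=50.0):
        self.b = liquidity                  # larger b = prices move more slowly
        self.q = {"YES": 0.0, "NO": 0.0}    # net shares sold so far

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(v / self.b) for v in q.values()))

    def price(self, outcome):
        total = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        # Charge the trader the cost of moving the market by this many shares.
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = BinaryMarket(liquidity=50.0)
print(round(market.price("YES"), 2))        # 0.5 -- the market opens undecided
paid = market.buy("YES", 30)                # an optimistic trader buys 30 YES shares
print(round(paid, 2), round(market.price("YES"), 2))  # ~17.22 paid; price rises to ~0.65

Whether the currency is real money, play money, or simply points on a leaderboard is exactly the design choice described above; the mechanics are the same either way.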
If we assume the answers are accurate, is this type of market worth the price? “I think it depends on the value of the questions you are asking.” – Harry Hawk
Public prediction markets can operate under the authority of commodities exchanges. In the early ’90s I was an early trader on the Iowa Electronic Markets, which is famous for its US presidential market and has been running since the late ’80s. Robin notes that in the US it is illegal to operate a commodities (e.g., futures) market in onions, or in movies. Clearly onions are a gateway produce.
The “betting” nature of these markets is engaging, and that is why these markets, along with the leaderboards and other metadata around them, truly engage stakeholders. Of course, betting on and following these markets could eat into time folks would otherwise spend working; then again, that is probably time everyone is already wasting on social media.
Robin notes that the time spent is the real cost of these markets, so “don’t be chintzy with the stakes.”
Markets can operate on an ongoing basis or can have fixed periods, ending just before an election or before quarterly results are announced. You might want a market to answer questions like the share price at the end of the quarter, earnings, etc. You could even ask how good a department head is, or even how trustworthy a CEO is; a question I’m willing to bet few CEOs would be willing to entertain.
Someone has “betting” odds on the Tesla Model 3
Honesty (or at least honest answers) is one of the core attributes of prediction markets. While that may keep some companies from embracing the idea, I think it has a role to play in crisis management. A company that has had its reputation tarnished, like E.F. Hutton many years ago or Wells Fargo today, could address the resulting crisis of confidence by transparently operating a prediction market in public. The tougher-minded the questions, I think, the more powerful the message the market would send. In a crisis there can be damage to relationships with both internal and external stakeholders, and such a market could uniquely help heal both.
Another application is in predicting product ship dates, quality, or both. For example, a car company like Tesla, known for creating amazing cars, is also known for missing ship dates and for odd manufacturing defects. A market could ask a very simple question like, “Will the Model 3 ship on the date announced and be listed among the top 5 cars for initial quality by J.D. Power?” or more nuanced questions such as what the range of the car will be, whether folks will like the industrial design, or whether it will come with Ludicrous mode (and at what price). If Tesla were concerned that a competitor might come to market sooner or with a better-value car, it could run a market asking exactly that question, e.g., whether a specific competitor will offer better range or a lower price.
Companies facing FUD (fear, uncertainty, and doubt) from the press or competitors often focus on the concerns of external stakeholders, but FUD can also have a critical impact on internal stakeholders, who may lose morale, become less focused during their working hours, and grow increasingly open to being poached by competitors, all of which can cost a company time, money, and focus.
FUD can almost create a self-fulfilling prophecy. A prediction market would be brutally honest, but if the information at the heart of the FUD is truly unfounded, it could help resolve the angst of both internal and external stakeholders. Alternatively, if the basis for the FUD is fundamentally true, a market would function as an early warning sign that a particular problem is serious; markets could also function as an instant indicator of when a company has actually corrected some or all of the underlying issues.
As brutal as a market’s honesty is, perhaps its best quality would be cutting through spin, forcing a company to truly address real issues rather than issue platitudes.
My conversation with Robin focused on internal communications; however, these same techniques could be used in external communications as well (as the Tesla and Wells Fargo examples indicate). Companies willing to be transparent could demonstrate that openness by allowing anyone into the market, including competitors.
Unlike polls, which are limited by sample size, a market generates its own data and can directly answer questions about accuracy and probability. Companies could start out with a few simple markets that may be short-lived (lasting from a few hours to a few days).
I think non-profit organizations could also make use of these markets, especially to increase awareness around an issue and to deepen stakeholder engagement. You can imagine a non-profit group asking, “Based on xyz measurement, how much will the polar ice cap grow or shrink this year?” Like any exchange or brokerage, the charity or foundation could earn small fees from each participant for managing the market, as well as a fee from those who want to join in.
So while it may be possible for a group to raise funds through a market, I believe raising awareness and engagement across social media would be far more valuable. This would include a spike in earned media when the market is announced and when the winner(s) are declared, as well as shared and owned media while the market is running.
This conversation runs around 45 minutes. I welcome your feedback.
Learn more about Dr. Robin Hanson and his work:
http://hanson.gmu.edu/decisionmarkets.pdf
http://hanson.gmu.edu/impolite.pdf
Home page – http://mason.gmu.edu/~rhanson/home.html
Books:
http://elephantinthebrain.com/ (expected Jan 1, 2018)
https://global.oup.com/academic/product/the-age-of-em-9780198754626
Interviews & Talks
Overcoming Bias –
Age of EM Google Talk
Transcript
(00:00:00)
Robin Hanson:
What people tell you may not be the most accurate estimate they could give you, and that’s where prediction markets can come in. They are a mechanism that costs modest resources and consistently gives you more accurate answers to the questions you ask, especially when people have the incentive not to tell you their honest answer, and many companies have used prediction markets as a way to produce engagement. Participants have consistently found that they like participating and feel that they are listened to more.
(00:00:25)
Harry Hawk:
Hello. This is Harry Hawk, and welcome to this FIR interview. I would like to welcome to the show Robin Hanson.
(00:00:36)
Robin Hanson:
Thanks, Harry. Great to see you.
(00:00:37)
Harry Hawk:
It’s great to have you here. We’ve known each other for decades. I want to let everybody know you’re someone who thinks a lot about things. We’re here to talk about some really interesting ideas. At the base, you’re an economist?
(00:00:50)
Robin Hanson:
Economics professor, yes, at George Mason University. I have been here since 1999. Before that, I was a computer researcher at NASA and Lockheed for nine years, and before that, I got a master’s in physics and philosophy from the University of Chicago. After all those years, I got a Ph.D. in social science from Caltech, and I did two years of health policy post-doc.
(00:01:12)
Harry Hawk:
Robin, you have a book out from Oxford University Press.
(00:01:16)
Robin Hanson:
I do, called The Age of Em: Work, Love and Life When Robots Rule the Earth. It came out in June. I’ll have another book coming out this September [updated to Jan 1, 2018 – HH], also from Oxford, called The Elephant in the Brain: Hidden Motives in Everyday Life. That’s co-authored with Kevin Simler.
(00:01:30)
Harry Hawk:
The second book may be a little bit more approachable to our audience. What we should know about Robin is that he is somebody who’s looking at things that help us understand the world, sometimes from a slightly different perspective. For example, during the last presidential election cycle, we had all kinds of polls / surveys. Robin, you have an alternative approach. Could you talk about that?
(00:01:57)
Robin Hanson:
The alternative approach of prediction markets or betting markets was actually used to a substantial extent in the election markets. Typically, betting markets are somewhat more accurate than just taking polls straight. There’s been a number of studies of that. That doesn’t mean election markets can’t also be surprised by things. The best you can do is to have a probability distribution over the outcomes. A probability distribution means that even when there’s a low probability event, sometimes it’s going to happen. That’s how it played out with the recent elections.
(00:02:24)
Harry Hawk:
I was certainly watching and involved in this from its inception many, many years ago. I was an early participant in the Iowa Electronic Election Market. What has been your involvement over the years in talking about this activity? Because I know, often, when we hear this, we hear the term bet, and we often think about gambling versus the stock market, and certainly, there’s plenty of people who think the stock market is betting as well.
(00:02:52)
Robin Hanson:
For centuries, we’ve had many financial markets. Some are called speculative markets where you can basically buy today and sell tomorrow, or sell today and buy tomorrow, and walk away with the difference. Those markets, for many centuries, have done a good job of aggregating information. That is, if you can guess which way the price is going to go better than the current market price, then you can make money by buying today, say, and selling tomorrow. That’s been a side effect for most of those centuries. These markets haven’t existed in order to create that information, but it’s been there.
(00:03:20)
In my mind, the interesting thing that’s happened in the last few decades is that people have started to think about creating these markets on purpose when they want to know about something, and that, I think, is the interesting new thing. Because you might want to make a decision and want better information to inform your decisions, you might create a market that then gave you that better information to inform your decision. In the elections, if you are trying to decide whether to even show up at all at the polls, I suppose, you might wonder whether it’s going to be a close election or not, and the polls could tell you that, but betting markets can tell you that more accurately.
(00:03:54)
But mostly, the betting markets on elections are of a horse race sort of entertainment following the excitement, and they aren’t that useful for making decisions. It would be more useful to have markets about the consequences of who we elect. That is, for the presidential election, it would’ve been more interesting to have markets on unemployment rates, or war deaths, or health life spans, conditional on who was elected. That might’ve given you information about who to elect and have been more useful. Providing that information is not so much in the interest of each person who’s trading. They’re more trading for fun, and they focus on the topics that they find engaging for fun, which was who’s going to win?
(00:04:33)
Harry Hawk:
I guess we could think of some of those, who’s going to win, and then what’s the impact going to be on education, what’s the impact going to be on FCC regulations, almost like a derivative market of the election.
(00:04:44)
Robin Hanson:
Right. We could’ve had those markets. Sometimes we’ve had markets in primaries, markets in who’s going to be nominated and also who’s going to win eventually, and if you divide those two numbers, what you get is a conditional probability of winning if you’re nominated. So those markets have actually given the parties information about who to nominate if they want to win. Not clear anybody’s really used that information very well, but at least that’s decision-relevant information that these election markets have.
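[To make the division concrete with made-up prices: if a “wins the general election” contract trades at 30 cents on the dollar and a “wins the nomination” contract trades at 60 cents, then, since winning the general requires being nominated, the implied conditional probability is P(win | nominated) = P(win) / P(nominated) = 0.30 / 0.60 = 0.50, i.e., a 50 percent chance of winning if nominated. – HH]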
(00:05:09)
Harry Hawk:
The election market got it a little bit wrong. The polls got it a little bit wrong. It was a very tight race, and we do have, on the other hand, a popular vote, which at least a couple people question the legitimacy of, and I think most people don’t question the legitimacy of that 3 million votes towards one candidate over the other. Would the winning in the election market for that candidate, given that the market didn’t necessarily map to the Electoral College, would that sort of somehow validate the popular vote?
(00:05:45)
Robin Hanson:
You can ask betting markets many different specific questions, and then the participants will turn to those specific questions and try to give you the most accurate answer that they can. So who’s going to be elected, given that we have an Electoral College, is just a different question than who will have the most popular vote. You can ask both of those questions of the market, and the market can give you different answers to them, because, in fact, they will have different answers.
(00:06:09)
Harry Hawk:
Did that happen? Was it set up that way?
(00:06:11)
Robin Hanson:
Mostly, the markets have asked who will actually win, because that has been, in fact, the question of most interest, but sometimes in the past, people have had markets where they’ve had it about the popular vote, and then those can give a difference, but they usually haven’t given much of a difference because there’s usually the same answer, but sometimes it’s different. This is fun to talk about, about the election, because everybody’s so eager to talk about the election, and therefore, they want other ways to talk about it, and the betting markets can help you talk about it.
(00:06:35)
What would be useful is to have information about what decision to make not just in the election, but in the rest of society. So I’m actually much more interested in the applications of prediction markets in the rest of society, and in particular in organizations where important decisions are made, prediction markets can and have informed those decisions.
(00:06:54)
Harry Hawk:
So that’s something that I thought would be really interesting. So the audience that is primarily listening to FIR are what we would call PR folks. Really, today the term is professional communicator. They are not limited to press relations. A good chunk of folks, either part of their overall portfolio of tasks or specifically as part of their hiring, do internal communications, all of the messaging within a business.
(00:07:20)
And I should point out I started my first real job on Wall Street…was not internal communications, but internal marketing where I would market very advanced technology applications to an audience of 15,000 people internally, all who had computers at their desk, which probably doesn’t sound very interesting, but this was 1986, at a time when most people didn’t have these sorts of tools. Despite that digression, I want to dive into this and think about large organizations, other tools that we might have, just as you’ve hinted.
(00:07:54)
Robin Hanson:
I’ve got a lot of thoughts. I should just say right up front, I don’t have much expertise in marketing, and in fact, economists in general do a lousy job of collecting expertise in marketing. Marketers definitely know a lot of things that the rest of us don’t know, but the words and language they use don’t map very well onto the kind of concepts economists tend to use, so we tend to be befuddled, and scratch our heads, and not really understand what they’re saying.
(00:08:17)
Harry Hawk:
I think it’s better with communicators, in a sense, because the thing that I’ve learned is that marketers tend to be more like the person on the corner constantly spinning a sign that says we buy gold than looking to build sort of a long-term relationship and looking to build activity over that relationship. You know, you can just think of classical dating. You go out on a date. You plan a second date, a third date. The goal is perhaps to get married or to take a vacation together. Marketers tend to be more like someone out at a bar looking around.
(00:08:55)
Robin Hanson:
They’re making short-term relationships versus long-term relationships. Communicators are more focused on long-term relationships and therefore, long-term communication.
(00:09:03)
Harry Hawk:
Indeed. Now, marketers are quickly learning that that’s kind of a mistake, but that’s a whole other podcast.
(00:09:08)
Robin Hanson:
Right. Now, I’ll frame this just in terms of the basics of making decisions. So communication and marketing, just like the rest of industry and in the other professions, you make key decisions from time to time. In support of those decisions, you collect options, and then you collect information about these options, including not just the cost, perhaps, of these options, but their outcomes. What would happen if you did them? For many of these kinds of analyses, you have, say, big data sets that you can crunch and give you forecasts. For many other kinds of analyses, you just go with your gut.
(00:09:42)
But if most people would agree with your gut, then it’s just a matter of doing it quickly and getting on with it, but then there’s a third set of cases where you don’t really have big datasets that clearly answer the question, and different people’s intuitions disagree, and some people know more than other people, and that’s where prediction markets can shine, especially when people have the incentive not to tell you their honest answer. When people have an incentive to give you whatever answer you want to hear or whatever answer makes them look good or somebody else look bad, then just asking around doesn’t work as well, because then people will tell you what they think you want to hear, etcetera.
(00:10:24)
Harry Hawk:
Right. There’s a huge bias in the company because people at the top may very much want a result to be true. They may want to believe that their workers are trustworthy or happy, and the inverse, you know, the workers may want to know that the folks at the top really know what they’re doing and really care about them. So, as we start to look through an organization and whether it’s thinking about rolling out some new policy or procedure or just polling the organization to understand what people are thinking, people understand when they get a survey, the meta context of the survey, especially when it’s at work.
(00:10:58)
Robin Hanson:
There are some questions you can ask people in a certain way such that they will just tell you the honest answer, and then you just have to ask, and you’ll get the answer, but there are other questions where it’s seen as more sensitive and seen as more risky to just be careful and honest about your answers, and in those contexts, what people tell you may not be the most accurate estimate they could give you, and that’s where prediction markets can come in. They are a mechanism that you can use that costs modest resources and consistently gives you more accurate answers to the questions you ask.
(00:11:28)
So the key things you need for a prediction market are a question, a question that you will eventually know the answer to in some way. So will we make the deadline? What will sales be? Various conditional questions. If we change the definition of the project, what’s the chance we’ll make the deadline? If we introduce this communication strategy, what will be the response? It’s important to ask a question that you actually care about the answer, because, way too often, in prediction market world, people will ask safe questions that they think people might be engaged by but that won’t get anybody bothered by hearing the answer they don’t want to hear, and that’s a problem.
(00:12:04)
Harry Hawk:
The meta questions we might find in internal relationships, you know, often large companies will do an annual review of employee satisfaction or ethics-related questions. They may ask about people’s feeling towards the company. I think there are other, more departmental questions, like which technology might we adopt? You can imagine also in a company that might have a manufacturing base, and so you have people, different parts of the company. How do you feel about the quality of the product that we’re making, meaning do you believe in the mission, right? We do all these cultural kinds of surveys around how people actually buy into the mission of the company, and I think those are important as well. I don’t know if we’re going to ever really know the answers to some of them.
(00:12:53)
Robin Hanson:
If you’re just asking somebody how do they feel at the moment, maybe there isn’t any other truth you could get that’s closer to it, but if you’re asking people how will people feel a year from now, how will people feel if we adopt certain policies, if we change certain policies, if we do various things, now it’s the sort of thing that markets could more speak to. It might be that, in the short run, people nod and they like something, but in the longer run, they won’t like it, and some people may know about which goes which way.
(00:13:20)
But the key idea is that you need to be asking questions where there are plausibly people out there who know those things. So, like I said, for a deadline is a very classic example. Will we make a deadline? Often, at the high level, management is listening to the people running the projects who are giving them updates about how well it’s going, and then there are people much closer to the ground who often have quite different impressions about the chance of making a deadline, and usual mechanisms don’t get that voice heard, and a prediction market can find out what those people think and get that expressed.
(00:13:50)
Harry Hawk:
In a large company that might have employee stock ownership, we might literally be able to see if, given an option to purchase stock, how many folks kind of buy into the company in a literal sense, but what other kinds of mechanisms might we be able to set up in a company?
(00:14:08)
Robin Hanson:
One of the major purposes of management is to create impression and morale on the rest of the company. So you could have markets on if we switch the leader of this organization from person A to person B or person C, how do people predict that will influence morale? That is, later on, once you do such a switch, you will see some change in morale, and you’re asking people what they think will be the consequence now, because people may know how much people like somebody or would like them if they were to get to know them better, etcetera.
(00:14:41)
So you might have somebody from division A, and you’re going to put them in division B, and you’re wondering how well division B will like this person, and people in division A may know a lot of things about, you know, what’s the chances people in B will like this guy after they’ve been there a year? And that isn’t necessarily just how people in A liked them. Might be how well this person is suited for B, and what sort of things they’re likely to do, and what kind of different people there are in B, and how they’d react to that.
(00:15:08)
Harry Hawk:
Although I think it would take a person with a rather strong sense of self to know that folks are betting on them.
(00:15:15)
Robin Hanson:
That’s the key problem. So, like I said, to do prediction markets, the three things you need are questions to ask, a set of people to ask them of, and some sort of incentives to give these people so they might speak to you, but honestly, the main obstacle in prediction markets in organizations is that they are often politically disruptive. That is they say things that people don’t necessarily want to hear. So imagine that, in the C suite, there was a person who was very smart and very knowledgeable, but high on the autism spectrum. Not very socially skilled.
(00:15:43)
So whenever a subject came up, they just blathered the thing that came off the top of their head as their best estimate with no regard to who was embarrassed by it or how it might conflict with official dogma or anything like that. I predict that person won’t last long even though they’re very knowledgeable and informative about these topics. So prediction markets are kind of like that. That is, you set up a prediction market on a question, and you turn it on, and now it will continue to give you accurate and rapidly-updated estimates on whatever topic you asked it about.
(00:16:13)
But sometimes those answers that it’s giving you are not welcome. They’re not what people wanted to hear, and that’s a problem, because now people retaliate and basically get the market killed so that it doesn’t do that sort of thing.
(00:16:24)
Harry Hawk:
Back to my prior example. If, on a quarterly basis, we let people buy into employee stock ownership plan and just track those bets over time and we also allowed people potentially to go the opposite way, so we actually had…
(00:16:41)
Robin Hanson:
Sure. Sell short. Sure.
(00:16:42)
Harry Hawk:
And again, small amounts, you know, somehow fitting within FCC regulations and so forth, that could give us a long-term tracking study on how people felt about the company, and in a large enough company, we should be able to control for other factors, like people retiring.
(00:16:59)
Robin Hanson:
In a sense, every financial market is a prediction market. It’s just usually predicting something rather complicated to express and not easy to say in words. If you let people trade in the stocks on the company or buy options in the company, that is a market, and it is giving you information about something, but that something is a combination of all sorts of things. It’s a combination of how well they expect the company overall to do.
(00:17:22)
If it’s an option, it’s how well they expect it to pass a certain threshold. It’s mixed in with how they expect the overall economy to do, how they expect that industry to do. It’s not very fine grain in terms of inside the company. So if you could, say, take a large company and break it into 12 tracking stocks that track different parts of the company, now the markets in those different tracking stocks would tell you about those different parts and tell you that even though, overall, people are positive about the company, they think division A is not doing so well.
(00:17:51)
Harry Hawk:
I think that’s fantastic. I mean, you think a company on the size of a GE or Alphabet with all of its different units…I think there’s plenty of companies with plenty of divisions where that could be really interesting, and plus, you’re getting not only the people in the division who had sort of one kind of perspective, but you have the other people who have more of a consumer or more of the other side of the market, the customer perspective potentially, as well as knowing, oh, so and so just moved to head that division. That would be very interesting.
(00:18:25)
Robin Hanson:
So, often, you might have tracking stocks that track products. Even then, there’s not usually very many of them, but you might want to break it down by functional divisions and ask for something that tracked manufacturing, something that tracked distribution, something that tracked marketing, etcetera, and ask about forecasting those particular things, and even further, you might want to make these conditional on key decisions. So, again, one key decision is who’s in charge of those areas? You could ask the market should we consider swapping who’s there now for somebody else?
(00:18:56)
And then when you have particular alternative candidates, you could ask the market about what they think about those candidates, another major decision you make for these divisions. If you’re going to, say, think of switching to a key-enabling technology, you could ask the market should we switch, and which one should we switch to? If you’re, say, thinking of switching ad agencies, you could ask the market should we switch, and if we switch, which one should we switch to? There’s huge potential for getting better information about these, but the caution is usually, you know, these are relatively political decisions, and people put together a political coalition in order to support a decision, and they don’t really want some out-of-control market price fluctuating around that could say no.
(00:19:32)
Harry Hawk:
You know, we think of the stock market that goes on, and on, and on forever, is there anything in the literature? I mean, could we run a 15-day market or a 45-day market?
(00:19:41)
Robin Hanson:
Oh, sure. Run it for an hour if you want.
(00:19:43)
Harry Hawk:
That’s fascinating, too, because then something that’s relatively mundane, like we’re going to switch taglines…and if they have two really good candidates that they really would like to get feedback on the company…MicroBets, so we’re talking maybe pennies. Could do a snap market. Which of these do you like or which color? If I’ve got 10 or 15,000 people in the company, you know, I would think I’d have enough data points and I could run that bet through some of these internal intranets and communication things. Would be very easy to build into Slack, or Yammer, or so forth.
(00:20:20)
Robin Hanson:
Sure. You want to distinguish between whether you’re just getting raw data versus somebody’s judgment and analysis. I mean, obviously, often, people just do A/B tests among a set of customers and find out what those customers think, and that’s good data, but often, you need to use judgment as well as data in order to compare it to other things that have been done, etcetera, and the more you’re trying to get that judgment, the more you might want to have a prediction market not just do survey, get just a quick reaction, but ask people what do they think how this will play out.
(00:20:52)
Harry Hawk:
We’re going to pick which one do you think best represents the company?
(00:20:58)
Robin Hanson:
Which one do you think will get the better response? Ideally, you know, you’ll have a tagline, and then you’ll do some surveying after that to see what people thought of it, how they reacted to it, but before you choose the tagline, you’d be asking people who have some experience and judgment, what do you think will be the reaction to this tagline if we put it out?
(00:21:16)
Harry Hawk:
Yeah. Let’s assume that we’re going to the rank and file, you know, so people who may know nothing about marketing or communication, right? They’re not experts, but we ask several questions. Which tagline do you think best represents the company? Which tagline would you be most proud of, and which tagline do you think will get selected? And if we ask those three questions in a one- or two-hour…maybe it needs to be a couple hours if we’re in multiple time zones, but kind of a snap market, people are betting pennies, and the winners get stuff.
(00:21:46)
Robin Hanson:
In the world of prediction market consulting and software out there, there are some companies that are still playing in that space, and what they’ve found most successful is where they’re predicting things that are the farthest from where executives have opinions. So one of those is in new innovation projects. Which new innovation project should we pursue? Nobody’s really tied to any one of them yet, and the other is in replacing focus groups.
(00:22:08)
And so there are companies, one of them Consensus Point, that actually offer a product where, instead of an ordinary focus group format, you are betting on your reactions to products and things like that, but in fact, not really incentivized financially. It’s just sort of a framing of it, which people find more engaging. So I just would like to make the distinction between a survey, which could be done in the format of a betting market, and a betting market where you’re giving people actual incentives based on something later on being found true or false or correct or incorrect.
(00:22:42)
You could just ask these questions. What do you feel about it, and you could do that in many different formats, but if you want to ask them to predict something, you’ll need to have something later on that they’re predicting.
(00:22:50)
Harry Hawk:
In the case where we’re asking which one do you think best represents the company, which one would you be most proud of, and which one do you think will be picked, obviously, we will know the answer to one of those.
(00:23:03)
Robin Hanson:
Right, but that’s still not that useful. What you want to predict is, if we pick this, what would the consequence be? That’s the more useful version of the question. Which of these, if we picked it, would we have a successful response to it? So you want to be estimating that future response.
(00:23:16)
Harry Hawk:
I was thinking, by asking those three questions, we might basically reveal the bias. In other words, which one do you like is straightforward. Which one would you be most proud of may not be the same as like, and then which one do you think gets picked, I think would get to perhaps the political bias within the organization. Like everybody loves A, but they’re never going to go with it.
(00:23:43)
Robin Hanson:
So we already know there are different ways to ask surveys, and for example, in election surveys, it’s been shown that the usual way of asking the question, who would you vote for if the election were tomorrow, doesn’t give you answers quite as accurate as when you ask them who would win if the election were tomorrow, because, in that second case, you’re asking people to guess what other people think, and that gives you more information about the overall outcome, and so you can do that in other survey contexts, too, by asking not how do you feel, but how do you think other people feel. You can actually get more accurate information, but that’s still in a survey format where you’re asking them, and they’re not getting any other incentive other than you’re paying them for their time to answer the question.
(00:24:25)
Harry Hawk:
With the three questions that I’m proposing versus what you had proposed, where at least one of those questions, there will be an actual answer and there’s a payoff, and the others actually do have a payoff, meaning, you know, whichever one…winner takes all kind of a betting pool, and we can then later go do surveys about satisfaction or so forth, or we could predict…but do you think we could get to sort of the political bias within the organization by framing a couple of related questions?
(00:24:56)
Robin Hanson:
There are lots of ways to get at lots of kinds of biases. That way may well be one way to get at one kind of bias. I’m personally less interested in trying to come up with a mechanism to overcome bias than to create an incentive for other people to find ways to overcome bias. So the key idea of a prediction market is, if you are a participant in the prediction market and you see the current price and you have a good reason to think that that’s biased because the participants in that market are biased about something, then you have an incentive to go correct that.
(00:25:26)
So the prediction market offers this general incentive to go look for biases anywhere you can find them and fix them, and that’s much more interesting to me to find a mechanism that just generally induces people to find and fix biases. I would rather have that than me having to go try to figure out where all the biases are and to jimmy a formula to somehow fix each one of them.
(00:25:47)
Harry Hawk:
Another one…and again, this was certainly something that I think most folks in the C suite would not want to allow, but there is something…the Edelman TRUST BAROMETER. Edelman is a public relations firm, and they do a global survey about trust and about who in organizations is trustworthy. And it turns out the spokesman and the C suite folks are far less trustworthy to the general public than rank-and-file folks, folks who actually work there, as well as, in general, friends.
(00:26:18)
So if you and I know each other and you’re thinking about buying a car, and you know that I bought them or I was looking at a particular brand a few years ago, and that may be buying or not buying, you would be much more trusting in talking to a few of your friends or if you know someone who works at one of those companies, as opposed to if you could magically, you know, call up the CEO and get his opinion.
(00:26:43)
Robin Hanson:
Yeah, there’s a possibility there for credible marketing or credible communication. I don’t know which companies would be brave enough to take it, for example, but you could imagine a company committing to telling the public the results of a prediction market on a product as a way to commit to the public that this channel is more reliable than our usual channel. So, you know, if you might imagine introducing a new car, and instead of just saying it’s going to be great and reliable, we set up a betting market on its repair records, and then we publish this betting market on the repair records to say, hey, we can’t really control this betting market price.
(00:27:17)
If people are participating anonymously and if they think this is going to be an unreliable car, they will push the estimate down, and we’re showing that to you, even though it’s out of control, as a way to show you credible information about our product that we can’t massage as easily. So you should trust it more.
(00:27:31)
Harry Hawk:
If we had a car company and we set up exactly what you said, we’ve got the 2018 model year, and we’re going to set up a betting pool on the first year error rate or…
Robin Hanson:
_____ (00:27:43).
(00:27:43)
Harry Hawk:
Cost of owner…yeah, and we set up the exact same market, and in that other one, anyone can join anonymously. In the other one, we set it up so that it’s all the folks at the dealership and all those auto mechanics that work for us, other very interested parties, like folks in the supply chain…
(00:28:05)
Robin Hanson:
Or our competitors.
(00:28:07)
Harry Hawk:
Well, I don’t know if we’d invite in our competitors. They would be in the outside one.
(00:28:10)
Robin Hanson:
Yeah, but the whole point is we’re trying to make an accurate estimate here, and we’re trying to show you that we’re going to make an accurate estimate by allowing people who would know to profit by saying it’s bad if they actually thought it was bad.
(00:28:22)
Harry Hawk:
If we had the global one public, anyone can join, that’s the outside view. I’m wondering would it be useful to run the same market with the most knowledgeable inside people and see what the difference is?
(00:28:34)
Robin Hanson:
You’d want an integrated market. You’d just want the market which had as many of the most informed people who could participate as possible. So, yes, you would want to show that, in fact, not only was this a market that outsiders could participate in, but insiders could too. If, in fact, you had given every employee in your company, you know, a 200-dollar budget in this market to bet anonymously, to bet up or down on any products they wanted, you could ask any employee, and they’ll tell you that that would be advertising that we are committed to telling you honestly about our product reliability.
(00:29:04)
Harry Hawk:
A company that has a reputation at risk…and this is obviously an external exercise now as we switch, but Tesla, right, they’ve taken money from the public for their Model 3, and that alone shows that people are interested in the product and all of that, but it’s a small deposit, but if they ran a betting pool on the delivery…
(00:29:26)
Robin Hanson:
Sure. What quantities would be delivered by what date? What reliability these products will have? How often will they have…what repair? What fuel efficiency will they have? All these sorts of parameters the public is honestly uncertain about, and where that uncertainty is holding back, the company could commit to being as honest as they could about them. Again, it’s a risky strategy, of course, because it means the market could give you bad news. The market could say, you know what, we don’t actually think this product is so great, and you would have committed to letting the public know that.
(00:29:55)
Harry Hawk:
They could get out there, and then communicate around that, and understand that that’s a serious issue, you know, and get a sense of how serious it is, because we could imagine even in the case of Tesla with all those folks who have prepaid deposit might give the company a bit of false confidence as to how the public…
(00:30:13)
Robin Hanson:
So there’s an interesting example of movie markets. Movies are a product which, you know, come out, and then customers are uncertain about their quality, and they have to make a choice well before there’s a big dataset of reliability. There was a group called the Hollywood Stock Exchange. They had a play money market going for several years predicting how well different movies would do, which was informative, and then they moved to create a real money market. They got permission from the US Commodity Futures Trading Commission and spent all the money it takes to do that.
(00:30:40)
And they set up real money markets that would then give customers a more informed estimate about which movies would do how well, and of course, also give investors that sort of estimate, and the movie industry was disturbed by this when they heard that it was almost going to go live. So a few weeks before it was going to go live, they lobbied Congress to pass a law to make that illegal, and so it was the second case in US history where a particular commodity was made illegal. The first was onions in the 1950s, and now movies in the aughts were made illegal so that you’re not allowed to have commodity future markets on movie outcomes.
(00:31:14)
So, apparently, the movie industry executives were concerned that this form of accurate information about future movies would get in the way either of their customers going to the movies or competing investors finding it easier to create movie investment projects using this project.
(00:31:29)
Harry Hawk:
Talk about mechanism for a minute, which is, again, when we’re talking about these prediction markets, we are talking about putting actual money or something tangible of value in the same way that someone would buy a stock or buy an option. It doesn’t have to be whole dollars. They could be pennies. We could give everybody in the company 100 pennies.
(00:31:49)
Robin Hanson:
The key thing is it has to be something people want more of and dislike less of. It has to be something they actually value. So there have been markets in companies where it was all play money except, you know, who did how well was made visible, and people cared enough about that, that that was worth winning so that you could be seen as the winner.
(00:32:06)
Harry Hawk:
You could give everybody two dollars and say the bids are in units of a penny.
(00:32:10)
Robin Hanson:
You could. That’s more like play money. You might as well use play money.
(00:32:13)
Harry Hawk:
No, but real money, and then there’s a winner that’s going to get…
Robin Hanson:
Two dollars.
(00:32:18)
Harry Hawk:
Whatever they get, right? ___ (00:32:20).
Robin Hanson:
But they only start with two dollars. They can’t really win more than 10 dollars, so it’s a pretty nominal amount. Now, it could still be worth doing because there’s a scoreboard, and you want to be seen as the person who did well. To the extent that that reputation for just having done well matters to you, it is valuable, then that’s enough, but that would be similarly true if it was just play money, as long as, you know, that reputation does well.
(00:32:41)
Harry Hawk:
But imagine that, every quarter, you get two dollars into that account, and there’s all kinds of bets going on, and so, over time, that account could be building, and so one that not only ultimately that you could end up with a few hundred dollars in there…
(00:32:55)
Robin Hanson:
The average person, after two years of two dollars a quarter, has eight dollars. So the average account has to be eight dollars. If somebody’s really lucky, they might have 80 dollars, but it’s still pretty small.
(00:33:04)
Harry Hawk:
Well, I was thinking the other thing is that if you bet all your money into one market, then when the next one comes along, you don’t have any to bet. So there’s an arbitrage in people’s minds of do I want to wade into this market or save some sort of ammunition for the next one?
(00:33:20)
Robin Hanson:
So, in terms of inducing incentives, have enough at stake so that people bother to participate. One way to do that is just to have enough actual dollars at stake that they bother to participate, and another is to have the organization make it clear that this is important, and they will be paying attention to who does well and what the market prices are. So another way to create incentives is, to the extent the organization seems to be listening to these prices, then one reason to bet is that you have more money to influence these prices, and you can influence the perception of the company.
(00:33:51)
Harry Hawk:
So adding that extra weight of we’re paying attention to a small amount of money.
(00:33:56)
Robin Hanson:
For you as a company, you have to realize there’s two main costs here. One is the monies that you’re giving these participants to bet with so that they walk away with money. A bigger one, though, is going to be their time. I mean, this is just going to take time, distract them from other things, and you have to think that the answers you’re getting are valuable enough to pay for that time. So I wouldn’t be too chintzy on the money because that’s still going to actually be small compared to the time anyway.
(00:34:19)
Harry Hawk:
What I might suggest is almost counterintuitive, but a lot of companies want to get engagement around the particular topic.
(00:34:27)
Robin Hanson:
And many companies have used prediction markets as a way to produce engagement, and participants have consistently found that they like participating and feel that they are engaged and feel that they are listened to more this way, basically. That somebody is listening to them through this channel, but there’s a danger that if you make the market solely for engagement and not for actual decisions, that people will learn that even though you’re pretending to listen, you’re not actually listening. You just made the markets for engagement. In order for me to participate in…because I want to influence prices that matter, I have to believe that the prices are being listened to and used for things.
(00:35:00)
Harry Hawk:
So then let’s sum this up, and let’s see if I can get it right, and please correct me then when I don’t. We can set up real betting markets internally or externally. Externally, we have, it sounds like, more hoops to go through from a regulatory perspective, but we need to put enough money into the market to make it meaningful, but we can add meaning by putting the weight of the company’s reputation or the degree that our senior executives are listening to and paying attention to something, and if we pick topics that are meaningful to us, but also happen to correlate to areas where we’re seeking overall engagement, we may be able to learn something and boost engagement?
(00:35:40)
Robin Hanson:
Yes. People can be engaged merely by the fact that you’re listening, of course. Some topics will also be just topics they find more engaging naturally, but they’ll be engaged by both the idea that the management is paying attention to these prices and it’s influencing management decision and by the idea that management is paying attention to who’s doing well in these markets, and that they could stand out personally by doing well.
(00:36:02)
Harry Hawk:
I love it. I don’t know that anyone listening is going to rush out and do this, but I believe there’s some value here. I believe there’s some value not only because you can combine knowledge and engagement, but because I think it leads to ways of activating people.
(00:36:17)
Robin Hanson:
I think it’d probably help most for your audience to just go through more examples of the kinds of things you could apply it to. So people might get, yeah, in the abstract, you could bet on things, and bets might give you better information, but what would you bet on, they might say? Well, we talked a little bit about, say, reliability of your product, telling your public, say, about that. You talked about choosing managers or projects. Who’s running the projects?
(00:36:38)
We can also talk about deadlines, and whether you’re going to make the deadline, and what will change the deadline? We can talk about sales and which product variations or marketing variations will influence sales. Choice of ad agencies, choice of suppliers. You could talk about choice of repair. Anytime you’ve got some sort of outcome that matters and several choices that might influence that outcome, you can ask the market to recommend the consequences of different options.
(00:37:03)
You have a deal with some other organization that’s managing maintenance and repair of your product, and you wonder, well, should we switch to this other organization which is offering, perhaps, a better price? What will be the consequences in terms of delay time, repair rates, etcetera, if we switched? And you might have some data, but you might need people to interpret that data in the context of their experience with similar products and similar organizations, and that’s the sort of thing prediction markets can do well, is to get that sort of implicit knowledge that’s in people’s heads out, especially in situations where, if you were just to ask people, they wouldn’t always be honest.
(00:37:37)
Harry Hawk:
Robin, if we had 1,000 people in our call center and we were A/B testing two different scripts for the call center, and we know that there will be a winner eventually, but asking people some questions related to which script might be better, which script would lead to more satisfaction, and so forth, might something like that work as well?
(00:37:55)
Robin Hanson:
Sure. Now, if you’re just doing a straight A/B test, that’s data, and the more directly relevant that data is to your decision, you may not need anything else but the A/B test, but typically, you will have a whole set of A/B tests you’ve done on related situations, and you have a new case in front of you, and you’re wondering what to do in this new case. That’s where you need judgment. That’s where you need people who have looked at these previous cases to have a sense for which are how relevant for this new case. Often, for example, before you do the A/B test, you have to choose A and B.
(00:38:22)
You may have dozens of options, and instead of doing an A, B, C, D, etcetera test over all dozens of the options, you might want to ask the prediction market, well, which of these dozen options seem most promising to you? And then take the two most promising or three most promising options and do a test of those.
(00:38:36)
Harry Hawk:
So whittle down the choices internally before going externally?
Robin Hanson:
Right.
(00:38:41)
Harry Hawk:
I was even thinking the idea that people who are having to deliver these may like the old scripts. They’ve memorized the old scripts to some degree. It might be easier to get buy-in not only knowing that one of them won ultimately, but knowing that the rest of the company also felt that that was the better approach or so forth, but of course, you get the opposite results.
(00:39:01)
Robin Hanson:
Right. So you have a thing you’ve been doing and some new options to pick from. You’d like to whittle down the options, which options to pick. You’d like to estimate these outcomes in the short run and the long run, but you’d also not just estimate, say, customer response. Estimate how long it’ll take for people to adjust. How much pain and trouble will that be, and how much perception of consistency that might cost you?
(00:39:19)
Harry Hawk:
I think these are all very important things, questions about language and tone. I think this is a long-term trend, but certainly one that is accelerating in the last election cycle, this more plain talking, a little less procedural droning speak and more folky vernacular.
(00:39:34)
Robin Hanson:
So one way to think about this is to think about expensive tests and cheap ways to pick expensive tests. So, for most of these things, if you really want to figure out how well it works, you just do the expensive test. You do the big, long test, and focus groups, etcetera and just see what that data says, but before you do expensive tests, you typically have many options for which things to test, and you need to narrow that down, and prediction markets can be a cheap way to narrow that down initially because you can ask the market, if we were to test this, how well would it do? And that gives the incentive to pick hundreds of options and say, if we were to test this, how well would it do? And then only actually test the one, or two, or three that seem to be doing the best.
(00:40:14)
Harry Hawk:
Robin, I know that I think most people listening have a good sense of surveys and how they work, and obviously, one of the key important aspects here is sample size. In prediction markets, do we need to pay as much attention to that or more attention?
(00:40:29)
Robin Hanson:
Well, you’re not really sampling in prediction markets. You are getting information from participants. What you really want to know is the uncertainty in the estimates that you’re getting. That’s what you use sample size to produce in the case of surveys. You use sample size to produce an estimate of the uncertainty. So, in prediction markets, you can actually ask the prediction market itself to give you its uncertainty. That is prediction markets give you probabilities, so it could not only give you an expected value, it could give you a variance around that value.
(00:40:57)
You can just ask the market for its uncertainty about whatever parameter you’re interested in, and then you can know how accurate is that estimate? Markets tend to be very well calibrated. So calibration is basically, when a market says a 70 percent chance, how often does it happen? In a well-calibrated market, when it says 70 percent chance, 70 percent of the time, it’s actually true, and 30 percent of the time, it’s wrong. A market can be very uninformed, but still be very calibrated. That is, it can tell you how uninformed it is. It says, hey, I really don’t know. I have no idea, and it can tell you quite clearly that it doesn’t know.
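[To make calibration concrete, here is a small hypothetical check in Python: group past market prices into buckets and compare each bucket with how often those events actually happened. The forecasts and outcomes below are invented for illustration. – HH]

# Hypothetical calibration check for a set of resolved market questions.
forecasts = [0.7, 0.72, 0.68, 0.3, 0.32, 0.9, 0.88, 0.5, 0.52, 0.1]  # closing prices
outcomes  = [1,   1,    0,    0,   1,    1,   1,    0,   1,    0]    # 1 = happened, 0 = did not

buckets = {}
for price, happened in zip(forecasts, outcomes):
    key = round(price, 1)                    # bucket to the nearest 10 percent
    buckets.setdefault(key, []).append(happened)

for key in sorted(buckets):
    hits = buckets[key]
    print(f"forecast ~{key:.0%}: happened {sum(hits)} of {len(hits)} times")

In a well-calibrated market, the questions priced around 70 percent should come true roughly 70 percent of the time once enough of them have resolved.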
(00:41:30)
Harry Hawk:
Whereas, in a survey, I really want to get a very big sample size often for some of these things. If I was focusing on internal activities within a company, I could pick different parts of the company to do different markets with? So I could ask middle managers, a few hundreds or a thousand of them, to bet. I could go and find folks on the factory floor who had similar kinds of roles across different parts of the company.
(00:41:53)
Robin Hanson:
Well, you could do those things, but a great thing about prediction markets is that you don’t have to. So, when you’re doing surveys, you have to think about who to ask, and what words to use, and what days to ask them, and who to ask which questions, and how often to re-ask. With a prediction market, you can just set up a single question market, and you let the participants decide which question to participate, when to participate, how often to participate, how to think about the wording of the question. They do all that work for you. You don’t have to do that if you don’t want to.
(00:42:24)
Harry Hawk:
And that would be interesting, I guess, to see, overall, once you’re getting participation and people are aware of it, but I thought, you know, it might be interesting if you’re thinking of switching something very mundane that would affect very few people relating to supplies and stuff, but if you asked all of the folks in all of the different divisions, people we might once have called secretaries, but office managers, people who rely on getting these supplies, they would have some very peculiar and particular knowledge there that we might…you know, do you like A or B?
(00:42:53)
Robin Hanson:
So, in a mature market, you just present lots of questions, and you have lots of participants, and you tell each one of them, look, browse through these questions, and identify the ones you might know the most about, and you should focus on contributing to those questions. So you want to indicate to your participants which are the more important questions, perhaps, by a subsidy of those questions and a bigger subsidy for more important questions, and then say look at these prices. If anything seems off to you, then fix it, please, and if you just consistently can’t seem to find any contributions to make, well, please stop. Go onto the other tasks we have for you.
(00:43:28)
But if you can find a contribution to make, it doesn’t have to be in the area that your job is in. It could be somewhere else in the company. As long as you can find a part of this market where you think the market odds are off and you can make a contribution by correcting those, do that. Over time, you’ll be able to check that judgment. That is, you’re in sales. You think you have some correction to make for marketing, and you try that out, but over time, you’ll see how much you were right, and we want you to adjust your contributions to more contribute in the areas where you’ve been right in the past, and when you thought you knew something, but the market kept telling you that you were wrong, maybe you should back off there and contribute less there.
(00:44:05)
Harry Hawk:
So that sounds like, to me, maybe a little bit more of an endgame, but if a company was a little bit wary about opening this up and how well it would be perceived and all of this, they could start out with inviting specific targeted individuals that they believed had knowledge, or is there a bias in that?
(00:44:20)
Robin Hanson:
Once you pick your initial range of topics, you can pick an initial range of participants you think might be informed about those topics, but you don’t have to know for sure who’s going to be well informed. In a survey or a committee, if you mostly ask bozos who know nothing about your topic, you’re in trouble, because the people who do know will be swamped out by the bozos, but in prediction markets, you can ask a lot of people, and it’s okay if only a small fraction of them know the answer to your question, as long as the other people know that they don’t know, and if many of the others think they know but really don’t know, well, then it just takes a little time for them to find out by being wrong and going away.
(00:44:55)
Harry Hawk:
And that might be useful feedback to them.
(00:44:57)
Robin Hanson:
That is. You don’t want to give them the sense that they’ll be punished too much for being wrong. You want to have the sense, well, there’s an exploration here. We’ll give you some time to try different things and see what you’re good at, and then, later on, we expect you to focus on what you’re good at, but it’s okay if initially you were wrong about what you were good at.