The Efficacy Of Prediction Markets

As I’ve pointed out, I’m a believer in prediction markets. For me, though, it’s more of an intuitive expectation that markets (i.e., revealed preference) are likely to be more accurate than stated preference. Doug, who isn’t yet a believer in prediction markets, has asked whether there is any empirical evidence on how reliable they actually are.

So I went looking, and found a nice paper titled Interpreting Prediction Market Prices as Probabilities (PDF). I recommend following the link and reading the whole thing, but if you’re not interested, the first paragraph of the conclusion tells you what was discovered (emphasis added):

An old joke about academics suggests that we are often led to ask: “We know it works in practice, but does it work in theory?” This paper arguably follows that model. As discussed above, a variety of field evidence across several domains suggests that prediction market prices appear to be quite accurate predictors of probabilities. This paper suggests that this evidence is easily reconcilable with a theory in which traders have heterogeneous beliefs that are correct on average.
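
The intuition behind that conclusion is easy to play with. Here is a toy Monte Carlo sketch of my own (not the paper’s model): give every trader a belief equal to the true probability plus unbiased noise, call the average belief the “market price,” and check whether prices line up with how often the event actually happens.

```python
import random

random.seed(0)

def simulate_market(true_prob, n_traders=100, noise=0.15):
    # Each trader's belief is the true probability plus unbiased noise;
    # with true_prob kept between 0.2 and 0.8, beliefs stay inside [0, 1].
    beliefs = [true_prob + random.uniform(-noise, noise) for _ in range(n_traders)]
    price = sum(beliefs) / len(beliefs)            # "market price" = average belief
    outcome = 1 if random.random() < true_prob else 0
    return price, outcome

# Run many markets, group them by price, and compare each group's average
# price to the observed frequency of the event.
buckets = {}
for _ in range(50000):
    p = random.uniform(0.2, 0.8)
    price, outcome = simulate_market(p)
    buckets.setdefault(round(price, 1), []).append((price, outcome))

for key in sorted(buckets):
    rows = buckets[key]
    avg_price = sum(r[0] for r in rows) / len(rows)
    frequency = sum(r[1] for r in rows) / len(rows)
    print(f"price ~ {avg_price:.2f}   actual win rate ~ {frequency:.2f}   ({len(rows)} markets)")
```

Even in this crude setup, each price bucket’s average price sits right on top of the observed win rate, which is the “correct on average” idea in miniature.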

There are several concrete examples given, but as a football fan and occasional gambler, I know how difficult it is to predict football games. Last year I tracked college games as if I were betting on them, and against the spread I had a surprisingly good 62.5% record. That’s enough to beat Vegas, yet even straight up, just picking winners and losers, I was still only right about 80% of the time. So reading the passage below made me a believer in prediction markets:

For this reason we turn to two rather unique datasets. The first was provided to us by Probability Football, an advertising-supported free contest that requires players to estimate the probability of victory in every NFL game in a season. Including the pre-season and playoffs, this yields 259 games in the 2000 and 2001 seasons and 267 in 2002 and 2003. On average we observe the probability assessments of 1320 players in each game, for a total sample size of 1.4 million observations. Contestants are scored using a quadratic scoring rule; they receive 100 – 400(w – q)² points, where w is an indicator variable for whether the team wins and q is the stated probability assessment. Truthfully reporting probabilities yields the greatest expected points, a fact that is explicitly explained to contestants.
The top three players receive cash prizes. While these rank-order incentives potentially provide an incentive to add variance to one’s true beliefs, it turns out that given the number of games in a season, this incentive is small. For instance, in 2003, two mock entrants to this contest that simply used prices from TradeSports and the Sports Exchange (a sports-oriented play-money prediction market run by NewsFutures.com) as their probabilities placed seventh and ninth out of almost 2,000 entrants.
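
That scoring rule is worth a second look: the quadratic rule is a “proper” scoring rule, meaning your expected score is highest when you report exactly what you believe. A quick check with made-up numbers (mine, not the contest’s) makes that concrete:

```python
def expected_points(p, q):
    """Expected score under the contest's rule, 100 - 400*(w - q)^2,
    when your true belief is p and you report q."""
    win  = 100 - 400 * (1 - q) ** 2   # score if the team wins  (w = 1)
    lose = 100 - 400 * q ** 2         # score if the team loses (w = 0)
    return p * win + (1 - p) * lose

# Suppose you truly think a team wins 70% of the time.
p = 0.70
for q in [0.50, 0.60, 0.70, 0.80, 0.90]:
    print(f"report {q:.2f} -> expected points {expected_points(p, q):6.1f}")
```

The expected score peaks at q = p, which is exactly why shading your report away from your true belief costs you points on average.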

Such ideas are fairly simple. As a personal test, if you’re in an office with a weekly football pool, try a “system” of picking instead of your own intuition, and see how you do. For example, I’ve seen college pick’em pools where you assign a “confidence” rating to your picks for winners. A simple system would be to take the favorite in every game, and assign the confidence ratings to the teams in order from the largest spread to the smallest. You may not always have the best weeks, but I would postulate that over the season, you’re going to be in very good shape.
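
If you want to try it, the system is mechanical enough to write down. Here’s a rough sketch in Python; the games and spreads are made up purely for illustration:

```python
# The "follow the spread" confidence pick'em system: take the favorite in
# every game, then hand out confidence points from the biggest spread down
# to the smallest. Games and spreads below are invented examples.
games = [
    # (favorite, underdog, point spread)
    ("Ohio State", "Indiana",    21.5),
    ("Oregon",     "Stanford",   14.0),
    ("Texas",      "Baylor",     10.0),
    ("LSU",        "Auburn",      6.5),
    ("Michigan",   "Penn State",  3.5),
]

# Sort by spread, largest first; the biggest favorite gets the most confidence.
ranked = sorted(games, key=lambda g: g[2], reverse=True)

for confidence, (favorite, underdog, spread) in zip(range(len(ranked), 0, -1), ranked):
    print(f"{confidence} points on {favorite} over {underdog} (favored by {spread})")
```

The point isn’t that the spread is always right; it’s that the market-set line already aggregates a lot of information, so leaning on it beats most people’s gut picks over a full season.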

But the key here is that when you’re looking at various ways of determining probability, no method is 100% accurate. Prediction markets, though, have certain features that make them more likely to be accurate than many other “conventional” methods of evaluating probability, like polls. For that reason, and especially now that I have found empirical evidence to support my earlier intuition, I am more and more comfortable in my use of Intrade’s numbers over those of Gallup or Pew Research.