“Shocking”

“Stunning”

“We didn’t see this coming” 

These were the reactions to last night’s historic upset of Hillary Clinton by now President-Elect Donald Trump. About half the country is thrilled at this moment; the other half is deflated. But one question keeps coming up on both sides of the political spectrum: “How did we get it so wrong?”

In this case, the “we” refers to the political pundits and pollsters who almost uniformly predicted a decisive Clinton win. This is but the latest in a string of poll misses for the cadre of election callers—they also missed the most recent UK general election and the Brexit referendum. Wait…isn’t this the age of Big Data? Micro-targeting? Sophisticated prediction models? Highly accurate voter data?

So what is going on here? And does it hold lessons for those of us who build models to forecast everything from product demand in Nebraska to inventory levels in Brazil—everyday problem solvers who must make the call on important decisions our organizations face routinely?

The answer is yes. This election gave us a useful front-row seat to the art and science of forecasting, and it holds five important lessons about how to make our forecasts, and their interpretation, more successful.

 

1.     A point forecast comes with a distribution

Whenever you see a point forecast—a single number—it comes, by definition, with a probability distribution around it. So the number “5 plus or minus 3” is a DIFFERENT outcome from the number “5 plus or minus 1”. Yet there is a tendency, both in the media and among business leaders, to say, “just give me the number”—which is usually the mean. That, in turn, hides a very real part of an analyst’s output, one that is ignored only at one’s peril. A quick scan of yesterday’s headlines just before the polls closed showed statements like “Clinton up 4 in ___ state” or “Trump down by 2 in national poll”. This is deception by omission, and it is dangerous.
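
To make this concrete, here is a minimal sketch in Python, using only the standard library and the illustrative assumption that the “plus or minus” figure represents one standard deviation of a normal distribution, showing just how different those two forecasts really are:

```python
from statistics import NormalDist

# Illustrative assumption: treat "plus or minus" as one standard deviation
# of a normal distribution centered on the point forecast.
for mean, spread in [(5, 3), (5, 1)]:
    dist = NormalDist(mu=mean, sigma=spread)
    p_flip = dist.cdf(0)  # probability the true value falls at or below zero
    print(f"{mean} plus or minus {spread}: chance the lead flips = {p_flip:.2%}")

# "5 plus or minus 3" leaves roughly a 5% chance of a flip;
# "5 plus or minus 1" leaves essentially none. Same mean, very different story.
```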

 

2.     So-called outliers do happen

Even when the mean of a distribution points strongly in one direction, by definition any outcome along that distribution remains possible, however unlikely. I noticed that many polls showing Clinton ahead by a few percentage points caused certain commentators to suggest, “ah, look…she is ‘ahead’”. That ignores the small portion of the distribution that hangs over the other outcome.

Yesterday afternoon Nate Silver gave Trump a 30% chance of winning the election. That’s a pretty high probability! Focusing only on the more likely of the two outcomes is unhealthy and incomplete. Don’t dismiss outliers—find out why they exist.
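
If a 30% chance still sounds dismissible, a toy simulation (this is not Nate Silver’s model, just repeated random draws at a fixed 30% win probability) shows how often such an “upset” actually lands:

```python
import random

random.seed(2016)  # fixed seed so the illustration is reproducible

# Simulate 10,000 hypothetical elections in which the underdog
# has a 30% chance of winning each one.
trials = 10_000
underdog_wins = sum(random.random() < 0.30 for _ in range(trials))

print(f"Underdog won {underdog_wins:,} of {trials:,} simulated elections "
      f"({underdog_wins / trials:.0%}).")
# Roughly 3 wins in every 10 runs: far too common to wave away as an "outlier".
```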

 

3.     Forecasting models are most vulnerable on scale-up

Here’s how Presidential polls work: you take a tiny (and I do mean tiny) sample of voters who self-identify as supporters of candidate X or candidate Y. The sample can be as small as 1,700 voters, out of a total eligible voting population of about 240 million (roughly 130 million people actually voted for President in this cycle). Pollsters use models to scale from this small number to many millions, and these models contain all kinds of assumptions about who will vote and how, even including weather effects, traffic, and local headlines. It is in these models that vulnerabilities arise.
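
The sampling math alone is only part of the story, but it is worth seeing. Here is a rough sketch, assuming a simple random sample (real polls layer weighting and turnout models on top of this), of the margin of error a 1,700-person sample carries:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion, in points."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(f"n = 1,700  -> +/- {margin_of_error(1_700):.1f} points")
print(f"n = 10,000 -> +/- {margin_of_error(10_000):.1f} points")
# About 2.4 points and 1.0 point respectively, and that is before any of the
# likely-voter and turnout assumptions that do the real scaling to millions.
```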

Having seen a bit behind the curtain of some of these models, I can tell you that many in this cycle were predicated on this being a “traditional” election. It was by no means a traditional election, particularly with Trump appealing to a unique cohort of voters, many of whom had sat on the sidelines in the last few cycles.

I’ve seen this same kind of thing happen in corporate circles, where a model that forecasted something last year is trotted out to forecast that same thing this year. Often only the data is changed, not the underlying logic or assumptions. When the model is inevitably wrong, corporations make the situation worse by branding models as “bad” and reverting to manual judgment calls alone. The far better choice is to examine the process that delivers and shapes those models to see whether any structural changes are called for in a given year.

 

4.     A committee of models approach is best

We need to move away from decades of forecasting as an individual endeavor. That is, a smart person retreats to a dark room, bangs out an algorithm that appears to match history, and then touts it as “the” model to watch.

The reality is a bit more complex—what really happens is that some models are good in some regimes and not so good in others. The longest history of computer-based forecasting is in weather: the very first practical mainframe computers were tasked with predicting what the weather would be like tomorrow or the next day. In weather forecasting, experts use a “committee of models” approach—not relying on any one forecast to magically carry the day, but rather using a collection of models to vote on the direction of the weather. Each model has a unique style and approach. If one model dramatically deviates from the rest, that result in and of itself is highly instructive. Doing the detective work to understand why is essential to understanding how the models portray the dynamics of the system at hand.
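
In a business setting the same idea can be expressed very simply. Here is a minimal sketch (the model names and numbers are invented for illustration, not drawn from any real forecasting system) of a committee that combines several forecasts and flags a dissenter for investigation:

```python
import statistics

# Toy "committee of models": each member produces its own forecast for the
# same quantity (say, next quarter's demand). Values are illustrative only.
forecasts = {
    "trend_model":         104.0,
    "seasonal_model":      101.5,
    "regression_model":    103.2,
    "judgmental_baseline":  88.0,
}

committee_view = statistics.median(forecasts.values())
spread = statistics.pstdev(forecasts.values())

print(f"Committee forecast (median): {committee_view:.1f}")
for name, value in forecasts.items():
    if abs(value - committee_view) > 2 * spread:
        print(f"  {name} deviates sharply ({value}); do the detective work.")
```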

 

5.     Never let Subject Matter Experts build models

It’s natural to assume that predictive models should be built by people who are experts in the field under study. However counterintuitive it may seem, that is the worst decision someone can make.

Subject Matter Experts, or SMEs, bring considerable bias into the process of model building from their years of living within the system under study. Those biases in turn reduce their ability to ask naïve and probing questions—questions that result in highly objective (and correct) models over time; models that express the true fundamentals of the system, free of structural leanings.

It is very common to see a political consultant who works for some party or interest group touting a model that they built. That’s a red flag. I would much rather see a team of diverse political consultants put their heads together and advise a “true” model builder—one who is grounded in statistics, process, mathematics, and data—on what such a model should contain and consider. That model builder WILL ask the naïve questions and force those SMEs to come clean about all of their biases (even the ones they don’t realize they have).

 

Summary

So love or hate the outcome, I hope that you will use this unique historical event to learn something new, and put it to practical use. Forecasting is a building block of analytical problem solving. If we build better models, we will likely make better decisions, and inevitably build better organizations.

About the Author:

George Danner is president of Business Laboratory, LLC, an award-winning firm that uses scientific techniques and methods to improve organizational performance. With more than 30 years of experience in corporate strategy, George keeps his finger on the pulse of the latest trends in the global data realm. He recently authored a book on Big Data business strategies, Profit From Science, which debuted as the #1 Bestseller in Business Mathematics on Amazon. To learn more about George and Business Laboratory, visit www.business-laboratory.com.