The value of forecasting

On Monday night, I attended the Long Now talk given by Philip Tetlock on the topic of Superforecasting. I was disappointed with the talk and thought it missed the point. But when I said that to a friend afterwards, he asked me what I wished Tetlock had talked about. So I’m going to use this post to lay out my own thoughts on forecasting, and why we do it.

Tetlock used forecasting competitions to find “Superforecasters”, which he defined as the forecasters who most accurately estimated the probability of events happening within 6 to 18 months. And this is where my problems with the talk begin – I don’t agree that the point of forecasting is necessarily to be accurate. And studying the forecasters who are most accurate over a limited time frame will teach different lessons about what works in forecasting than other evaluation criteria would.

So what criteria would I use to evaluate forecasts besides accuracy? One of my goals for the models and forecasts I build is to provide insight, and, more importantly, to change behavior. When I create a business model to see how an emerging business might evolve over the next few years, I have no illusions that the model will accurately predict revenue. A model can be completely wrong but still useful if it reveals the critical assumptions that drive revenue growth. As Patrick Pichette (Google’s former CFO) would ask me about such models, “What do I have to believe for this business to succeed?” And that’s a very powerful question.

If the model shows that certain business levers are critical for success (e.g. customer acquisition cost or retention rate), those become the metrics that the leaders of the business can use to determine if they are on track. They can test those assumptions early and change course quickly if the assumptions prove unattainable. This is what I mean about how forecasts can change behavior – by providing insight into which assumptions matter, they help business leaders decide what to do first, i.e. test those assumptions, as the sketch below illustrates.
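To make this concrete, here is a minimal sketch of the kind of model I have in mind (mine, not Pichette’s; every number and lever name below is a hypothetical placeholder, not real business data). The point is that perturbing each assumption shows which levers the revenue projection actually hinges on:

```python
# Toy revenue projection: which assumptions does the forecast hinge on?
# All numbers below are hypothetical placeholders, not real business data.

def project_revenue(marketing_budget, cac, monthly_retention,
                    revenue_per_customer, months=36):
    """Cumulative revenue over `months` from a few simple levers."""
    customers = 0.0
    total_revenue = 0.0
    for _ in range(months):
        # Retained customers plus new ones bought at the acquisition cost.
        customers = customers * monthly_retention + marketing_budget / cac
        total_revenue += customers * revenue_per_customer
    return total_revenue

baseline = dict(marketing_budget=100_000, cac=50.0,
                monthly_retention=0.90, revenue_per_customer=10.0)
base = project_revenue(**baseline)

# Improve each lever by 10% in isolation and see which moves revenue most.
for lever, factor in [("cac", 0.9), ("monthly_retention", 1.1),
                      ("revenue_per_customer", 1.1)]:
    scenario = dict(baseline, **{lever: baseline[lever] * factor})
    delta = project_revenue(**scenario) / base - 1
    print(f"10% better {lever}: revenue {delta:+.0%}")
```

In a toy run like this, a 10% improvement in retention swamps 10% improvements in the other levers, because retention compounds month over month – exactly the kind of “what do I have to believe?” insight that tells a team which assumption to test first.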

But note that accuracy is not one of the criteria I use to evaluate the success of the forecast or model. The real output of the model is not the revenue number – it is the insights gained by building the model, and the behavior change those insights drive. To misquote Eisenhower, forecasts are useless, but forecasting is indispensable.

Another thing I didn’t like about Tetlock’s talk was the focus on the short term of 6 to 18 months. That window is relatively easy to forecast – the world will mostly stay the same, and linear projections will get you most of the way there. The hard thing is predicting how the world will change in ten years (e.g. ten years ago smartphones barely existed, and now they are nearly ubiquitous in developed countries). Or, as Bill Gates puts it, “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” I was hoping that Tetlock would talk more about techniques for long-term forecasting, especially since he was speaking at the Long Now, an organization devoted to thinking about the next 10,000 years. But he ducked that question entirely when my friend asked it during the Q&A session. For those who are interested, I recommend reading Peter Schwartz, one of the Long Now founders, who describes long-term planning via scenario analysis in his book The Art of the Long View.
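For readers who haven’t seen scenario analysis, here is a minimal sketch of the 2x2 scenario-matrix exercise Schwartz describes (the uncertainties and labels below are invented for illustration). The enumeration is the trivial part; the value comes from writing the narrative for each quadrant and asking which of today’s assumptions survive in that world:

```python
from itertools import product

# Schwartz-style scenario matrix: pick the two most critical
# uncertainties, then flesh out the world in each combination.
# These axes and labels are invented for illustration.
uncertainties = {
    "pace of technology adoption": ["slow", "fast"],
    "regulatory climate": ["permissive", "restrictive"],
}

for combo in product(*uncertainties.values()):
    print("Scenario: " + " / ".join(combo))
    for axis, value in zip(uncertainties, combo):
        print(f"  {axis}: {value}")
    # The real work happens here: write the story of this world and
    # list the signals that would tell you that you are entering it.
```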

Tetlock did make the good point that pundits and analysts should be more probabilistic in their thinking and communication. He bemoaned the doubletalk pundits employ to avoid ever being wrong: “Punditry is the art of giving the appearance of going out on a limb, without actually going out on a limb.” But even as he made this point about probabilities, which I agreed with, I was disappointed that he didn’t mention confidence intervals around those probabilities – 70% +- 10% is very different from 70% +- 50%. And while he touched on Bayesian techniques, he could have done more to emphasize the importance of understanding priors and counterfactuals when developing forecasts. But I suppose confidence intervals and Bayesian probability might be too nerdy for most audiences.
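To illustrate why the interval matters, here is a minimal sketch of my own (not from the talk) that treats a “70%” forecast as a Beta distribution over the true probability. The parameters are hypothetical: both forecasters report 70%, but one holds that belief tightly and one loosely, and the same piece of evidence moves them very differently:

```python
from math import sqrt

def beta_stats(a, b):
    """Mean and standard deviation of a Beta(a, b) belief."""
    mean = a / (a + b)
    std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, std

# Both forecasters say "70%", but with very different confidence.
# (Hypothetical parameters; a Beta can't literally have +-50% around
# 0.7, so the loose prior just has far less weight behind it.)
priors = {"tight": (14.0, 6.0),   # mean 0.70, std ~0.10
          "loose": (1.4, 0.6)}    # mean 0.70, std ~0.26

for name, (a, b) in priors.items():
    mean, std = beta_stats(a, b)
    print(f"{name} prior:     mean={mean:.2f}, std={std:.2f}")
    # Conjugate update after observing one comparable event occur:
    mean, std = beta_stats(a + 1, b)
    print(f"{name} posterior: mean={mean:.2f}, std={std:.2f}")
```

The tight forecaster barely moves (0.70 to 0.71) while the loose one jumps to 0.80 – a distinction that disappears when both simply say “70%”.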

In the end, I was disappointed by this talk because I think Tetlock’s focus on accuracy in short-term forecasting detracts from the much more interesting (to me) topic of how to change behavior by developing insights while building long-term forecasts. So I’m writing about that here, since this blog is for “Deep Thoughts I Have Thunk” (tm Jofish).

One thought on “The value of forecasting”

  1. Hat tip to Noah for the link to this article on Why Forecasts Fail. The article is behind a paywall, but I liked this point from Noah’s summary: “… research in the 1970s collected a whole bunch of forecasts and compared how close they were to reality assuming that the more complex the model was, the more accurate it would be. The results, in the end, showed exactly the opposite, it’s the simpler models that outperformed.” I’m a big fan of simple models that drive behavior change as I write in this post.
