In the David Spiegelhalter podcast, there are several references to Philip Tetlock, the progenitor of super-forecasting. For decades he has been running competitions (tournaments, if you like) pitting experts against one another, to see who is best at predicting what will happen when the world faces a complex social problem, a pandemic for example.

When Tetlock started this work, he found that the experts (scientists, policy makers, journalists, foundation folk) did little better than chance. In fact, there was an inverse relationship between the fame of the expert and the accuracy of his or her predictions.

But some people, in some conditions, are good at the prediction game. And Tetlock has been learning from these people. I will pick out a few lessons under two headings: the people and the process.

The people

  • ‘Foxes’ (people who know a little about many things) are better than ‘hedgehogs’ (people who know a lot about one thing)
  • People who think in terms of probability, and in fine-grained terms: ‘the chance of this happening is 0.73’ (forecasters assign a score between 0, it will never happen, and 1, it will certainly happen)
  • People who can work in teams, and will alter their view according to reasoned argument from others
  • People who can admit when they are wrong, and correct course accordingly

The process

  • Gather evidence from a variety of sources
  • Don’t predict too far out: asking what will happen in the next year will reap better results than asking what will happen in the next decade
  • Keep score: find out whether each prediction was right, and learn from the wrong ones
  • Hold the predictors to account (even the most confident, assured predictor will become more circumspect if they know their predictions are going to be compared with others in the team)
  • Bind individual accountability to collective accountability. A common tactic in super-forecasting, as described in the podcast, and used for example by Kennedy in trying to figure out how to respond to the Cuban Missile Crisis of 1962, is to have two teams of analysts working independently of each other on the same data sources, a tactic that could be used much more in our work.
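The ‘keep score’ step above can be made concrete. Tetlock’s forecasting tournaments grade probabilistic predictions with the Brier score: the mean squared difference between the forecast probability (a number between 0 and 1, like the 0.73 above) and what actually happened (0 or 1). The sketch below is illustrative, not taken from the podcast; the toy forecasts are invented for the example.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0-1) and
    outcomes (0 = did not happen, 1 = happened).
    Lower is better: 0.0 is a perfect record; always answering 0.5
    scores exactly 0.25."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A fine-grained forecaster versus a hedger who always says 50:50.
fine_grained = brier_score([0.73, 0.10, 0.90], [1, 0, 1])  # ~0.031
always_half = brier_score([0.50, 0.50, 0.50], [1, 0, 1])   # 0.25
print(fine_grained, always_half)
```

Keeping a running score like this is what lets a forecaster (or a team) see whether they are actually well calibrated, rather than merely confident.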

It is relational

There is much more to say in terms of method, and Tetlock’s book with Dan Gardner referenced below is a great place to start.

But the take home for me is the relational aspect of this work. There is the relationship between analyst and data, the curiosity, an appropriate level of doubt, the preparedness to come to a conclusion, to decide, to be wrong (and learn from being wrong). And there is the relationship between the analysts, the readiness to work in a team, to take into account divergent views, and where necessary adapt.

None of us in the network are in the super-forecasting game. But, as we shape this third era, we might include some of these relational principles.

Read more: Philip Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction (2015)