We are not in the super-forecasting business, but we are, I would argue, in the prediction business. As I read the case studies I see the following kinds of prognosis:

  • Putting the users of services in control of those services leads to something better for those people
  • Connecting the prime movers in a place leads to stronger, more effective civil society organisations
  • Bringing people from diverse places to learn together leads to smarter, better use of foundation funds
  • Stripping away the doubt and constraint embedded in funders’ accountability structures leads to innovation and a better deal for citizens.

We all go about our work as best we can. We know we can do better. But in the end we have to say what we have found. We conclude. Or, in our approach to learning, each week or month we use data to decide what we will do in the next.

How do we know what we say is true?

I am going to turn to a couple of sources to answer this question. The first is Steven Pinker, whose work has popped up in earlier network meetings. His books tend to follow a similar structure. He takes a problem and breaks it down into bite-size (he has a big metaphorical mouth) chunks. He then goes through a series of steps:

  • He describes what has happened, the historical decline in violence, for example
  • He then sets out some explanations (plural) for what has happened (it could be this, but then again it could be that) and comes to some conclusion about his favoured candidates
  • He then looks at exogenous factors, those external to the thing he is trying to predict: does the rise in the power of the state change patterns of violence, and, importantly, does the chicken of the state come before the egg of reduced violence?
  • He then looks for counter-examples, countries or contexts where violence has increased in the face of a global reduction, and
  • He is particularly interested in the exceptions, the places that don’t accord with his emerging conclusion.

Pinker is very clear in his views, but concludes each section by saying ‘but maybe I am wrong’. He then looks at the evidence relevant to his doubts. Again he is clear, and again he raises doubts. And so it goes on, until in the end the reader feels every stone has been turned and finds herself crying out ‘OK, I believe you, it must be true’.

The second example is better known. Austin Bradford-Hill devoted his life to understanding whether smoking causes cancer. At the beginning of his career the assumption was that it did not. By the end of his career, despite the best efforts of the tobacco industry, the assumption was that it does.

Bradford-Hill didn’t rely on a single study or type of study. He looked at the issue from different ‘viewpoints’:

  • Is there a correlation between smoking and cancer? Do the results of studies show similar patterns? Is the correlation strong or weak?
  • Is there any evidence of the direction of effect? Does the smoking come before the cancer, or does the cancer come before the smoking? (Direction is one of the most neglected aspects of learning)
  • Is there a dose effect? Do heavy smokers succumb to cancer at a higher rate than light smokers? What happens to people who stop smoking when young? (See the sketch below)
  • What is the mechanism of change? Why would smoking cause cancer? (Again, this is a much neglected part of learning. In my field, I hear lots of people telling me that relationships are fundamental to well-being, but how does a relationship get into the body? How does it change health and development?)
  • What is the fit across the evidence base? Is there coherence between emerging explanations for the data? In the case of smoking, for example, does the evidence accord with what is understood about the causes of, say, colon cancer?

The list of viewpoints goes on. Even after they have all been exhausted, Bradford-Hill would not claim to have proof. He would say something like ‘at present, I can find no better explanation for what we are seeing’.
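To make two of the viewpoints above a little more concrete, here is a minimal sketch in Python using an invented cohort. Nothing in it is Bradford-Hill’s own data or method; the Record structure, the field names and the numbers are all assumptions made purely for illustration. It shows how one might look for a dose effect (does the cancer rate climb with the number of cigarettes smoked?) and why recording exposure before the outcome is what gives you the direction of effect.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Record:
    cigarettes_per_day: int   # exposure, recorded at study entry
    developed_cancer: bool    # outcome, recorded only after follow-up

# Invented numbers, purely for illustration.
cohort = [
    Record(0, False), Record(0, False), Record(0, False), Record(0, True),
    Record(10, False), Record(10, True), Record(10, True), Record(10, True),
    Record(30, True), Record(30, True), Record(30, True), Record(30, True),
]

def cancer_rate(records):
    """Proportion of a group that developed cancer."""
    return mean(1.0 if r.developed_cancer else 0.0 for r in records)

# Dose effect: does the outcome rate rise as the level of exposure rises?
for dose in sorted({r.cigarettes_per_day for r in cohort}):
    group = [r for r in cohort if r.cigarettes_per_day == dose]
    print(f"{dose:>2} cigarettes/day: cancer rate {cancer_rate(group):.2f} (n={len(group)})")

# Direction of effect: in this toy design the exposure is recorded at entry and
# the outcome only after follow-up, so the smoking is known to precede the cancer.
```

On this invented data the rate climbs from 0.25 to 1.00 as the dose rises. With real data, a check like this would be only one small piece of the wider pattern Bradford-Hill asks us to assemble before we say anything about cause.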


Read more: Austin Bradford-Hill, ‘The Environment and Disease: Association or Causation?’, Proceedings of the Royal Society of Medicine, May 1965.