Arguably the strongest contribution of this new way of thinking, this second era thinking, was in the analysis of impact. I have tried to condense the application into a single diagram.

Let’s say we are in Walworth, the place we visited last time. It has a population of roughly 40,000, including 8,000 or so children and adolescents. It wouldn’t be surprising to find there are about 1,000 16- and 17-year-olds in Walworth. This could be our starting point.

We ‘screen’ the 1,000 adolescents with the SDQ (the Strengths and Difficulties Questionnaire), and if Walworth is like most places in England we will find about 200 with features of a mental health disorder.

The next step is to divide the 200 randomly into an intervention group (N=100) and a ‘treatment as usual’ or TAU group (N=100).
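For readers who like to see the nuts and bolts, here is a minimal sketch of that allocation step in Python. Everything in it, from the ID numbers to the fixed random seed, is invented for illustration; real trials use more careful allocation procedures (concealed, often stratified), but the principle is the same: chance, not judgement, decides who goes where.

```python
import random

# 200 screened adolescents, identified here only by a made-up ID number
screened_ids = list(range(200))

# Shuffle and split 50/50 so that allocation owes nothing to severity,
# family circumstances, or anything else the researchers can see
random.seed(42)                      # fixed seed purely so the example repeats
random.shuffle(screened_ids)
intervention_group = screened_ids[:100]   # offered the intervention
tau_group = screened_ids[100:]            # treatment as usual

print(len(intervention_group), len(tau_group))   # 100 100
```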

Then we offer the intervention group a treatment thought likely to improve their mental health. Let’s imagine it is Functional Family Therapy, described below. The other 100 get what they would ordinarily have received: for some nothing at all, for others another intervention from CAMHS.

Then we wait. The analysis doesn’t enter what is called the ‘black box’ of intervention. It waits to see what emerges from the other side. It waits six months or 12 months (never seven months and 12 days) and applies the SDQ again.

Now we have two sets of scores. More or less everything has been held constant. All the young people live in Walworth. All are agreed to have a conduct disorder. All got tested at two points in time. The only thing that differs is that half got a specified intervention, Functional Family Therapy, and half got TAU, ‘treatment as usual’. If the data indicate that the scores for the intervention group are better than those for the TAU group, we can be reasonably sure that it is the intervention, and not some other variable, making the difference.
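To make the comparison concrete, here is a sketch of the sort of test a statistician might run on the two sets of follow-up scores, using a standard two-sample t-test from scipy. The scores are invented and far too few; real trials recruit 100 per arm partly because samples this small can hide a genuine difference.

```python
from scipy.stats import ttest_ind

# Invented follow-up SDQ scores; on the SDQ, lower is better
intervention_scores = [17, 21, 14, 19, 16, 22, 18, 15, 20, 13]
tau_scores = [18, 22, 15, 20, 17, 23, 19, 16, 21, 14]

# A two-sample t-test asks whether the difference in group means is
# bigger than we would expect from chance alone
stat, p_value = ttest_ind(intervention_scores, tau_scores)
print(f"t = {stat:.2f}, p = {p_value:.3f}")
```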

Generally speaking, a scientist will translate the change into what is called an effect size, a standard calculation that allows us to compare the impact of all kinds of interventions. A negative effect size shows that things got worse. An effect size of 0.2 is modest but worth getting out of bed for. An effect size of 1.0 suggests somebody has got their sums wrong, because it is never that good.
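For the curious, here is the effect size calculation itself, again with invented scores: Cohen’s d, the difference between the two group means divided by their pooled standard deviation.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference between two sets of scores (Cohen's d)."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation across the two groups
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Invented follow-up SDQ scores; on the SDQ, lower is better
tau_scores = [18, 22, 15, 20, 17, 23, 19, 16, 21, 14]
intervention_scores = [17, 21, 14, 19, 16, 22, 18, 15, 20, 13]

# Positive d here means the intervention group scored lower (better) than TAU
print(round(cohens_d(tau_scores, intervention_scores), 2))
```

For these made-up numbers d comes out at roughly 0.33: modest, but in the terms above worth getting out of bed for.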

I am being a little flippant about this. 400 words cannot tell the whole story. It is much more complicated than this.

On the other hand, this is also the essence of what it takes to become an ‘evidence-based programme’. Three trials like this on a single intervention will get it onto most lists of effective programmes around the world.