A little too much on machine learning I think. I don’t know enough about it to capture in a few words what it is all about. I am not for or against machine learning. I just say that it's coming. Like Julia Unwin, leader of the Civil Society Futures inquiry, I suggest it is down to us to make it a force for good, and not evil.

But my purpose in sharing with the network goes beyond that. It is making me reflect on how we learn. And I wonder if it may encourage us all to reflect. Let me give two blocks of examples.

The gaps in the era 2 approach

In Input 32 above, I suggested three gaps in era 2 analysis:

  • the failure to handle multiple outcomes
  • the inability to take account of the context around the individual, and
  • the reliance on before and after measures, missing estimates of continuous change.

The analysis of the Functional Family Therapy data suggests that these gaps will be filled in era 3.

How we learn

Less than ten per cent of what I do involves machine learning. But much of the remaining 90 per cent is beginning to be influenced by how machines learn. A few examples.

What do we mean by data? In era 2, for good and for ill, we had lots of measures, some collected with paper and pencil, some later on using laptops and phones. All data that required someone to do something just for the sake of research and learning. In the Functional Family Therapy example, we are analysing data that is recorded for purposes other than research and learning. The rough notes of a clinician. In another context it might be an audio recording of routine conversation. Or a video record. CCTV footage is another example. This is all ‘passive data’, the detritus of other activity, information that would otherwise go to waste.

What do we mean by outcomes? In era 2, we did what we could. We focused on outcomes that appeared to matter to people, adolescent mental health for example, or good quality family relationships. We developed measures. But we didn’t know how to combine the information, to think about a person in terms of their health and their development and their relationships with family, friends and work, and, and.... In the future, we should be able to make these sorts of calculations, looking not for 1 (everything is good) or 0 (everything is bad) but for a pattern, a patchwork of the pluses and minuses that are the stuff of life.
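The ‘patchwork’ idea can be sketched in a few lines of code. Everything here is invented for illustration: the domains, the scores, and the 0.5 cut-off are assumptions, not measures from the Functional Family Therapy work.

```python
# Hypothetical outcome scores for one person, each between 0 (bad) and 1 (good).
# Instead of reducing a life to a single 1 or 0, we hold the whole pattern.
outcomes = {
    "mental_health": 0.7,
    "family_relationships": 0.4,
    "school_or_work": 0.9,
    "friendships": 0.2,
}

def summarise(pattern, threshold=0.5):
    """Split domains into strengths and difficulties rather than one verdict."""
    strengths = sorted(d for d, score in pattern.items() if score >= threshold)
    difficulties = sorted(d for d, score in pattern.items() if score < threshold)
    return strengths, difficulties

strengths, difficulties = summarise(outcomes)
print("Strengths:", strengths)
print("Difficulties:", difficulties)
```

The point of the sketch is the return value: two lists rather than one number, the mix of pluses and minuses the paragraph describes.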

What do we mean by change and change mechanism? We have talked about logic models before. A tool that has helped make sense of the world of intervention. A tool much loved by foundations in the U.K. We have talked about their limitations. About how they urge us towards a linear story that goes from bad to good. Michael was an alcoholic. He went to the Lloyds Foundation Super Programme. Now he goes to Cambridge University. Now we are beginning to think about dynamic models, models that capture the battle between the things that encourage us towards progress and the things that drag us back. Here is a link to some field notes for our partner WEvolution describing how such a model works for them: https://welearn.ratio.org.uk/12-a-dynamic-logic-model/
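A toy simulation can make the contrast with a linear logic model concrete. All the names and numbers below are hypothetical; the only idea taken from the text is that progress at each step is the balance of forces pushing forward and forces dragging back.

```python
# A dynamic model as a tug-of-war: state moves each week by the net of
# encouraging forces and drags, rather than travelling one way from bad to good.

def step(state, pushes, drags):
    """Advance the state by the balance of forces, kept in a 0-1 band."""
    change = sum(pushes) - sum(drags)
    return max(0.0, min(1.0, state + change))

state = 0.5  # a hypothetical starting point, midway between struggle and progress
history = [state]
for week in range(4):
    pushes = [0.10, 0.05]  # e.g. a supportive group, a new routine (invented values)
    drags = [0.08]         # e.g. old habits reasserting themselves (invented value)
    state = step(state, pushes, drags)
    history.append(round(state, 2))

print(history)
```

Change the balance of `pushes` and `drags` week by week and the trajectory dips and recovers, which is closer to the stuff of life than a straight line from alcoholic to Cambridge.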

The recovery of truth, meaning and ethics. In era 2, one truth was the result of research conducted with what was described as the ‘gold standard’ of method. Richard has an innovation. I evaluate it using a randomised controlled trial. I find the innovation works. Ergo. It works. That is the truth. The machine asks that humans write down a ‘ground truth’ before it begins its work. We, for example, have been studying videos of strangers meeting to work out the ‘ground truth’ of when a conversation is or is not taking place. The machine takes that truth and looks for the patterns that may predict it. As described above, the machine cannot ascribe meaning to data; it can only identify patterns. The human ascribes meaning. What a responsibility! The question is back on the learning agenda. Is there any meaning in this? And ethics. We have reflected on Berwick’s observations on the moral absence in era 2, the gaming of results for example. In era 3, ethics comes centre stage.
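The division of labour described here, humans write down the ground truth, the machine only hunts for patterns that predict it, can be sketched with entirely invented data. The features (mutual gaze, turn-taking) and the nearest-neighbour rule are assumptions for illustration, not the method used in the video study.

```python
# Human-labelled ground truth: (mutual_gaze_seconds, turn_taking_rate) -> label.
# The labels are the humans' judgement of whether a conversation is taking place.
labelled = [
    ((12.0, 0.8), True),   # humans judged: a conversation
    ((10.5, 0.7), True),
    ((1.0, 0.1), False),   # humans judged: not a conversation
    ((0.5, 0.0), False),
]

def predict(features, examples):
    """1-nearest-neighbour: copy the label of the most similar labelled case.

    The machine never decides what a conversation *means*; it only finds
    which past human judgement the new pattern most resembles.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: distance(ex[0], features))
    return nearest[1]

print(predict((11.0, 0.75), labelled))  # resembles the 'conversation' cases
```

Notice where the responsibility sits: if the humans label the videos badly, the machine faithfully reproduces the bad judgement. The meaning, and the ethics, stay with us.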

When do we stop learning? We have projects. We have an innovation and we learn about it for a year, or two, or until the funding stops. As long as there is new data the machine keeps on learning. Is that beyond us humans? To have learning integrated into daily life, continually asking us how we can improve?

What do we mean by collaboration? The Functional Family Therapy work is a collaboration. It involves Care4 and FFT. They have the data, we don’t. We have a question they need answered. It involves Open Lab. They have engineers, we don’t. They need data to train their machines. It involves a machine. It sees things we don’t. We see things it doesn’t.