Iteration: the act of deliberate learning

“For a train of thought is never false.  The falsehood lies deep in the necessities of existence, in secret fears and half-formed ambitions, in the secret confidence combined with a secret mistrust of ourselves, in the love of hope and the dread of uncertain days.”

Under Western Eyes, Joseph Conrad

I have a particular memory of a meeting early in my career as a civil servant, probably in 2004.  I was a junior member of the team, surrounded by civil servants in their 40s and 50s, and we were discussing a policy issue.  I can’t remember exactly what it was (I was working on higher education policy at the time), but I do recall a sense of bewilderment at the fatalism of almost everybody else around the table as one idea after another was dismissed as ‘old hat’ - something we’d tried before that hadn’t worked.  The meeting sticks in my head because I remember thinking that I couldn’t let myself get to that stage, so cynical about the prospect of change that I would no longer explore options.  

It is hard to reconcile that tone of fatalism with the sense of purpose that existed in the DfE for most of my career there.  I don’t count myself lucky to have worked with people who were genuinely passionate about the cause of education because I think that is the norm across the Department (something I know will not seem obvious to those outside).  Most of my civil service colleagues were motivated by a clear sense of social purpose, of wanting to make a positive contribution to society. 

So how could that sense of purpose sit alongside a cynical dismissal of ideas?

The answer lies, at least in part, in the nature of policy-making and the way that risks are managed, the way ideas need to be presented to get support and funding, and the inability of policy-makers to really learn from past efforts.

As I reflect on dozens (hundreds?) of policy conversations, out-of-hand dismissals seldom claimed that an idea wouldn’t work; far more often they reflected on how an idea would go down in the system.  In other words, as ideas are discussed, the risk of their being criticised or challenged creates a disincentive to act and adds a level of caution to the process that can kill or neuter a promising train of thought.  “We can’t do that because ‘x’ will kick off” is a more common consideration than “We can’t do that because it won’t work.”  The converse is also true - ideas are taken forward because they resonate with a key audience rather than because they are likely to have any practical impact.  This is an essentially cynical bit of the process that cannot help but breed cynicism in the policy-makers involved.

In addition, most policy ideas suffer from over-promising and under-delivering.  To secure buy-in, a policy needs to justify the investment within a short time frame.  Ministers want to see results, ideally within their tenure in office - which can be very short indeed, and even when it isn’t (see Nick Gibb), the threat of it coming to an end at any moment hardly encourages longer-term thinking.  His Majesty’s Treasury (HMT) needs a return on the investment, almost always within the spending review timeframe.  So, if you are lucky, a policy might have three years to prove its worth before funding is at risk.  And of course many policies are responding to urgent issues and need to show results quickly or be discredited.  So benefits get inflated, either in terms of the scale of change (for example, promising to deliver 150,000 NPQ places in three years, which was always extremely unlikely) or the pace of change (the current narrative around the small boats crisis offers a good example).  When those benefits inevitably fail to be realised, the cynicism of all involved ramps up - the sector feels let down, the officials bounced into unattainable policy delivery feel resentful, ministers feel embarrassed, and mistrust at HMT increases again.

One of the inevitable casualties of the rush to get results is proper evaluation.  This isn’t because policy-makers do not see the virtue of establishing the effectiveness of a policy - especially in the early days when there is some momentum behind it - but rather because there is no incentive to complete an evaluation of a policy that has already been deemed to have failed (for example, because it did not achieve its unattainable aims) or that will continue regardless of the evidence.  

One example that comes to mind is the ill-fated National Teaching Service (NTS).  Launched by Nicky Morgan in 2015, touted as part of the fabric of the ‘school-led system’ at the ASCL conference in February 2016, the NTS was intended to help solve the issue of teacher shortages in designated areas by encouraging teachers from other parts of the country to relocate.  Initially allocated funding for a pilot cohort of 100 teachers in the first year (from September 2016), the vision was to establish an infrastructure to become a core part of the system in the future, with thousands of teachers supported to move into areas with stubborn recruitment challenges.  Yet, by December 2016, the scheme had been dropped as only a handful of teachers had been persuaded to sign-up to the pilot.

I was at the ASCL conference in February 2016, and I inherited ownership of the NTS later in my career at the DfE, after it had already been dropped as a policy priority.  But although the numbers who had signed up to the NTS were low, there were still teachers who had - in good faith - engaged with the government programme and who needed to be supported financially as promised, so a member of my team was responsible for the ongoing management of the scheme.  Protecting that resource was hard - nobody in the Department wanted to keep funding a dead policy when money was tight - but even more difficult was protecting the evaluation of the NTS, of a policy that was already deemed to have failed and that nobody wanted to talk about any longer. 

That mattered because the idea of moving teachers around the country came up repeatedly in conversations with new ministers and SpAds over the following years, and it felt important to be able to share evidence about the failure of the NTS - to be as clear as possible about what had worked and what had not.  That way we could consider whether persuading teachers to move was a possible solution, avoiding past mistakes and building on what had come before. 

Unfortunately, the sad truth is that policy-makers seldom engage with evaluation even when it has taken place, which is why so many ideas feel like we have seen them before - often we have, without having learned anything from them.

Iteration, iteration, iteration - that’s all you need! (h/t Roy Castle)

In our book, Nansi and I show that the way out of this cynical cycle is to embrace iteration, to create a policy-making process that is deliberate in seeking to learn from what has worked well and what hasn’t, and to build on past efforts rather than rush to scrap them (and then too often repeat them unthinkingly).  We suggest four models that can be quickly implemented to begin to lead to better outcomes:

  • End (or fixed) point evaluation - creating a clear timetable for measuring the success of the implementation and delivery of a programme with a commitment to reflect on what is learned;

  • Prototyping - testing different models and ideas earlier in a policy-making process before settling on a ‘final’ model to roll-out;

  • Better piloting - giving policies the time and opportunity to develop and change based on evidence (imagine, for example, that the NTS pilot had actually been a pilot and, rather than being scrapped, the DfE had iterated); and,

  • Ongoing review - with clear parameters and timescales so that interventions can be assessed as they are delivering over a longer period of time, giving them the opportunity to embed and prove their worth, as well as demonstrate areas for improvement.

Each of these iterative processes exists in some way in the policy-making process already - indeed there are teams of officials at the DfE dedicated to evaluation and learning.  But they are not core to policy-making and are too easily ignored and set aside in the name of short-term pressures. Had the NTS been developed and rolled out using some of these approaches, we would not have declared it a failure mere months after it launched - instead we would have made the effort to learn what had not worked and created a framework within which the policy could evolve in the future.
