The Evaluation Task Force
The Minister’s speech includes a passage titled STRONGER ASSESSMENT OF PUBLIC PROGRAMMES, in which he refers to the Evaluation Task Force, established to better inform decisions on whether programmes should be continued, expanded, modified or stopped.
Mr Quin said:
“A productive public sector is not one which is too shy to accept that not everything works. In the commercial world it’s known, recognised, embraced. We need to lose our hang ups.
But we can and must learn from our successes.”
I have some history with this topic.
I was my council’s point of contact with public health officials in the 1990s, and they happened to be significant contributors to the rapidly growing evidence-based medicine movement. Not long after, I worked with Crime Concern, our police Area Commander and the Minister for policing at the time on the early phases of statutory community safety partnerships.
I took an idea to Oxford Brookes University, and we jointly funded a post to which we hired a quantitative social scientist. She split her time between teaching undergraduates to use statistics and working inside the council alongside policy colleagues, adding rigour to the business cases they made for interventions and projects.
Case studies
Sure Start was introduced in 1998 to improve the life chances of children aged 0-4 years by enhancing community-based services. Multiple evaluations, including the National Evaluation of Sure Start (NESS), indicated no significant positive impact, but later studies found longer-term benefits, especially in disadvantaged areas. In response, the decision was taken to transform the programme into Children’s Centres, with a stronger focus on the most disadvantaged areas.
The Work Programme for Unemployment commenced in 2011 to help long-term unemployed individuals find and retain jobs. The Department for Work and Pensions published a series of official statistics which showed that while it helped some groups, there was criticism about its effectiveness, particularly for those with more significant barriers to employment. The scheme was replaced by the Work and Health Programme in 2017, which has a sharper focus on those with health conditions or disabilities.
Community Treatment Orders (CTOs) in Mental Health were introduced in 2007 to mandate certain patients to follow specific treatments while living in the community. Various studies, including one by the University of Manchester in 2013, found no evidence that CTOs reduced the rate of readmission to psychiatric hospitals, and there were concerns about potential infringements on patients’ rights. This has led to an ongoing debate and, while CTOs haven’t yet been ended, there are calls for clearer guidance and a more evidence-based approach to their use.
So I’m an early adopter of relying on evidence to select between options in public policy.
But not an evangelist.
The magic of “social investment”
In the 2000s and 2010s the idea of Social Investment Bonds became popular, and one was put in place at Peterborough prison.
An evaluation report published in 2014 said “Figures released this week show that the mentoring and support provided by the charities resulted in a fall in reconvictions in the first Peterborough cohort of 8.4 per cent when compared to a larger national group with similar characteristics. This is encouraging, and provides useful data for future policy-making.”
Although “encouraging”, the data were below the threshold needed to pay out to investors.
Nick Clegg, then Deputy Prime Minister, launched more Social Investment Bonds.
And then the initiative faded away as reductions in public spending degraded outcomes across multiple areas of public policy.
I wrote a blog article criticising the magic of ‘social investment’ in 2014, making the argument that no-one could establish satisfactorily which factors led to the policy outcomes that various initiatives delivered.
It isn’t obvious that this issue of ‘confounding factors’ is fully addressed in the current approach – but this is grounds for caution, not scepticism.
Nudges and the replication crisis
The idea of policy ‘nudges’ that could deliver profound changes in behaviour also took off during the 2010s – but like ‘social investment bonds’, it took some knocks more recently.
A typical nudge is a change on a form from an opt-in to an opt-out.
This nudge upped the numbers of people choosing to pay into workplace pensions.
But it also reduced the numbers of people making the effort to pay more than the bare minimum.
And this has proved to be a common unintended consequence of nudges – you do the thing you’ve been nudged towards, and you don’t think about going beyond that behaviour to contribute a bit more, exercise for longer, etc.
The Financial Conduct Authority found that mandatory risk warnings on investments reduced consumers’ perception of the variation in riskiness between different offers, in some cases leading to worse choices than before.
In the field of experimental psychology more generally, it is widely accepted that there is a ‘replication crisis’. So many classic experiments, thought to ‘prove’ something enduring about human psychology, have produced different results when re-run that the whole field has been brought into doubt.
Be guided but be cautious
Mr Quin last week said:
“…the Evaluation Task Force is launching the Evaluation Registry, which will provide, for the first time, a single online focus for evaluations across government.
The Evaluation Registry has been built from the ground-up to be best-in-class in driving evidence-based policy making. When it launches, it will be one of the biggest stores of information on social policy evaluations in the world, containing over 2000 evaluations from the outset.”
So how do we welcome this backing for attention to evidence while avoiding the traps that have been problematic previously?
By looking for the opposite of what we want to find
Richard Feynman (who shared the 1965 Nobel Prize in Physics) in 1974 explained the difference between science and ‘cargo cult science’ by urging scientists to “put down all the facts that disagree with [your theory] as well as those that agree with it.”
“…the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.”
Now if we treat this as a behavioural injunction, it won’t get far.
But if we methodically distribute it across the way we govern and manage programmes of work – it just might.
And this is the argument I want to go on to make.
Proper, cautious adoption of a commitment to evidence-based policymaking has to be embedded in thinking about programme governance, because only that will force people to at least acknowledge the principles that must be observed if the approach is to make a difference to outcomes.
And in this respect, it resembles other valuable disciplines and approaches.
Good risk management. Commitments to equality of treatment. Prudent stewardship of public funds. Pick any valuable generic approach to managing public services you can think of.
None of these things flourish without being embedded in good governance arrangements.
So in another article, we’ll begin exploring what good governance arrangements are, and how to go about developing them.
Ben Ticehurst
Prospect Law is a multi-disciplinary practice with specialist expertise in the energy and environmental sectors, and particular experience in the low carbon energy sector. The firm is made up of lawyers, engineers, surveyors and finance experts.
This article remains the copyright property of Prospect Law Ltd and neither the article nor any part of it may be published or copied without the prior written permission of the directors of Prospect Law.
This article is not intended to constitute legal or other professional advice and it should not be relied on in any way.