Evaluation questions the value of aid interventions. We need to keep warranting the value of evaluation.


As evaluators, do we know what value we bring? The question is not new, and it has been nurtured by the rise of ‘post-truth politics’ (KALPOKAS, 2019). Despite the many discussions on the relevance and utility of evaluations, the issue remains unresolved and merits attention.

The COVID-19 pandemic has exposed and exacerbated many pre-existing issues worldwide. Evaluators are no exception: since March 2020, additional questions have arisen as to how to ensure that evaluations offer useful contributions to their intended users. Driven by the need to continue supporting learning and accountability, evaluations have adopted new ways of working and have, to a large extent, turned virtual. Do we know how this has affected the utility of our work? While we acknowledge the new limitations posed by the pandemic, we also need to address this question.

What has evaluation done so far to promote its utility and follow through on its impact?

There has been increasing attention to promoting the utility of evaluations. Over the last decade, publications and guidance on utilization-focused evaluation have proliferated. They speak of stakeholder engagement and participatory approaches, prioritization of findings, sound communication of results, and follow-up processes. It is therefore not that we have not tried. Still, experience keeps demonstrating that these good principles are often not enough to make evaluation influential in decision-making, whether at the political or the community level, and whether decisions are collective or individual.

So what could be missing to bring evaluations to their full potential?

Evaluation processes could lean more on behaviour change theories. Decision-making, like any action-taking, is at the core of these theories (see FRENCH et al., 2012 and PROCHASKA, 2008). Yet how often do evaluations integrate behaviour change models into their processes? We often seek to explain the pathways of change of the interventions we evaluate through theories of change or other models; why not apply the same logic to understand the dynamics influencing the use of our evaluations? Doing so could help evaluators influence the attitudes and behaviours of intended users so that they make decisions based on findings and recommendations.

Evaluations also need to tell good stories that grab attention and appeal to the emotions of audiences. The reports we produce to communicate evaluation results are often not only too long but also fragmented or too detailed to offer clear messaging. Storytelling is an “important messaging tactic that scientists need to learn to make use of in their communications strategies” aimed at influencing decision-making (DAVIDSON, 2017). Yet it is under-used. Are we telling a compelling story?

Evaluations, finally, need follow-up processes that are serious and long-lasting. We have processes to follow up on the actions induced by our recommendations. Yet even here the process has its flaws. Needless to mention those evaluations that never even receive a response from those directly in charge of addressing the issues they raise. Even when action plans do exist, time can turn much of the nuance underpinning evaluation recommendations to dust, and follow-up often becomes attached to narrow indicators of action. The ‘spirit’ of the change required is lost. The dilution of ownership partly explains this: who can hold the torch of these promises for years, when everyone has other priorities to attend to? Efforts to accompany the transfer of ownership of recommendations from evaluators to programme managers, through whatever dialogue mechanism, do help a great deal. Yet, again, the dialogue often does not last, and the benefits of evaluations diminish.

What if we thought outside the box?

The logic that links evaluation to programme improvement builds on an architecture that clearly distinguishes the independent evaluator’s ‘territory’ from that of stakeholders who take action. Enhancing utility calls for finding ways to improve this link, without breaching that distinction.

Putting someone in charge of the post-evaluation action plan might be a good start to increase the likelihood that recommendations lead to adequate action. If programme managers have another job, if evaluators have moved on to other tasks, and if shared responsibility clearly does not work so well, can we finally decide whose job it should be?

Just as important, is it not time to consider evaluation results as a starting point rather than an end, and to give tailored attention to their use after the evaluation is completed? Today, evaluations are seldom understood as continuing beyond the publication of a report or the conduct of an end-point workshop. Maybe we could change our long-standing mental structures…

A solution to improving the use of evaluation products may lie in encouraging evaluators to adopt a utility-focused process that builds on behaviour change models and storytelling strategies, and to factor in dedicated efforts for the post-report period with as much seriousness as they give to the design and conduct of the analysis itself.

Finally, tackling use also requires investing in concrete strategies and resources to support effective communication of evaluation messages to decision-makers. This is a world of under-tapped options for evaluators. Many agencies have already invested in such functions, which call for dedicated and expert efforts designed to give evaluation knowledge the best chance of serving its intended purpose: improving future action.

 

References

DAVIDSON, B. 2017. Storytelling and evidence-based policy: lessons from the grey literature. Palgrave Communications, 3. Available at: https://doi.org/10.1057/palcomms.2017.93

FRENCH, S., GREEN, S., O’CONNOR, D. et al. 2012. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implementation Science, 7, 38. Available at: https://doi.org/10.1186/1748-5908-7-38

KALPOKAS, I. 2019. Post-truth: The Condition of Our Times. In: A Political Theory of Post-Truth. Palgrave Pivot. Available at: https://doi.org/10.1007/978-3-319-97713-3_2

PROCHASKA, J. 2008. Decision Making in the Transtheoretical Model of Behavior Change. Medical Decision Making, 28(6). Available at: https://journals.sagepub.com/doi/10.1177/0272989X08327068