The skeptical turn in evaluation (and what to do with it)


Impressions on the keynote speech by Estelle Raimondo and Peter Dahler-Larsen at the EES Conference, 10 June 2022 

In their thought-provoking, data-packed keynote address to the 14th EES conference in Copenhagen last month, Peter Dahler-Larsen and Estelle Raimondo asked participants to recognize that sometimes evaluation is more of a problem than a solution. Taking stock of the growth of evaluation as a practice and as a discipline, they argued for a better balance between the benefits and costs of evaluation systems.

What happens when evaluators turn their gaze onto themselves? Sometimes this leads to navel-gazing and self-congratulation, but that is not what Peter Dahler-Larsen[1] and Estelle Raimondo[2] had in store for participants of the 14th annual conference of the European Evaluation Society (EES) in Copenhagen last month. Instead, they delivered a healthy dose of grounded, fact-based self-criticism.

Challenging the “ideology that there is no end to accountability and learning and enlightenment, if only we evaluate more,” they argued for a “skeptical turn”, then went to great lengths to explain that this does not mean they view evaluation negatively. Simply put, the “skeptical turn” means that evaluation systems should be assessed as rigorously and critically as evaluators assess their “evaluand”: in terms of their utility and impact, as well as their cost-effectiveness.

Their fundamental argument – and I believe it is a very strong one – is that evaluation has been a victim of its own success: the growing institutionalization of evaluation in public agencies has led to the function being applied indiscriminately across entire portfolios of people or programmes, whether or not an evaluation has any likelihood of helping. Turning evaluation into a mandatory process does not apply critical thinking where it is needed most. In the speakers’ economic terms, “the distribution of evaluations becomes disconnected from the distribution of problems,” leading to “evaluation dead weight,” i.e., a proliferation of cases where the costs of evaluation exceed the benefits.

They find the roots of the problem in the “bureaucratization of evaluation”: the spread of norms, routines, excessive codification and mandatory features of evaluations, with evaluation offices falling prey to the same problems as the institutions they are supposed to hold to account. Public servants operating under strict RBM and M&E regimes can find them counterproductive, as these regimes tend to focus attention on metrics, risk aversion and compliance. “Our own tools have come back to haunt us.”[3]

This analysis tries to account for the “use paradox”: evaluation systems grow faster than the evidence for the functional use of their advice. This means that agencies can sometimes reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it. Here, the speakers point to what I would call a “performative use” of evaluation: setting up and operating complex evaluation systems almost for the sole purpose of showing that the organization does perform evaluations.

After such a systematic and merciless deconstruction of their livelihoods, the professional evaluators attending were torn between gratitude for the candid exploration of some of their deepest concerns and despair at the extent of the problem exposed patiently, slide after slide, by the keynote speakers. At least that was my reaction. I started to breathe again when the speakers turned their attention to solutions. Here are a few of the many excellent recommendations they made:

  • Adapting the offer of evaluation services to the need or demand for evaluation (i.e., rejecting systematic coverage and the use of standard evaluation grids).

  • Listening to others more systematically, including to their critiques of evaluations.

  • Using freer, simpler language in reports.

  • Embracing some degree of informality and rapport-building with evaluated people and institutions.

  • Paying attention to unintended effects.

  • Setting up systems that are lighter for institutions to bear.

Their advice is encapsulated in the Copenhagen framework for sound evaluation systems. The video of the speech is available on YouTube (the talk starts at 5:25).

Highly recommended!  

  • Interesting analysis on the indiscriminate application of the practice and its diminishing returns of late through its “performative” use.

    The reference to how “sometimes, agencies can reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it” reminds me of the analogy the famous classicist and poet A.E. Housman made in 1903:

    “...gentlemen who use manuscripts as drunkards use lamp-posts,—not to light them on their way but to dissimulate their instability.”

    Or, in plain English as it relates to the subject: people use evaluation as a drunk uses a lamppost, for support rather than illumination.

  • To Daniel, Thanks, that's a good one. However, to be precise I would say that some institutions use evaluation as a drunk uses a lamppost -- for support rather than illumination. "Some", because I certainly hope that phenomenon is not universal. :-)

  • Dear Olivier, you are right: it's not universal, yet it is commonplace among many donors for evaluators and evaluations to be accountable solely to those who commission them, privileging their administrative requirements and corporate objectives over the needs of those they are meant to serve. What Bob Picciotto refers to as “mindless evaluation machines”, and quite rightly so.

    Best wishes from a strange land (I live in the UK), and I hope this finds you very well.