Evaluation synthesis: an added boost to corporate learning

The primary aim of evaluation synthesis is to promote learning and collective reflection. A synthesis is a knowledge product and a means of consolidating and sharing acquired knowledge to strengthen evaluation feedback and learning loops.

Synthesis is nothing new in the evaluation community, but the use of evaluation synthesis has been growing in recent years. This is due in part to the recognition that good evaluation synthesis can be a powerful means of extracting knowledge from individual evaluations – knowledge that is often under-used once the evaluation process is over.

Furthermore, consolidating knowledge (be it technical, operational or institutional) can inform corporate decision-making in a cost-effective way and influence programming, policies and strategies. It can also help to reach a broader audience beyond internal stakeholders and contribute to global knowledge on a certain topic.

In the evaluation world, the term “synthesis” is used in different ways and can include a range of products – systematic reviews, meta-analysis, thematic reviews and rapid evidence assessment, to name but a few. There is no universal definition of evaluation synthesis. Each organization tailors the concept to its own needs and policies.

In the FAO Office of Evaluation, the growing use of evaluation syntheses reflects the rising number of evaluations being conducted and the under-use of the knowledge they generate.

To encourage the more systematic use of evaluation syntheses, the Office of Evaluation (OED) of the Food and Agriculture Organization of the United Nations (FAO) recently developed a guidance note (link). It defines evaluation synthesis broadly, as a way to “capture evaluative knowledge and lessons learned on a certain topic from a variety of existing evaluations through aggregated and distilled evidence in order to draw more informed conclusions (and sometimes recommendations) on a specific topic or question.”[1] Evaluation syntheses differ from wider evidence syntheses, which usually draw on many sources of evidence, evaluation being only one of them.

The regional evaluation syntheses prepared by FAO OED aim to synthesize results and lessons learned from evaluations on topics of particular relevance to the FAO regions. Thanks to these syntheses, evaluation was, for the first time in 2020, an item on the agenda of FAO’s Regional Conferences, which are part of FAO governance and inform the Organization’s programme and budget.

The main purpose of the syntheses is to systematically document patterns discerned across evaluations to inform regional decision-making on programming and priorities. A secondary purpose of these syntheses is to enhance the utilization of FAO’s evaluation reports at regional level and create demand for regionally focused evaluations. With these syntheses, OED also reaches FAO Members that would otherwise have limited access to evaluations. Based on this experience and on preparations for our guidance note, we can draw a few lessons and good practices for future synthesis work.

A synthesis can be a complex endeavour. An evaluation synthesis usually relies on multiple data sources and mixed methods of analysis, validation and triangulation of evidence.
 
  • Quality of evaluations is key

The quality of a synthesis starts with the accessibility and quality of the evaluations in question (relevance, up-to-date evidence, validity, reliability and geographical distribution of the findings). A robust database of correctly tagged evaluation reports facilitates the rapid assessment of the evidence available for any given synthesis. A post-hoc quality assessment system, where it exists, also facilitates the screening of good evaluations; where none exists, a first step in the synthesis process will be to assess the quality of the individual evaluations (a meta-evaluation).
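The screening step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not an FAO system: the record fields, tags and titles are all invented for the example, and a real evaluation database would of course be far richer.

```python
from dataclasses import dataclass, field


@dataclass
class EvaluationRecord:
    """One entry in a (hypothetical) catalogue of evaluation reports."""
    title: str
    year: int
    region: str
    tags: set[str] = field(default_factory=set)


# Invented catalogue entries, for illustration only.
catalogue = [
    EvaluationRecord("Country programme evaluation A", 2019, "Africa", {"resilience", "gender"}),
    EvaluationRecord("Project evaluation B", 2015, "Asia", {"nutrition"}),
    EvaluationRecord("Thematic evaluation C", 2021, "Africa", {"resilience", "climate"}),
]


def screen(records, topic, min_year):
    """Shortlist reports tagged with the topic and recent enough to be relevant."""
    return [r for r in records if topic in r.tags and r.year >= min_year]


shortlist = screen(catalogue, topic="resilience", min_year=2018)
print([r.title for r in shortlist])
```

The point of the sketch is that consistent tagging turns "what evidence do we have on topic X?" into a mechanical query, which is exactly what makes rapid evidence assessment feasible.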

  • Tools to facilitate synthesis work

The use of evaluation synthesis has been greatly facilitated by innovative tools used in the social sciences for analysing large amounts of qualitative data (such as NVivo). These are also part and parcel of any strategy to enhance corporate learning from evaluations.
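At their simplest, such tools code passages of text against a theme and count how often each theme recurs across reports. The sketch below shows that basic idea in plain Python; the findings excerpts and the keyword "code book" are invented for illustration, and dedicated packages such as NVivo handle this far more rigorously.

```python
import re
from collections import Counter

# Invented excerpts standing in for findings extracted from evaluation reports.
findings = [
    "Capacity development was effective but sustainability remains a concern.",
    "Sustainability of results depends on government ownership.",
    "Gender mainstreaming was uneven across project components.",
]

# A toy code book mapping each theme to trigger keywords (an assumption of this sketch).
code_book = {
    "sustainability": {"sustainability", "ownership"},
    "gender": {"gender"},
}

# Count, per theme, how many findings mention at least one trigger keyword.
counts = Counter()
for text in findings:
    words = set(re.findall(r"[a-z]+", text.lower()))
    for theme, keywords in code_book.items():
        if words & keywords:
            counts[theme] += 1

print(dict(counts))
```

Aggregating such counts across dozens of reports is one simple way patterns like "sustainability recurs as a concern" become visible to a synthesis team.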

  • Triangulation and validation

For an evaluation synthesis to be more than the sum of its parts, it needs to be more than a desk review. This can mean triangulating data from secondary sources, consulting documents beyond the evaluation reports themselves, and validating or complementing the evidence through additional interviews or new data collection.

  • Involving stakeholders

As with any evaluation, stakeholders should be consulted from the outset and throughout the process. This will confirm priorities, ensure that expectations are realistic and, once preliminary consolidated findings are available, validate and enrich the analysis. It will also help when it comes to disseminating and sharing the results.

  • Evaluability assessment

As we recommend in our guidance, a proper evaluability assessment covering the aspects above should be carried out before embarking on any such undertaking. It is particularly wise to weigh potential trade-offs, such as the time and cost of an evaluation synthesis against its expected utility compared with other, possibly less costly types of assessment.