RE: Reporting and supporting evaluation use and influence | Eval Forward

Dear Gordon and colleagues,

Before sharing my two cents, let me offer a lived experience. With a team of four evaluators, I participated in a five-year project evaluation. A couple of colleagues co-designed the evaluation and collected the data; we joined forces during the analysis and reporting and ended up with a big report of about 180 pages. I have never met fans of big reports, and I am not one either. To be honest, very few people will spend time reading huge evaluation reports. If an evaluator is unlikely to read, once it is finalized, a report they themselves produced, who else will ever read it? On to recommendations. At the reporting stage, we highlighted changes (or the lack thereof); we pointed out counterintuitive results and insights on indicators or variables of interest. We then left it to the project implementation team, who brought on board a policy-maker, to jointly draft actionable recommendations. As you can see, we intentionally eschewed the established practice whereby evaluators always write the recommendations.

Our role was to make sure all important findings or results were translated into actionable recommendations. We supported the project implementation team in staying as close to the evaluation evidence and insights as possible. How would you scale up a project that has produced this change (for positive findings)? What would you do differently to attain the desired change on this type of indicator (for areas of improvement)? Mind you, I do not use the word 'negative' alongside findings. How would you go about getting the desired results here and there? Such questions helped us arrive at actionable recommendations.

We ensured the logical flow and the empirical linkage of each recommendation with the evaluation results. In the end, the implementation team owned the recommendations while the evaluation team owned the empirical results, and every recommendation was informed by those results. Overall, it was a jointly produced evaluation report. We did this for that evaluation, and the approach has been effective in other evaluations as well. With the participation of key stakeholders, evaluation results are relatively easy to sell to decision-makers.

In my other life as an evaluator, such recommendations are packaged into an Action Tracker (in MS Excel or any other format) to monitor over time how they are implemented. This is the practice in institutions that are keen on accountability and learning, or that hold their staff and projects accountable for falling short of these standards. For each recommendation, there is a timeline, a responsible person or department, a status (implemented, not implemented, or ongoing), and a way forward (part of continuous learning). Note that one of the recommendations is about sharing and using evaluation results, which requires extra work after the evaluation report is done: simplify the report into audience-friendly language and formats, such as a two-page policy brief, an evaluation brief, or an evaluation brochure built around specific themes that emerged from the evaluation. I have seen this practice be quite helpful for a few reasons (a minimal sketch of such a tracker follows the list below):

(i) evaluators are not the sole players; there are other stakeholders with a better mastery of the programmatic realities;

(ii) the implementation team has space to align its voice and knowledge with the evaluation results;

(iii) the end of an evaluation is not, and should not be, the end of evaluation; hence the need for institutions to track how recommendations from evaluations are implemented, whether for remedial action, decision- or policy-making, the use of evaluation evidence in new interventions, etc.
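For colleagues who want something more concrete, here is a minimal sketch of what such an Action Tracker could look like as a simple CSV file (in Python). The column names mirror the fields described above; the example row, file name, and other specifics are purely illustrative assumptions, not a prescription, and MS Excel or any other format works just as well.

import csv

# One row per recommendation, with the fields described above.
COLUMNS = ["Recommendation", "Timeline", "Responsible", "Status", "Way forward"]

rows = [
    {
        # Illustrative example only, not drawn from any real evaluation.
        "Recommendation": "Share a two-page evaluation brief with district teams",
        "Timeline": "Q3 2025",
        "Responsible": "Communications unit",
        "Status": "Ongoing",  # implemented / not implemented / ongoing
        "Way forward": "Translate the brief into local languages",
    },
]

# Write the tracker to a CSV file that can be opened and updated in Excel.
with open("action_tracker.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)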

Institutionalizing the use of evaluation evidence takes time. Structural (top-level) changes do not happen overnight, nor do they come out of the blue; there are small but sure steps to initiate change from the bottom. If top management fully supports evidence use, that is a great opportunity not to miss. Otherwise, do not assume support; work from the facts and the culture within the organization. Build small alliances and relationships for evidence use, and gradually bring on board more "influential" stakeholders. Highlight the benefits of evidence and how impactful it can be for the implementing organization, decision-makers, and the communities.

Just my two cents.

Over to colleagues for inputs and comments on this important discussion.

Jean Providence