RE: Reporting and supporting evaluation use and influence | Eval Forward

Dear all,

The knowledge, experiences, and thoughts being shared on this topic are very insightful and helpful. Thank you for your contributions! Here are some of the takeaways I have picked up so far. More contributions and thoughts are most welcome. Let's also remember to complete the survey.

Concise evaluation report

A voluminous evaluation report bores readers/users, who end up reading things they already know while trying to get to the point. Very few people, including evaluation users, will spend time reading huge evaluation reports; in fact, even evaluators are less likely to read a report they have produced once it is finalized! Some of the recommendations are:

  • Keep the executive summary under four pages (written on both sides), highlighting findings, conclusions, and recommendations based on those findings.
  • Produce a summary of fewer than 10 pages, with more tables and diagrams and with findings presented as bullet points.
  • Keep the full report to around 50 pages.
  • Highlight changes (or the lack of them) and point out counterintuitive results and insights on the indicators or variables of interest.

Beyond the evaluation report: use of visuals

As long as evaluations are mainly perceived as bureaucratic requirements and reports, we will miss out on fantastic possibilities to learn better. It is unfortunate that we assume "report writing" alone is the best way to capture and convey evidence and insights. Communicating evaluation findings in a concise, comprehensible, and meaningful way is a challenge. We need both literal and visual thinking, summing up findings in a more visual way through graphics, drawings, and multimedia. For example, the UN WFP in the Asia Pacific region is combining evaluation with visual facilitation through a methodology called EvaluVision. It is helpful to involve people who may have fantastic learning, analytical, and communication skills but who are not necessarily report writers.

However, the challenge is that visuals are often seen as merely "nice" and "cool". Everyone likes them and feels they are useful, but a conventional report still has to be developed, because this is what evaluation commissioners, including funders, want.

A paradigm shift in making recommendations

Often, there are gaps between findings, conclusions, and recommendations in the evaluation report, which can negatively affect the quality and use of evaluations. Traditionally, evaluators would proceed to make recommendations from the conclusions; however, letting the project implementation team bring a policy-maker on board to jointly draft actionable recommendations can help improve evaluation use. The evaluator's role is to make sure all important findings or results are translated into actionable recommendations, by supporting the project implementation team and policy-maker to stay as close to the evaluation evidence and insights as possible. This can be achieved by asking questions that help get to actionable recommendations and by ensuring a logical flow and empirical linkage between each recommendation and the evaluation results. The aim should be for the users of the evaluation to own the recommendations while the evaluation team owns the empirical results. With the participation of key stakeholders, evaluation results are relatively easy to sell to decision-makers. Stakeholder analysis is, therefore, key to understanding the interest, influence, and category of stakeholders in order to better support them to use evaluations.

Lessons from audit practice: Can management feedback/response help?

Should feedback be expected from users of evaluation? Typically, draft evaluation reports are shared with the implementers for review and approval. In the auditing field, feedback is mandatory and given within a short time: the client must respond to the auditor's observations, whether positively or negatively. Perhaps, as mentioned elsewhere, working with the users of the evidence generated through an evaluation (its findings and conclusions) to craft actionable recommendations can serve as a form of management feedback/response. However, the communication and relationship should be managed carefully so that the evaluation is not perceived as audit work, just as in some cases it is perceived as "policing".

The Action Tracker

An Action Tracker (in MS Excel or any other format) can be used to monitor over time how the recommendations are implemented; a simple sketch of such a tracker follows the list below. Simplifying the evaluation report into audience-friendly language and formats, such as a two-page policy brief, an evaluation brief, or an evaluation brochure built around specific themes that emerged from the evaluation, is also a very helpful practice, for a couple of reasons:

  • Evaluators are not the sole players; there are other stakeholders with better mastery of the programmatic realities.
  • The implementation team gets space to align its voice and knowledge with the evaluation results.
  • The end of an evaluation is not, and should not be, the end of its use; hence the need for institutions to track how recommendations from the evaluation are implemented, whether for remedial action, decision- or policy-making, or the use of evaluation evidence in new interventions.
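For those maintaining such a tracker digitally, here is a minimal sketch of what it could look like, written in Python purely for illustration; the file name, column names, and status values are my own assumptions rather than a prescribed standard, and the same layout works just as well as a plain Excel sheet.

```python
# Illustrative Action Tracker sketch: one row per recommendation,
# stored as a CSV file that can also be opened and edited in Excel.
import csv
from datetime import date

# Assumed (illustrative) columns; adapt to your institution's practice.
COLUMNS = [
    "recommendation",   # actionable recommendation from the evaluation
    "agreed_action",    # management response agreed with the users
    "responsible",      # unit or person accountable for follow-up
    "deadline",         # agreed completion date
    "status",           # e.g. "not started", "in progress", "completed"
    "evidence_of_use",  # note or link showing how the evidence was used
]

def create_tracker(path: str) -> None:
    """Create an empty tracker file containing only the header row."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(COLUMNS)

def add_entry(path: str, **fields: str) -> None:
    """Append one recommendation and its follow-up details."""
    row = [fields.get(col, "") for col in COLUMNS]
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(row)

if __name__ == "__main__":
    create_tracker("action_tracker.csv")
    add_entry(
        "action_tracker.csv",
        recommendation="Revise targeting criteria for the next phase",
        agreed_action="Update criteria jointly with district teams",
        responsible="Programme unit",
        deadline=str(date(2025, 6, 30)),
        status="not started",
    )
```

Whatever format is used, the point is to revisit the status column at regular intervals so that the recommendations stay alive after the report has been submitted.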

Alliance and relationship building for evidence use

Typically, there are technical and political sub-groups or teams. In some situations, technical teams report to an administrative team that interfaces with the policy-makers. Evaluators often work with the technical team and may not get access to the other teams. The report and recommendations, whatever process is followed, are not the main concern; the issue is the delay between report submission and policy action, particularly in developing countries.

Institutionalizing the use of evidence is key to enhancing the use and influence of evaluations, but it may take time, particularly where it depends on structural, top-down changes. Having top management fully support evidence use is a great opportunity not to miss. However, small but sure steps to initiate change from the bottom, such as building small alliances and relationships for evidence use, gradually bringing more "influential" stakeholders on board, and highlighting the benefits of evidence and how impactful it is for the implementing organization, decision-makers, and the communities, are also very helpful.

Real-Time Evaluations

Evaluation needs to be rapid and timely in the age of pandemics and crisis situations. We need to 'communicate all the time'. Timeliness is one of the dimensions of data quality: it reflects the length of time between data becoming available and the events or phenomena they describe, and it is assessed against the period within which the information is still of value and can be acted upon. Evaluations, likewise, should be timely if they are to be of value and acted on.

Beyond evidence use

The ultimate reason for evaluation is to contribute to social betterment or impact. This includes, but at the same time goes beyond, the mere use of evaluation results to change policies or programs. Seen this way, the use of evaluation per se stops being evaluation's final objective, since the aim is change that promotes improvements in people's lives. Demonstrating how an evaluation contributes to socio-economic betterment or impact can enhance the value, influence, and use of evaluations.

Gordon Wanzare