Reporting and supporting evaluation use and influence: Tips from evaluators


Evaluation use is a key issue for the evaluation community. The aim of evaluation is to be influential, so it should be of use to policymakers, programme developers, project planners and managers.

I recently used a survey of evaluators to explore the concept of evaluation use, how evaluation practitioners view it and how this translates into their work – in other words, how evaluators are reporting and supporting evaluation use and influence.

Evaluation use and utilization: an outline

Michael Quinn Patton’s utilization-focused evaluation (UFE) approach is based on the principle that an evaluation should be judged on its usefulness to its intended users. This requires evaluations to be planned and conducted in ways that increase the use of the findings and of the process itself to inform and influence decisions.

The Africa Evidence Network has long advocated for evidence-informed decision-making (EIDM), whereby people who need to make choices use the best available evidence to motivate their decisions. In this context, the evidence can be scientific research, but other forms of evidence, such as citizens’ voices, census data or expert opinion, are equally valuable in informing decision-making. In essence, EIDM aims to make use of the best available evidence for the decision at hand: evidence that is fit for purpose, suited to the context and proportionate to the scale of the decision to be taken.

In Maximizing the Use of Evaluation Findings,[1] the Asian Development Bank identified four types of evaluation use: instrumental, conceptual, political or symbolic, and process. Instrumental use refers to the impact of an evaluation based on directly observable effects, such as policy change or the introduction of a new programme initiative. Conceptual use is more indirect and refers to the use of evaluations to influence thinking about issues more generally. Political or symbolic use is more retrospective, in that evaluation is used to justify decisions already made about a policy or programme. Process use refers to how stakeholders are affected as a result of participating in an evaluation, such as changes in their perceptions, values, thoughts or behaviours that may further influence their decisions.

What are evaluators doing? Tips from the survey

To understand the methods, approaches, tools and techniques evaluators use to develop and present findings in ways that are useful to intended users, and to encourage their use, I prepared a survey based on the Better Evaluation Rainbow Framework[2] on reporting and supporting use of findings. Better Evaluation recommends that evaluation findings be developed and presented in ways that are useful to intended users, and that users be supported in applying them, by identifying reporting requirements, developing reporting media, ensuring accessibility, developing recommendations and supporting use.

The survey received responses from 70 evaluators and here are some of the results.

To communicate evaluation findings, most evaluators continue to use written reporting formats, mainly final reports (23.6 percent) and executive summaries (20.1 percent). Postcards (0.9 percent) and news media communications (3.5 percent) are the least used.

For presenting evaluation findings, PowerPoint and flipcharts are the most commonly used formats (42.6 percent and 22 percent, respectively). Videos, displays and posters (9.2 percent, 12.1 percent and 14.2 percent, respectively) are far less used, despite being quite compelling. When it comes to presentation events, in-person conferences, virtual conferences and verbal briefings are used in equal measure to communicate evaluation findings.

Among the more creative ways of communicating evaluation findings, photographic reporting stands out (50 percent). Alternative creative formats, such as theatre (3.0 percent), poetry (4.5 percent) and cartoons (6.1 percent), are far less used, yet can be fascinating.

On the use of graphic design elements, Word formatting (29.6 percent), report layout (27.2 percent) and images (23.5 percent) are the approaches generally used in communicating evaluation findings, while colour (19.1 percent) is the least used. Descriptive chart titles, headings written as summary statements, plain language and applied graphic design principles are widely used by the evaluators surveyed to make evaluation findings easy to access and usable. Other approaches are rarely used, such as emphasis techniques, the elimination of chart junk, the 1:3:25 principle (one page of main messages, a three-page executive summary and a report of no more than 25 pages) and accessibility considerations for colour-blind, low-vision and blind audiences.

On the development of recommendations from evaluation findings, commonly used methods are group critical reflection (20.7 percent), individual critical reflection (19.0 percent), beneficiary exchange (16.7 percent), external review (15.5 percent) and participatory recommendation screening (13.2 percent), while chat rooms (5.7 percent), world café (5.2 percent) and electronic democracy (2.3 percent) are far less used.

To support the use of evaluation findings, respondents identified annual reviews (23.8 percent), recommendation tracking (21.8 percent), policy briefings (17 percent), conference presentations (15.6 percent) and social learning (12.2 percent) as the most frequent approaches. The least common approach was the data-use calendar (5.4 percent).

The EvalForward discussion

In parallel, the EvalForward discussion attracted more than 30 contributions with rich and varied views. Here is an overview.

Concise evaluation report

Voluminous evaluation reports are likely to bore readers and intended users.

Some of the recommendations shared in the discussion include:

  • an executive summary of less than four pages, highlighting findings, conclusions and recommendations
  • a summary of less than 10 pages, with more tables, diagrams and findings in bullet points
  • a full report of no more than 50 pages
  • highlighted changes (or lack thereof), counterintuitive results and insights into indicators or variables of interest.

Beyond the evaluation report: the use of visuals

Other participants noted that, until evaluations are viewed as more than just reports that fulfil bureaucratic requirements, we will be missing out on fantastic opportunities to learn. The reporting of evaluation findings can be made more engaging by thinking visually and considering how to summarise and synthesise findings through graphics, drawings and multimedia.

A paradigm shift in crafting recommendations

Traditionally, evaluators craft recommendations from the conclusions. However, it was suggested that bringing a policymaker on board to jointly draft actionable recommendations and policy briefings could help improve evaluation use. Recommendations developed with those who will use them stand a better chance of being acted upon.

Lessons from audit practice: can management feedback/response help?

Another option to facilitate the use of an evaluation is to require a management response to it and to use an action tracker (in Microsoft Excel or any other format) to monitor how the recommendations are implemented over time. The survey responses also noted, however, that these methods and tools are not always enough on their own, as decisions are sometimes made at the political level rather than on the basis of the evaluation evidence.
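As a simple illustration of this idea (the column headings below are an assumption, not a prescribed template), such a tracker might contain one row per recommendation, with columns along the lines of:

  Recommendation | Management response | Responsible unit | Deadline | Status | Evidence of action

Reviewing the tracker at regular intervals, for instance as part of annual reviews, helps turn follow-up on recommendations into a routine task rather than a one-off exercise.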

Alliance and relationship building for evidence use

Having the full support of top management for evidence use is a great opportunity that should not be missed. However, small, steady steps to initiate change from the ground up are also very helpful: building alliances and relationships for evidence use, gradually bringing more "influential" stakeholders on board, and highlighting the benefits and impact of evidence to the implementing organization, decision-makers and communities.

Timeliness of evaluations

Timeliness is assessed based on the period within which information can still be of value and acted upon. To be of use and value, evaluations must be timely. One approach that may be useful here is real-time or rapid evaluation.

Beyond evidence use

Arguably, the ultimate reason for conducting an evaluation is to contribute to social betterment and have a positive impact on people or the planet. This includes, but also goes beyond, the mere use of evaluation results that change policy or programmes. Demonstrating how an evaluation contributes to socioeconomic betterment or creates other positive impacts can enhance the value, influence and use of evaluations.

In conclusion

Reporting and supporting evaluation use and influence is crucial to ensuring that evaluations are influential. There are traditional approaches, methods, tools and techniques that evaluators use to report on and support the use of evaluation findings. However, innovative and creative approaches are emerging that appear very powerful, yet have still to be fully explored by evaluators. The time may be ripe for evaluators to start thinking beyond reports and to embrace these approaches fully, including in collaboration with experts in the relevant fields, as this may enhance the influence of evaluations.