How do you communicate difficult evaluation findings?

Photo ©Elisabeth Kuria

From the EvalForward community

In the latest EvalForward Talks session, I had the opportunity to share my experience of communicating difficult evaluation findings, drawing on the rapid evaluation I am managing, which assesses service delivery by health and water sector projects in Kericho and Kilifi Counties.

One of the objectives of this evaluation is to identify lessons for technical officers and policymakers to improve service delivery. For this to happen, however, there needs to be a degree of trust in the evaluation and confidence in the whole process.

The evaluation takes place in a context of decentralized government. The assignment had to contend with limited understanding of evaluation and strong misconceptions about it, challenges found nationwide. In Kenya, many perceive evaluation as an audit or policing tool, and given the widespread corruption in the country, people tend to become defensive when they hear about anything related to evaluation.

The emerging findings were both positive and quite critical. (To be clear, even though there were some very good achievements, these are not the focus of the experience I am sharing here.)

Specifically, I focus here on some of the critical findings. For instance, there was a sustainability and viability gap in some health installations, which had been constructed without taking into account the cost of the staff needed to run them. In addition, some water projects were not completely implemented: pipelines were still not functioning and not connected to serve the intended communities, despite having been formally completed a few years earlier. Adding to these drawbacks, communities felt cut out of the planning and implementation stages of these projects and raised this as a concern, as it affected their lives. So there was also a gap in trust to be addressed.

In this situation, I needed to counter the misconception of evaluation as a “policing tool” and ensure constructive feedback and follow-up on the evaluation findings.

Against this backdrop, here is the approach the evaluation process took under my coordination.

First, we anticipated that advocacy would be needed, and carried it out to demystify evaluation and secure buy-in from the relevant stakeholders. Preparations for communicating the findings, and for their acceptance by counterparts, donors and decision makers, started early and continued at every stage of the evaluation process. Communication extended as widely as possible to all relevant stakeholders, including political leaders and technical officers at all levels. This ensured transparency and extensive consultation based on a common understanding. Training on rapid evaluation was offered to technical officers so they could better understand the process.

All of the above required field missions during the inception phase, with more money and time spent than on ordinary inception reports based mainly on desk reviews. The key message throughout these missions and consultations was that evaluation is about learning and improvement, not about “personalization” or “pinning people down”.

A reference group, composed of representatives of the participating institutions, was in place to ensure quality control.

My expectation is that, once the evaluation findings are presented in short, reader-friendly briefs rather than long reports, stakeholders will quickly be able to take note of them and become interested in implementing the recommendations.

When I presented my experience in the informal EvalForward session, participants in the room noted that in their contexts, too, there are issues with understanding evaluation and with taking the process and findings personally, especially in certain cultures.

One participant affirmed that in many cases the communication of critical results poses a problem and needs to be thought through carefully, especially because projects and their outcomes are often “personalized”.

Some of the lessons and suggestions I retained from the discussion are:   

  • Make the facts speak for themselves: when it comes to communicating results, there are ways to let the evidence speak for you. This means bringing the facts transparently to the forefront in a way that does not challenge the project or organization, and using different types of communication. Examples include video interviews in which recipients of the interventions tell their own story of what happened, or blogs where issues are addressed in clear, informal language.
  • Share preliminary results ahead of time, so that they do not come as a surprise and stakeholders have time to digest them, ask questions and make sense of them.
  • Involve target communities and stakeholders at different stages of the evaluation, rather than arriving, asking a barrage of questions and leaving again. This also creates a sense of trust and ownership.
  • Contribute to developing a culture that is keen on learning and improving, rather than a “lynch mob” culture in which evaluation is perceived as weaponized. Within an organization, this can be done by celebrating failure in a light, accessible way: developing stories of change and sharing them at events, which helps build a more open culture of knowledge exchange and shows how much we can learn from failures and be inspired by them.
  • Track follow-up activities, similar to the management response approach used in large organizations: ask the project team to react to the evaluation findings and recommendations by creating a checklist and time schedule specifying which recommendations will be implemented or worked on, by when, and how (a hypothetical sketch of such a checklist follows this list).
  • Tailor communication to the different target audiences.
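
To illustrate the tracking checklist mentioned above, here is a hypothetical sketch of a single entry; the recommendation, responsible office and timing are invented for illustration only:

Recommendation: Connect the completed pipelines to the intended communities
Agreed action: Commission the remaining connections and test the supply
Responsible: County water department
Target date: End of next quarter
Status: Not yet started / in progress / completed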

It goes without saying that rigorous, transparent and evidence-driven evaluation will only benefit the image and recognition of evaluation among stakeholders. In this regard, triangulation of data sources, proper stakeholder mapping and a careful choice of methods are key to supporting acceptance of the findings, and of the practice of evaluation in general.

I would once again like to thank all participants in the session and invite you to add, in the comment box below, any comments or feedback I may have missed.