Reporting evaluation results or communicating evaluation results?

[Photo: Youth farmer beneficiaries in Kenya, 2020. © FAO/Luis Tato]

Dear Members,

In my view, there is a difference between reporting evaluation results and communicating evaluation results and I would like to start a discussion on these two processes.

Reporting evaluation results involves technical content and is usually limited to communication between the commissioner, the evaluator and the intervention partners. It typically leads to an evaluation report, the objective of which is to provide detailed and specific information for decision-makers, so that they can adjust their intervention. Most commissioners provide guidelines on reporting evaluation results and there are several handbooks on how to report on evaluation.

Communicating evaluation results is aimed at all stakeholders, including beneficiaries and affected people, so the audience is broader. The objective is to inform stakeholders and explain the results to them. In my opinion, while a quality evaluation report is a must, effective and well-targeted communication of the results is the best way to bring about the desired change. This requires a willingness to communicate and special attention to the message, the tools, the channels and the language.

If communities are expected to implement the activities of a development intervention, the results should be communicated to them through effective channels. In the case of an agricultural intervention, for example, the recommendations of an evaluation should be communicated to institutions as well as to farmers, consumers and non-governmental organizations. Communication campaigns should be organized, including radio and television spots, short videos for social media, meetings, sketches and so on (often the only ways to reach rural areas or vulnerable communities). This should help spur the desired behavioural change in the targeted audiences. In other words, to achieve impact, it is better to have a key performance indicator tied to a communication campaign on evaluation results.

To quote Jennifer Greene,[1] "audience participation in evaluation increased the likelihood that the findings of an evaluation would be utilized".

As the time allocated to an evaluation does not usually allow for the participation of all stakeholders, they can instead be reached through a communication campaign on the evaluation results. To achieve this, a specific budget needs to be allocated to the communication of evaluation results.

My questions are:

  • Who should fund this campaign ‒ the intervention partners or the evaluation office?
  • To what extent should evaluators be involved in communicating their findings to stakeholders?
  • Should evaluators make recommendations on communicating their results? This means asking evaluators to possess another skill.

What do you think?

Malika

 

[1] Asian Development Bank. 2008. Maximizing the Use of Evaluation Findings. Manila, The Philippines. https://www.adb.org/sites/default/files/evaluation-document/35880/files/evaluation-findings.pdf

This discussion is now closed. Please contact info@evalforward.org for any further information.
  • Thank you all for very interesting contributions and insights. We seem to agree that reporting is the first step in communicating results. Most of the time, reporting is technical, presenting data and results on project outcomes together with recommendations and lessons learnt. The commissioner then validates the report, communicates the results to all stakeholders and develops a communication plan for a larger audience.

    Here are some highlights from participants:

    Esosa Tiven Orhue suggests creating harmony between the two elements so that the programme/project is implemented by all stakeholders. This is possible if communication about results is included at the design stage of the intervention.

    For John, there is "plenty of reporting, but little communicating". John even suggests that "no one was to have a hand in project preparation and design until they have done at least five years of M&E". The UNEP document shared by John contains two lessons learnt related to our discussion: 1) a lack of ownership and shared vision, due to insufficient stakeholder consultation during design, leads to poor project design; and 2) inefficient project management includes "inadequate dissemination and outreach due to poor use of available dissemination methods".

    Most of the time, communication about project results and the evaluation targets the stakeholders consulted at the design and implementation phases. These are usually the immediate implementing partners (the sphere of influence). The sphere of interest is thus usually excluded, leading either to no change or to change that goes undocumented. As John said, this results in the loss of past experience and the risk of repeating the same mistakes.

    Lal agrees with John, while Silva adds that "if we stick to conventional evaluation formats, we might make minor improvements but always miss out on the potential of evaluations, in the broader sense". I could not agree more with Silva, since I see evaluators as change makers.

    Finally, Gordon suggests that communication about the evaluation and its results should be budgeted as part of the overall project and should be implemented by the commissioners and project managers.

    If we agree that stakeholders include direct project/programme implementing partners (the sphere of influence) as well as the affected population (intended and unintended beneficiaries), then Esosa's, John's and Silva's suggestions should be considered for successful implementation.

    To summarize, the debate about whether 'development aid works' has been going on for at least a decade now. When mapping outcomes, we need to think of the change we want and therefore communicate with the population at the design, implementation and closing stages, giving them insights into the evaluation results. This will empower them and give them the tools to implement the programme/project. Consequently, at the next programme design, they will bring their perspective and the lessons learnt from previous programmes, thus avoiding the repetition of mistakes. This should help avoid unnecessary activities and foster programme implementation.

    I wish you all a good end to the week.

    Malika

     

    Links

    1. Lessons Learned from Evaluation: https://wedocs.unep.org/bitstream/handle/20.500.11822/184/UNEP_Evaluati…

    2. A Comparative Study of Evaluation Policies and Practices in Development Agencies: https://www.afd.fr/sites/afd/files/imported-files/01-VA-notes-methodolo…

     

  • Dear members,

    This is an interesting discussion!

    Reporting is part of communication, and a report is one of the communication tools. Ideally, every project, program or intervention should have a clear communication plan informed by a stakeholder analysis that clearly identifies roles, influence and management strategies. A specific communication plan for evaluations can also be developed. A communication plan typically has activity lines, budget lines and responsibilities, and should be part of the overall project, program or intervention budget. It may not be practical for the evaluator to assume all the responsibilities in the evaluation communication plan, but they can take up some, particularly the primary ones, since communication may be a long-haul effort, especially if it targets policy influence or behaviour change, and, as we all know, evaluators are normally constrained by time. Secondary evaluation communication can be handled by the evaluation managers and commissioners with the technical support of communication partners.

    My take. 

    Gordon 

  • Very important discussion. It is, however, constrained by a narrow understanding of evaluation as a conventional consultancy. Sticking to this format - i.e. accepting as a starting point that evaluation is mostly about putting some recommendations in a report - limits possibilities and innovation.

    We should reframe evaluation as a set of thinking processes and practices allowing programme stakeholders to gauge the merit, the achievements, and the learning from a programme. Consultants might have diverse roles within it (and might not even be necessary). The possibilities are endless. If evaluations are designed with users, use, participation in mind, the entire approach to communication and involvement changes from the start.

    It is very unfortunate that we keep sticking with conventional, routine evaluations and never consider the opportunity cost of missing out on more interesting options. This message goes in the right direction, indicating the urge to shift from reporting to communication. But if we stick to conventional evaluation formats, we might make minor improvements but always miss out on the potential of evaluations, in the broader sense.

  • Greetings!

    I can't agree more [with John], and if I may say so, yours is an excellent presentation of facts all too often ignored or, rather, brushed under the carpet by the very nature of 'committee-ism', which seems to be the preferred method of designing projects and laying down implementation strategy. Committees provide fertile ground for various forms of hobby-horse play and the vociferous promotion of pet theories or methods, not to mention the promotion of some particular humble servant of that body.

    Cheers!

    Lal.

  • Very interesting discussion, and full marks to Malika for making the distinction between reporting and communicating. There is plenty of reporting, but little communicating, in the same way that there are plenty of lessons but not much learning.

    In an ideal world perhaps the solution should be to say that no one was to have a hand in project preparation and design until they have done at least five years of M&E.

    As it is, in this imperfect world, it is almost inevitable that the jelly will fall off the plate and never reach the project designers.

    Why is this? Well, for starters, in projects for agricultural/rural development lasting around five years, by the time the Project Completion Report (PCR), Implementation Completion Report (ICR) or Implementation Completion and Results Report (ICRR) comes out in, say, Year 7, all the cast who were involved in project design have moved on or disappeared.

    Others have already mentioned that M&E pumps, but there is no tube directly connecting what M&E produces with those involved in project design and implementation. Funding agencies have tried to cover this deficiency by periodically publishing collections of lessons learned from PCRs (or ICRs or ICRRs). In order to make these relevant to the general reader, the lessons are so boiled down as to appear almost banal, such as "Lack of ownership and legitimacy for project outputs/outcomes caused by lack of adequate stakeholder participation/representation" (https://www.unep.org/resources/other-evaluation-reportsdocuments/unep-e… ).

    Such groanings do not make exciting reading. Moreover, they are unlikely to carry much weight with highly motivated project designers or managers whose motto is all too often "I did it my way".

    As if all this were not enough, there are the various political pressures in government to skew project design in one direction or another, particular policy ideas or fashions on the part of the funding agency and, occasionally, the impact of messianic staff or consultants projecting their own miracle cures.

    The result of all this is that little if any attention is paid to past experience, and often the same mistakes are made over and over again - such as:

    -  assuming that all government agencies will cooperate without individual funding - when it is very clear that no budget means no activity or collaboration;

    -  including project items that require legislation - promised on Day 1, but liable to take over five years;

    -  "strengthening" the project by sending many staff off for training - just when they are most needed;

    -  ensuring all will go well by appointing some luminary imported project manager - who, for a number of reasons, only arrives in Year 3;

    -  expecting project staff to make regular visits to remote project sites - when the government insists on severe control of travel costs;

    -  making project activities both complex and extensive in area - when the main constraint is project management capacity.

    One further problem is that financing agencies often want to "do something new". For whatever reason, after a succession of similar projects, just when everything is going smoothly and lessons from earlier project phases are in fact being embodied in later tranches of lending, the financing institution (FI) often decides to move away, and any linkage between M&E and project design is thereby broken.

    In trying to answer the three questions, I think the onus should be on the project designers/implementers to do due diligence before getting into project design, to see what lessons have been learned from previous operations - it is for them to dig up the PCRs, ICRs and ICRRs and try to incorporate the findings in the new project's design.

     

  • Dear colleagues/members,

    My EvalForward contribution.

    Reporting and communicating are two distinct but inseparable elements of monitoring and evaluation systems that produce results for policymaking and implementation. When these two elements are understood, there is a greater tendency towards improvement and towards implementation strategies for research and findings that reinforce desirable policy. Reporting is the analytical tool that communicates the findings of research to stakeholders, establishing empirical facts that, once accepted, become the basis of evidence-based policy.

    Reporting helps to identify the data needed to communicate the right results, which can improve the outcomes of programme or project implementation for economic and human development in any sector. Reporting tools help to make the most of the M&E information system for programmes, including capturing the state or level of project delivery. These tools must be available to enable better, clearer communication that is understood by public and private sector stakeholders.

    The process moves systematically from the M&E reporting stage to the M&E communicating stage, reaching stakeholders, partners and implementing organizations, and can inform economic policy for growth and development, depending on the sector. Understanding this important knowledge mechanism could help organizations and nations improve their governing systems for better decisions and policies, whether in agriculture, science, the humanities or any other sector. Together, reporting and communicating form the knowledge base for policymaking and implementation.

    In addition:

    The two elements need to be harmonized to create synergy and common ground, as this affects the outcome of the results.

    This should begin at the design stage and continue through to implementation, so that stakeholders have clear knowledge of the programme/project for decision-making and implementation.

    Recommendations should be based on empirical facts, to support improvement, implementation and execution by stakeholders.

    Thank you.

    Esosa

  • Thank you all for your great contributions.

    Most contributors suggest that evaluators should be involved in communicating results, at least by providing recommendations on key messages and tools (e.g. Norbert TCHOUAFFE TCHIADJE and Karsten Weitzenegger). Messages and recommendations are mainly directed at intervention partners and decision-makers (e.g. Aparajita Suman and Mohammed Al-Mussaabi). Key messages should be fine-tuned by the evaluator (e.g. Aparajita Suman, Karsten Weitzenegger and Jean Providence Nzabonimpa).

    Emile Nounagnon HOUNGBO suggests that "stakeholders, including project managers, have more trust in the evaluator's technical findings and statements". This puts the quality of the evaluation front and centre and positions the evaluator as a communicator who validates the intervention's results and recommendations. I believe that if we extend this idea to the wider public, the recommendations of a development project will have a better chance of being implemented.

    Most suggest that a specific communication budget should be allocated and that this budget should be managed by the evaluation entity (e.g. Ekaterina Sediakina Rivière). This would provide flexibility in priority setting according to the type of intervention, the targeted audience and the type of message.

    Jean Providence Nzabonimpa describes evaluators as change agents. As such, we need to go beyond submitting reports and contribute to the successful implementation of recommendations. 

    In summary, evaluators should be involved in communication campaigns for recommendations. A specific budget needs to be allocated and managed by evaluation units, which should also make provision, in the terms of reference, for the public communication of evaluation results and recommendations.

    The justification for the above is that any intervention affects both intended and unintended beneficiaries. Therefore, in my opinion, communicating the results and organizing communication campaigns are justified: in addition to decision-makers, it is necessary to inform and educate the beneficiaries (intended and unintended) about the evaluation results and recommendations. This should help guarantee implementation of the recommendations at scale.

    Key messages should be developed by evaluators who should also suggest the tools and languages since they know and understand the intervention, its results, and the audience.

    Malika

  • Great topic, great discussions!

    Evaluation and communication are two sides of the same coin, trying to achieve similar goals (disseminating evaluation evidence for use in decision-making). That said, they require different skillsets - which is not a big deal.

    Back to the topic. Assume that we evaluators are all teachers. We prepare lessons, ready to teach - I mean, to facilitate the learning process. Shall we fold our arms after the preparation and finalization of the lesson? Not at all. I am not alone, I guess, in rightly believing that the teacher will follow through even after teaching, facilitating a learning process. Building off the previous lesson, the teacher will usually recap before starting a new one. Interestingly, it seems our evaluations should be informing subsequent evaluations as well!

    The teacher scenario also applies here, at least in my school of evaluation practice. The essence of evaluating is not producing reports or reporting results. Then what? For whom, and why, are evaluation results reported? Not for filing, not for ticking a box. It would be heartbreaking if we as teachers, after investing time and resources, prepared class notes and guidance, only for our students never to use them. Would anyone be motivated to prepare notes and guidance for the next lesson? Very few would. As passionate and professional as we are (or should be) as evaluators, we are change agents. Under our ethical and professional standards, we should never rest satisfied with the reporting of evaluation results without following through to ensure the evidence is used as much as possible. Indeed, evaluation principles include the utility of evaluations.

    To the good questions you raised, my two cents:

    • Each evaluation has (or should have) a plan for dissemination and communication (or a campaign plan for the use of evaluation evidence). This needs to be part of the overall evaluation budget. Evaluators need to keep advocating for the dissemination of evaluation results in different formats and for different types of audience, even after evaluations have been completed - sometimes one or more years earlier.

    • If there are people who understand evaluation results well, the evaluator is one of them. Alongside the other stakeholders who participated in the evaluation process, he or she should be part of the communication process, to avoid any misconstruing of messages and meaning by external communicators. Communicators (some organizations have specific roles, such as communication for development specialists) are experts who know the tricks of the trade. They are our allies.

    Happy reading contributions from colleagues.

    Jean Providence

  • Ekaterina Sediakina Rivière

    Principal Evaluation Specialist, Evaluation Office, UNESCO

    I’m not quite sure that I see reporting and communicating as two distinct processes. Reporting, in my view, is related to monitoring. When we issue an evaluation report, we are in fact communicating about the evaluation, albeit in a longer and more technical format. I do agree that fewer people are likely to read such a report. However, I would include the donor amongst those in that first group, as it is highly likely that the donor would be interested in reading about the technical details. I also encourage the publication of full evaluation reports, so that any stakeholder interested in reading all the details is given the opportunity to do so.

    Regarding the funding of the communicating campaign, I believe that this should come directly out of the evaluation budget. This means that the funder is the same as the donor of the intervention that is being evaluated. However, it is the evaluation office and/or the entity that is commissioning the evaluation that should have control of the evaluation budget and thereby it is that same entity that should be responsible for developing a communication strategy for the evaluation and its related funding.

    Regarding the involvement of evaluators in communicating findings to stakeholders, I believe that they should indeed be the primary communicators. The evaluators are the ones who are external and independent of the intervention under evaluation; they therefore benefit from a neutral image/reputation, and stakeholders expect to hear about the evaluation's findings from a trustworthy and neutral source. Consequently, I suggest that evaluators be expected to present the evaluation findings in various formats, particularly during presentations/webinars with key audiences, whoever these may be.

    Finally, I do not see recommendations on communicating the results of the evaluation as part of the scope of any evaluation exercise. However, an evaluation plan, and even an evaluation inception report, can outline a communication strategy for a given evaluation, including the roles and responsibilities that will underlie it.

    Katia

     

  • Engaging stakeholders begins at a program's earliest stages (conception) and continues through closure (and evaluation). This should serve as a key communication indicator. Evaluators should broadly involve relevant stakeholders through an effective communication process to ensure precise and useful feedback.

    Evaluators should present their findings clearly and provide actionable recommendations. A clear and compelling presentation of findings coupled with targeted recommendations tailored for different stakeholder groups can maximize the potential for evaluation insights to drive meaningful action.
    Following the completion of an evaluation report, stakeholders should be informed of the findings in their own language and given an opportunity to provide final feedback.

  • Hello everyone,

    Evaluating development projects/programmes is a sensitive activity; there is often a lot at stake. Commissioners are often not ready to own the results of evaluations. This means that only certain actors are committed to the accuracy of the evaluation results, while others see them as exposing or sanctioning their management inefficiencies. When we are lucky enough that some of the actors responsible for implementing the project/programme are willing to have the results communicated, we are in a fortunate situation. In these cases, the evaluator's technical analyses and recommendations, previously shared with and clarified for a few key actors, must be precise and clear in order to allow relevant decisions to be taken. It must be recognized that evaluation plays an important role in improving the quality of project/programme implementation, thereby increasing its contribution to development.

    To my knowledge, project implementers have often wanted the evaluator to be actively involved in communicating the results, in order to give them the highest possible level of credibility. Stakeholders, including project managers, have more trust in the evaluator's technical findings and statements.

    To improve the quality of communication, it would be desirable for the evaluator to be responsible for post-evaluation work, in which the results are put into a communicable form for decision-makers, partners and beneficiaries. For greater certainty, the cost of this communication could be included in the evaluator's remuneration and specified in the terms of reference of the call for applications through which he or she was recruited. This would ensure that the results are reported systematically and in good form. But the commissioners and those responsible for implementing the project must be in agreement - that is the real challenge.

    Thank you.

    Dr Emile N. HOUNGBO

    [Original contribution in French]

  • In my experience, evaluation findings that are not communicated may have little impact. In most cases, evaluation findings are supposed either to inform the design of new programs or to propose changes/insights for the design of the next phase. In either case, communication is important - not just for the internal/core team that commissioned the study, but for all stakeholders.

    Specifically, on the questions:

    • Who should fund this campaign ‒ the intervention partners or the evaluation office?

    The intervention partners should make provision for this right at the design stage, unless the project/programme deals with sensitive data of any kind. The evaluation office should ensure that the findings are presented in a usable form - not necessarily campaign-ready content, but something less jargon-heavy that helps stakeholders take decisions as necessary.

    • To what extent should evaluators be involved in communicating their findings to stakeholders?

    Evaluators needn't be involved in the communication of the findings per se, but MUST be available to ensure/validate that the essence of the findings doesn't get lost in the design of communication campaigns. Sometimes the attempt to simplify the messaging leads to a dilution of the core finding.

    • Should evaluators make recommendations on communicating their results? This means asking evaluators to possess another skill.

    This is tricky. It is ideal if the evaluators (agency/team) have the additional skill (a sub-team) to make recommendations on communicating their results, but this may not be essential. However, the evaluators must help in shortlisting/fine-tuning the recommendations from a communication perspective.

    • Evaluators should be involved in communicating their findings to stakeholders and should provide recommendations on how to effectively communicate evaluation results. This can help to ensure that the evaluation findings are accurately understood and used to inform decision-making.

      Evaluators play a critical role in communicating evaluation findings to stakeholders. They are often the experts on the evaluation methodology, data analysis, and interpretation of results. As such, evaluators should be involved in communicating their findings to stakeholders to ensure that the information is presented accurately and effectively.

      However, the extent to which evaluators should be involved in communication efforts can vary depending on the evaluation context, stakeholder needs, and resources available. In some cases, evaluators may take a more active role in communicating their findings, such as presenting results at stakeholder meetings or developing communication materials. In other cases, evaluators may provide technical support to stakeholders in their own communication efforts.

      Regardless of the level of involvement, evaluators should provide recommendations on communicating evaluation results. These recommendations should be tailored to the specific stakeholders and context of the evaluation, and should be based on the evaluator's expertise in data analysis and interpretation. Some recommendations that evaluators may provide include:

      Identify key messages: Evaluators can help stakeholders identify the key messages that should be communicated, based on the most important evaluation findings and implications.

      Use plain language: Evaluators should recommend using plain language that is understandable to the intended audience, avoiding jargon or technical terms that may be confusing.

      Provide context: Evaluators can help stakeholders provide context for the evaluation findings, including the evaluation methodology, data sources, and limitations of the data.

      Highlight implications: Evaluators can help stakeholders identify the implications of the evaluation findings, including what actions or changes may be necessary based on the results.

      Use visuals: Evaluators can recommend using visuals, such as graphs or charts, to help stakeholders understand and interpret the evaluation findings.

       

    • Thanks for your insights. To contribute to your last question, "Should evaluators make recommendations on communicating their results?":

      The answer to this question will depend on the circumstances and on what the requester wants. If the evaluator is involved in research, for instance, my answer is yes: making recommendations here shows the effectiveness of your results and the way forward. If you deal with policymakers, recommendations on communicating your results are a way to translate them into actionable policies. Recommendations also help to clarify your results.

      Thank you