Reporting and supporting evaluation use and influence


Dear EvalForward members,

Use of evaluation has attracted a lot of attention recently, owing to the acknowledged critical importance of evidence-informed decision-making (EIDM) in policymaking, programme development, and project planning and management.

Significant strides have been made by both state and non-state actors globally to strengthen the supply of, and the demand for, evaluations. However, there is still much room for improvement in the use of evaluations. I am interested in hearing from evaluators about the methods, approaches, tools, and techniques they have been using to develop and present findings in ways that are useful for the intended users of an evaluation, and to support those users in acting on them.

As we engage in this discussion, I would also highly appreciate it if you could take 10 minutes to complete this simple and quick survey to help me gather global practices for local benchmarking on this topic. The survey is anonymous, with absolutely no personally identifiable information captured. It is based on the Rainbow Framework's seventh cluster: reporting and supporting use of evaluation findings. The Rainbow Framework (link) is a planning tool that can be used to: commission and manage an evaluation; plan an evaluation; check the quality of an ongoing evaluation; embed participation thoughtfully in evaluation; and develop evaluation capacity. It comprises 34 different evaluation tasks, grouped into seven colour-coded clusters to make it easier to choose and use appropriate methods, strategies or processes for evaluation.

TAKE THE SURVEY

Thank you

Gordon Wanzare
Monitoring & Evaluation Specialist
Chairperson, Evaluation Society of Kenya (ESK)

This discussion is now closed. Please contact info@evalforward.org for any further information.
  • Dear all,

    Thank you for the insightful and very helpful contributions to this discussion I initiated, and for completing the survey: we received 70 responses from evaluators. The discussion has been very rich and, as part of continued knowledge sharing, we will synthesize the contributions, analyse the survey responses, and publish a blog post in the coming days, which I believe you will find helpful. Please be on the lookout for it on the EvalForward website!

    Thank you! Asante! Merci! Gracias! Grazie! शुक्रिया, ඔබට ස්තුතියි, நன்றி, Salamat, Takk skal du ha, Bedankt, Dankeschön ...

  • I agree with John and Silva's earlier comments. 

    Evaluators' responsibility is to give recommendations, not solutions. But good recommendations will lead towards solutions.

    What is missing in evaluation practice is:

    a. Most recommendations are unrealistic and not achievable.

    b. It is high time evaluators cultivated a vision for the future.

    c. Findings and recommendations should be programme-wide (holistic).

    d. Evaluators should design their own evaluation indicators, and this should be included in TORs.

     

    In my view, the most important impact of evaluations is the guidance they provide for future projects, helping them evolve to better serve the intended beneficiaries. Unfortunately, at least for smallholder agriculture development projects, evaluations have instead become a mechanism for promoting projects, regardless of how far they are actually relied upon, or avoided, by the beneficiaries.

    To some degree this is understandable, as the up-front cost of getting a project from conception to an implementing contract, and finally to an opportunity for detailed discussions with beneficiaries, can run to a couple of million dollars and over two years of extensive time and effort by a multitude of people. With that much invested before the implementer has a chance for detailed interviews with beneficiaries to fully ascertain their needs, no one wants to learn that the beneficiaries were not really interested in the activity and were mostly avoiding the project for various valid reasons.

    However, when an evaluation is used more for promotion than guidance, there may be some benefits to the implementer in terms of project extensions and future projects, but no real benefit for the beneficiaries. Instead, projects that by most standards would be considered failures become more deeply entrenched in future development programming, blunting any effort to adjust projects and squandering massive amounts of development investment that could be better used in an alternative approach.

    Please review the following webpage:

    https://agsci.colostate.edu/smallholderagriculture/appeasement-reporting-in-development-projects-satisfying-donors-at-the-expense-of-beneficiaries/

    Preventing an evaluation from becoming a propaganda tool rather than guidance for future projects may depend more on how the data are analysed than on what is collected. This could be as simple as avoiding aggregate analysis in favour of percentage-based analysis. In dealing with smallholder farmers, each managing one to two hectares, one can quickly count an impressive number of individual participants and obtain a flattering appraisal, but this says nothing about what would be possible. If expressed as a percentage of potential beneficiaries, the impressive aggregate number could become highly questionable. The same can be said about the amount of produce marketed through a project. The total volume can sound impressive, but prorated to individual members it may be only a small percentage of their production, perhaps representing only in-kind loan repayments, with the bulk of their business going to the very traders the project is attempting to replace in order to give the beneficiaries a better deal. Converted to a community basis, which is what development projects are built on, it could be a nearly trivial amount.

    The real need is, before beginning the evaluation, to develop a set of targets separating success from failure, expressed as percentages rather than aggregate numbers: the percentage of farmers actively participating in the programme, the percentage of a farmer's produce marketed through the project, and the percentage of the community's produce marketed through the project. For example: 70% of farmers actively participating, instead of 150 farmers involved; or 80% of maize production marketed through the project with only 20% side-sold. This could give a very different perspective on how well a project was received and relied upon by the smallholder beneficiaries, and prompt major adjustments to future projects that would better serve them.
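    The percent-versus-aggregate comparison described here can be sketched as a quick calculation. A minimal illustration in Python; all the numbers are hypothetical, chosen only to echo the examples in the text, not data from any real project:

```python
# Illustrative sketch: comparing an aggregate count with a percent-based target.
# All figures below are hypothetical examples.

def percent(part: float, whole: float) -> float:
    """Share of `part` in `whole`, as a percentage."""
    return 100.0 * part / whole

# Aggregate view: "150 farmers involved" sounds impressive on its own.
farmers_participating = 150
potential_beneficiaries = 1200   # farmers in the project communities

# Percent view: the same number set against the potential beneficiary base.
participation_rate = percent(farmers_participating, potential_beneficiaries)
print(f"Participation: {participation_rate:.1f}% of potential beneficiaries")
# 12.5% - far below a 70% success target, despite the impressive headcount.

# Same comparison for produce marketed through the project.
maize_via_project_t = 40         # tonnes marketed through the project
total_maize_production_t = 500   # tonnes produced by participating farmers
marketed_share = percent(maize_via_project_t, total_maize_production_t)
print(f"Marketed via project: {marketed_share:.1f}% (remainder side-sold)")
```

    The same headline figures read very differently once divided by the relevant base, which is the point being made about success targets.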

    Please review the following webpages:

    https://agsci.colostate.edu/smallholderagriculture/mel-impressive-numbers-but-of-what-purpose-deceiving-the-tax-paying-public/

     https://agsci.colostate.edu/smallholderagriculture/perpetuating-cooperatives-deceptivedishonest-spin-reporting/

    https://agsci.colostate.edu/smallholderagriculture/request-for-information-basic-business-parameters/

    Thank you

  • Dear Silva

    That is beautifully put, and points to the integral value, and values, of an evaluator. I often view our role as both facilitator and translator: understanding the language of context, culture and experience and translating it into the language of technical theories, institutions, resources and decision-making, in the hope of strengthening connection, understanding and positive flow between them, to facilitate the patterns and solutions that emerge.

    Thank you for taking the time to make such a great explanation.

    Kind regards 

    Dorothy Lucks

  • Greetings!

    All too often, people who devote themselves to a field begin to miss the ‘whole’ owing to their very specialisation. It is so easy not to see the forest for a particular species of tree, shrub or bush. This reductivism is all too familiar to most of us, and some have even invented a new phrase to describe it, viz., ‘thinking in silos’.

    Perhaps someone not burdened with expertise in a specific field might see what has escaped a professional. After all, Shakespeare and James Watt did not attend university courses in their areas, yet they managed to achieve a lot. So a humble evaluator might see what has eluded an expert with yards of experience.

    Cheers!

    Lal.

  • Hello

    I practice humility by asking myself a different question:

    If people who have been working on an issue for a long time, with a much better understanding of the context, did not find a good solution... how could I, an external evaluator, do so?

    As an evaluator I certainly cannot find solutions, but I can, with a facilitative rather than an expert approach:

    * help to find "missing pieces" of the puzzle, by bringing together, in one place, the views and ideas of different actors;

    * help to articulate and systematize reality better, so that people have a better map on which to find solutions;

    * capture ideas and lessons that too often remain implicit and that, if shared, can help change ways of working;

    * share ideas about things I have seen working elsewhere (but, watch out: I would always do this in the evidence-gathering phase, as a way to get feedback on these "conversation starters", and people often quickly find a lot of things to be checked and improved);

    * create spaces, within the process, for people to be exposed and react to evidence as it is shared;

    * identify what seem to be the priority concerns to address, linking them to the challenges, opportunities and possibilities surfaced.

    This is not research. And these are not solutions.

    There is a whole world of things amongst "problems" and "solutions"... it includes learnings, possibilities, systematized evidence.

    And I see people really interested and willing to engage with these... much more than when I used to preach simple solutions to them. :-)

     

    Also, an evaluation does not always highlight "problems". There are often so many solutions that are just left hidden.

    And evaluations also have a role in finding these, and in helping to value the work done and the many challenges solved, which should never simply be taken for granted.

  • Olivier Cossee says, “evaluators need to propose reasonable solutions to the problems they raise… the hard part is to propose something better, in a constructive manner.”

    A reasonable solution is, at least implicitly, a Theory of Change. It should be explicit: a tentative goal, a first step toward the goal, and some intermediate steps. The hardest part is that first step. Taking that first step should answer the question, “Might this be worth considering?”

    John Hoven

  • Hi Silvia and all,

    I agree that evaluators are not the only ones trying to find solutions, and that programme managers and decision-makers should not be off the hook, but I do think that evaluators need to propose reasonable solutions to the problems they raise.

    Otherwise I don’t see their value added, nor what makes evaluation different from research. Also, an evaluation that would shy away from proposing solutions would be in my opinion a rather facile and negative exercise: it’s not so hard to spot issues and problems, anyone can do that; the hard part is to propose something better, in a constructive manner. Forcing oneself to come up with reasonable alternatives is often an exercise in humility, in that it forces one to realize that “critique is easy, but art is difficult”.

    All the best,

    Olivier

  • Dear all, 

    I agree this is a wonderful discussion and I am fully in line with Silva.

    It may sound like a platitude here, but I was always convinced that M&E systems should be owned and developed through the engagement of the programme team, stakeholders and beneficiaries. For the last 18 months, I was fully engaged in supporting a programme at this level (under programme direction), thinking it was the greatest opportunity of all time and that I would make the best of it.

    But the result is disappointing, to say the least. Beware whenever you hear « we don't want to shoot ourselves in the foot, do we? », or whenever official communication is about self-promotion and self-congratulation (how fantastic we are, etc.) while beneficiaries have not yet witnessed anything happening in their direct surroundings or daily life.

    I think one of the keys is where evaluation fits in the organizational chart. How do M&E officers at project level interact with M&E officers at programme level, and so on? How do the M&E people in charge at programme level interact with any evaluation office at managing-director level, or with any existing « independent evaluation office » attached to the executive board? A MEAL system is both a support function and serves accountability: how do the two functions coordinate and complement one another?

    Also, do we have M&E professionals at project level but also at stakeholder level (governments, donors and, primarily, beneficiary representatives, etc.), and are they all connected, before we can say an M&E system is in place?

    A good practice I have seen is to have the steering committee (contractually) validate the M&E system at the end of the inception period.

    Another common thought is that evaluation is a state of mind rather than a set of complex technical instruments (Lal mentions how to make ‘planners’ understand an evaluation, which is correct, though sometimes evaluation is not spontaneously understood and has to be pushed internally).

    Then there is the risk of a « double sentence » for final beneficiaries: engaged programme managers who, even intuitively, fully embrace evaluation and make the most of it, while at the same time a « reluctant » ecosystem uses black holes in the organisation chart so that evaluation takes place too late, is not linked to strategic decision-making, and finally extracts nice colour charts with « number of people trained » and the usual stuff.

    Happy to participate, hope this conversation keeps going  ;-)

    Cheers, 

    Sébastien 

    Galea

     

  • Greetings!

    As for an evaluator’s ability to suggest a better approach to solving a problem, I think one must take two aspects of the matter into consideration.

    First, an evaluation is undertaken to ascertain how successful a given approach is in achieving some pre-determined objective. In my example, it was improving the public health of an unnamed country. The political authorities opted for an ultra-modern cardiac unit in the capital of a land where there was hardly any primary health care for the majority.

    During pre-project evaluation, this would be obvious to an evaluator who looks at reality as it is, rather than as an academic exercise. True, it is not always as simple as this seems to be. Even so, I believe an evaluator who is not afraid to apply common sense to the existing local realities of a given place would be able to make some sensible suggestions on generic changes to a plan intended to attain a goal. The evaluator may not be competent to recommend a specific action, but generic changes ought to be within his ken.

    In the ‘public health’ example, it is obvious to an informed evaluator that primary health care has a logical priority over a fancy cardiac unit of limited utility. Of course, he would not be competent to recommend the nuts and bolts of how such a health care system should be established.

    Cheers!

    Lal.

    PS:

    Let us remember evaluation is concerned with enhancing the quality of life of real people in some way, and it is not to be conflated with some abstract enterprise dealing with theoretical entities.

  • Wonderful discussions here! So then, do we really need evaluations? Why spend all the resources to undertake evaluations? Are evaluators then needed? Just random questions popping up in my head after reading this thread! 

    Regards,

    Obando 

     

  • Clarity... of course, absolutely! Elevator pitch... yes and no.

     

    An elevator pitch is very useful as an entry point.

    But there should then be a recognition that the purpose of a good evaluation is to unveil the complexity of reality (without being complicated).

    It can give new elements and ideas, but not the solution.

    The elevator pitch is the entry point, it highlights main areas to be addressed, and it can certainly outline some pressure points.

    But I am not so sure that we can always offer a crisp idea of possible solutions.

    As they say, "for each problem there is always a simple solution. And it is wrong".

     

    Solutions are to be found, as Bob so well said - beyond the evaluation.

    (or within it only if it is a participatory one, where key local actors are truly engaging in formulating findings, and truly own the process)

     

    So the tools and messages we need are not just elevator pitches, but those that help to convey and navigate complexity in simpler, actionable ways.

     

    Being aware that it is not for the evaluator to hammer messages, but for the project stakeholders to own them.

  • Hi everyone!

    I too have been following the thread with much interest. I cannot agree more with Lal about the need for clarity, brevity and freedom from jargon. Until you are able to explain in a few simple words the project you have evaluated to someone who knows nothing about it (like your grandmother or your uncle), you haven’t evaluated it yet. You’ve just collected data, that’s all. You have yet to digest this data, to see what it means, to synthesize it into a crisp, usable diagnostic of problems and possible solutions. Being able to summarize an evaluation into a clear and convincing “elevator pitch” is key to utility. It is also important for the evaluator to be able to hammer this clear message again and again, consistently across different audiences.

    Cheers,

    Olivier

  • Greetings!

    I have followed this discussion with interest, and it seems to me that the point being made here is that evaluation ought to bring about a desirable change in the way a policy/strategy/tactic, i.e., a field implementation, is intended to attain its objective. Otherwise, evaluation would be just ‘much ado about nothing’, be it an impressive report or a set of colourful graphics. Here, I cannot agree more with Silva.

    Other participants have already noted several obstacles to progress such as political expediency, incompetence, corruption, indifference among the decision-makers, lack of resources, unacceptable donor interference, etc. All these assume that a given evaluation has been understood, but ...

    We can hardly take this ‘understood’ for granted; I think this is the point Silva is raising here. If I am right, the question then is what precise form an evaluation ought to take in order to facilitate such an understanding, while hoping that it might induce the policy makers/strategists/field planners to revise their approach towards achieving a pre-determined goal.

    In other words, evaluation would then guide the revision of the previous approach towards attaining the same objective. This process may have to be repeated as other conditions influencing achievement of a goal could change. An extreme example of such an influence is the present Corona infection.

    Here, we have identified two basic problems:

    1. How to make ‘planners’ understand an evaluation.
    2. How to induce them to revise their plans in line with an evaluation. It seems that this is far more difficult, especially in view of the obstacles we have just mentioned earlier.

    However, restricting ourselves to our first question, I might suggest an evaluation take the form of a short critique of the generic actions a plan embodies. As a concrete example, let us say a plan suggests that, in order to improve public health, the authorities put up an ultra-modern cardiac unit in the capital of a country. The donor is full of enthusiasm and endorses the project. Meanwhile, the country involved hardly offers primary health care to its citizens.

    Here, in my view, the pre-project evaluation would be short and lucid, and would run as follows:

    “This project would have an extremely limited beneficial effect on the public health of the country, and it is proposed that the available funds and human resources be deployed to provide primary health care at centres located at X, Y, Z etc.” This is something that has actually happened, and I have suppressed the country’s and donor’s names. I do not think the actual evaluation report looked anything like my version, but it must have been impressive in its thickness and uselessness.

    So, are evaluators willing and able to concentrate on the practical and guide the hands that feed them towards some common good, with a few lucid, jargon-free sentences?

    Cheers!

    Lal.

     

  • Dear Mauro, 

    I trust this mail finds you well. Certainly, as Dorothy says, evaluation has a component of presenting the findings and gathering feedback before the final report.

    For me, that is the best part of the evaluation exercise. In some cases, I have requested organizations to ensure good representation during the presentation of findings and feedback, including different levels such as sub-national, central and, sometimes, field officials.

    The benefits of presenting evaluation findings are that you are able to verify the data and gather further qualitative information, as well as overcome the misunderstanding that evaluation is a fault-finding exercise.

    You can also hold pre-discussions with the stakeholders on the evaluation process.

    Isha

  • Great take-away...

    One point to stress.

    Going beyond the report does not mean "make a visual report".

    A visual report is nicer, but still a report.

    "Going beyond the report" means considering the evaluation as a process that does not end with just one product, visual or otherwise.

    Communication of findings, sharing of ideas need to happen throughout, in many forms.

    A good evaluation does not need to be a "report".

    I advocate for strategies, options for sharing ideas and findings with different audiences, throughout.

    Which might NOT include a report. Report writing is extremely time-consuming and takes up a big percentage of evaluation time.

    Is it the best investment? Is it needed? We are so used to thinking that an evaluation is a report that we do not question it.

    Also... besides real-time evaluations, there is "real-time information sharing".

    This is something too little explored. Yet it can create big changes in the way evaluation happens.

    It is about sharing preliminary ideas and evidence, so that the people involved in the evaluation can contribute to shaping findings.

    Again: we are so used to thinking that we only share "end products" that the possibilities of real-time information sharing are not really understood...

    Thanks again for the great summary; it really helps to step up the discussion and to generate new ideas.

    (and, you know what? It is a good example of "real-time information sharing" of new ideas! :-)

    To Dorothy's excellent observation that political reasons often overpower the factual, objective findings of an evaluation report, I will add another factor that rears its head, often an ugly one: financing.

    Too often, financial considerations, rightly or wrongly, become the sole criterion for the next steps, not the content of an evaluation report.

    Sad, but true.

    V. Suresh

    Council of Europe

    Strasbourg, FRANCE

  • Dear Mauro

    You raise a good point. There is usually feedback prior to finalization of the evaluation report. Often this comes mainly from the internal stakeholders of the initiative (policy, programme, process, project) being evaluated and from the commissioner of the evaluation. This is extremely useful and helps to ensure that reports are of good quality and that recommendations are crafted to be implementable. Unfortunately, the stakeholders for the evaluation content are often not the decision-makers for resource allocation or future strategic actions. Consequently, while there is a formal feedback process, the decision-makers often do not engage with the evaluation until after it is complete. For instance, we are currently evaluating a rural health service. There are important findings, and the stakeholders are highly engaged in the process. But the decision on whether the service will be continued is made centrally, and is likely to be made for political reasons rather than on the evaluation findings. Evaluation needs to gain a higher profile within the main planning ministries to exert influence on the other ministries to take decisions based on evidence rather than politics. We are still a long way from this situation, but the shift to evaluation policy briefs is a good move that gives ministerial policy officers the tools to properly inform decision-makers.

    Kind regards

    Dorothy Lucks

  • Dear all,

    Knowledge, experiences, and thoughts being shared on this topic are very insightful and helpful. Thank you for your contributions! Here are some of the takeaways I have picked so far. More contributions/thoughts are most welcome. Let's also remember to complete the survey.

    Concise evaluation report

    A voluminous evaluation report bores the reader, who skims over things they already know while trying to get to the point. Very few people, including intended evaluation users, will spend time reading huge evaluation reports. In fact, even evaluators are unlikely to re-read a report once they have finalized it! Some of the recommendations are:

    • Make an executive summary of fewer than 4 pages (printed on both sides), highlighting the findings, conclusions, and recommendations based on the findings.
    • Make a summary of fewer than 10 pages, with more tables, diagrams, and findings in bullet points.
    • Keep the full report to around 50 pages.
    • Highlight changes (or the lack of them) and point out counterintuitive results and insights on indicators or variables of interest.

    Beyond the evaluation report: use of visuals

    As long as evaluations are perceived mainly as bureaucratic requirements and reports, we will miss out on fantastic possibilities to learn better. It is unfortunate that we assume "report writing" alone is the best way to capture and convey evidence and insights. Communicating evaluation findings in a concise, comprehensible, and meaningful way is a challenge. We need both literal and visual thinking to promote the use of evaluation, summing up findings in a more visual way through graphics, drawings, and multimedia. For example, the UN World Food Programme (WFP) in the Asia-Pacific region is combining evaluation with visual facilitation through a methodology called EvaluVision. It is helpful to involve people who have fantastic learning, analytical, and communication skills but who are not necessarily report writers.

    However, the challenge is that visuals are often seen as merely "nice" and "cool". Everyone likes them and feels they are useful, but a conventional report still has to be produced, because this is what evaluation commissioners, including funders, want.

    A paradigm shift  in making recommendations

    Often, there are gaps between findings, conclusions, and recommendations in an evaluation report, which can negatively affect the quality and use of evaluations. Traditionally, evaluators proceed straight from conclusions to recommendations; however, letting the project implementation team bring a policy-maker on board to jointly draft actionable recommendations can improve evaluation use. The evaluator's role is to make sure all important findings and results are translated into actionable recommendations, by supporting the project implementation team and policy-maker to remain as close to the evaluation evidence and insights as possible. This can be achieved by asking questions that help to get to actionable recommendations, and by ensuring a logical flow and empirical linkage between each recommendation and the evaluation results. The aim should be for the users of the evaluation to own the recommendations while the evaluation team owns the empirical results. With the participation of key stakeholders, evaluation results are relatively easy to sell to decision-makers. Stakeholder analysis is therefore key to understanding the interest, influence, and category of stakeholders, in order to better support them in using evaluations.

    Lessons from audit practice: Can management feedback/response help?

    Should feedback be expected from users of an evaluation? Typically, draft evaluation reports are shared with the implementers for review and approval. In the auditing field, feedback within a short time is mandatory: the client must respond to the auditor's observations, both positive and negative. Perhaps, as mentioned elsewhere, working with the users of the evidence generated through an evaluation, in the form of findings and conclusions, to make actionable recommendations may serve as a form of management feedback/response. However, the communication and relationship should be managed carefully so that the evaluation is not perceived as audit work, just as in some cases it is perceived as "policing".

    The Action Tracker

    An Action Tracker (in MS Excel or any other format) can be used to monitor over time how the recommendations are implemented. Simplifying the evaluation report into audience-friendly language and formats, such as a two-page policy brief, an evaluation brief, or an evaluation brochure based on specific themes that emerged from the evaluation, is a practice that is very helpful for a couple of reasons:

    • Evaluators are not the sole players; there are other stakeholders with better mastery of the programmatic realities.
    • The implementation team gets space to align their voices and knowledge with the evaluation results.
    • The end of an evaluation is not, and should not be, the end of its use; hence the need for institutions to track how recommendations are implemented, for remedial actions, decision- or policy-making, the use of evaluation evidence in new interventions, etc.
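    One possible shape for such a tracker, sketched in Python purely as an illustration (the text specifies only "MS Excel or any other format"; the field names, owners and statuses below are hypothetical). It exports to CSV so the result opens directly in Excel:

```python
import csv
from collections import Counter
from datetime import date

# Minimal Action Tracker sketch: one row per recommendation, with an
# owner, a due date and an implementation status. Field names are
# illustrative, not a standard.
actions = [
    {"recommendation": "Shorten executive summary to 4 pages",
     "owner": "M&E unit", "due": date(2021, 12, 31), "status": "in progress"},
    {"recommendation": "Adopt percent-based participation targets",
     "owner": "Programme team", "due": date(2022, 3, 31), "status": "not started"},
]

# Export to CSV so the tracker can be opened and maintained in Excel.
with open("action_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["recommendation", "owner", "due", "status"])
    writer.writeheader()
    for row in actions:
        writer.writerow({**row, "due": row["due"].isoformat()})

# A periodic review can then summarise implementation progress by status.
status_summary = Counter(a["status"] for a in actions)
print(status_summary)
```

    The periodic status summary is what turns the tracker from a filing exercise into a monitoring tool: the institution can see at each review which recommendations are stalled.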

    Alliance and relationship building for evidence use

    Typically, there are technical and political sub-groups or teams. In some situations, technical teams report to an administrative team that interfaces with the policy-makers. Evaluators often work with the technical team and may not get access to the other teams; when that happens, the report and recommendations count for little, irrespective of the process followed. A particular concern is the delay between report submission and policy action in developing countries. Institutionalizing the use of evidence is key to enhancing the use and influence of evaluations, but may take time, particularly for structural (top-down) changes. Having top management fully support evidence use is a great opportunity not to miss. However, small but sure steps to initiate change from the bottom, such as building small alliances and relationships for evidence use, gradually bringing more "influential" stakeholders on board, and highlighting the benefits of evidence and how impactful it is for the implementing organization, decision-makers and communities, are also very helpful.

    Real-Time Evaluations

    Evaluation needs to be rapid and timely in an age of pandemics and crises. We need to 'communicate all the time'. One dimension of data quality is timeliness: the length of time between data becoming available and the events or phenomena they describe. Timeliness is assessed against the time period within which the information is still of value and can be acted upon. Evaluations should be timely for them to be of value and acted on.
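    The timeliness notion can be made concrete with a small sketch (the dates below are hypothetical, purely for illustration):

```python
from datetime import date

# Timeliness as a data-quality dimension: the lag between an event and
# the moment data describing it becomes available. Dates are hypothetical.
event_date = date(2021, 6, 1)        # when the phenomenon occurred
data_available = date(2021, 6, 15)   # when the data became usable
timeliness_lag = (data_available - event_date).days
print(f"Timeliness lag: {timeliness_lag} days")  # 14 days

# The information has value only if it arrives within the window for action.
decision_deadline = date(2021, 6, 30)
actionable = data_available <= decision_deadline
print("Still actionable:", actionable)  # True
```

    The same check applied to an evaluation report delivered after the decision deadline would flag it as no longer actionable, which is the point about timeliness above.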

    Beyond evidence use

    The ultimate reason for evaluation is to contribute to social betterment or impact. This includes, but at the same time goes beyond, the mere use of evaluation results to change policies or programs. In this way, the use of evaluation per se stops being evaluation's final objective, since evaluation aims at changes that promote improvements in people's lives. Demonstrating how an evaluation contributes to socio-economic betterment or impact can enhance the value, influence, and use of evaluations.

    Gordon Wanzare

  • Dear Isha, Silva, Dorothy, and all,

    Great to hear the same frustrations and aspirations for evaluation use and influence.

    I was drawing a session at Asian Evaluation Week, happening this week, and the topic that became clear to me was real-time evaluation. Evaluation needs to be rapid and timely in the age of pandemics. We need to 'communicate all the time'. Please find a drawing attached.

    keisuke graphic

    If you haven't signed up, please join the EvaluVision LinkedIn group. It has about 100 members, but the group discussion is still quiet. I really need your active participation to stir up this network of people who want to bring visual thinking to evaluation, leading to action: not learning for the sake of learning, but learning to make better decisions and apply them.

    https://www.linkedin.com/groups/13949912/

    And tomorrow, Sept 8, from 11:00 a.m. to 12:00 noon (Manila time), EvaluVision will be featured at Asian Evaluation Week. Please register at https://adb.eventsair.com/aew/ The learning session starts at 9:00 a.m.

    Cheers,

    Keisuke


  • Good morning everyone.

    I am a verifier of complex technical projects for an Italian company, and have long been very interested in the evaluation of international social and technical projects.

    From reading the emails, I see that a lot is said about the length of the reports and the best way to make them easily understandable.

    What I didn't understand well is whether feedback is expected.

    In the audit reports that I produce for my company, feedback within a short time is mandatory: the client must respond to my observations, whether positively or negatively.

    These responses help to improve both my work and the client's work in solving the problem.

    May I ask whether this also happens with your evaluation reports?

    Thanks

    Mauro Fianco

  •  Dear Isha and all

    Well said. I agree. With all the new tools that we have in our hands there is opportunity for evaluation to be more vibrant, less bureaucratic and ultimately more useful!

    Kind regards

    Dorothy

  • Dear all,

    I agree with Silva, evaluation reporting is very bureaucratic. Even if people don't read it, a big report still signals to everyone "work well done".

    But very few will read it all, maybe not at all; they may only read the executive summary.

    The requirements for evaluation reports should come with the TORs; sadly, however, TORs are often uninformative and lack innovation or vision. Mostly, I should say, they are cut and paste.

    Evaluation hasn't changed much on the above aspects. 

    Best regards 

    Isha

  • Oh, well done!

    Great to see that there is some recognition of the value of pictures and visuals.
    The materials you shared are really helpful and inspirational, thanks.

    Now... as someone who thinks visually and in pictures, I have consistently tried to sum up findings in a more visual way.
    Graphics, drawings and multimedia are seen as "nice" and cool. Everyone likes them and feels they are useful.

    But, guess what? I then have to produce a normal report, because this is what donors want.
    So, visuals are to be done as an aside. Of course, for free.

    The time allotted for reporting in a consultancy is usually already insufficient, so if you want to prove that visuals or other media work better, you basically need to work for free.
    Because, at the end of the day, you will still have to write the proper report.

    The bottom line?

    As long as evaluations are mainly perceived as bureaucratic requirements and reports... we will miss out on fantastic possibilities to learn better.
    And also, to involve people who might have fantastic learning, analytical, communication skills, but who are not report writers.
    It is so unfortunate that we assume that "report writing" alone is the best way to capture and convey evidence and insights...

  • Dear Gordon and colleagues,

    Communicating evaluation findings in a concise, comprehensible and meaningful way is a challenge. How many people actually read a 150-page report?

    UN WFP in Asia Pacific region is attempting to solve this issue by combining Evaluation with visual facilitation. The methodology is called EvaluVision.

    The 3-minute explainer video and e-book will hopefully give you an idea of how to use visualization and facilitation techniques to design, validate, and disseminate evaluations. There are more videos and case studies in the e-book.

    Next week, on 8 September from 9:00 a.m. to 12:00 noon (Manila time), we will present EvaluVision as part of the ADB-WFP learning session "Engage to Communicate: Stakeholder Analysis for Communicating Evaluations" during the Asian Evaluation Week. The learning session runs 9:00-12:00, with EvaluVision set for 11:00-12:00. For more information, please visit the AEW site. I attached the e-flyer as well.

    We need both literal and visual thinking to make use of evaluation.

    Dear all, I totally agree with your comments. The ultimate reason for evaluation is to contribute to this social betterment or impact. This includes, but at the same time goes beyond, the mere use of evaluation results to change policies or programmes. In this way, the use of evaluation per se stops being evaluation's final objective, since evaluation aims at changes that promote improvements in people's lives.

    With this understanding of evaluation, we have identified seven stories of evaluations that made a difference in Latin America. We have published these stories in the book "Leaving a Footprint", edited by Pablo Rodriguez Bilella and me, available for free in Spanish and English. You can access the book trailer here. The book illustrates how evaluation itself has the potential to produce a positive impact on people's lives.

    The following illustration from the book summarizes the main idea behind these seven experiences.

    image by Tapella

  • Is there any chance that we could stop thinking that an evaluation is... a report?

    So many possibilities would be unlocked.

  • Dear Jean and Dr Rama Rao,

    Thank you for this very instructive exchange of experience, especially regarding the management response to an evaluation.

    I very much like the statement, while rather obvious, that 'the end of an evaluation is not the end of the evaluation'. Formulated that way, it can support efforts to organize follow-up processes for recommendations.

    I'll draw inspiration from your more than 'two cents' for our own organisational improvements in that domain.

    Looking forward and sincerely yours,


    Anne-Pierre MINGELBIER

    Head of Evaluation Unit

    RBM Change Program Coordinator

  • Dear Jean, Gordon and others,

    Thanks for such a good topic for discussion. Sadly, you are correct in raising these issues.
    Many times I have seen evaluation reports that are bulky, with too many things to read, and with a big gap between findings and recommendations.

    Writing too many pages bores readers, who end up reading things they already knew or skimming to get to the point.

    My recommendation is: keep it simple.

    1. Make the executive summary fewer than 4 pages (printed on both sides), highlighting the findings, conclusions and recommendations based on the findings.

    2. Make a summary of fewer than 10 pages, with more tables and diagrams, and findings in bullet points.

    3. The full report should be around 50 pages.

    Best regards
    Isha

  • Dear Jean,

    Thanks for the crisp summary of evaluators' experiences and concerns. You have shared the articulated and unarticulated views of most evaluators. The practice of giving space to recommendation developers and implementers is a lesson learned. In practice, evaluators need the full 100-200-page report to record the facts. This is scientific and ethical too. The shorter version with a set of recommendations is made as per the objectives of the study (ideal) and the expressed needs of the sponsor (?), often more towards the latter. A lot happens between these two. For the evaluator, the bulky report, whether someone uses it or not, satisfies the evaluator's scientific urge.

    Some thoughts on the sponsor's (implementer's) role. In reality, the evaluator's report with recommendations is appraised by the sponsor, who keeps the few recommendations compatible with, or of priority to, them. This is a natural process.

    The difficult part is coping with the expectations of the sponsor's sub-groups. Typically, there are technical and political sub-groups or teams. In some situations, technical teams report to an administrative team which interfaces with the polity. We often work with the technical team, and may not get access to the other teams. Technical teams in turn keep changing their expectations as per the administrative/political system's expectations of a policy or policy change. This is sensitive and can change rapidly, much more rapidly than the evaluation itself. This is the crux. The report and recommendation parts are trivial irrespective of the process by which they are made. The issue of concern is the delay between report submission and policy action in developing countries.

    Regards,

    Dr D  Rama Rao
    Former Director, ICAR-NAARM, Hyderabad
    Former ND, NAIP & DDG (Engg), ICAR, New Delhi
    Mobile:+91 9441273700

  • Dear Jean,

    I would take this opportunity to thank you. In most developing countries, evaluators do their part very well, but when it comes to sharing the information with policymakers and other important stakeholders so that actionable recommendations can be taken up, we leave the reports on the shelves.

    But today I have learned a lot from you, and thank you for your good advice. This is the way forward.

  • Dear Gordon and colleagues,

    Before sharing my two cents, let's consider a lived experience. I participated in a five-year project evaluation with a team of 4 evaluators. A couple of colleagues co-designed the evaluation and collected data. We joined forces during the analysis and reporting and ended up with a big report of about 180 pages. I have never seen fans of big reports, and I am not a fan either. To be honest, very few people will spend time reading huge evaluation reports. If an evaluator is unlikely to read (once finalized) a report they have produced, who else will ever read it? Off to recommendations. At the reporting stage, we highlighted changes (or lack thereof); we pointed out counterintuitive results and insights on indicators or variables of interest. We left it to the project implementation team, who brought on board a policy-maker, to jointly draft actionable recommendations. As you can see, we intentionally eschewed the established practice of evaluators always writing the recommendations.

    Our role was to make sure all important findings or results were translated into actionable recommendations. We supported the project implementation team to remain as close to the evaluation evidence and insights as possible. How would you scale up a project that has produced this change (for positive findings)? What would you do differently to attain the desired change on this type of indicator (areas for improvement)? Mind you, I don't use the word 'negative' alongside findings. How would you go about getting the desired results here and there? Such questions helped us get to actionable recommendations.

    We ensured the logical flow and empirical linkage of each recommendation with the evaluation results. In the end, the team owned the recommendations while the evaluation team owned the empirical results. Evaluation results informed each recommendation. Overall, it was a jointly produced evaluation report. This is something we did for this evaluation, and it has been effective in other evaluations. With the participation of key stakeholders, evaluation results are relatively easy to sell to decision-makers.

    In my other life as an evaluator, such recommendations are packaged into an Action Tracker (in MS Excel or any other format) to monitor over time how they are implemented. This is the practice in institutions that are keen on accountability and learning, or that hold their staff and projects accountable for falling short of these standards. For each recommendation, there is a timeline, a person or department responsible, a status (implemented, not implemented, or ongoing), and a way forward (part of the continuous learning). Note that one of the recommendations is about sharing and using evaluation results, which requires extra work after the evaluation report is done: simplify the report into audience-friendly language and formats such as a two-page policy brief, an evaluation brief, or an evaluation brochure based on specific themes that emerged from the evaluation. I have found such a practice very helpful for a couple of reasons:

    (i) evaluators are not the sole players; there are other stakeholders with better mastery of the programmatic realities

    (ii) the implementation team has space to align their voices and knowledge with the evaluation results

    (iii) the end of an evaluation is not, and should not be, an end of the evaluation, hence the need for institutions to track how recommendations from the evaluation are implemented for remedial actions, decision- or policy-making, using evaluation evidence in new interventions, etc.

    Institutionalizing the use of evidence from evaluation takes time. Structural changes (top-level) do not happen overnight, nor do they come out of the blue; there are small but sure steps to initiate change from the bottom. If top management fully supports evidence use, it is a great opportunity not to miss. Otherwise, don't assume; use facts and work with the culture within the organization. Build small alliances and relationships for evidence use, and gradually bring on board more "influential" stakeholders. Highlight the benefits of evidence and how impactful it is for the implementing organization, decision-makers and the communities.

    Just my two cents.

    Over to colleagues for inputs and comments to this important discussion.

    Jean Providence