What can evaluations do in terms of capacity development?

25 contributions

Dear members,

The FAO Office of Evaluation aims to take an inclusive and participatory approach throughout its evaluations. This involves consulting key stakeholders on evaluation use and deliverables, and holding stakeholder workshops to discuss evaluation conclusions and recommendations.

This inclusive approach has proved to be an effective way to minimize conflict, increase ownership and build capacity in both functional and technical skills throughout the evaluation process.

Some of the functional skills developed during our evaluations relate to analytical capacity, creative thinking, active listening and problem solving.

On the technical side, after our inception workshops most stakeholders report having a better understanding of the theory of change and of other tools to map results (e.g. outcome harvesting), as well as of the difference between tracking outputs and monitoring outcome-level indicators related to agriculture and rural development.

We are curious to hear about any experience you have of evaluations that led to the development of capacities, even approaches very different from the one the FAO Office of Evaluation is adopting.

  • Do you have any experience of evaluations that helped develop the capacities of evaluands and other actors, including beneficiaries? It would be great if you could share some practical examples.
  • What about participatory evaluations? Did those lead to any clear development of capacities (including soft skills)?
  • Did you evaluate the capacity development component of the evaluation itself, and how?

Your experience can feed into the Capacity Development Evaluation Framework that we are developing and help in valuing the capacity development component of evaluations.

Many thanks!

Luisa Belli and Lavinia Monforte

FAO Office of Evaluation

This discussion is now closed. Please contact info@evalforward.org for any further information.
  • Dear members, 

    Below is a summary of the main points raised during the discussion. Many thanks to all participants!

    Luisa and Lavinia

    Evaluation and capacity development

    • Evaluation is first and foremost a learning process that involves the evaluators as well as the ‘evaluand’ stakeholders.
    • Learning is one of the most influential factors for the long-term sustainability of interventions.
    • Capacity development activities during the evaluation process may include, for instance, rebuilding the theory of change (ToC) with the evaluation stakeholders.
    • Learning, and therefore capacity development, are more limited when the evaluation stakeholders do not engage, or when the evaluator does not involve them.
    • Evaluation can help identify training needs and provide mechanisms for better and more effective development of capacities.

    Participatory approaches in evaluation and capacity development

    • Involving stakeholders in the evaluation process influences the extent to which they will enhance their capacities as a direct result of the evaluation.
    • The soft skills of evaluators, such as communication and facilitation skills, are key to ensuring effective participatory approaches; this may go beyond what can be taught in a classroom and requires focused work on personal attitudes and a lot of practice.
    • Using consultative groups can be an effective way to involve stakeholders’ representatives throughout the evaluation process. It usually enhances the likelihood of them accepting and acting upon the recommendations.
    • Cited methods applied in participatory evaluations include: Outcome Harvesting, Resource Mapping, Institutional Mapping and Community Farm calendar.
    • The recently released guide “Inclusive Systemic Evaluation for Gender equality, Environments and Marginalized voices” (ISE4GEMs) offers guidance for developing capacities through participatory approaches.

    A systemic and integrated approach across capacity development dimensions is key to supporting an evaluation culture in countries. In Kenya, for instance, evaluations at county level conducted under the EvalSDGs/EvalVision programme target a wide range of stakeholders, allowing a vertical integration of capacities and the emergence of an enabling environment for evaluation. Another example is the “Focelac” project in Costa Rica, which targets both individual and institutional capacities while at the same time creating a favourable environment for evaluation through the promotion of norms and standards, data availability, etc.

    Finally, some members shared suggestions on how to evaluate training and capacity development through a simplified version of the Kirkpatrick model. Other templates that can be used to track the outcomes of training events were also made available during the discussion.

  • I have practical experience of conducting a number of nutrition and food security evaluations using participatory evaluation methods. Representatives of all stakeholders took part in the evaluation process, so they were part of the team that debriefed the other stakeholders on the results. You can therefore present your findings and recommendations without fear of rejection by any of the stakeholders, and there is a high likelihood that the stakeholders will use the evaluation.

  • Thank you for everyone's contributions, which are enlightening in more ways than one. Indeed, beyond all the technical aspects lies a major issue: the participation of stakeholders. An evaluation is only worthwhile if it has a formative purpose. This is why it is necessary to ensure that stakeholders are involved in the evaluation process.

    Consider the case of the evaluation of an international solidarity project implemented by civil society organizations (CSOs) on the basis of grants awarded through calls for proposals.

    An evaluation committee comprising programme managers, the CSOs and the beneficiaries was set up, and the evaluation mandate was explained to it. The committee followed the process through to the end. The programme logic was better substantiated and reliable data were collected. In the end, the evaluation results were accepted by all, as they reflected reality thanks to the methodological rigour of the evaluation.

    The managers committed to implementing the recommendations to improve the programme's implementation processes.

    As a reminder, we are in a learning context in which the evaluation function should not be distrusted: beyond being a control function, it is also a pedagogical tool for improvement.

    Ultimately, if there is one thing to remember about programme evaluation, it is that it reinforces sound management practices in a context of results-based management. It is from this perspective that all stakeholders should engage.

    Best regards,

    Christian.

  • Dear EvalForward members,

    Thank you for engaging in this lively discussion and for sharing your experiences with evaluations that helped in developing capacities of evaluands and of beneficiaries.

    As evaluators, we know the difference it makes to truly engage with the evaluands and the programme team and how this influences the results and use of the evaluation. As Anis Ben Younes rightly said, evaluation in such cases becomes a learning process for all those who are involved, a process of sharing knowledge for the benefit of improving our actions, programmes and policies.

    The examples shared from Costa Rica and Kenya, where the capacity development and participatory aspects of evaluations are central, show how this approach is instrumental in supporting ownership of the evaluation at the institutional level. The experiences presented in using participatory methodologies such as outcome harvesting, resource mapping and institutional mapping helped communities improve the use of their assets and knowledge.

    Certainly, there are still challenges to address, such as gaps in evaluators' capacities, cases where evaluands and project officers refuse to get involved in the evaluation, and evaluators who work in isolation and do not engage.

    Feel free to keep sharing your lessons and experiences on developing capacities of evaluands and other actors through the evaluation process.  

    As mentioned, we are working on a capacity development evaluation framework that we will pilot in upcoming evaluations, to see how it can be applied to different types of evaluations, such as project, strategic, thematic and country programme evaluations.

    Please note that we will draft a summary of all contributions and resources shared by participants and circulate it through the Community. 

    Luisa and Lavinia

     

  • Cécile Ebobisse

    Hello,

    I accompanied a friend in the evaluation of the gender profile of an agricultural programme, the PNIA. This was a very rewarding experience that allowed me to discover and apply the gender marker, to analyse gender mainstreaming in rural areas and to make recommendations. We noted that the share of the budget reserved for women's activities was very small.

    Cécile EBOBISSE

    Cameroon

     

  • In Benin, we have not yet addressed the evaluation of capacity development.

    Capacity development is something we are working on. Currently, we are in partnership with the Center for Sociology Studies and Political Science of the University of Abomey-Calavi (CESPo-UAC) to develop a Certificate in Analysis and Evaluation of Public Policies, a three-week certificate course for evaluation actors who wish to strengthen their capacities in this area. In addition, the Journées Beninoise de l'Evaluation are an opportunity for us to train (in one day) government actors, NGOs and local authorities on different themes. Apart from that, we organize three-to-five-day training seminars for these same actors. This year, for example, we will train (in five days) the managers of the planning and monitoring-evaluation services of the 77 communes of Benin on the elaboration or reconstruction of the theory of change of the Communal Development Plans (their strategic planning documents). We did the same the year before for Government actors and NGOs.

    But we have never undertaken to evaluate these capacity developments. We will get there gradually.

  • Dear All,

    This is an interesting discussion; please see my comments below.
    Evaluation is not just about doing the work and writing a report. It also includes competencies in effectively designing, managing, implementing and using M&E, and it includes strengthening a culture of valuing evidence, valuing questioning and valuing evaluative thinking. Evaluative thinking is not just looking at the programme and the data analysis and giving conclusions and recommendations; it is also forward thinking beyond the programme: looking for an unexpected theory of change rather than the expected one, visionary thinking, broad horizontal approaches. These are not capacities that can be learned in training classes or workshops alone; they also need to be developed in the field. Joint and participatory evaluations are one way to gain this experience.

    Isha

  • Dear Luisa and Lavinia,

    Thanks for highlighting the critical need to focus on capacity building for various stakeholders, towards more participatory and effective evaluation.

    Currently, the Evaluation Society of Kenya (ESK), jointly with government (at national and county levels) and with funding support from the World Bank, is undertaking a pilot project in two of Kenya's counties.

    This takes place under the EvalSDGs/EvalVision programme (2016-2020), which promotes the evaluation of the SDGs and their alignment with our country's Vision 2030, Devolution and "Big Four" agenda [Food and Nutrition, Universal Health Coverage, Affordable Housing and Manufacturing].

    The devolved level is the center of development execution in our country.

    The initial focus is on the water and health sectors. The activities are advocacy, trainings and rapid evaluations for related projects in each sector.

    The project puts special focus on strengthening the capacities of the various stakeholders, towards their effective participation in the rapid evaluations and their utilization of the findings for more evidence-based investment choices and service delivery.

    So far, successful advocacy events have been held to build buy-in/ownership and raise awareness of evaluation (which has been left behind in our country vis-à-vis monitoring).

    These have been completed for one of the counties.

    They have targeted various categories of stakeholders, each with different targeted messaging, as follows:

    • High political and executive leadership (governor/MPs, senator and cabinet secretaries at that level).
    • Members of the County Assembly (MCAs - the equivalent of MPs at the national level, with oversight, budgetary and citizen-representation roles). Their buy-in, like that of the above category, is deemed very critical to inculcating a national culture and practice of evaluation (currently weak).
    • Technical teams including directors, chief officers, planning and M&E officers.

    More stakeholder participation will be effected during the rapid evaluations, including for the public and beneficiaries. The findings will also inform the earmarked trainings, which will be customized accordingly.

    It is expected that this will be replicated in the remaining 45 counties and across all sectors as we go along, in due course.

    NB: See more by scrolling down these links:  

    https://mobile.twitter.com/esk_kenya  

    https://m.facebook.com/EvalSocietyKE/

    Kind Regards,

    Jennifer

    Evaluation Society of Kenya

  • Greetings!

    Naturally, it is important to enhance the skills of evaluators; but apart from some general considerations applicable to every evaluator, one must not overlook that the wide variety of projects involved makes it necessary for an evaluator to develop certain skills specific to each project type. For instance, the skill needed to assess a road is categorically different from what is needed to evaluate the successful completion of a health facility, say, a hospital.

    Let us assume that a given project has been successfully completed, and an excellent evaluation has been made. Is it reasonable to assume that is all that is required? Some may be tempted to say: what else? We have done what we were hired for, and our job is done well. True, as far as it goes.

    If we are content with that, I think we have missed something crucial. It is simply this: when the celebrations are over and the project personnel and evaluators depart, how well will the beneficiaries utilize what has been put in place? Would they be able to undertake necessary maintenance and improvements on their own? Would they be able to make good use of it? Or would it remain a successfully completed monument to the planners' lack of a sense of proportion: in other words, a white elephant or a prestige project of little or no utility?

    It is this aspect of capacity building that I tried to bring to the fore in my first comment on this subject. I believe it is the duty of an evaluator to ascertain the public's ability to use what is planned and, if necessary, to induce the planners to incorporate into project plans measures to enhance users' competence to benefit from it.

    Cheers!

    Lal Manavado

    Senior advisor

    Norwegian Directorate of Health

  • Dear Mesfin,

    Thanks for sharing your experience and tools for participatory M&E and especially how these are applied in conflict environments.

    I have also worked with both tools (Resource Mapping and Institutional Mapping), and my experience is quite similar to yours.

    Almost all of the projects under my PMU (Program Management Unit) contribute to building resilience to climate change effects, with a specific focus on food and nutrition security.

    In order to measure and track the resilience levels of our target population, we conducted a study called Resilience Profile Analysis, using software pioneered by FAO, the SHARP tool (Self-evaluation and Holistic Assessment of climate Resilience of farmers and Pastoralists), for data collection and analysis.

    The field exercise was conducted in two phases: (1) field data collection using the questionnaire contained in the SHARP tool; and (2) a detailed exchange with the study groups to understand the socioeconomic and other factors that underpin their resilience scores. To conduct the second phase of the study, we applied different Participatory Rural Appraisal (PRA) tools, including community resource and institutional mapping as well as the community farm calendar.

    Each of these tools helped us understand the real dynamics of the communities studied: the way the community is organized; their main activities in a year, including social and economic activities; the resources they share (or lack) and their location within the community; as well as the different institutions that interact with the community and for what purpose. The community could literally draw their village map from this exercise!

    This exchange, as it related to resilience building, revealed some good lessons about how community members live and relate to one another, especially when disaster strikes. In fact, for one of the communities, which had actually experienced flooding the previous year, the study revealed that unity and care among community members are important contributors to resilience. It also revealed that where a community uses shared resources for its livelihoods, this strengthens everyone's resilience. From another perspective, an analysis of the institutional map brought to the table key concerns about the effectiveness and sustainability of the different initiatives supported or driven by some of the institutions operating in the communities, fostering an emerging spirit of community ownership and accountability on the part of stakeholders and driving the spirit of sustainability. Emerging behavioural change became apparent.

    In essence, therefore, I want to emphasize that climate change is in fact like a conflict zone, and development work in resilience building shares some experiences with working in conflict zones. Some of the tools could be borrowed and applied between the two situations, albeit with the focus tweaked depending on the sociocultural context and time. I encourage you to look up the SHARP tool on the internet so you can become familiar with its key indicators of resilience, which I appreciate can be related to the key development concerns in conflict zones (http://www.fao.org/in-action/sharp/en/).

    Thoughts!

    Paul Mendy

    The Gambia

  • Good evening, dear members of EvalForward,

    I think capacity building, or "LEARNING", is the essence of evaluation. As members of the evaluation professional community, we have always insisted on its particularity and distinction from other functions such as inspection or audit, with emphasis on at least two things: values (focus on the human) and learning. Moreover, in recent years several INGOs have transformed their M&E function into MEL, where L = Learning.

    Now, one can ask the question: capacity building (I prefer the term capacity development) of whom? How? Why?

    ... All actors involved in the evaluation process learn from it. ALL, including commissioners, donors, government, users of deliverables and beneficiaries, especially when it is a participatory approach.

    Through its questioning approach, evaluation encourages challenging everything taken as self-evident in a project: verifying the relevance of its theory of change, the effectiveness of its intervention, the efficiency of its use of resources, the sustainability of its results and of the action (and many other criteria ...).

    We ourselves learn as evaluators during the evaluations we conduct, and we combine that knowledge to share it again during the following mission with new clients. It is an iterative process that spreads thanks to the sharing of knowledge at all scales (project, programme, institution, community ...), as is the case in this group, where we share the results of our different experiences.

    Take the example of a student: the exam is not just a test to measure one's degree of learning. The exam, like an evaluation mission for a project team, is an opportunity to prepare, to review courses, to do research, to discuss unclear questions with colleagues, and to discover tips and tricks that one would never have discovered through simple passive learning "without challenge".

    Today, the evaluation community is increasingly sensitive to issues of knowledge sharing and the use of evaluation results (presenting results in increasingly user-friendly and intelligible forms) so that capacity development is accompanied by greater change at the institutional level or at the level of the daily lives of the final beneficiaries.

    Finally, I think that the purpose of evaluation is to measure, explain and promote change. In the world of development, this change passes through the Human, and this Human never stops learning and developing. Evaluation is therefore at the heart of this capacity development.

  • I would like to contribute my own experience, but I would first like to reaffirm that participation in evaluation is beneficial under certain conditions. I have worked for the Ministry of Health in Morocco and for the United Nations system.

    Sometimes I find myself in the role of evaluator and sometimes in that of evaluation commissioner; in both cases, the different evaluations have led me to develop my technical skills, both on the technical object evaluated and on the tools and process used.

    I have concluded that when the commissioner participates in the evaluation from the design of the terms of reference and the methodology, the results of the evaluation will be relevant and useful. But when the evaluator works alone on one side, isolating himself and conducting his evaluation discreetly without involving the users of the evaluation in what he seeks to prove through his tools (interviews, focus groups, mid-term and final validation workshops), the result is often disappointing: the problem arises first at the validation of the results, the recommendations remain a dead letter, and it is a pure waste of resources. The quality, profile and behaviour of the expert therefore also come into play. When participation is effective, everyone helps to inform the evaluator about the sources of the data so that he can better interpret events and figures; when the evaluator is skilful, he enjoys the dialogue and mutual feedback that benefit both sides.

    As far as I am concerned, the most recent example (December 2018) concerns the evaluation of the implementation of the maternal death surveillance system in five countries of the Arab region (Morocco, Sudan, Egypt, Tunisia and Jordan): methodology design based on WHO standard tools, nomination of a country evaluator who collected the data in each country, organization of an inter-country synthesis workshop, sharing of results, and drafting of summary reports for the region with policy briefs for advocacy with policy makers. Sponsors and teams from all five countries appreciated their participation in the process and the relevance of the recommendations.

  • Dear Colleagues

    I found this a very interesting topic and discussion. Let me share my experience of working in a conflict-sensitive environment.

    Our project focused on conflict-sensitive development. We conducted evaluations through participatory methods with beneficiaries and key local actors, using a number of tools such as Resource Mapping and Institutional Mapping.

    We witnessed that Resource Mapping helped beneficiaries understand, and enhanced their capacity to see, how their local resources create social cohesion within their communities and with other communities through common resource utilization. This was also taken as a lesson for planning how to use the resources for peace-building interventions. It is important that the guiding questions fit the purpose of the evaluation, because participatory tools are usually used to evaluate a number of projects that are different in nature.

    Institutional Mapping, in turn, is a tool that shows the actors involved how local institutions benefit the communities and how they interact with each other. This built further capacity to understand how important it is to link different institutions to improve coordination within a specific locality, especially for local actors.

    Regards

  • Dear Kebba and all,

    Thanks for sharing your thoughts on the following important questions:

    1. Could we develop capacity through evaluations?
    2. Could evaluations help in capacity development?

    For both questions #1 and #2, my answer is YES: we can and do develop capacities through evaluations. A typical case in point: each time I conduct an evaluation of a training session, consultants or service providers request a copy of my evaluation report and of the tool itself. They note that the tool has the potential to help them improve their preparedness, delivery and follow-up support.

    Beyond this feedback, it is a given that the purpose of evaluations is to identify gaps, improve planning and inform decision making. In terms of capacity building, evaluations help us identify further training needs and provide mechanisms for better and more effective delivery of capacity-building actions and means of tracking results or impact.

  • Dears,

    Thanks for sharing such an important issue for discussion.

    Let me share my personal experience in this regard:

    Capacity building of the evaluand: I think every evaluation helps develop capacities if it follows best practices along the journey: working with clients hand in hand to climb the mountain, starting from the ridge (uptake of the mission) and a mutual understanding of the evaluation objectives (ToR); then taking the client team to the summit, where they feel the need for and importance of the evaluation and contribute to tool development and data collection (in other words, creating the "buy-in"); and later going down the slope, analysing the results, formulating recommendations and conclusions, down to utilizing the results, where the evaluators accompany clients to ensure the change is achieved.

    This is the mountain model (INTRAC/C4C-Consultants for Change), which I fully respect and appreciate as a model to aspire to. But the reality is different. As a consultant, my blame certainly goes to clients, who usually do not walk the mountain with you, claiming they are busy; instead, they put you at the starting point, give you some food for thought (documents and guidance), wait for you on the ridge on the other side of the mountain, and rarely even give you a phone call during the journey. They get the report and say bye-bye. Coming back to the issue of learning: it does occur, but not to the best it could be. For example, restructuring the ToC with a client is a capacity-building activity; it is a chance to guide the client towards better formulating the ToC. Likewise, giving feedback on the ToR is another opportunity to educate clients.

    The role of participatory evaluation in capacity building: I think it is the same as in the previous point. If we follow best practices and the client is willing to contribute effectively to the mission, then the whole process is capacity building. I myself first learned about evaluation 15 years ago, when I was asked to act as a contact person on behalf of my employer organization, coordinating with the company conducting the evaluation of a three-year programme. This task was repeated for another programme. Being given the chance to read the ToR, connect the evaluators to communities, participate in some data collection activities and later review the report were my first capacity-building opportunities in evaluation.

    Evaluating the capacity-building component: this has actually not happened in any of the evaluations I have participated in so far.

    These are my quick thoughts on the issue.

    Good Luck

    Naser Qadous

  • Dear All,

    I wish to bring a few insights to this important discussion. Perhaps we first need to analyse the question "what can evaluations do in terms of capacity development?". I see it from two angles:

    1. Could we develop capacity through evaluations?

    2. Could evaluations help in capacity development?

    1. This might focus on who is involved in evaluations: how are they prepared, what do they go through, what lessons have they learned, and how are they prepared to feed such lessons into future programmes?
    2. This might look at how evaluations could be better designed to develop the capacities of all actors, so that they are able to identify key weaknesses/challenges (in terms of delivery capacity) in a project or programme setting and proffer suggestions/new ideas/solutions for redress, considering of course sustainability, accountability and other aspects of programming.

    The debate continues!!!

    Many thanks!!

    Kebba Ngumbo Sima

  • Thanks Dorothy for sharing this link. I find it interesting and useful.

    My projects fund several training activities, and tools like this help enrich my perspective for evaluating the training activities to ensure effectiveness, relevance and sustainability.

    Currently, I use a standards template that seeks to measure/evaluate the extent to which the training concept, plan, implementation and post-implementation evaluation measures meet the prescribed standards. The tool is applied in a highly transparent and participatory manner, in that all stakeholders involved in the training activity are evaluated at the same place and time, through something like a focus group discussion. The process is simple, as it only requires respondents to answer Yes, No or Not Quite; and since it is participatory, it allows for further insight into the evidence justifying the responses. For instance, an evaluator can probe further to understand the reasons behind the scores. It is also flexible, as it allows one to introduce ranking or grades, e.g. Yes = 10, No = 0 and Not Quite = 5, depending on the specific needs of the evaluator and user.

    In essence, the key indicators to measure, as suggested in the Kirkpatrick Model, are more or less found in the tool I am using. The Kirkpatrick Model focuses on four levels: 1) the degree to which participants find the training favourable and relevant to their jobs (Reaction); 2) the degree to which participants acquire the intended knowledge, skills, attitude, confidence and commitment based on their participation in the training (Learning); 3) the degree to which participants apply what they learned during training when they are back on the job (Behaviour); and 4) the degree to which targeted outcomes occur as a result of the training and the support and accountability package (Results).

    The points of convergence here are interesting... You can find the tool I am currently using for this exact purpose here: https://dgroups.org/?nprypl28
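
    For anyone curious how such a Yes/No/Not Quite checklist could be tallied, here is a minimal Python sketch. The checklist items and their grouping under Kirkpatrick levels are illustrative placeholders, not the actual tool; the grades Yes = 10, Not Quite = 5, No = 0 follow the flexible grading option described above:

        # Grades follow the flexible grading option described above.
        GRADES = {"Yes": 10, "Not Quite": 5, "No": 0}

        # Each response: (Kirkpatrick level, checklist item, answer).
        # Levels and items are illustrative placeholders, not the actual tool.
        responses = [
            ("Reaction",  "Training was relevant to participants' jobs", "Yes"),
            ("Learning",  "Intended knowledge and skills were acquired", "Not Quite"),
            ("Behaviour", "Learning is applied back on the job",         "No"),
            ("Results",   "Targeted outcomes are occurring",             "Not Quite"),
        ]

        def score_by_level(responses):
            """Average the graded answers within each Kirkpatrick level."""
            totals = {}
            for level, _item, answer in responses:
                totals.setdefault(level, []).append(GRADES[answer])
            return {level: sum(marks) / len(marks) for level, marks in totals.items()}

        for level, score in score_by_level(responses).items():
            print(f"{level}: {score:.1f} / 10")

    With one row per checklist item, the same per-level averages can be compared across training events to track progress over time.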

     

  • Andrea Meneses and Nataly Salas

    Proyecto Focelac

    Dear colleagues,

    There is no doubt that capacity development in evaluation is crucial to guaranteeing effective evaluative practice at all levels. In the Focelac project (Promotion of capacities and articulation of actors of evaluation in Latin America), implemented by DEval (German Institute for Development Evaluation) together with MIDEPLAN (Ministry of National Planning and Economic Policy of Costa Rica), we work with a systemic approach to capacity development.

    The approach proposes the development of evaluation capacities at three levels of incidence:

    • Creation of a favorable environment for evaluation (evaluation standards for the region, national evaluation agenda, availability of data, allocation of budget for evaluations, among others)
    • Development of institutional capacities (parliaments that use evaluation, institutionalization in the public sector, training programs in quality assessment, strengthened evaluation networks, informed civil society that uses evaluation, etc)
    • Individual capacities (trained actors, evaluation managers, sensitized political and civil society actors, etc.)

    The application of this approach in Costa Rica has contributed to the institutionalization of evaluation and provided elements of sustainability, such as the preparation of a National Evaluation Policy (PNE) and the establishment of a National Evaluation Platform, where the different actors coordinate efforts in this domain and follow up on the Policy.

    In Latin America, the application of the systemic approach includes actions in the following areas:

    i) Inclusion of participation in the evaluation processes

    A variety of actions have promoted the participation of actors in the evaluation processes:

    • Training: design of a five-day course for participatory evaluation programming. Taught in Guatemala and Chile. Taught in shorter formats in Mexico, Costa Rica and Ecuador.
    • Institutionalization: support for the preparation of a participatory evaluation guide for MIDEPLAN (Ministry of Planning and Economic Policy of Costa Rica).
    • Practical experiences of participatory evaluation, where the participants themselves form the evaluation teams: support for the participatory evaluation in Valle de la Estrella (Costa Rica) and the participatory evaluation programme with Deutsche Welle Akademie (DWA) (in Bolivia, Ecuador, Guatemala and Colombia) and with TECHO and Country Service (in Chile). (You can find the final report of the participatory evaluation in Costa Rica at this link: http://foceval.org/wp-content/uploads/2016/12/20170228_Informe-final-EP.pdf)
    • Research: articles on the evaluation experience in Costa Rica and on the application of the principles of collaborative evaluation approaches.

    ii) Capacity building of Young and Emerging Evaluators

    The project is carrying out actions to strengthen the capacities of young and/or emerging evaluators, favouring their integration into the labour market. The actions executed are grouped into four strategies:

    • Learning assessments: teams made up of Young and Emerging Evaluators.
    • Practical training (training, scholarships, mentoring programs).
    • Inclusion of a Young and Emerging Evaluator in evaluation teams.
    • Sensitization of organizations / institutions on the inclusion of Young and Emerging Evaluators.

    iii) Collaborative construction of an Evaluation Capacity Development Index

    The project is coordinating, together with the World Food Programme, the construction of an Index of Capacities in Evaluation (ICE), which makes it possible to measure the capacities and evaluation practices in the context of policies, programmes and social services in Latin American countries.

    The index's main function is to improve the evaluation agenda, with national authorities as the main recipients, facilitating exchange between countries and organizations based on the identification of critical areas and good practices to share.
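
    The methodology of the ICE is not detailed here, but as a purely illustrative sketch of how such a composite index might work, dimension scores can be aggregated as a weighted average. The dimensions below reuse the three levels of incidence named above, while the weights and scores are hypothetical, not the actual ICE design:

        # Illustrative composite-index sketch; weights and scores are
        # hypothetical, not the actual ICE design.

        # Scores per dimension on a 0-100 scale (hypothetical country data).
        scores = {
            "enabling_environment":   70.0,  # e.g. evaluation policy, budget, data
            "institutional_capacity": 55.0,  # e.g. M&E units, training programmes
            "individual_capacity":    60.0,  # e.g. trained evaluators and managers
        }

        # Relative importance of each dimension (must sum to 1.0).
        weights = {
            "enabling_environment":   0.40,
            "institutional_capacity": 0.35,
            "individual_capacity":    0.25,
        }

        def composite_index(scores, weights):
            """Weighted average of normalized dimension scores."""
            assert abs(sum(weights.values()) - 1.0) < 1e-9
            return sum(scores[d] * weights[d] for d in scores)

        print(f"Composite index: {composite_index(scores, weights):.1f} / 100")

    Comparable country scores of this kind are what make the cross-country exchange described above possible, by pointing to critical areas and good practices to share.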

    Representatives of the governments of the region, the Latin American Evaluation Network (RELAC), national VOPEs, UN agencies, as well as foundations, institutes and research centers, and institutions that provide training and development of national capacities in evaluation, such as CLEAR and FIIAPP, participate in the development of the index.

    This systemic approach to developing evaluation capacities has allowed us to address different key areas for establishing a culture of evaluation in the countries in which we have worked.

    Andrea Meneses and Nataly Salas

    Proyecto Focelac

  • Dear All

    EVALSDGs has generated an "Insight" on using the Kirkpatrick model for evaluation of training events that may contribute to evaluation of capacity development based on the experience of UNITAR. See the Insight here: https://evalsdgs.org/wp-content/uploads/2019/04/04_evalsdgs-insight-kirkpatrick-model-final.doc.pdf

    Kind regards

    Dorothy Lucks
    Executive Director of SDF Global and Co-chair of EVALSDGs

  • Sometime in 2018, my project collaborated with the West Africa Rural Foundation (WARF), a regional NGO specializing in building capacity for rural development initiatives, to conduct an Outcome Assessment using the Outcome Harvesting concept. Outcome Harvesting happened to be a fairly new concept, at least in our case in The Gambia.

    To embark on the exercise, we first exchanged on the details of the implementation modalities and the key partners to engage. Outcome Harvesting is a highly participatory approach whose stakeholders include all parties to the project, including beneficiaries. The process requires both quantitative and qualitative data to provide evidence of outcome achievements.

    Given that the tool had not been applied in the country before, we appreciated that the target participants should first be trained on the concept. We therefore devoted the whole of the first working session to an introduction of Outcome Harvesting, its rationale and approach, to the point that our target participants were comfortable using it to identify activities contributing to specific outcomes, working backwards from the outcome level to the outputs.

    During the second round of the same assessment, this time targeting another set of outcomes, our technical partner (WARF) decided there was no need for the introductory session on Outcome Harvesting, given that this was the second series. What we did not pay close attention to, however, was that this series involved a whole new set of participants who had no idea about OH. So we went straight into the session, and by the time we got to the group presentations on the OH exercise, we saw clear evidence that this second group did not do as well as the first. The linkages between the outcomes and the initiatives that brought them about were weak, thus requiring more evidence generation.

    The purpose of sharing this experience is to provide evidence that, for participatory evaluation to be effective, the evaluators' capacity should be built first. It also shows that there is no shortcut to capacity building, and any attempt to take one will have a negative effect on the quality of results.

    I also want to add that it is not just the participants whose capacities should be built, but also the client's. In my case, we spent some good time sensitizing the Project Director and the entire senior staff of the project. If the findings of an Outcome Assessment are to be used, I think the client is in a better position to implement and appreciate them if they have been partners in applying the tool.

    Just my thoughts please, thank you.

    Paul Mendy

    National Agricultural Land and Water Management Development Project

    Gambia Evaluation Association

  • Josephine Njau

    AGRA

    Thank you Isha for this response.

    The recommendations you have proposed are a good guide to participatory evaluation if carefully implemented, taking note of the soft skills. Many times we talk of being participatory, yet the evaluators' approach leaves no room for it.

    Josephine Njau

    AGRA

    Kenya

  • Dear Luisa and Lavinia,

    Thanks for sharing your work and thinking on Evaluation of Capacity Development and the framework you are working on.

    Unfortunately, many of these frameworks are of little use if we do not develop the capacities of evaluators in the first place. Most frameworks are uninformative and lack room for subjective and suitable adjustments.

    For instance, you ask about participatory evaluations. Most ToRs call for participation in general terms (I think it is always cut and paste) and indicate methods that are generic and do not live up to the expectations of the evaluation. In agriculture, evaluators are very often agriculture specialists or researchers; they are not evaluators, and they are not able to capture the specific needs of farmers, which differ from one farmer to another despite all being farmers. Often they use blanket, commonly known questions.

    Participatory approaches are certainly a way to carry out a meaningful evaluation and to develop the capacities of evaluands and beneficiaries, but to get there we have to make sure participation is effective and not a token gesture.

    Here are some recommendations:

    1. Develop the capacities of evaluators in the use of participatory approaches, so that they are able to understand farmers' points of view.
    2. Develop a tool/guideline for participants in participatory sessions.
    3. Develop soft skills, as well as on-the-job evaluation skills, for young and emerging evaluators.
    4. Develop the capacities of trainers and facilitators in addressing marginalized people and indigenous communities, in line with the focus on equity and gender.
    5. Develop guidelines on the competencies needed to select evaluation organizations and external (individual) evaluators.

    Isha Miranda

    Independent consultant and trainer, Sri Lanka

  • Dear Luisa and Lavinia,

    The Extent of Capacity Development as an Indicator of Success.

    I am happy to see this long neglected aspect of evaluation receive the attention it deserves. Other things being equal, one has too often seen otherwise successful endeavours quietly fizzle out once the outside professionals have left. The reason is simple: when a project has been completed, the locals in charge simply lacked the know-how and skills needed to run it efficiently, maintain it, or both.

    It is impossible for a pragmatist to envisage a purely 'one-off' project, i.e., one where, once it is successfully completed, no further human effort is needed to keep it going. Of course, one may argue that running a refugee camp provides a good counter-example, because once all the refugees have been properly assimilated into the host society or repatriated, the project is truly finished. But in real life one seldom sees such cases, except in a few rather affluent countries. Besides, the vast majority of projects evaluated are concerned with enhancing the daily lives of the ordinary citizens of a country.

    Therefore, it stands to reason that when planning a project, it is vital to its success to begin with the overall purpose of the effort, which is simply to improve some aspect of the daily lives of some target group. At this point, it is all too easy to let a planner's reductive imagination soar above the rosy clouds. We have already seen two examples of that in the previous EVAL-ForwARD forum, viz., a road and a bridge.

    I think it is crucial that the evaluators come in at this point to emphasize that unless it can be established beyond any reasonable doubt that the potential beneficiaries of a project are willing and able to derive its benefits, it would be futile to initiate it.

    Never take their willingness and ability for granted. Many successfully completed public health projects languish unused, because the culture of the intended beneficiaries does not value good health as highly as other cultures do. Likewise, the desire for prestige has driven some to plan advanced telecom networks to provide cellular telephony to rural youth. Here, their ability to use them for 'developmental purposes' has been overlooked. The facts are simple: the areas involved lack good basic road transport and the target group is hardly literate. So, cell phones will provide a source of entertainment and long-distance gossip; hardly a benefit, especially in view of the cost and the consequences.

    After these longish preliminaries, let us assume that the project involved is indeed appropriate, i.e., it will really benefit the target group because its members are willing and able to use it. Capacity building cannot influence this willingness, for it belongs to another category, but it is vital to one crucial aspect of this ability, viz., the ability to run the project well and to keep it in running order while undertaking the improvements it needs in the long term.

    I am not certain to what extent the capacity of the public to benefit can be enhanced unless it is integrated as an essential component of a project. This is especially true where the overall objective is to improve public nutrition. Other things being equal, a project to increase food production will not lead to better nutrition unless the target group has adequate dietary competence, i.e., knows what to eat, how to prepare it, etc.

    So, it would be reasonable to affirm that capacity building is an indicator of success in evaluation, and that it ought to be incorporated into a project at its inception. However, the question of whose capacity, and to do what, needs careful consideration. At the theoretical level, one can distinguish between two sub-groups in a target population: the overall beneficiaries, and those who are expected to continue the operation of a project on its completion. I hope this might be of some use.

    Cheers!

    Lal Manavado

    Norwegian Directorate of Health, Norway

  • Dear Luisa, Lavinia, Ellen and other EvalForward colleagues,

    I have taken note of your important work on inclusive, bottom-up evaluation approaches and tools that empower evaluands and stakeholders. Please keep up the good work.

    We also have some tools in that direction, providing frameworks for the design and utilization of beneficial evaluations and approaches. Some of the templates can be shared.

    I believe there will be opportunities to demonstrate how these work. 

    Kindest regards 

    Nelson Godfried Agyemang 

    Rector, Institute of Certified Management Consultants (ICMC) Ghana
    Secretary General, Coalition of Farmers Ghana (COFAG)
    Facilitator, Innovation Working Group of World Farmers Organisation (WFO)

  • Dear Luisa and Lavinia,

    Thank you for your email, which made me smile broadly. A recently published UN Women guide that I co-authored, Inclusive Systemic Evaluation for Gender equality, Environments and Marginalized voices (ISE4GEMs), holds the capacity development of evaluands, beneficiaries and stakeholders at the heart of its methodology (see the link to the guide below). I have also included a link to a publication by Danny Burns, a colleague in the UK who is doing this exact work with very marginalized groups.

    We used this approach in Guatemala with six communities of indigenous artisans, local partners and the sponsoring NGO, with great success. All participants seemed better able to understand, describe and clarify the intended and unintended impacts of the programme in which they had participated. They were also able to identify non-human marginalized voices (e.g. their culture, climate change) that contributed to positive and negative outcomes of their involvement with the NGO programme.

    I will be adapting and using ISE4GEMs in two upcoming development evaluation projects in Cambodia, Vanuatu, Kenya and Colombia, and will be able to share more about how the capacity development bolstered (or not) the evaluation after July 2019.

    ISE4GEMs: http://www.unwomen.org/en/digital-library/publications/2018/9/ise4gems-a-new-approach-for-the-sdg-era#view

    Danny Burns: https://www.sciencedirect.com/science/article/abs/pii/S0377221717310366

    I would welcome scheduling a call with you to talk about your experiences and learn more about your work. I am based in California, USA.

    Ellen D. Lewis

    Ethos of Engagement, USA