Carlos Tarazona

Senior Evaluation Officer

My contributions

    • Dear Nick,

      Thanks for your very interesting post, and for sharing your experience in grappling with this enormous challenge. I wish to share a couple of thoughts, from my role as evaluator and as a commissioner of evaluations.

      In the short term, I wonder what value stakeholders will attach to evaluations done at a distance, and in the medium term, I worry about the threat this poses to evaluation as a profession.

      What is the added value of an evaluation conducted at a distance? In a couple of ongoing evaluations with large field components we are facing issues whose consequences we should not underestimate, as they might reduce the credibility of the whole exercise (inability to observe changes first-hand, reliance on the evaluand to select who participates and who does not, limits to triangulation with beneficiaries and local partners, etc.) and put our teams at risk of being challenged should they come up with negative or inaccurate findings.

      Then, evaluation as a profession: if we are doing things at a distance, without credible triangulation and bottom-up participation, what makes us different from those doing reviews or even performance audits? If we advocate for distance evaluations, and colleagues/partners realize that these can be done cheaply and in a non-rigorous manner, we may have trouble in the future i) selling evaluation as a distinctive and truly learning-oriented tool, and ii) securing adequate evaluation provisions/budgets.

      Linked to this is the moral imperative for evaluators to do no harm. In view of all the unknowns this pandemic is bringing, it is our duty not to put more people at risk, neither local evaluators nor beneficiaries. Postponing evaluations where feasible, at least until it becomes clearer what we can and cannot safely do in the field, would simply be the fair and ethical thing to do.

      Best regards,



  • Nowadays many donors, international organizations, government agencies and NGOs promote the use of Theories of Change (ToCs) as a way to ensure that their day-to-day activities are aligned with their ultimate aims. I raised this topic with the Community and asked members about their experience with ToCs and their views on the main added value of the tool. Several members shared their experience and volunteered ideas for ensuring adequate use of ToCs, such as the following:

    • ToCs add value when they are co-generated in a participatory manner with project managers, technical experts, implementers and beneficiaries.
    • ToCs can be powerful communication tools to share the project’s
    • Thank you Jackie and Richard, and all the previous commenters! It has been an interesting discussion, with so many different points of view and insights. We will soon be wrapping this up and summarizing the learning in an EvalForward blog post. Keep an eye out for it!

      Best to all,



    • Dear Colleagues,

      I wanted to raise one aspect that has apparently become standard procedure when developing ToCs: many colleagues have said that they use this tool largely for programme design/implementation/evaluation. However, the literature suggests that "Theories of Change may start with a program, but are best when starting with a goal, before deciding what programmatic approaches are needed" (see the AEA presentation shared in a previous post).

      Thus, the starting point for a ToC should ideally be the development/humanitarian goals in a particular theme (poverty/hunger reduction, climate change adaptation, rural development, women's empowerment, saving lives) that have been identified by key stakeholders (usually the government, since it represents us all, or humanitarian actors in its absence) for a given geographical area (country, state/region, province, district), and not the programme- (or project-) specific goal.

      As an example, in a recent evaluation of FAO's contributions to the development of the food and agricultural sector in Mexico, we used the Mexican government's theories of change to map and then evaluate FAO's contributions. The Mexican government (an OECD member) was in fact required by law to develop theories of change at different thematic and geographic levels (national/state) as part of its long-term (national development plan) and medium-term (strategies and programmes) planning processes, often with CONEVAL's advice. This, together with the fact that FAO had planned its programme of work along the lines of the Mexican government's theories of change, enabled the evaluation to assess FAO's contributions against these frameworks.

      I was wondering whether other colleagues have experience developing ToCs with locally agreed/owned development or humanitarian goals as the starting point (rather than the specific funding agency's goal in mind), and whether they think this is a feasible way forward in their own countries/agencies.

      Best regards,



    • Dear Elamin,

      Thanks for your question. A few years ago the American Evaluation Association held a discussion on this topic (ToCs vs Logic Models). Below is a link to the presentation made by Helene Clark (Center for Theory of Change) during that session.

      Best regards,



    • Dear Colleagues,

      Thanks for your very interesting contributions. It would be great if anyone could contribute specific examples of ToC application, in either developing or developed countries, and highlight how the ToC was useful for the programme under evaluation.



  • Prospective users of such information, however, have become less able to take full account of this development, partly for lack of the time and capacity to absorb it all. Decision-makers in particular expect researchers and evaluators to provide them with products that facilitate the uptake of such knowledge. Evaluation synthesis can fill this gap by building on findings from different sources to reach a better understanding of the effectiveness of a programme or policy.

     A synthesis is the integration of existing knowledge and findings relevant to a topic, and has as its main objective to

    • Dear Malika,

      Thanks for sharing your experience with document analysis. It is indeed a great way to gain early insights into the effectiveness of a programme or policy, although, depending on the amount of material to review, it can also be a very demanding task.

      You raised a very good point regarding the issue of accessibility to documents. With the advent of the internet it is assumed that information is becoming globally available, while digitalization is making reports and research more and more accessible through online means. 

      What is the perspective in the global south? Are government (evaluation) reports and research from academia easier to access? Are they available in a format and language that make them suitable for synthesis and meta-analysis?

      Best regards,



    • Dear All,

      A colleague has shared with me a link to a series of synthesis reports of impact assessments on various agricultural topics done by the CGIAR:

      Hope you find it useful too!

      Best regards,



    • Dear Olivier and Lal,

      Thank you very much for your contributions.

      Regarding a definition of synthesis (or synthetic approach), for the purpose of this discussion, we can define it as the process of reviewing, assessing and synthesising existing literature or data to produce a series of outputs (products and services).

      A first step in this process is to review the quality of the literature or data to be aggregated, to ensure that it is comparable and meets the research protocol's requirements. The synthesis itself is then conducted, often by academic disciplinary experts, but it can also be done by inter- or transdisciplinary working groups or by evaluators drawing on knowledge from across academia and beyond, the latter ensuring a comprehensive analysis and avoiding the pitfalls raised by Lal.

      There are many guidelines on how to conduct synthesis reviews, especially in the areas of health and education. Perhaps those best known and most applied in the field of agriculture and rural development are the ones developed by 3ie and the Campbell Collaboration, accessible at this link:

      Best regards,



    • Dear Members,

      Thanks to those who contributed to the discussion on the use of synthesis and meta-analysis in development evaluation. The exchange supported my preparation for the What Works Global Summit 2019, where synthesis and meta-analyses were discussed as tools for designing, implementing and assessing programmes and policies. A synthesis is the integration of existing knowledge and findings relevant to a topic; its main objective is to increase the applicability of evaluation findings and to develop new knowledge through the integration process. It is promoted as an approach that addresses the challenge of "information overload", delivering products that distil relevant evidence for decision-making.

      Here are the main issues shared by participants: 

      • Synthesis work needs to be focused and pragmatic in order to attain its goal.
      • Evaluation documents need to be easily available, and researchers need to interact with key stakeholders throughout the preparation process.
      • Meta-analyses of evaluations allow the identification of structural themes that affect performance, which is of interest to donors and programme managers.
      • A more holistic approach is required to enhance the engagement and outreach of the learning, i.e. by organizing dedicated workshops to discuss the findings of the evaluation synthesis.
      • Meta-analysis can provide very valuable insights on the performance of a programme or approach, and guide primary data collection. It can however be very time-consuming and thus not always feasible to undertake within the timeframe of an evaluation.
      • Meta-analysis can be an important tool for decision-makers but is not well known among evaluators. More training on its use could help broaden awareness and application of the approach.

      Taking up this last point, EvalForward will soon organize a webinar for members interested in learning more about synthesis and meta-analysis. Stay tuned!


  • When evaluating projects, we identify issues that affect their effectiveness and note that these often originate from flaws at the design and/or implementation stage.

    In early July, I raised this topic with the Community and asked which good practices members would recommend to governments, donors, international organizations and non-governmental organizations to improve project effectiveness.

    Several members shared suggestions and ideas for ensuring high-quality projects, such as the following:

    • Quality projects should clearly contribute to a bigger developmental impact of the country.
    • Projects should have clear objectives, milestones and monitoring of deliverables.
    • Commitment and ownership by the government is key; this goes
    • Dear members,

      Thanks to all of you for contributing to this discussion. 

      Here is a summary of the suggestions and ideas shared:

      • Quality projects should clearly contribute to a bigger developmental impact of the country.
      • Projects should have clear objectives, milestones and monitoring of deliverables.
      • Commitment and ownership by the government is key; this goes beyond the signing of the financial agreement and includes the actual commitment of human and financial resources.
      • Participation of beneficiaries from the design stage onwards can help ensure that the project addresses a problem that is relevant to them.
      • Projects should be relevant to the beneficiaries: “to the point that they are conscious of their right to hold project management units and concerned ministries to account”.
      • Qualified management teams are needed to ensure that projects are implemented with efficiency, quality, meritocracy, inclusiveness and sustainability in mind.
      • Donors and steering committees have a critical role in guiding project implementation, provided that their members know the project well, have time to add value to it, and are able to critically review implementation and provide direction for improvement.
      • Past lessons are used only to a limited extent to inform new project design. Programmes should develop and validate key lessons, innovations and good practices, and these should inform new design/appraisal missions and future interventions.
      • There is reluctance to stop poorly conceived projects, even when it is clear that these will not achieve their objectives.
      • Projects should have a sustainability plan and exit strategy, including the handover of interventions to existing institutions or structures that are prepared to take over the management and operationalization of project outcomes.

      I look forward to further exchanges with the Community!