Can we use an evidence-based, evolving Theory of Change to achieve "local learning" during project design?
Dear Members,
I would like to discuss an issue that has been bubbling up in the development / peacebuilding community: successful, sustainable projects need to understand the desires, capabilities, and relationships of each local context. Often, the relevant local issues are unknown and the local learning questions must be discovered “at the conceptual stage of the project / program”, when the project staff begins to interact with the local people.
How to do this is the focus of this discussion:
Can we use an evidence-based, evolving Theory of Change to achieve "local learning" during project design, i.e. discover the desires, capabilities, and relationships of a specific local context?
An evidence-based, evolving Theory of Change is one that is continually revised based on evidence rather than assumptions.
Please share any view or experience. Further questions to drive the discussion are:
- What is a real-world example of a localized project design?
- What would an evidence-based, evolving Theory of Change look like for that project?
- Do you know someone who might give this a try? What opportunities and obstacles do you see?
- How does your own work relate to the topic question?
Many thanks,
John
John Hoven
Independent
Dear all,
Let me close this discussion topic with some reflections on: What have we learned? What should we learn next?
Two things we have learned are:
1. Large-scale projects can be customized to local desires and capabilities. CGIAR does this to ensure that farmers adopt its agricultural research.
2. An evolving ToC can be done with hardly any money or skill. (See attachment.) This allows locals to be full partners in a village-level development project.
Three things to learn next:
1. Donors can offer contracts that do not specify actions, outcomes, or indicators of success. (See Grandori et al. and Reuer.) That would remove a major obstacle for local learning. Businesses do this when contracting for innovation, because the actions and desired outcomes aren’t known when the contract is signed.
2. Nested ToCs let you zoom in to see more detail, like internet maps. Close-up ToCs feature a specific village, product market, or social group. They evolve rapidly during the start of a project (every week or two, not every 6 months or a year). Zoom out to see a ToC with less detail (actors are categories rather than named groups). This categorical ToC can become a first guess ToC for a new project (e.g., Figure 2, page 5 in the Community-driven development evaluation by IFAD).
3. An evolving ToC gets revised when evidence strongly disproves one step in the chain of cause-and-effect, or it is confirmed when evidence strongly validates the step (see the illustrative sketch below). That evidentiary proof delivers the accountability that donors require.
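As a purely illustrative aside (not part of the original exchange), the revision rule in point 3 could be sketched as a tiny data structure: the ToC is a chain of causal steps, each step accumulates supporting and contradicting evidence, and a step is confirmed or flagged for revision only when the evidence is strong. All names and thresholds below are hypothetical.

```python
# Minimal, hypothetical sketch of an evidence-based, evolving ToC.
# Each causal step ("X leads to Y") accumulates supporting and
# contradicting evidence; a step is confirmed or flagged for revision
# only when the evidence is strong (the threshold is illustrative).

from dataclasses import dataclass, field

@dataclass
class CausalStep:
    cause: str
    effect: str
    supporting: list = field(default_factory=list)    # evidence items for the step
    contradicting: list = field(default_factory=list)  # evidence items against the step

    def status(self, threshold: int = 3) -> str:
        """Confirm or question a step only on strong evidence."""
        if len(self.contradicting) >= threshold:
            return "revise"      # evidence strongly disproves this step
        if len(self.supporting) >= threshold:
            return "confirmed"   # evidence strongly validates this step
        return "open"            # keep collecting evidence

# A village-level ToC is just an ordered chain of such steps
# (the example content is invented for illustration).
toc = [
    CausalStep("women's enterprise sells hand-sanitizing gel", "household income rises"),
    CausalStep("household income rises", "food insecurity falls"),
]

# During a weekly review, new field evidence is attached to the relevant step.
toc[0].supporting.append("week 3: first batch sold at local market")
toc[0].contradicting.append("week 4: certification delay halted sales")

for step in toc:
    print(f"{step.cause} -> {step.effect}: {step.status()}")
```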
Thank you all for your contributions. Please feel free to email me with additional thoughts.
John Hoven
jhoven@gmail.com
Diagne Bassirou
Monitoring and Evaluation Officer, WACA - West Africa Coastal Areas Management Program
Dear John,
Thank you for this broad and rich theme.
A project can be defined as an action, or a set of actions, intended to bring about change on one or more topics in a defined locality. The ToC theorizes the expected change, as does the results framework, which groups the expected results according to the activities planned and budgeted over the course of the project. It is important to note, however, that the ToC must be more objective than a vision statement: it must be based on the results of a diagnosis of the intervention area, both contextual (social, economic, environmental, lifestyle, etc.) and situational with respect to the project. Defining a problem tree is very important for parameterising the ToC objectively. Since the ToC theorizes the expected change on the basis of the contextual and situational evidence available at the start of the project, its revision is not essential as long as the project framework remains intact. If, however, the project's intervention framework and results framework are revised, for example following a mid-term review, it becomes imperative to revise the ToC so that it remains consistent with the evolution of the intervention. This task is the responsibility of the project team and the stakeholders who have a clear picture of the intervention and who can decide on these changes in a participatory manner.
In the Global Agriculture & Food Security Program Missing Middle Initiative (GAFS MMI) Senegal project, for example, after the annual programs of work, budgets and partnerships were defined, the project team, with the support of stakeholders, revised the project results framework and then slightly re-parameterised the ToC, since the changes did not require a global revision. Other projects or programmes may require a fuller revision, depending on the re-planning involved.
[contribution posted originally in French]
Annette Scarpitta
Co-Director and U.S. Representative, The Rwenena Project; Independent Consultant; and Congo Federation of Smallholder Farmer Organizations - S. Kivu (FOPAC)
John Hoven and I have spent considerable time developing an evolving ToC for my work serving the community of Rwenena, S. Kivu province, DRC. Along with local NGO partners, I have been active for 10 years in programs I co-create and direct in the Ruzizi Plain. I am currently working with professional agronomists and community facilitators at the Congo Federation of Smallholder Farmer Organizations - S. Kivu (FOPAC).
Most of the plans our team had made for advancing community-led development in 2020 were overtaken by three concurrent crises: a deadly flood in the town of Uvira, after which 75 IDP families joined the Rwenena community; crop devastation in Rwenena from the same flood; and mandatory stay-at-home orders as the threat of COVID-19 hit. Each one contributed to dire food insecurity.
Largely in response to these conditions came a positive outcome: the formation of a licensed and certified women’s enterprise whose members produce and sell their own formula of hand-sanitizing gel. This initiative emerged over a tense period of several months, and every week presented new and tentative changes.
John and I have tracked the course of this development with frequent assessments using an innovative approach to reflect local conditions. The flexibility of an adaptive ToC made perfect sense: we experimented with an evolving graphic tool that incorporated new sets of conditions to evaluate and act upon. Components included root causes, cause-and-effect, capabilities, actions, and outcomes. In another section, the tool featured programmatic sensitivities, with history, financial liabilities, cross-cultural inputs, challenges, actions, and successes. Another Congolese NGO is willing to try out an evolving ToC for a new project in Rwenena, Ituri province, and/or N. Kivu province.
Janvier Mwitirehe
Evaluation Researcher, Horizon of Excellence Ltd
Hello John,
The answer is NO.
Thanks
John Hoven
Independent
It seems that both at the individual and organizational level there are attempts to ensure the ToC is not "cast in stone", as mentioned by Jackie Yiptong, and that there are some good examples of using the ToC for learning, as in CGIAR.
I thank all of you for your contributions and describing how you or your organization are using ToCs.
I would like to go a little further and ask whether you know of actors / donors that would be ready to start projects at the local level with no assumptions, and to develop a ToC as they develop their understanding of what is needed on the ground.
In this case the project would not start with pre-defined outputs but only with a general / broad outcome, with the causal pathways to reach it not yet defined.
Below my feedback to Erdoo and Janvier and follow up questions for their consideration.
Erdoo Karen Jay-Yina says that CGIAR agricultural research programs are learning to use Theories of Change more effectively. Their ToCs focus on “the mechanisms of change by which the new agricultural product gets adopted by a farmer. Can farmers use new technologies? Do they even want to? TOCs should identify the mechanisms of change based on evidence and testable hypotheses. Stakeholder farmers should be involved from the outset of the research.” (ISPC 2012 pp. 14, 23, 7, 25) Erdoo says, “When the underpinning ToC and the evidence are revisited, captured and tracked coherently, then process tracing or contribution analysis of particular causal pathways is made easier.”
My follow-up questions for Erdoo:
Does CGIAR use process tracing to make evidence-based predictions? Have you encountered others in the development / peacebuilding community that are using ToCs as a learning tool?
Janvier Mwitirehe cites two reviews of USAID’s use of ToC. In Tanzania, “USAID/Tanzania did not anticipate the need to revisit the foundational Theory of Change. However, after its second year, it became clear that the original Theory of Change and the reality of implementation were not aligned. Some of the activities were not implementable, due to changes in the local context. The original Theory of Change was a binding constraint to the Activity’s successful implementation.” A 2019 review of TOC as an Adaptive Management Tool confirms USAID’s use of ToC as a contractual binding constraint: “The main purpose of a TOC review is to ensure alignment of the TOC with the goals you are trying to address. Factors that might prompt a special review of a theory of change include failure to influence the next level outcome as expected, previously unknown causal pathways, and significant changes in the political or environmental conditions of the local context.”
I think these two examples show that for USAID, a ToC is a contractual binding constraint rather than a tool for learning. My new follow-up question is this: Janvier, are you aware of any discussion within USAID of the need to adapt to a local context, rather than just adapting to changes in the context?
John Hoven
Richard Tinsley
Professor Emeritus, Colorado State University
I very much appreciate the interest in localized project design, but consider it a real logistical challenge to achieve. The problem is related to the lead time and cost associated with bringing a project on-line before you can extensively meet with local beneficiaries and discuss their needs. Typically, from conception to contract implementation takes at least two years, with an up-front investment of over a million US$. During this extended period most of the critical decisions are made regarding the type of innovation to be considered, and staff are hired to accommodate this. Thus, by the time you have the opportunity for detailed discussion of needs, there is little flexibility in the approach. Also, once a project is implemented, the MEL information used to evaluate the program is often tilted to appease the donor as needed to secure contract extensions and future funding.
Please review the following webpages:
https://smallholderagriculture.agsci.colostate.edu/project-development-…
https://smallholderagriculture.agsci.colostate.edu/mel-impressive-numbe…
Thank you.
Jackie (Jacqueline) Yiptong Avila
Program Evaluator/Survey Methodologist, Independent Consultant
Dear Colleagues,
I am happy that John has brought up the ToC for discussion. I have often wondered why evaluations review or mention the ToC at the inception phase but seem to ignore it during the analysis, and seldom assess the validity or robustness of the ToC and the accompanying assumptions. Instead, evaluations focus on the OECD criteria, and rarely have I seen the validity of the ToC addressed in evaluation reports. Have you?
I do not think that ToCs are cast in stone, hence I like to think that evaluations can and should show data that:
1. confirm that the assumptions are valid;
2. prove or disprove that the expected outcomes and results are realistic;
3. identify which ones will be realised; and
4. for those that will not happen, show what can be done or changed in the intervention so that the results are achieved, or else tell us what will happen if the intervention is carried on as is.
I would like to suggest an article by John Mayne, who sadly passed away last December. In it, John discusses the criteria for a robust ToC and a tool for analysing ToCs in either an ex-ante or an ex-post setting. Please see https://www.researchgate.net/publication/321510354_Theory_of_Change_Analysis_Building_Robust_Theories_of_Change
Kind regards,
Jackie
Janvier Mwitirehe
Evaluation Researcher, Horizon of Excellence Ltd
Dear John, Thank you for your feedback and very good follow-up questions.
The answer to both questions is YES. In addition, I would like to indicate that, from USAID's perspective, interventions (activities or projects) are designed on the basis of both local assumptions and evidence, and that assumptions are one of the components of the Theory of Change (TOC). For those who need the background, let's first describe the USAID development context and the CLA framework, and then explain how a TOC may be reviewed, along with examples:
USAID Development Context
For USAID to operate in any country, it first of all defines a strategic plan (called the country development cooperation strategy - CDCS), aligning it with partner country priorities. The CDCS is founded on country risk analysis and consultations with different government and partner institutions. The CDCS sets out a country development goal, with sector development objectives (DOs) supporting that goal and intermediate results falling under the DOs. Every DO is described along with its respective risks and assumptions. Note that the performance monitoring plan (PMP) is designed along with the strategic plan.
In line with the strategy, one or more projects are designed to contribute to the achievement of the development objectives. A "project" refers to a set of complementary implementing mechanisms or "activities," over an established timeline and budget, intended to achieve a discrete development result that is often aligned with an intermediate result in the CDCS. Each project has an associated project MEL plan.
Finally, there is the activity, which is the level of implementation. USAID implements its strategies and projects through activity design and implementation. An activity carries out an intervention or set of interventions, typically through an implementing mechanism such as a contract, assistance program, or partnership. Each activity has an associated activity MEL plan.
Put in simple terms, any activity sits within a certain project, which in turn responds to a certain development goal. The activity MEL plan feeds into the project MEL plan, which in turn feeds into the PMP.
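Purely as an illustration of this nesting (not an official USAID artifact), the roll-up described above could be pictured as a small tree in which activity-level indicators feed the project MEL plan, which in turn feeds the CDCS-level PMP. All names and indicators below are invented for the example.

```python
# Hypothetical sketch of the results hierarchy described above:
# activities roll up into a project, and projects roll up into the
# CDCS-level performance monitoring plan (PMP). Indicator names are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    name: str
    mel_indicators: List[str] = field(default_factory=list)

@dataclass
class Project:
    name: str
    activities: List[Activity] = field(default_factory=list)

    def mel_plan(self) -> List[str]:
        # The project MEL plan aggregates what the activity MEL plans report.
        return [i for a in self.activities for i in a.mel_indicators]

@dataclass
class Strategy:  # the CDCS level
    development_objective: str
    projects: List[Project] = field(default_factory=list)

    def pmp(self) -> List[str]:
        # The PMP aggregates what the project MEL plans report.
        return [i for p in self.projects for i in p.mel_plan()]

cdcs = Strategy(
    "improved rural livelihoods",
    [Project("food security project",
             [Activity("seed multiplication", ["hectares planted with improved seed"]),
              Activity("market access", ["number of farmers reaching new buyers"])])],
)
print(cdcs.pmp())
```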
CLA Framework
To cope with an evolving environment, USAID integrates Collaborating, Learning, and Adapting (CLA) into its program life cycle to ensure that interventions are coordinated with others, grounded in a strong evidence base, and iteratively adapted to remain relevant throughout implementation. The CLA framework consists of managing adaptively through continuous learning.
USAID definitions:
Pausing and reflecting on a regular basis helps identify what’s working and what needs adapting and it allows USAID to consider the impact of changes in the operating environment or context. Examples of pause and reflect opportunities include portfolio review, learning events, team meetings, communities of practice, learning networks, etc.
TOC review
In USAID context, the underlying logic of a project/activity is captured in the TOC. A strong TOC is a narrative that summarizes the context, identifies points of leverage for change, specifies the needed outcomes, describes the interventions expected to realize those outcomes, and states the associated assumptions.
The process of developing the TOC should be participatory, involving broad engagement with local stakeholders and a series of dynamic critical thinking exercises to examine the body of evidence, draw out different viewpoints, and reach consensus on the best possible approach given the available information.
Therefore, TOC design is always based on local evidence and assumptions, and given the rigorous process, it is less likely to be poorly understood. Even if this does happen, or assumptions change, regular pause-and-reflect actions will help identify that gap, and adaptive management should follow. For instance, for food security activities, there is a requirement for partners to develop a Theory of Change (TOC) for their activities and to review it “whenever there is new evidence, or when there are changes in the context that affect assumptions or hypothesized pathways of change” and, at a minimum, annually.
We have examples where TOC was reviewed:
In conclusion, USAID has different options to adapt its interventions: at activity, project or strategy level. Through its collaboration, USAID may also co-create. For this to be possible, its learning should be strengthened.
Thanks.
Janvier M
John Hoven
Independent
What I love about this forum is that it brings out such a broad array of perspectives. Let me summarize briefly, and suggest a way forward on each perspective.
The issue is using an evidence-based, evolving ToC to design a localized project.
Jean Providence Nzabonimpa offered some compelling reasons to embrace the concept: “There are important factors unknown at the design stage of development interventions… Keeping the ToC intact throughout the life of a project assumes most of its underlying assumptions and logical chain are known in advance and remain constant. This is rarely the case… Assume X outputs lead to Y outcomes. Later on one discovers that A and B factors are also, and more significantly, contributing to Y… A project which discovers new evidence should incorporate it into the learning journey.”
Follow-up question for Jean Providence: Can you describe a specific project that illustrates your point? Do you know anyone who might use an evidence-based, evolving ToC to design a localized project?
Serdar Bayryyev highlights “community-driven development” projects, which focus on social capital and empowerment. A case study review of these projects used a theory of change based on the assumption that a participatory implementation process supports people-centered development processes.
Follow-up question for Serdar: Have you seen a community-driven development project that used an evidence-based, evolving ToC to design a localized project? Do you know anyone who might give this a try?
Janvier Mwitirehe says that “we operate in fast evolving environment that need to be considered.” This can be done through USAID’s “collaborating, learning and adapting” (CLA) framework, which says that “critical assumptions central to a TOC must be periodically tested – which is a central feature of assumption-based planning.”
Follow-up question for Janvier: Suppose a project environment is poorly understood, but not rapidly evolving during the first 6 months that the project staff interacts with local people. Will the CLA framework help the project staff design a ToC based on local evidence rather than assumptions? Is USAID receptive to using an evidence-based, evolving ToC to design a localized project?
Carlos Tarazona, Senior Evaluation Officer FAO, says that “In the FAO Evaluation Office we have used Theory of Change (ToC) … for evaluation purposes only.”
Follow-up question for Carlos: Have you seen an evidence-based, evolving ToC used for real-time evaluation? If someone wanted to use an evidence-based, evolving ToC to design a localized project, could they get helpful advice from an expert in real-time evaluation?
John Hoven
Erdoo Karen Jay-Yina
Senior Evaluation Officer, CGIAR's Independent Advisory and Evaluation Service (IAES)
Dear John and colleagues,
Excellent question, which sparked reflections based on insights from the recently completed independent reviews of 12 CGIAR research programmes (CRPs). CRPs are global research-for-development programmes covering themes from single-crop programmes like RICE to integrated cross-cutting programmes like Climate Change, Agriculture & Food Security (CCAFS).
How does your own work relate to the topic question?
Following an earlier announcement, the evaluation function of the CGIAR Advisory Services only recently completed independent, rapid reviews which covered the quality of science as well as the effectiveness of the outcomes achieved, zooming in on progress along the ToC and the usefulness of the ToC.
What is a real-world example of a localized project design?
The evidence need not be de-coupled from the risks and assumptions, as together they give a big picture of the ground realities, irrespective of the size and type of intervention/program/initiative. To put things in perspective, the ToCs for CRPs are layered. First, all CRPs have a ToC which contributes to the overall CGIAR Strategy and Results Framework. Cascading down, the CRPs in turn have different Flagship Programmes (FPs), and each FP contributes through specific impact pathways nested within the overall ToC. The FP ToCs were co-designed and developed in collaboration with project teams, reflecting a bottom-up approach, a process much appreciated overall in the reviews.
The CRP reviews found that, although most CRPs incorporated evidence from previous independent evaluations and impact assessments at conceptualization and during implementation, the ToCs had varying levels of use and evolution. Overall, for some of the CRPs, the reviews found value in the process, in cultivating ToC-thinking even among scientists, but limited evidence of its use as a measurement tool linked to the results framework.
What would an evidence-based, evolving Theory of Change look like for that project?
Given the global nature of CGIAR and of most CRPs, grounding in the context has been key: ToCs are very context- and programme-specific. Framed within its context, one of the CRPs (Forests, Trees and Agroforestry - FTA) had considerably evolved its use of the ToC, with annual targets adapted and indicators suited to field realities. Some CRPs did not make any changes to their ToC (WHEAT). One conclusion of the WHEAT review was that its ToC was good for “(1) priority setting, (2) assessing contribution of scientific outputs, (3) seeking and justifying funding, (4) mapping trajectory to impact and (5) reporting” but unsuitable for assessing the effectiveness of the CRP or its flagships. Why? The review report says “because that was not its purpose.” Intentionality matters when developing ToCs: in order not to limit their usage in evidence generation, learning, reporting and reprogramming, ToC development and iteration teams have to be intentional about co-designing the ToC as an iterative evidence tool, tying in the indicators, linking the drivers and risks, and testing the assumptions and causal pathways.
What opportunities and obstacles do you see?
Adaptive management was not found to be necessarily tidy: revising the ToC on the basis of evidence, assumptions and risks could make the process, as well as the aggregation and reporting of results, cumbersome. Yet this can be managed if reporting is consistently structured around the indicators and targets linked to the (updated/revised) ToC. The suite of metrics has to reflect the design, implementation and scale-up of scientific innovations on the ground, and has to be flexible, useful and coherent so that progress can be tracked in a way that gives a clear picture; the context also has to promote a learning-by-doing approach. When the underpinning ToC, the evolution of the system and the CRP metrics, and the evidence, with its associated risks and assumptions, are revisited, captured and tracked coherently, then process tracing or contribution analysis of particular causal pathways is made easier. On the other hand, when ToCs are not context-specific (in time and place), which was the case in one of the CRPs (Grain Legumes and Dryland Cereals - GLDC), accurate reflection on progress is challenging, as some ToC impact pathways may become obsolete.
Reading other responses has been interesting, obviously, your question sparked an intriguing discussion. Should you and colleagues be interested in more information on earlier reflections around ToC in CGIAR and actual CRP Reviews, you can check out the hyperlinks.
Best regards,
Erdoo Karen Jay-Yina
Janvier Mwitirehe
Evaluation Researcher, Horizon of Excellence Ltd
Dear all,
The reality is that we operate in a fast-evolving environment that needs to be considered when implementing our programs or projects. As colleagues pointed out, the constraints brought about by COVID-19, for example, should push us to adapt in order to successfully achieve the intended results.
This can be done, for example, by introducing what USAID calls the "collaborating, learning and adapting (CLA) framework" [CLA Tool Kit Landing | USAID Learning Lab], which involves a set of practices integrated into the program cycle to ensure that programs are coordinated with others, grounded in a strong evidence base, and iteratively adapted to remain relevant throughout implementation. With this, the identified critical assumptions central to a TOC must be periodically tested (a central feature of assumption-based planning) and, if they are no longer valid, adaptive management steps are employed in response.
It is true that this brings implementation complexity, which also requires the use of new and still-evolving complexity-responsive evaluation methods. Under such conditions, there is a need to integrate data science* into MEL activities.
Note: Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. Data science is related to data mining, machine learning and big data (https://en.wikipedia.org/wiki/Data_science).
Best,
Janvier
Jean Providence Nzabonimpa
Regional Evaluation Officer, United Nations World Food Programme
Dear John,
Happy 2021 to you and all our colleagues on the platform!
Thanks for raising a critical and intriguing question that is well worth looking into as evaluators. I am sure I cannot do justice to the important points you have raised, but at least I can share my two cents. I hope colleagues will also keep coming in for a richer discussion.
It is true that we assume we understand the issues affecting local communities, and we design interventions to meet their needs accordingly. I completely agree with you: there are important factors unknown at the design stage of development interventions. When little is empirically and theoretically known about a community, little may be done and achieved. Ideally, we need to know the unknowns in order to design proper interventions and better serve the target communities. Unfortunately, it does not always work like that; it is not linear, even more so in the pandemic-stricken era. We base what we do on what we know. In the process, we learn something new (i.e. evidence) which helps us redefine our design and implementation. The complexity of our times, worsened by COVID-19, has pushed all evaluators to rethink their evaluation designs and methods. It would be an understatement to point out that we all know the implications of social (I personally prefer physical) distancing. Imagine an intervention designed with a face-to-face results chain as the underlying assumption for achieving the desired change! Without rethinking its Theory of Change (ToC), the logic underlying such an intervention may not hold water. This scenario may apply and rightly prove that we need a time-evolving ToC. In my view and professional practice, the answer is in the affirmative: we need a time-evolving, evidence-informed ToC. We use assumptions because we do not have evidence, right?
Keeping the ToC intact throughout the life of a project assumes that most of its underlying assumptions and its logical chain are known in advance and remain constant. This is rarely the case. I believe that changing the ToC does no harm; instead, it maximizes what we learn so we can do better and benefit communities. Let’s consider this scenario: assume X outputs lead to Y outcomes. Later on, one discovers that factors A and B are also contributing to Y, and more significantly than the initially assumed X outputs. Not taking factors A and B into account would undermine the logic of the intervention; it undermines our ability to measure outcomes. I have not used outcome mapping in practice, but the topic under discussion is a great reminder of its usefulness. Few development practitioners would believe flawed ‘change’ pathways. Instead, I guess, many would believe the story of the failure of the ToC (by the way, I hate using the words fail and failure). Development practitioners’ lack of appetite for accommodating other factors in a time-evolving ToC when evidence is available is possibly the cause of such failure. In the end, an evaluation may come up with positive and/or negative results which are counterintuitive, or which cannot be linked to any component of the intervention. It sounds strange, I guess, simply because pieces of evidence emerged that were not incorporated into the logic of the intervention.
I guess I am one of those interested in understanding complexity and its ramifications for the ToC and development evaluation. I am eagerly learning how Big Data can and will shed light on the usually complex development picture, breaking the linearity silos. Just as we increasingly need a mix of methods to understand and measure the impact of, or change resulting from, development interventions, the same applies to the ToC. If linear, the ToC may eventually betray the context in which an intervention takes place. If multilinear or curvilinear and time-evolving, the ToC is more likely to represent the real but changing picture of the local communities.
I would like to end with a quotation:
“Twenty-first century policymakers in the UK face a daunting array of challenges: an ageing society, the promises and threats for employment and wealth creation from artificial intelligence, obesity and public health, climate change and the need to sustain our natural environment, and many more. What these kinds of policy [and development intervention] challenges have in common is complexity.” Source: Magenta Book 2020
All evolves in a complex context which needs to be acknowledged as such and accommodated into our development interventions.
Once again, thank you John and colleagues for bringing and discussing this important topic.
Stay well and safe.
Jean Providence
Serdar Bayryyev
Senior Evaluation Officer, FAO
Dear John,
In my view, the Theory of Change, or logical framework, or any other method used to guide the design of the development intervention (project) is critical. These methods should be based on comprehensive analysis of the development context and the critical issues to be addressed to meet the needs and desires of local communities.
Multilateral organizations have some examples of "community-driven development" projects. For example, the International Fund for Agricultural Development (IFAD) has recently published an evaluation synthesis of "Community-driven development in IFAD-supported projects" (IFAD, April 2020), which is based on the review of case studies of community-driven development projects. The theory of change used for this synthesis was based on the assumption that social capital and empowerment are at the center of the community-driven development approach. This theory of change assumes that participatory implementation process "...is expected to achieve a truly sustainable transformation of rural livelihoods by building poor peoples' capacities to make use of a wider range of livelihood options and by transforming community-government relations to better support people-centred development processes". This theory of change is illustrated in Figure 2 on page 5 of the synthesis paper accessible via the following link:
https://www.ifad.org/documents/38714182/41898849/ESR+CDD+-+final+with+c…
Kindest regards,
Serdar Bayryyev
Food and Agriculture Organization
Carlos Tarazona
Senior Evaluation Officer, FAO
Dear John,
Thanks for starting this interesting discussion! The approach that you outline is very similar to how Outcome Mapping (OM) is used for planning and monitoring purposes. In the FAO Evaluation Office we have used Theory of Change (ToC) and OM for evaluation purposes only.
I believe you can find some examples on evolving ToC and the application of OM in the real world at the Better Evaluation website: https://www.betterevaluation.org/en/plan/approach/outcome_mapping
Best regards,
Carlos