How best to evaluate capacity development?


In our last EvalForward Talks session, on 30 April, I shared my experience of conducting a capacity-development project review from a monitoring and evaluation (M&E) perspective, and I would like to thank all those who participated for their comments and insights. Herewith a brief overview of the key points that emerged on how to capture the effects of capacity development (training sessions, workshops and mentoring activities) in a meaningful way.

The twists and turns of capacity development

As you know, a broad approach is needed when evaluating advocacy and capacity-development projects ‒ and this doesn’t really fit with the linear processes of project logframes and related indicators. While planned outputs, outcomes and impacts remain important landmarks for project implementation, project teams and evaluators should be able to look beyond indicators and targets when assessing project effectiveness. We may be able to plan and anticipate certain elements of implementation and their effects at the project proposal stage, but how likely is it that awareness-raising exercises, advocacy activities and training and mentoring sessions will achieve precisely and solely what we were targeting? The knowledge and skills accrued during these projects are retained and put to good use over a longer period by a wide variety of people in highly diverse environments. Thus, while we may plan in linear fashion, reality can take many unexpected twists and turns.

My experience of monitoring and evaluating capacity development

By way of example, take our Strengthening Local Advocacy Leadership in East Africa capacity-development project, which saw 12 youth-led organizations advocate for family planning and reproductive health rights in Kenya and Tanzania. They received a year of foundational training and then had the opportunity to apply for a two-year sub-granting phase.

First up ‒ and very much in the spirit of Silva Ferretti’s “Understanding first” EvalForward Talks session (26 March 2021) ‒ project staff, beneficiaries and external evaluators all needed a clear, common understanding of the project concept, the theory of change and the rationale behind the capacity-development efforts. Without common understanding, expectations can drift in various directions and the effectiveness of the project may suffer. To this end, project planning involved a core project team, including key knowledge bearers in the implementing country offices, to ensure that each team member had a clear overview and understanding of the project’s goals. An M&E framework was developed at the start of the project, including a glossary of key terms and an indicator reference sheet, and this became a living, central document in which any uncertainties that arose could be clarified.

Secondly, a flexible donor and flexible project design were important to foster openness and opportunities to explore new ways of measuring advocacy and capacity-development effects and to facilitate adaptation to the twists and turns of project implementation.

Here are some of the methods and tools used in the project’s M&E:

  1. The Advocacy Capacity Review (ACR) approach, designed by experts at the Aspen Institute. This is a facilitated 2–4-hour process for assessing organizational capacity to advocate for family planning and reproductive health. The ACR was designed to enable organizations to reflect on their own skills and learning priorities, rather than adhering to the pre-set standards of traditional organizational capacity assessment. It aimed to assess the organizational and advocacy capacity of the 12 youth-led groups and to identify their key capacity-development priorities. The results served as a baseline and directly informed the development and design of the project’s Capacity Development Facilitators Guide for the 12 months of foundational training. Further interim assessments reported on progress and led to individualized capacity-development plans.
  2. Traditional pre- and post-tests, as well as questionnaires and surveys, were used to collect initial feedback on the quality of capacity-development measures and any need for improvement. While such exercises are valuable for gaining insights into how to improve capacity-development materials and approaches, they offer little information on the long-term effects of capacity-development interventions. Indeed, as one participant noted during the session:

“It is one thing to learn and check through post-testing whether and to what extent trainees have learned what has been taught. It is another to practise and apply the knowledge, and you can only measure this when you know what the initial problem or knowledge gap was, and whether this problem has now been solved using the new knowledge. Only then will you see the change. This is something you will see in the long run. The success of capacity development can be measured only indirectly through the resolution of problems previously caused by a lack of knowledge.”

The aforementioned ACR approach is a valuable tool in analysing long-term changes brought about by capacity-development interventions.

  3. All of the refresher training sessions and advocacy strategy network meetings provided scope for peer-to-peer knowledge and skills exchanges in community-of-practice sessions, capturing skills gains and demonstrating the value of peer learning and networking.
  4. Ongoing monitoring tools, such as advocacy message uptake, communications and policy involvement trackers, helped to identify and review the integration of advocacy messages in local policy agendas, improvements in local visibility, and interaction in political forums and advocacy networks.
  5. Mentoring reports and trackers helped to keep track of mentoring involvement and progress and to identify changing needs, but also served to intensify mutual trust, learning and support.
  6. Monitoring visits were used to maintain close relationships and review implementation on both ends, with discussions held to support mutual accountability.
  7. During our final year, with a strong focus on reviewing, learning and follow-up project planning, and as an innovation in times of COVID-19, we introduced monthly one-hour ideation sessions with all project stakeholders to enable trustful, open exchanges on areas of project success and, more importantly, areas for improvement. While we had failed to gather meaningful feedback from online questionnaires, these sessions proved helpful in terms of learning and ideas for improvement.
  8. A lessons-learnt tracker helped us to continuously capture lessons learnt and potential adaptations.
  9. Over the past two years, we have provided tools to capture stories of change and held a series of storytelling training sessions, so that our “project family” could become storytellers and showcase their most inspiring learnings, detailing how these have affected their lives and their advocacy work. Their accounts will be presented and shared at a “story of change” conference. These narratives help us to identify capacity-development effects beyond our linear logframe world, while the conference aims to spur knowledge exchange, networking and alliance building.

We plan to hold a final, short ACR round and then focus mainly on approaches that are new to us, such as outcome harvesting, to learn more about the unexpected effects of our project intervention. As another participant noted during the session:

“Outcome harvesting could help to categorize the information gathered from, for example, storytelling, surveys and interviews, to find patterns in the types of changes [experienced by] those who had been through a capacity-development intervention, and also how they did things differently and, in turn, influenced others to do things differently.”

Learnings

To summarize, my main learnings on evaluating capacity development are:

  • It is crucial to invest a lot of time and be willing to learn and adapt.
  • Capacity development cannot be a one-off exercise: it needs a careful, tested design (based on a needs assessment and a highly participatory approach), it must incorporate regular and inclusive review and adaptation cycles, and it must build a trustful learning and sharing environment, free of hierarchies and with clear mutual accountability.
  • This creates ownership and leads to an enriching, transformative and sustainable process.

What are your experiences, lessons learnt and good practices in monitoring and evaluating advocacy- and capacity-development-focused projects? We are very keen to form an insightful community of practice in this area! Feel free to get in touch, either through this thread or by contacting me directly at cornelia.rietdorf@dsw.org.