Monitoring, evaluation and learning: How can we maximize capacity?



Growing interest in MEL

Monitoring, evaluation and learning (MEL) has recently gained momentum in the context of agricultural innovation and development projects.

MEL provides a helpful framework and tools to accompany the implementation of targeted interventions, with a view to improving agricultural sustainability.

The MEL system fosters continuous evaluation and learning, which enables the adaptive management of transformational projects. It requires a systematic effort to measure implementation progress and simultaneously enhance continuous and real-time learning among those involved, be they farmers and other rural dwellers, civil society representatives, researchers, policymakers or evaluation practitioners. It builds on a variety of tools, approaches and indicators to assess results, integrate lessons and improve impact. It thus supports improvements in project performance.[1]

Yet MEL often takes a back seat to the typical requirements of evaluation commissioners, who prioritize evaluation reporting in line with theories of change and logframes. Building a MEL culture is a process and, to this end, it is important to mobilize the necessary capacity.

Experiences of maximizing MEL capacity were discussed at a recent webinar jointly organized by EvalForward and the LIAISON project.

What did the LIAISON project reveal about the assessment of agricultural innovation?

The LIAISON project (funded by the European Union’s (EU) Horizon 2020 programme from May 2018 to October 2021) brought together a mix of academics and practitioners from 15 European countries with a view to unlocking the potential of working in partnership for innovation in agriculture, forestry and rural business. Among its several work streams was one entirely dedicated to testing and validating approaches to the evaluation of agricultural innovation in multi-actor settings.

Together with the project team, I was involved in developing a set of tools with other projects and networks that served as “case studies”. The core principles of the LIAISON evaluation work were developmental evaluation practice, participatory co-design and learning. We examined both widely practised and lesser-known quantitative and qualitative tools. We looked at established innovation measurement indices, such as those developed by the EU under the Common Agricultural Policy monitoring and evaluation (M&E) framework, those of the Organisation for Economic Co-operation and Development, the International Food Policy Research Institute’s Agricultural Science & Technology Indicators (IFPRI-ASTI) and scientific performance indicators (scientometrics and altmetrics). We observed, however, that they were not sufficiently focused on the interactivity of innovation and measured it from the perspective of donors rather than project beneficiaries.

Evaluation is needed, but not prioritized

The evidence collected during the LIAISON project underscored the role of evaluation in the optimization of agricultural innovation. Among other things, the case-study partners observed the need for more participatory approaches and MEL. LIAISON consortium members took on project case studies, spanning agricultural innovation projects implemented at national, regional and international level (most supported with EU funding). As part of our approach, we shifted the focus to evaluation needs at project level rather than donor level.

Most of the projects we worked with did not have internal resources that could be directed to an organized evaluation process. The projects lacked either financial or other capacity (such as time and personnel), but had myriad evaluation needs. Aside from measuring project progress, they wanted to explore, in particular, the project team’s collaboration with external stakeholders (interactivity). Formal evaluations often failed to pay attention to such interactivity, or it was entirely absent from projects.

At the same time, we realized that interactivity took diverse forms in each project. For instance, it could take place through virtual or face-to-face interactions. Projects welcomed the idea of having such voluntary evaluation work and confirmed it was helpful for optimizing ongoing project implementation, learning lessons from completed projects and forming a solid base for future undertakings. The stakeholder networks and topics of individual projects varied, so indicators had to be tailored to their unique settings. In the complex environments in which the projects operated, MEL was considered crucial to supporting the quality of the project at each stage of its lifecycle.

How to develop MEL capacities with limited resources

While working with the case-study projects, we were confronted with the limited resources available at individual project level. To plug the gap and boost project capacity, we developed a cascading capacity-building system. Experienced evaluators trained LIAISON researchers, advisory service providers and non-governmental organization partners (non-professional evaluators) in the basics of M&E. The individuals participating in the training remained in touch with their respective case-study partners and jointly developed the main components of the MEL system. Within a short time (no more than five working days), each case-study team was able to co-design its evaluation plan, including core components such as evaluation scope, questions, indicators and methods of data collection.

The experienced evaluators also provided coaching to clarify any emerging issues, especially in relation to narrowing the scope of the evaluation and the number of indicators involved, which proved to be most challenging. Often, the case-study projects identified more evaluation questions and needs than the team was able to fulfil with its resources. For instance, one project dealt with value-chain development while another centred on agroecology, and each wanted to focus specifically on its performance in the agricultural field in question. Hence, the overall evaluation had to be streamlined to optimize existing capacities.

At the same time, we learned that when dealing with the evaluation of agricultural innovation, there is no one-size-fits-all approach in terms of methodology, evaluation questions or indicators. The nature of these projects is also highly interactive, so standard approaches such as the theory of change and logframe may not suffice to measure the change involved.

So, can we conduct evaluations without a theory of change? The interactivity of interventions calls for specific indicators and theory-of-change alternatives, such as systems approaches, social network analysis or participatory impact pathways analysis. To address this gap, we tested and validated a number of tools and approaches, a summary of which can be found in the LIAISON Toolbox.[2] They proved helpful in depicting complex project realities and supporting the learning process at various project stages. Both qualitative and quantitative tools were useful in observing the changes that projects triggered, especially by focusing on the interactions between actors. We also developed a set of “interactivity indicators” together with the individual case-study projects ‒ a unique set for each case.

A role for experienced evaluators in supporting MEL practices

One of the most important takeaways from the LIAISON evaluation work is that non-professional evaluators can play a very active role in MEL. Even with limited resources, time constraints and the initial hesitance of some team members, we were able to succeed in building basic evaluation capacity and enthusiasm.

We recognized the huge potential and attraction of MEL as a way of advancing both evaluation and project implementation. People working with projects, especially those aimed at bringing about positive change, are often interested in and capable of reflection on what is happening, but sometimes lack the tools to do so. Experienced evaluators could play more of a role here in providing coaching and guidance to other project team members, as they are often able to develop evaluative thinking and, indeed, enjoy it.

In the coming months, more tools will be made available on the LIAISON website at