A lack of learning in the monitoring and evaluation of agriculture projects

17 contributions

Monitoring and evaluation (M&E), monitoring, evaluation, accountability and learning (MEAL), monitoring, evaluation and learning (MEL), monitoring, evaluation, research and learning (MERL), monitoring and results management (MRM) or whatever you choose to call it (or them?), should help us learn from experience. Sadly, this is not always the case.

There is an apparent irony in the fact that systems supposedly designed to help us learn from experience have been so reluctant to learn from their own experience. In my view, this is in large part due to the isolation of M&E within programmes and projects: working in silos and collecting data that do not feed into or help with management decisions. This was a conclusion reached by an overview of monitoring and evaluation in the World Bank as early as 1994.

Increased production is the expected outcome of all agriculture projects that deliver support directly to farmers, and of market system development programmes too. However, to put it starkly, anyone with a basic grasp of statistics knows that it is virtually impossible to determine yield and production trends in rain-fed smallholder farming systems within the implementation periods of most projects, let alone attribute them.
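
As an aside, the statistics behind this claim can be illustrated with a minimal simulation sketch (all parameters below are illustrative assumptions, not field data). It mimics the common baseline-versus-endline yield survey: one season-wide rainfall shock scales every farm's yield at baseline and another at endline, as happens in rain-fed systems, and a true 10% project effect sits underneath.

```python
# Sketch: why a baseline vs endline yield comparison in rain-fed systems
# says little about project effect. All parameters are assumed, not measured.
import numpy as np

rng = np.random.default_rng(0)
n_farms = 300        # farms surveyed at baseline and again at endline
cv_season = 0.30     # assumed rainfall-driven swing in the seasonal mean yield
cv_farm = 0.40       # assumed farm-to-farm variability within a season
true_effect = 0.10   # assumed true yield gain attributable to the project

measured = []
for _ in range(5000):
    # One shared weather shock per season scales every farm's yield.
    base_season, end_season = rng.normal(1.0, cv_season, size=2)
    baseline = base_season * rng.normal(1.0, cv_farm, size=n_farms)
    endline = (1 + true_effect) * end_season * rng.normal(1.0, cv_farm, size=n_farms)
    measured.append(endline.mean() / baseline.mean() - 1)

measured = np.array(measured)
print(f"true project effect: {true_effect:+.0%}")
print(f"'measured' yield change, 5th-95th percentile: "
      f"{np.percentile(measured, 5):+.0%} to {np.percentile(measured, 95):+.0%}")
# The measured 'trend' typically ranges from strongly negative to strongly
# positive: it mostly reflects the weather, not the project.
```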

And yet these impossible measurements are still being included in the briefs of project M&E efforts, and donors and managers still see them as valid pursuits. Such efforts are methodologically tricky and time consuming to do properly, and the opportunity costs of surveying such indicators are not insignificant. Simpler ways of learning from farmers exist: asking how they rate and respond to the support and to what extent this varies, with the closer relationship between managers and field staff that comes with this task. Information on whether or not farmers change their production practices is more important for project management decision-making, so it should not be trumped by measuring the assumed consequences of those changes.

See my experience shared in this blog.

What do you think? I would be happy to hear other experiences…

This discussion is now closed. Please contact info@evalforward.org for any further information.
  • Dear all

    Thank you so much for your useful contributions. They are very nice discussions and reflect the merits and demerits of MEAL systems. I know you have covered most of the ideas and experiences. Just a few points need to be considered, based on our practical experience:

    1. There is a need to develop a culture of understanding that MEAL is designed to support better, evidence-based decision-making, not for spying or fault-finding.

    2. MEAL and programme teams need to work together, not in isolation, and the findings of MEAL activities should be used to improve decision-making and meet the needs and gaps on the ground – rather than the situation Emma Nthandose Gausi describes: 'In most projects/programs the M&E process has been relegated to a data collection and processing ONLY activity. And this data/information is also always done to satisfy donor requirements. Learning in M&E should be prioritised throughout the project cycle.' https://www.evalforward.org/members/emma-gausi

    3. To get reliable and accurate feedback from community stakeholders, such as farmers, we need: (i) to share clear information about everything we do in our interventions (project objectives, duration, selection criteria, and the benefits and changes communities will get); and (ii) real participation and involvement of the targeted people from the design to the closing of our projects.

    4. Increase advocacy of humanitarian values, principles and accountability principles, so that stakeholders participate in changing and adapting local policies and understanding, to enhance quality of practice, cooperation and acceptance.

    Thanks

     

  • Dear All,

    Many thanks for all your varied and useful responses. Informed by these, I have put together some concluding remarks. I hope you find them useful.

    The trick to making monitoring useful is to avoid leaving it in the hands of people who may be fluent in theorising, use overly complicated language and are well versed in the array of methodologies, yet may not be natural judges of performance. Understandably, this puts off many team members and managers.

    As some of you mentioned, M&E activities often throw up a mass of numbers and comparisons that provide little insight into performance. Rather, they are used for justifying the investment, and may even paint a distorted picture of reality. Managers need to take ownership of monitoring – to find measures, qualitative as well as quantitative, that look past the current budget and previous results and ask questions whose answers determine how the programme or project can best attract and retain clients or beneficiaries in the future.

    Five takeaways from the discussion in the form of tips:

    1. Avoiding elephant traps in design

    • Change takes time. Be realistic when defining the outcome (what changes in behaviours and relationships among the client groups will emerge) and the impact (what long-term consequences such changes will stimulate, geographically outside the programme area and/or in the lives and livelihoods of clients).
    • For market system programmes: i) farming systems are systems too, and need an adequate diagnosis; ii) do not make premature reference to system-level change in the hierarchy of results during the pilot phase, so that impact is treated at that stage as being solely about farmer-level change; and iii) the crowding-in phase is, by definition, impact in a geographical or spatial sense, and rarely is it possible to observe, let alone 'measure', this within pre-ordained project time frames; see here for a 'watered down' version of how M4P (making markets work for the poor) learns from itself:

     https://assets.publishing.service.gov.uk/media/5f4647bb8fa8f517da50f54a/Agriculture_Learning_Review_for_publication_Aug_2020.pdf

    • Ensure the outcome and its indicators reflect the needs and aspirations of those in need, not those of the donor – for example, do not assume all farmers aspire to increase returns to land (i.e. yield/productivity gains). Often the limiting factor is labour, not land.

     

    2. Distinguishing competencies for M from those for E

    • Clearly explain how monitoring is driven by helping managers resolve decision-making uncertainties, often found among the assumptions, through answering questions. And in doing so, clearly distinguish these from the questions that evaluators – who often come from a research background – are trained to answer, typically for the donor.
    • Use the useful analogy of accounting (monitoring) and audit (evaluation) to help make the distinction – they are done for different reasons, at different times, and by and for different people. You can be a "best in class" evaluator by developing methods, delivering keynotes at conferences, getting published, teaching or attending an "M&E" course. Do these skills and experiences make you "best in class" at monitoring? No – not necessarily, and in fact rarely. Yet it is surprising how much sway and influence evaluation and evaluators have on monitoring practice – developmental evaluation?

     

    3. Negotiating information needs with donors

    • Unambiguously define what information is needed to manage implementation by balancing the need to be accountable to the client as much as, if not more than, to the funder, and do it before developing a theory of change and/or a results framework.
    • Focus these information needs on the perceptions of client farmers, and their reception and acceptance or rejection of the project – being accountable to them will aid learning, more so than being accountable to funders and learning about their administrative issues; and
    • Do not limit management's and clients' information needs to indicators in a logframe and blindly develop a "measurement" or "M&E" plan. Taking this route leads to a qualitative degeneration of monitoring. Assumptions, or the unknown, often matter more for monitoring than indicators when working in unpredictable operating environments. And: "Beating last year's numbers is not the point; a performance measurement system needs to tell you whether the decisions you're making now are going to help you and those you support in the coming months".[1]


    4. Integrating the monitoring process into the processes of other people

    • Build into the job descriptions of those who deliver the support the asking of questions they can use to develop relationships with, and better learn from, clients – see 3 above;
    • Use Activity-Based Costing as a way to encourage financial specialists to work with those responsible for delivering the outputs – this helps cost the activities and so links financial and non-financial monitoring (it will also help you answer value-for-money questions, if required); a rough sketch of the idea follows this list;
    • Good management decision-making is about making choices, and monitoring information needs to inform these. A decision to stop doing something, or to do something differently, should be analysed as closely as a decision to do something completely new.
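
    To make the Activity-Based Costing point concrete, here is a minimal sketch in Python. All names and figures (the activities, timesheets, rates and output counts) are hypothetical illustrations, not data from any real project; the point is simply that joining timesheets and the ledger to monitored output counts yields a cost per output that managers can interrogate.

    ```python
    # Hypothetical Activity-Based Costing sketch: allocate staff and operating
    # costs to delivery activities, then join with monitored output counts.

    # Hours each staff member logged against each activity (assumed timesheets).
    timesheets = {
        "field_officer": {"farmer_training": 60, "demo_plots": 40},
        "agronomist":    {"farmer_training": 20, "demo_plots": 80},
    }
    hourly_cost = {"field_officer": 15.0, "agronomist": 25.0}  # USD, salary + overhead

    # Non-staff operating costs booked directly to activities (assumed ledger).
    direct_costs = {"farmer_training": 1200.0, "demo_plots": 800.0}

    # Output counts from routine monitoring (assumed): farmers trained, plots run.
    outputs = {"farmer_training": 450, "demo_plots": 12}

    # Start from direct costs, then add allocated staff time activity by activity.
    activity_cost = dict(direct_costs)
    for person, hours_by_activity in timesheets.items():
        for activity, hours in hours_by_activity.items():
            activity_cost[activity] += hours * hourly_cost[person]

    for activity, cost in activity_cost.items():
        print(f"{activity}: total ${cost:,.0f}, "
              f"${cost / outputs[activity]:,.2f} per output unit")
    ```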


    5. Being inquisitive while keeping it simple

    • Ignore the pronouncements of rigid methodological dogmas or standards. As some of you mentioned, there is a lot of really useful material out there, old and new. Take the risk of thinking for yourself…
    • Keep it simple, and avoid making it more complicated by keeping up with fads and jargon that isolate monitoring behind impenetrable language.

     

    “If you can't explain it simply, you don't understand it well enough.” 

  • Dear Richard,

    Thank you for providing the link to your reflections on M&E. A telling and thought-provoking read. I especially noted, and was surprised by, how the issues you raise persist, notably:

    # 3 On the limited consequence of research plots (on-farm?) regarding the spread of practices/technology to the farmer's other plots and/or among other farmers in the community.

     - And all in the face of farming systems research, with its focus on systems thinking, and Chambers' work on 'farmer first' dating back to the 1980s. How can we remind people associated with today's agriculture market systems programmes of these and other lessons?

    # 4 On how donors assume that land, not labour, is the limiting factor, with the unquestioned indicator of choice being physical or financial returns to land – yields – without bothering to find out why smallholder farmers cultivate what they do.

     - Your reference, later in the document, to Kipling's poem "The White Man's Burden" reminded me of William Easterly's book with the same (borrowed) title. His central message is about the imposition by the West of large, grand schemes thought up by "friends of Africa" – Tony Blair's Africa Commission, Sachs and the Millennium Villages, and Obama's Feed the Future programme. In agriculture, unlike health and education, farmers are not patients treated by doctors or pupils taught by teachers: they are the experts.

    Last week there was an interesting EvalForward webinar on Evaluation and Climate Resilience. One thing that interested me was how little the evaluations revealed about indigenous "Climate Smart" agriculture. The term seems limited to practice being introduced to farming communities without necessarily learning about how, for example, indigenous concepts of soil-moisture dynamics could explain contrasting seasonal and inter-annual fluctuations in agricultural productivity, nutrition, health, mortality and even marriage rates across a soil-type boundary.    

    #11 On how M&E is more about covering up failure, and its fit with taxpayer expectations. Peter Dahler-Larsen's (mindless) "evaluation machines" are a good example of what I think you refer to here. He and Estelle Raimondo presented a great exposé of current evaluation practice at last year's European Evaluation Conference. On the taxpayer issue, there was some interesting research a few years ago highlighting that UK taxpayers do not want numbers, but rather stories of how and why aid works, or not. The thing is, DFID is not accountable to the UK taxpayer, but to the Treasury (who want numbers). Numbers, as Dahler-Larsen and Raimondo say, are one of evaluation's blind spots.

     

    Apologies for the Monday afternoon rant, and thanks again for pitching in with your writing. 

  • In a previous posting I provided the M&E section of a larger document I am preparing, reflecting on my 50+ years of assisting smallholder communities. The full document is now available on the smallholderagriculture website I manage. Please note the material is more concerned with factual accuracy than with being politically correct. The direct link is:

    https://agsci.colostate.edu/smallholderagriculture/wp-content/uploads/s…;

    I hope you have a chance to review it and that it provides insight on how to better serve smallholder communities. Thank you.

  • To you all, my thanks for sparing time to share your experiences and insights. I will be posting, based on your comments, some conclusions and tips when the discussion closes next week. 


    Meanwhile, I wanted to make some initial responses drawn from your comments.


    1. The trick to making monitoring useful is not to leave it to people who may not be natural judges of performance, whether they are employees of donor agencies or their agents: people who are fluent in developing frameworks and theories of change, use overly complicated language and are well versed in an array of methodologies insisted on by the donor. Understandably, this puts off many team members and managers. It seems boring and onerous. So much so that, for some, it is not clear that it is even a profession. Perhaps monitoring is but a contrived learning process unique to development aid?


    2. The fashion of adding more letters to the acronym M&E – L for Learning, A for Accountability, R for Results – appears to be more for affect than effect. I, like some of you, query why some consider this either revealing or helpful. It defines the fatuity in which some of us toil.


    3. It also distracts from the most important feature many of you point out: to listen to, and so learn from, those who matter most – the ultimate clients or beneficiaries. They are also the experts. Too often their voices and objectives are crowded out by those of donors, typically set out in log or results frameworks. Accountability to donors rather than to beneficiaries appears to be more commonplace than would be expected or hoped for, and is burdensome for other stakeholders.


    4. As some of you mentioned, the inevitable result is a mass of numbers and comparisons that provide little insight into performance. Some even require a suspension of disbelief, given typical implementation periods. Rather, they are often used for justifying the investment to donors, and may even paint a distorted picture of reality. Beating last year's numbers is not the point.


    5. Managers need to take ownership of monitoring – to find measures, qualitative as well as quantitative, that look past the current budget and previous results and ask questions. Questions whose answers help determine how the programme or project can be better attuned and responsive, so as to better "land" with, or be acceptable to, clients and beneficiaries in the future.

    Many thanks again and please, if there are any further contributions or responses to the above...

    With best wishes and good weekends,


    Daniel 
     

  • [Contribution originally posted in Spanish]

    Excellent comment Nayeli! However, there is still a long way to go to implement the M&E culture in projects, programmes and policies in our countries (I am speaking from Uruguay). In my country there has only been an evaluation agency for the executive branch of government for a few years now (I think Mexico is more advanced in this respect, as FAO provided an independent evaluation service for public policies a few years ago). In the Legislative Branch they are considering the best way to advance in the M&E of their work.

  • Sorry, but I would like to take exception to Ablaye Layepresi Gaye's comments concerning farmers' lack of experience and knowledge. I wonder if what he is observing is really the impact of limited operational capacity to comply with recommendations. Can he confirm that the farmers have access to enough labour to complete the various crop management activities in the desired timely manner? That is 300 diligent person-hours per hectare for manual (hoeing) land preparation. Does this labour have the necessary 4000 kcal/day diet that would allow a full day of agronomic field work, or is it limited to 2000-2500 kcal/day, which, after subtracting 2000 kcal/day for basic metabolism, leaves only a few hundred kcal/day for labour – at 280 kcal/hr, good for perhaps two hours of diligent effort? Thus, is what he is objecting to really the rational compromises farmers have to make in adjusting recommendations to their limited operational capacity? Rather than badgering farmers with information of which they already have reasonable knowledge but lack the resources to utilize, would it not be better to facilitate access to additional operational resources that would allow them to more readily comply with desired management practices? This is an area that is traditionally overlooked and falls into an administrative void between the agronomists and social scientists assisting smallholder communities. Please review the following webpages:

    https://agsci.colostate.edu/smallholderagriculture/wp-content/uploads/s…;

    https://webdoc.agsci.colostate.edu/smallholderagriculture/OperationalFe…

    https://agsci.colostate.edu/smallholderagriculture/calorie-energy-balan…
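
    As a back-of-envelope check, the energy arithmetic above can be laid out as a short calculation (a sketch using only the figures cited in this comment; the 2500 kcal/day intake is the upper end of the range mentioned):

    ```python
    # Labour-energy budget sketch using the figures cited above.
    DAILY_INTAKE_KCAL = 2500      # upper end of the 2000-2500 kcal/day range
    BASAL_METABOLISM_KCAL = 2000  # kcal/day reserved for basic metabolism
    WORK_RATE_KCAL_PER_HR = 280   # kcal/hr for diligent manual field work
    HOURS_PER_HA_MANUAL = 300     # person-hours/ha, manual land preparation

    surplus = DAILY_INTAKE_KCAL - BASAL_METABOLISM_KCAL       # 500 kcal/day
    work_hours_per_day = surplus / WORK_RATE_KCAL_PER_HR      # ~1.8 h/day
    days_per_ha = HOURS_PER_HA_MANUAL / work_hours_per_day    # ~168 days

    print(f"{work_hours_per_day:.1f} h/day of field work -> "
          f"{days_per_ha:.0f} days to hand-prepare one hectare")
    ```

    On these assumptions, a single under-fed worker would need roughly half a year to hand-prepare one hectare – precisely the operational-capacity constraint described above.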

    Thank you

  • [Original contribution posted in Spanish]

    The implementation of M&E schemes is recent – formal schemes date back perhaps only six or seven years. The situation in countries that receive rural development and conservation projects from international donors, such as Mexico, is that the installed capacities respond to donor requirements and not to consolidated professional profiles. It is very interesting that in Mexico and Latin America, the development of M&E has focused on programmes in human rights, the economy and education.

    It seems to me that, as a sector, we should make more use of the tools developed by UN agencies focused on rural development (FAO, WFP, IFAD), but their mainstreaming would be even more effective if these agencies engaged with governments and promoted the tools as a reference to be followed.

  • [Original contribution posted in French]

    Nowadays, the real problem we see after long experience of monitoring and evaluation in the agricultural sector is farmers' lack of experience, and especially of basic knowledge, which is something to be rectified.

     

  • This is an interesting discussion. I think that, most of the time, when designing M&E systems we fail to reflect on how the monitoring and evaluation process would benefit the whole spectrum of stakeholders of an intervention, especially the farmers. In most projects/programs the M&E process has been relegated to a data collection and processing ONLY activity, and this is also always done to satisfy donor requirements. Learning in M&E should be prioritised throughout the project cycle. There is usually little or no "M&E" of the M&E systems in our interventions that would help us understand whether they are effective. I have seen a few farmer evaluations where farmers were involved in some M&E activities for learning purposes. But mostly, the farmers do not value it because it is an imposed activity. They do not understand what they are doing or why, as the process feels imposed on them.

  • Dear Daniel,

    Thanks for starting this discussion. From my experience, development partners continue to use long-term impact indicators that are unlikely to be attained within the project's life, yet are used to inform decisions. The challenge, of course, is that true impact takes time: the interventions may provide building blocks on which impact will be realized at some future date, and we cannot model the shocks that tend to affect long-term indicators. As you have correctly identified, the methods and data needed to credibly measure this are expensive, and many partners are clearly not willing to meet the costs. The fallback is less credible evaluations, mainly undertaken in a BAU (business as usual) mode to tick-mark processes.

    A key thing that works, especially if you track people over a long time, is to share both data and knowledge. Our institution shares data in an effort to enhance learning over time, especially as key indicators such as knowledge acquisition and behaviour change cannot be observed in the short term.

  • I fully agree with Daniel's assessment of the M&E process. Too often it is used as a propaganda tool to promote programs that, by most standards, are near-total failures. This can do wonders for getting project extensions and future projects but does nothing for the beneficiaries. It must be recognized that, while M&E can document a project's process, its two most important contributions are to:

    1. Provide guidance to future projects to better serve the beneficiaries, and

    2. Be the only real voice of the beneficiaries – as most projects were more imposed than collaborative, the beneficiaries' only voice is the degree to which they participate in or avoid projects. The M&E process needs to fully identify this.

    One thing to look for is the degree to which M&E reports aggregate results (e.g. total farmers reached) or percentage results (e.g. the share of targeted farmers who adopted a practice). Aggregate results are more an indication of a propaganda agenda, while percentage results offer more of a guiding analysis that could lead to improved programs.

    A couple of weeks ago my university sponsored an international symposium to which I contributed a presentation entitled "Reflections on 50+ Years Assisting Smallholder Farming Communities". I have also prepared a complete write-up of the presentation, which will shortly be posted on my website: https://smallholderagriculture.agsci.colostate.edu/ .

    The presentation contains a major discussion of M&E, which I am excerpting below. I hope you find it useful; please provide any comments or refinements you feel appropriate.

    Link to the excerpt

  • Of course, there is no doubt that capacity in M&E is lacking across Africa and the world overall, not just for agricultural projects or programmes. But this is partly because many of us do not want to learn on our own using the vast resources on the internet. Just as for monitoring, a lot of literature exists on evaluation. For example, many of us should take an interest in the writings of Michael Quinn Patton, the founder of utilization-focused evaluation – you will be amazed – including his debates on systems versus frameworks in measuring performance. And, finally, read the literature from the OECD/DAC and tweak it to your area of intervention.

    However, for me it is important to contextualize the debate, to provoke the need to document learning from agriculture projects so as to address the knowledge gap and be able to replicate the processes across the world.

    Many thanks

    Francis 

  • Dear Daniel and other Evalforward members,

    Evaluation has primarily been developed and used mechanically, serving mainly a tick-mark purpose (donor accountability) rather than learning and improvement. We now know that indicators and the so-called 'log frame' become more or less redundant in the complex situations in which most agricultural projects are run.

    Please allow me to share my recent experience. I am part of a team assessing the contribution of budget support, with a small technical assistance component (a 3-year intervention), provided by a donor to the government to implement a national agriculture development strategy in one of the South Asian countries. As an evaluator, I have noted the following issues during the evaluation process:

    a) The budget support is provided to the government treasury and is not earmarked for the agriculture sector, so there is a high possibility of fungibility. We do not know whether the sector received the funds or had the opportunity to do incremental work – so how do we evaluate the contribution?

    b) The funding contract contained ambitious and irrelevant targets. The programme has six targets, with annual milestones to be fulfilled in order to receive the funds. These targets are not only ambitious for a 3-year intervention but also outside the scope of the agriculture ministry – for example, decreasing the stunting rate and increasing the percentage of land owned by women at the national level. These are not direct interventions of the ministry of agriculture, and many other actors must contribute over a long period to attain them. There were also inadequate coordination and collaboration mechanisms among the ministries and government agencies for obtaining information on progress. In addition, there is no M&E system to collect data from the sub-national level.

    c) The governance structure has also changed, from unitary to federal. The three tiers of government function on their own, without proper coordination and reporting mechanisms. Institutions and policies are still being developed, while a serious capacity gap exists. It has therefore been difficult for the ministry to collect data and compile reports.

    In this context, the log frame is still there, unrevised, and evaluators are asked to assess the contribution of the fund against those indicators/targets. Both the implementing agencies and the donors are still trying to attribute the impact of the fund, which is like 'squeezing water from a stone'. The push to make the M&E approach more contextual and useful is, perhaps, still a long way off.

    Agree: ‘here we go again’ and ‘repeat’ unfortunately.  

    Ram Chandra Khanal, PhD
    Independent evaluator, Kathmandu, Nepal

     

  • [Original contribution posted in Spanish]

    Thanks for sharing, Daniel. I agree with much of what you have shared and think your contribution is very important. I have been involved in M&E in development projects for years – in urban areas – and I agree that, instead of bringing the different stages of the project cycle closer together, there has been an attempt to separate and specialise – I don't quite understand why. Specialisations such as "learning" (which is part of monitoring) and "accountability" (which is part of monitoring and mainly of evaluation) have been "added" – the latter because, if we are talking about development and the improvement of production and living conditions, it is necessary to "give back" to the target group everything that has been done.

    I would like to add that - in my opinion - there is a step that may be overlooked: the intervening team must know what is going to be done, how it is going to be measured and when it is going to be measured, not only as actions and activities that can be seen in isolation, but as contributing to the value chain. This is where monitoring begins, this is where learning takes place (on the implementers' side), this is where accountability is learned. 

    We have theorized too much in a continuous improvement effort, but I think it does not help. I agree that it is more important to know the perception of the target group, because then we can know whether we have contributed to the sustainability of the project (I know that this is measured after the intervention). But if qualitative studies only tell us about the reception, acceptance and appropriation of the project at its end, will this information tell us what we should do?

    This sometimes clashes with donors, who fund for certain periods that do not always allow for the cultural appropriation of the intervention (this is another issue).

  • Interesting contribution Grace: M&L as part of implementation, and E&R (not sure why R ...) later in time by different teams. Makes sense.

    From my experience (and I come from the public sector, national and international), it is already assumed that in any project you need to carry out what you called the M&L work, but it is difficult to include E&R in a meaningful way, so that it makes a real contribution to (i) the project stakeholders, (ii) the implementing organization, and (iii) the funding organization. I see a large opportunity for improvement here (M&E: timing, and contribution to whom).

    Best

    Vicente  

  • From where I sit, "it" (M&E, MEAL, MEL, MRM, MERL) continues to under-deliver in terms of learning and contribution to improvements because it is not really an identifiable professional practice, and it is conceptually messy, which makes it messy in practice. When I first heard of M&E (I was coming from the private sector) I remember thinking, 'is that a thing?'. It sounded as if, in the private sector, we had created something called Accounting and Audit (A&A) and treated it as one professional endeavour. However enthusiastic we might be, those are two distinct but complementary things, done at different times, for different purposes and by different people. The same logic applies to monitoring and evaluation.

    Knowing whether we are implementing activities as planned, knowing how this implementation is 'landing' with the target population and (crucially) adjusting or discarding what is not landing well is the purpose of monitoring; it is an everyday activity and (crucially) an integral part of implementing a project. On the other hand, knowing what changes occurred (and, if possible, how much of that resulted from the project) is the purpose of evaluation. The M and the E are different things, happening at different times and done by different people. When we lump them together and hope "M&E" tells us "what impact/difference we are making", we start prioritising measures of consequences (or searching for attribution, for those so inclined – which steps into the research realm) at the expense of searching for and providing information that is useful for making decisions on what matters most: implementing the project to the best of the abilities of those involved, listening to the populations to learn how and whether the project is working for them, and adjusting accordingly.

    That said, there are elements of evaluation (with a small 'e', as we say where I work) that happen during day-to-day monitoring. That small 'e' evaluation is reflection on, and learning from, monitoring information. So, in another universe where we were all willing to discard "it" and start over again, we could have Monitoring and Learning (M&L) as part of day-to-day project implementation, and then Evaluation and Research (E&R), commissioned at different times and by different people.