“Here we go again” - A lack of learning in the monitoring and evaluation of agriculture projects

Image: landscape, DRC (photo: Bernard Crenn)

Monitoring and evaluation (M&E), monitoring, evaluation, accountability and learning (MEAL), monitoring, evaluation and learning (MEL), monitoring, evaluation, reporting and learning (MERL), monitoring and results management (MRM) ‒ or whatever you choose to call it (or them?) ‒ should help us learn from experience. Sadly, this is not always the case.

There is an apparent irony in the fact that systems supposedly designed to help us learn from experience have been so reluctant to learn from their own experience. In my view, this is in large part due to the isolation of M&E within programmes and projects, to working in silos and to collecting data that do not feed into or help with management decisions. And it is erroneous to assume that managers can always predict exactly what questions need answering and why. This is a conversation that needs to happen before, not after, settling on indicators in a logframe or results framework.

I was recently listening to the song “Here We Go Again” by the Isley Brothers. It made me think of a 1994 World Bank report on the M&E of agriculture,[i] which highlighted the limitations and consequences of measuring high-level results ‒ such as crop yields and production benefits. These limitations were well documented throughout the 1980s and early 1990s.

The disconnect between monitoring and management

I think the general consensus is that monitoring is an integral management function. That said, one of the World Bank’s main findings was how disconnected monitoring was from management ‒ almost cast adrift. It put into perspective the distracting conversations about the differences and (synergistic) relationship between monitoring and evaluation that leave management out of the equation.

This separation remains quite commonplace today. In-house M&E specialists or functions often work in silos. The processes of, say, developing a theory of change and/or results framework are typically detached from other processes (learning, financial, operational and decision-making) and from other people. M&E has remained a profession separate from management ‒ a separation exacerbated of late by third-party monitoring (a contradiction in terms) and by contracting out one of management’s main responsibilities, learning, to learning or evaluation ‘partners’.

Take agriculture as an example. Thirty years ago, the consensus was that monitoring should be driven by a need to measure indicators of crop performance. Surveying and analysing these data took up all or most of the resources allocated to M&E and was quite a complicated and involved undertaking. Little has changed. I often ask myself which management decisions such descriptive evidence can inform. And, of course, it can’t: its goal is to validate the donor’s original investment decision.

Moving on from monitoring by numbers …

There is nothing wrong with projecting numerical results in a diagram or matrix. However, monitoring efforts should treat these as projections, not “truths”. Most results defy prediction. Agricultural projects work in highly uncertain environments. Context matters. The great management thinker of the last century, W. Edwards Deming, said that “management by numerical goal is an attempt to manage without knowledge of what to do, and in fact is usually management by fear”.[ii] He also observed that the most important things to know are often the unknowns. As he went on to say in The New Economics, “It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth”.[iii] What Deming refers to here is what the M&E community calls assumptions. For M&E purposes, assumptions matter as much as, if not more than, the results themselves. Unfortunate, then, that the community’s obsession with indicators, qualitative or quantitative, typically gives assumptions short shrift.

Deming’s view chimes with my experience of heading up an M&E department in the N’gabu Agricultural Development Division in the Lower Shire Valley of Malawi for three years in the late 1980s. The department’s main function was to collect annual datasets on crop production, the main outputs being estimates of crop production and productivity, broken down by crop, district, gender of the household head and farming practice. The data were largely used to assess the agricultural performance of the two districts in the valley, Chikwawa and Nsanje.

I noticed, after presenting the first round of survey results, how little use senior management made of the data. Very few senior managers came to my office; most of the people who did were on donor missions, were from non-governmental organizations, or were researchers or officials from the Ministry of Agriculture or the Central Statistical Office. Enumerators in the field had limited contact with their peer extension workers. I wanted to find out why there was so little interest from my colleagues. After all, measuring these things had taken considerable time and was methodologically tricky. I sat down with the Director of Agricultural Extension to explore the reasons. He told me that while the survey results were interesting, they did little to inform the actions of his and other departments (for example, research, crops and the women’s programme).

When we discussed what questions my team could help him answer, he reeled off four examples:

  1. What are the farmers’ impressions of extension agents’ performance?
  2. How many farmers adopt their messages and how does this vary by message, crop and gender of household head?
  3. Why do some farmers adopt message x? On how many of their plots do they adopt the message and for how many seasons?
  4. Why do others not adopt the same message and what are the multiplier effects of this rejection among neighbouring farmers?

Image by Dazed and Confused

I discussed this with my supervisor ‒ a statistician from the World Bank. He advised me to prepare a revised survey focused on the interaction between extension agents and their client farmers, one that treated farmers as subjects of conversations about issues that mattered to them, not as objects of a survey that interested the enumerator. He also reminded me of the well-documented mathematical impossibility of establishing statistically significant trends in crop yields in rain-fed farming systems within a five-year programme period, let alone attributing them to an intervention. One can measure them, he said, but “the ignorance in doing so is surpassed only by those who believe the result”.
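The statistician’s warning is easy to check for yourself. The short simulation below is my own back-of-the-envelope sketch, not part of the original exchange: it assumes a genuine 3 per cent yield improvement per season and a 25 per cent inter-annual coefficient of variation (both purely illustrative figures) and asks how often an ordinary trend regression over five seasons would flag the improvement as statistically significant. With year-to-year variability of that order, the detection rate comes out very low ‒ which is precisely his point.

```python
# A minimal, illustrative sketch: the growth rate, coefficient of variation and
# programme length are assumptions for the sake of the example, not figures from the post.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

BASE_YIELD = 1.0       # tonnes/ha, arbitrary baseline
TRUE_GROWTH = 0.03     # assumed genuine 3% yield improvement per season
CV = 0.25              # assumed inter-annual coefficient of variation (rainfall, pests, etc.)
SEASONS = 5            # a typical programme period
SIMULATIONS = 10_000   # number of simulated five-season programmes

years = np.arange(SEASONS)
detected = 0
for _ in range(SIMULATIONS):
    expected = BASE_YIELD * (1 + TRUE_GROWTH) ** years
    # multiplicative season-to-season noise around the true trend
    observed = expected * rng.normal(1.0, CV, size=SEASONS)
    if linregress(years, observed).pvalue < 0.05:
        detected += 1

print(f"Trend detected at p < 0.05 in {detected / SIMULATIONS:.1%} of simulated programmes")
```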

It was a salutary lesson for a young M&E professional. For my team and me to connect with our colleagues, we had to provide evidence that was useful to them ‒ information that explored farmer responses to extension support and how they varied. Understanding this provided a basis for remedying rejection among farmers and for replicating the successes of those who had adopted and retained the advice they received.

 … and then returning to them

Fast forward to today. Market-system development programmes in the agricultural sector are designed to increase the farm incomes of those underserved or excluded from the market systems on which farmers and their families depend. These programmes aim to facilitate or trigger change in the behaviours of and relationships between “system players” or “actors” ‒ those farmers buy from (input markets) and sell to (product markets); those who set and enforce the rules, be it local and/or national governments or indigenous norms and values; and those who offer support services, such as information, skills, technology and money.

In other words, market-system development programmes do not deliver services directly to the farmer, but aim to stimulate change in specific functions of the market-system where they are failing farmers. However, nearly all the results frameworks or theories of change I have seen for such programmes focus on production or productivity, not systemic change, at the outcome level.

This often has adverse consequences for M&E. The systemic rationale of market-system development programmes is frequently compromised by focusing on measuring the assumed consequence of market-system change at the farm and farmer level rather than market-system change itself. The opportunity costs are not insignificant: what is forgone is evidence that tests, rather than ignores, the main assumption ‒ that system players are responsive and attuned to the needs of growing numbers of poor farmers ‒ evidence that shows whether or not markets are working for the poor.

So, why have M&E systems, supposedly designed to generate learning from experience, been so slow or reluctant to do so themselves? What do you think? I’d be delighted to hear your experience.

  • Based on our analysis and our experience, we can say that evaluation is a very serious form of study, one that demands rigorous investigation, with hypotheses followed by confirmation in order to establish the exact result.

    That said, the evaluation of agricultural results has always shown discouraging results for family farms, given that most of these family farms are managed by people who know nothing about agriculture.

  • Thank you for this thought-provoking piece, Daniel! These are the kinds of questions that MEL practitioners, programme managers, and donors should be answering. Unfortunately, as aid/donor budgets shrink due to competing demands, factors other than decades of learning appear to drive the agenda. Real change will remain a distant dream until learning becomes an integral part of programme design and implementation that considers the beneficiaries’ needs.

  • Thanks for this interesting post Daniel. It really hits the nail on the head of why data are not used for decision-making. It also brings up the role of qualitative data in M&E ‒ most indicators and targets tend to be quantitative, so "trends" can be "measured". I agree that the four examples listed in your post are much more insightful, but three of the four questions can only be answered by collecting qualitative data. How can we raise the profile of qualitative information in the eyes of managers and leaders who only want to see numbers?

  • Hi Daniel, thanks for raising such a challenging and interesting topic. M&E is a very "bitter" kind of work: it has to be planned in advance (at the very moment when the rest of the team, when there is a complete team, is impatient to start work in the field); it requires clarity about causes and effects (that is, a convincing Theory of Change), elaborated with a participatory methodology; and only then can you measure, begin to get results that can be analysed and (hopefully) gather enough material to do an evaluation.

    Purely "agricultural production" projects have better chances to carry on sound evaluations and arrive to meaninful results. "Development" projects have broader challenges, and Theory of Change usually is more complex. Here it is more important the have the real participation of all the actors involved in the development process. However, there is a trade-off between "the perfect" and "the possible" M&E system. Here the participation of all the relevant stakeholders can be a good way to find possible + useful M&E system.

  • Thanks so much for this post. On top of what you say, to have meaningful data, each small farmer would need exact data about their production - covering crop variety, quality and current market prices. Getting these data, and the systems needed to collect them, is a job in itself, requiring technical capacities, discipline and tools. To do this properly, we would have to turn each small farmer or extension worker into a mini data collection and management officer, and even more would be needed (what about crop diseases, soil type, the family workforce and the weather - just to mention a few?).

    The sad part of M&E now is how we impose the burden of (irrelevant) measurement on beneficiaries, local actors and small intermediaries - a burden we do not, to the same degree, impose on ourselves. All this with practically no impact on change. One day someone should denounce the opportunity cost and the distortion caused by asking for irrelevant metrics just because we need an indicator to put in the logframe.

    Also, we are confusing M&E with research. So we end up with M&E that is irrelevant for decision-making, and with poor attempts at getting data and evidence that would need other means, competencies and resources to be useful and credible.