RE: A lack of learning in the monitoring and evaluation of agriculture projects | Eval Forward

Dear All,

Many thanks for all your varied and useful responses. Informed by these, I have put together some concluding remarks, which I hope you find helpful.

The trick to making monitoring useful is to avoid leaving it in the hands of people who may be fluent in theorising, well versed in the array of methodologies and prone to overly complicated language, yet are not natural judges of performance. Understandably, this puts off many team members and managers.

As some of you mentioned, M&E activities often throw up a mass of numbers and comparisons that provide little insight into performance. Rather, they are used to justify the investment, and may even paint a distorted picture of reality. Managers need to take ownership of monitoring – to find measures, qualitative as well as quantitative, that look past the current budget and previous results and ask questions whose answers determine how the programme or project can best attract and retain clients or beneficiaries in the future.

Five takeaways from the discussion in the form of tips:

1. Avoiding elephant traps in design

  • Change takes time. Be realistic when defining the outcome (the changes in behaviours and relationships among the client groups that will emerge) and the impact (the long-term consequences such changes will stimulate geographically outside the programme area and/or in the lives and livelihoods of clients).
  • For market system programmes: i) farming systems are systems too, and need an adequate diagnosis; ii) don’t make premature reference to system-level change in the hierarchy of results during the pilot phase in order to treat impact as being solely about farmer-level change; and iii) the crowding-in phase is, by definition, impact in a geographical or spatial sense, and it is rarely possible to observe, let alone ‘measure’, this within pre-ordained project time frames; see here for a ‘watered-down’ version of how M4P (making markets work for the poor) learns from itself:

 https://assets.publishing.service.gov.uk/media/5f4647bb8fa8f517da50f54a/Agriculture_Learning_Review_for_publication_Aug_2020.pdf

  • Ensure the outcome and its indicators reflect the needs and aspirations of those in need, not those of the donor – for example, do not assume all farmers aspire to increase returns to land (i.e. yield/productivity gains); often the limiting factor is labour, not land.

 

2. Distinguishing competencies for M from those for E

  • Clearly explain how monitoring is driven by helping managers resolve decision-making uncertainties, often found among the assumptions, through answering questions. And in doing so, clearly distinguish these from the questions that evaluators – who often come from a research background – are trained to answer, typically for the donor.
  • Use the analogy of accounting (monitoring) and audit (evaluation) to help make the distinction – they are done for different reasons, at different times, and by and for different people. You can be a “best in class” evaluator by developing methods, delivering keynotes at conferences, getting published, teaching or attending an “M&E” course. Do these skills and experiences make you “best in class” at monitoring? Not necessarily, and in practice rarely. Yet it is surprising how much sway and influence evaluation and evaluators have on monitoring practice – developmental evaluation?

 

3. Negotiating information needs with donors

  • Unambiguously define what information is needed to manage implementation, balancing the need to be accountable to the client as much as, if not more than, to the funder – and do this before developing a theory of change and/or a results framework.
  • Focus these information needs on the perceptions of client farmers and their reception, acceptance or rejection of the project – being accountable to them will aid learning more than being accountable to funders and learning about their administrative issues; and
  • Do not limit management’s and clients’ information needs to indicators in a logframe and blindly develop a “measurement” or “M&E” plan. Taking this route leads to a qualitative degeneration of monitoring. Assumptions, or the unknown, often matter more for monitoring than indicators when working in unpredictable operating environments. And: “Beating last year’s numbers is not the point; a performance measurement system needs to tell you whether the decisions you’re making now are going to help you and those you support in the coming months.”[1]


4. Integrating the monitoring process into the processes of other people

  • Build asking questions into the job descriptions of those who deliver the support – questions they can use to develop relationships with, and better learn from, clients; see 3a) above;
  • Use Activity Based Costing as a way to encourage financial specialists to work with those responsible for delivering the outputs – this helps cost the activities and so links financial and non-financial monitoring (it will also help you answer value-for-money questions, if required); a minimal illustration follows this list.
  • Good management decision-making is about making choices, and monitoring information needs to inform these. A decision to stop doing something, or to do something differently, should be analysed as closely as a decision to do something completely new.
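To make the Activity Based Costing point concrete, here is a minimal sketch – with entirely hypothetical activities, costs and farmer numbers, not drawn from any real project – of how costing each activity lets you set financial and non-financial monitoring side by side, for example as a cost per farmer reached:

    # Minimal Activity Based Costing sketch (Python); the activities, costs and
    # farmer numbers below are made up purely for illustration.
    activities = {
        "farmer field schools":  {"direct_cost": 12_000, "staff_days": 60, "farmers_reached": 400},
        "input dealer training": {"direct_cost":  8_000, "staff_days": 25, "farmers_reached": 150},
        "radio extension spots": {"direct_cost":  5_000, "staff_days": 10, "farmers_reached": 900},
    }

    shared_overhead = 10_000  # office, vehicles, management time
    total_staff_days = sum(a["staff_days"] for a in activities.values())

    for name, a in activities.items():
        # Allocate shared overhead in proportion to the staff days each activity absorbs.
        overhead_share = shared_overhead * a["staff_days"] / total_staff_days
        full_cost = a["direct_cost"] + overhead_share
        cost_per_farmer = full_cost / a["farmers_reached"]
        print(f"{name}: full cost {full_cost:,.0f}; cost per farmer reached {cost_per_farmer:,.2f}")

The arithmetic is deliberately trivial; the value lies in the financial specialist and the delivery team having to agree, together, on what the activities and output measures are – which is exactly the conversation this tip is about – and it leaves you with a ready answer to value-for-money questions.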


5. Being inquisitive while keeping it simple

  • Ignore the pronouncements of rigid methodological dogmas or standards. As some of you mentioned, there is a lot of really useful material out there, old and new. Take the risk of thinking for yourself…
  • Keep it simple and avoid making monitoring more complicated by chasing fads and jargon that isolate it behind impenetrable language.

 

“If you can't explain it simply, you don't understand it well enough.”