How do we move forward on Evaluation Systems in the Agriculture Sector?


Dear EvalForward members,

Agriculture is Rwanda's major economic sector. It accounts for 24 percent of gross domestic product (GDP) and about 64 percent of total employment, with 82 percent of women and 63 percent of men employed in the sector.

While we have quite good monitoring as part of the M&E function and a sound management information system in the Ministry of Agriculture and Animal Resources, we have no national evaluation system. This absence constrains evidence-based decisions in the sector and limits learning for future projects.

Evaluations would provide evidence for future investments and inform policies and major decisions in the agriculture sector. Evaluation is also a management tool: it would direct staff efforts in the right direction and thereby allow better accountability.

Of course, the need for a national evaluation system goes beyond the agriculture sector and cuts across most sectors.

We are currently winding up our Vision 2020, with big targets in agriculture, and in food and nutrition security in particular. It would be ideal to have a country-wide evaluation of all Vision 2020 targets, agriculture included, to inform the next country programme.

I am really interested in learning from members in other countries:

  • How are you managing with or without an evaluation system? If you do have an evaluation system, how much are we missing out on as a country?
  • How do we influence our leaders and partners to move this forward? What is needed in terms of leadership, resources and capacities?

I would love to hear from you!

Judith Katabarwa,

Vice President, Rwanda M&E Society

This discussion is now closed. Please contact info@evalforward.org for any further information.

Dear EvalForward Members,

 

We have closed our discussion on the topic "How do we move forward on Evaluation Systems in the Agriculture Sector?"

I would like to take this opportunity to thank each of you for your very useful contributions to the topic; every contribution brought unique and insightful views.

We are working on synthesizing all the shared contributions and should be able to share the consolidated insights soon. I am confident they will benefit our various country M&E systems, particularly in the agriculture sector.

 

Best Regards,

Judith

Greetings!

Incorporating evaluation into national policies faces several challenges. It is not easy to distinguish clearly between evaluation at the national, regional and local (i.e. field) levels. This distinction is important, because what evaluation involves at each level is very different. The attachment included here may prove useful in navigating those difficulties.

Cheers!

Lal Manavado

Hello,

I am Raoudha from Tunisia. Thank you all for such important and constructive sharing. In the same vein, I think that monitoring and evaluation should take into account both quantitative and qualitative aspects. The quantitative aspect only makes sense if it goes hand in hand with the qualitative aspect, and this process can lead to results that identify the lessons learned from the evaluation exercise. Evaluation is not really an objective in itself but rather a means of learning in order to rectify, adjust, revise and improve: by evaluating, we are supposed to learn.

 

[This comment was originally posted in French]

Dear Judith,

This is a very interesting topic, and I find all the responses helpful for establishing an evaluation system at the national level. My advice is limited, as I do not have information on the ongoing monitoring system or on the vision and specific core goals to 2030, since the evaluation system would be inspired by these two reference documents.

I find the most difficult phase is establishing a monitoring system with validated yearly targets. A six-month evaluation report (SMER) could be adopted: the first SMER would analyse progress against goals and targets and, if necessary, reorient the programme; the second would do the same, with a consolidation section for the annual evaluation. However, the SMER approach could be challenging for evaluation, since evaluation is about results and evidence takes time to emerge. For evaluation, a five-year horizon would be more relevant.

So if you decide to keep the yearly period for monitoring and evaluation, I advise adopting the SMER approach, with a template based on specific goals and targets that can help define the quantitative and qualitative evaluation questions.

I also recommend using digital collection at the national level and validating the results of the evaluation through a participatory approach.

Josephine Njau

Josephine Njau

Program Coordinator, Monitoring and Evaluation, Alliance for a Green Revolution in Africa, Kenya

This remark made by Miriam clearly indicates where we need to start the change. When the M&E system focuses on bean counting, the learning piece gets lost and people tend to focus on performance based on numbers. The learning questions need to be determined at the conceptual stage of the project/programme. In some cases, what people consider knowledge management is the number of knowledge products to be developed, which is not comprehensive enough to incorporate the learning agenda.

 

Josephine

Interesting discussion on M&E and learning. One country with an integrated M&E system linked to the national M&E is Madagascar. IFAD set up a system called SEGS, which aimed to link M&E and knowledge management; see https://www.slideshare.net/benoitthierry948/madagascarsegsifad-rome-2007

One of the challenges I have seen is that M&E is normally a bean-counting exercise for most projects, with no learning incorporated into the process. When I managed a knowledge management and learning grant, we tried to change the mindset of projects from M&E to learning-oriented M&E. Most projects do not start with learning questions that guide implementation, and so they are always fumbling through their project. As a result, they are unable to show results. The idea was to incorporate qualitative aspects into the quantitative process.

I really like the car dashboard discussion; I will borrow it to use somewhere. Thanks for sharing.

--

Miriam Cherogony

Independent Consultant

Development Finance, Financial Inclusion, and Knowledge Management Expert

 

I will write a longer analysis in a few days' time; I am a bit busy with work.

However, one thing we need to be very clear about is that this pandemic has created challenges for the evaluation process.

If I may put it this way: the concept of evaluation is at a very profound moment. You have to be innovative and think outside the box. Evaluation is not in crisis, but evaluators are.

Isha 

 

Dear Renata,

Thank you indeed for your very useful insights into the ongoing discussion. The study on the evaluation capacities of agriculture ministries is a good reference too.

Yes, Rwanda is doing well in tracking and reporting performance, so an evaluation system would build on the existing efforts.

I like the proposed entry points towards the process of developing a functional evaluation system. The Rwanda M&E society will build on this approach in collaboration with other major stakeholders in the sector.

Best Regards,

Judith

Dears,


Thank you for raising this important issue. I think the public sector is far from evaluation. An M&E system is good to have if it is a full system in its broad definition, not only a written protocol of "hows" and "musts", which is the case in the ministries of our region. Our experience is that you cannot get any data from existing systems. What is needed is an evaluation culture and lobbying for evaluation. VOPEs should have a role in this, but without support their impact will be limited, especially as most VOPEs in the region are very young and work on a voluntary basis. VOPEs can act as mobilizers or facilitators, bringing stakeholders together to strengthen the evaluation culture through action on the ground. To give an example: select common national indicators, or even SDG indicators related to agriculture, and try to systematize the process of collecting data on them at the country level. VOPEs in different countries can work jointly and even form a consortium.

I hope my idea is clear enough.

Thanks

Naser Qadous

Palestine

Dear All,

It is true that Rwanda has been doing great if we consider the efforts put into managing for accountability in all activity sectors. Referring to the car dashboard illustration, the effectiveness of the evaluation system should be viewed from the holistic perspective of the M&E system. In fact, both monitoring and evaluation aim at the same thing: ensuring the effective and efficient attainment of the goals/objectives/mission.

Evaluation can complement, confirm or contradict the results of monitoring. Being more systematic and rigorous, evaluation can provide more credible explanations and clarity [in the eyes of stakeholders] that, for different reasons, cannot be provided by monitoring itself. Therefore, if the evaluation system fails as a component of the M&E system, the whole system has already failed.

A well-crafted (effective) M&E system should:

In the absence of a national M&E system, the country misuses financial resources and misses learning opportunities. A national evaluation system would limit the number of evaluations to those that are really worthwhile. It would also put in place a national strategy for learning from evaluation reports to inform future programme designs and policies.

Moreover, my research has identified four challenges that current M&E systems face because they have not yet taken advantage of big data, at a time when the world has entered a digital era characterized by information growing at exponential rates: inaccurate performance measures; the inability to make reliable predictions to inform future planning and new programme designs; delayed implementation of corrective actions recommended by evaluations; and the inability of some users to understand and use monitoring and evaluation reports. [This may seem strange to many, but I can provide more insights if need be.] In this regard, I would also highlight that in 2017 Rwanda developed and adopted a National Data Revolution Policy, which provides that big data analytics should be used in monitoring development progress and in insightful research activities. The key takeaway is that to boost both the monitoring and the evaluation functions, there is a need for national commitment to install well-functioning systems and to build capacities; otherwise, the issues of national accountability and ownership will persist.

Finally, there is a need to take advantage of the commitment and willingness of the Government of Rwanda and the M&E Society to review the completeness of the national MEL guidelines [developed last year, and for which we provided inputs] and enforce them. Then we will be done.

 

Best,

Janvier


Dear Judith and all,

The topic of how to move evaluation forward in agriculture is one of the ideas that led to the development of this CoP!

In 2019, FAO and EvalForward studied capacities for evaluation in ministries of agriculture, interviewing officers in the ministries of agriculture of 23 countries. The study revealed a disparate situation across countries, including some with still very limited capacities in evaluation, M&E or even results-based management (here the link to the report and to the briefing note). Rwanda was not in our sample, but it appears to have well-established M&E systems and performance measurement (such as the annual imihigo mentioned by Olivier).

As is well known, evaluation can bring depth of analysis to the data and the M&E system and help identify weak spots. An institutional set-up would provide the framework for carrying out strategic evaluations and would support the demand for evidence and the willingness and capacity to use it.

Based on the experience of countries that have succeeded in developing an evaluation system, some entry points to start the process can be:  

  • Finding influential leaders to champion evaluation, who could drive the move towards an effective evaluation function at the national and sectoral levels.
  • Piloting evaluations with the involvement of public officers as an opportunity to prove the value of the exercise, including testing rapid evaluations that provide timely feedback on pressing policy issues.
  • Connecting with countries that have established evaluation systems and initiating collaborations with initiatives, such as Twende Mbele, that aim to support these processes.
  • Lobbying for evaluation through key stakeholders, such as producer organizations, VOPEs, academia and NGOs.

In many countries, budget cuts in the agriculture sector have led to reduced investments in human resources and skills development, undermining M&E functions and attempts to develop evaluation capacities in the sector. It is great to see that this is not the case in Rwanda. The centrality of agriculture in the economy should be leveraged to make the case for advancing the tools that improve evidence generation and use in the sector.

We look forward to the views of other members!

Renata

Dear Olivier,

Thank you so much for your contribution to this important topic. The annual performance contracts and evaluations are an excellent monitoring approach: they assess the achievement of annual targets at the district and sector levels as well as at the institutional level, and they are a very good tool for tracking output-level results. However, because they are done annually, they are not suited to a detailed midterm or endline evaluation process, which usually tracks outcomes and impact and informs future projects.

Imihigo could indeed be part of the national M&E system and would provide tremendous inputs, but it may not replace the much-needed national evaluation system.

 

Best,

Judith

Dear Jean Providence,

Thank you for your very useful feedback on the topic of evaluation systems. I very much like the dashboard scenario; it explains well the different aspects of a monitoring, evaluation and learning system. I agree that monitoring is an important part of the system but is limited to "vanity indicators", as you call them. Most countries tend to go for the low-hanging fruit, but we also need a functional evaluation system to complete our national M&E system and to learn from the whole process.

 

Looking forward to more contributions from colleagues here.

 

Judith

Hello Judith,

Thanks for sharing this topic to gather reflections and insights from other countries. Below are my two cents (as a Rwandan practising M&E elsewhere):

I usually use a car dashboard to illustrate the twinned nature of monitoring and evaluation. A functional monitoring system feeds into the evaluation system, and a functional evaluation system in turn feeds into monitoring processes.

As a control panel for tracking progress, a functional dashboard shows the condition of the car. The driver needs to keep checking progress to reach the destination. Imagine driving a car without a dashboard: strange, risky, accident-prone.

The driver uses the same dashboard to evaluate and decide when to see the mechanic, or to stop at a petrol station to refuel or top up tyre pressure. Sometimes the driver (i.e. the project manager) can take corrective measures alone, drawing on their experience and knowledge of the car system (i.e. the project). This is equivalent to using monitoring data or process evaluation to fix issues. Using monitoring results, the driver (or project manager) may learn a lesson here and there to keep the car (or the project) on the right track.

But in the end, there are technical issues beyond the driver's (or the project/programme manager's) control. In such a case, the driver needs to service the car or seek technical expertise for informed guidance. When it is beyond the driver's control, we are talking about change (at the outcome or impact level). At this level, we need fresh eyes to add a new perspective to the way we have been seeing the condition of our car. We need evaluation to be on the safer side: more objective, closer to the desired outcome.

A monitoring system is about low-hanging fruit, which is why most organizations and countries find it easy to set up. Evaluation is technically demanding, and it is the ultimate goal of proper monitoring. We monitor to ensure we achieve process results (under our control). We evaluate to prove or disprove that we reached the expected change-level results (beyond our control). Monitoring is limited to "vanity indicators" (a term from a colleague on social media) such as numbers trained, kilograms distributed, etc. Without an evaluation system, what works or does not work cannot be logically and objectively identified with evidence, and true change cannot be rewarded by scaling up or replicating successful projects. Without an evaluation system, we fail or succeed without knowing it, and we cannot be proud of it.

Having a monitoring system is like having the institutional capacity, or meeting the institutional requirements, to be able to report to xyz. But having an evaluation system is like having the human capacity and expertise required to navigate a complex development landscape so that what works is kept. What does this mean for M&E in practice? Let me save that for another day.

Looking forward to more reflections from other countries.

With kind regards,

Jean Providence

Thanks, Kayitesi, for this topic. In Rwanda, there are annual district plans (imihigo) that cover all sectors, including agriculture. Each district's performance is evaluated at the end of the year. So I would like to know whether this, too, can be classified as an evaluation system.

Thank you