Monitoring and evaluation: is this the perfect combination to meet the needs of decision-makers?

15 contributions

Dear EvalForward Members,

I was lucky enough to work at the Benin Evaluation Office from 2015 to 2021, and in the last few months I have been involved in monitoring government projects, programmes and reforms.

Based on my experience, although monitoring and evaluation are closely related, it seems that monitoring data supported by a good statistical system is better suited to meeting the requirements of policymakers, first and foremost because it allows them to make decisions promptly and in real time. Evaluation is essential for learning from our performance and from our mistakes in managing public interventions. It is also useful for research and scientific progress because of the wealth of empirical knowledge emerging from evaluative studies. In short, however, monitoring is the most appropriate tool for making quick, evidence-informed decisions, especially in an emergency.

I have further reflected on these issues in the case of Benin in this blog.

What is your experience and opinion on this subject?

  • Do decision-makers in your countries use monitoring and statistical data or do they rely on evaluation?
  • What benefit do they – the decision-makers – really see in evaluation?
  • How should evaluation evolve to be more responsive to the needs of decision-makers – for example, on reporting times?

Thank you in advance,

Elias SEGLA

Head of Monitoring and Evaluation
Presidency of the Republic of Benin


This discussion is now closed. Please contact info@evalforward.org for any further information.
  • Summary of the discussion

    Overall, it can be said that, depending on the context, the relevance of this combination may be questioned when it comes to better meeting the needs of decision-makers.

    Three key questions [addressed in the discussion]:

    1.       Do decision-makers use monitoring and statistical data or do they rely on evaluation?

    Evaluation is time-consuming and its results take time to arrive. Few decision-makers have the time to wait for them: before long, their term of office is up or there is a government reshuffle, and some may no longer be in office by the time the evaluation results are published.

    Monitoring data is therefore the primary tool for decision support. Decision-makers are willing to use monitoring data because it is readily available and simple to use, regardless of the methods by which it was generated. They make less use of evaluative evidence because it is not generated in time.

    Because monitoring is a process that provides regular information, it allows decisions to be made quickly. Good monitoring is essential to the success of a project or policy, as it allows unforeseen situations or constraints to be corrected and rectified rapidly. Moreover, the more reliable and relevant the information, the more effective the monitoring. In this respect, various government departments (central and decentralised, including projects) are involved in producing statistics or making estimates, sometimes with great difficulty and, in some countries, with errors.

    However, the statistics produced need to be properly analysed and interpreted in order to draw useful conclusions for decision-making. This is where problems arise, as many managers believe that statistics and data are an end in themselves. Yet statistics and monitoring data are only relevant and useful when they are of good quality, collected and analysed at the right time, and used to produce conclusions and lessons in relation to context and performance. This groundwork is also important and necessary for the evaluation function.
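
    To make this point concrete, here is a minimal sketch, in Python, of the kind of quality gate a monitoring unit might apply before its statistics are used for decision-making or handed over to an evaluation. The indicator names, freshness threshold and completeness rule are hypothetical illustrations, not taken from any particular national system.

        from datetime import date

        # Hypothetical monitoring records:
        # (indicator, reporting date, reporting units received, reporting units expected)
        records = [
            ("school_enrolment_rate", date(2023, 11, 30), 72, 77),
            ("vaccination_coverage",  date(2021, 6, 15),  77, 77),  # complete but stale
        ]

        MAX_AGE_DAYS = 180        # assumed freshness threshold for decision use
        MIN_COMPLETENESS = 0.90   # assumed minimum share of units reporting

        def usable_for_decisions(reported_on, received, expected, today=date(2023, 12, 31)):
            """Flag whether an indicator is timely and complete enough to inform a decision."""
            fresh = (today - reported_on).days <= MAX_AGE_DAYS
            complete = received / expected >= MIN_COMPLETENESS
            return fresh and complete

        for indicator, reported_on, received, expected in records:
            status = "usable" if usable_for_decisions(reported_on, received, expected) else "needs review"
            print(f"{indicator}: {status}")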

    2.       What added value do decision-makers really see in evaluation?

    Evaluation requires more time, as it is backed up by research and analysis. It allows decision-makers to review strategies, correct actions and sometimes revise objectives if they are deemed too ambitious or unachievable once the policy is implemented.

    A robust evaluation must start from existing monitoring information (statistics, interpretations and conclusions). One challenge that evaluation (especially external or independent evaluation) always faces is the limited time available to generate conclusions and lessons, unlike monitoring, which is permanently on the ground. In this situation, the availability of monitoring data is of paramount importance, and it is precisely on this point that evaluations struggle to find evidence on which to base relevant inferences about the object being evaluated. An evaluation should not be blamed if the monitoring data and information are non-existent or of poor quality; it should be blamed if it draws conclusions on aspects that lack evidence, including monitoring data and information. The evolution of evaluation should therefore go hand in hand with the evolution of monitoring.

    Furthermore, it is very important to value the data producers and to give them feedback on the use of the data in evaluation and decision-making, so as to give meaning to data collection and to make decision-makers aware of its importance.

    3.       How should evaluation evolve to be more responsive to the needs of decision-makers - for example, on reporting times?

    A call for innovative action: real-time, rapid but rigorous evaluations, if we really want evaluative evidence to be used by decision-makers.

    When an evaluation is based on evidence and triangulated sources, and policymakers are properly briefed and informed, its findings and lessons are taken into account very well, because policymakers see evaluation as a more comprehensive approach.

    4.       Some challenges of monitoring and evaluation

    In developing countries, the main constraint of monitoring and evaluation is the flow of information from the local to the central level and its consolidation. Indeed, information is often unreliable, which inevitably has an impact on the decisions that are taken.

    Unfortunately, monitoring and evaluation systems do not receive adequate funding for their operation, and this affects the expected results of programmes and projects.

    However, the problem is much more serious in public programmes and projects than in donor-funded ones. Most of the time, donor-funded projects have a successful track record, with at least 90% implementation and achievement of performance objectives, thanks to an appropriate monitoring and evaluation system (recruitment of professionals, funding of the system, etc.). But after the donors leave, there is no continuity, owing to the lack of a strategy for the sustainability of interventions and of the monitoring and evaluation system (actors and tools). The reasons are often (i) the lack of a handover between project M&E specialists and state actors; (ii) the lack of human resources or qualified specialists in public structures; and (iii) the lack of policy or commitment from the state to finance this mechanism after the project cycle.

    5.       Questions in perspective

    • How can M&E activities be resourced to function (conducting surveys, collecting and processing data, etc.)?
    • What are the most effective ways to carry out monitoring in an inaccessible area, such as a conflict zone? 
    • How can we get decision-makers to finance the sustainability of interventions (including the monitoring and evaluation budget) for the benefit of communities, and how can we raise the level of expertise of government agents in monitoring and evaluation and then maintain them in the public sector?

    6.       Approaches to solutions

    The availability of resources for the functioning of monitoring and evaluation mechanisms must be examined at two levels: human resources and financial resources.

    In terms of human resources, projects and programmes should begin, from the outset, to integrate the transfer of monitoring and evaluation skills to beneficiaries, so as to ensure the continuity of the exercise once the project ends.

    With regard to financial resources, budget planning should always include a provisional line for cross-cutting components such as M&E, so that funds are available for them to function. Today, this line is included in several donor frameworks.
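
    As a concrete illustration of such a provisional line, here is a minimal sketch in Python of how it could be computed at planning time. The component names, amounts and the 5% share are hypothetical figures, not a donor rule.

        # Hypothetical component budgets for a programme (amounts in USD)
        components = {
            "irrigation_works": 2_500_000,
            "farmer_training":    800_000,
            "market_access":      700_000,
        }

        CROSS_CUTTING_SHARE = 0.05  # assumed share reserved for M&E and other cross-cutting functions

        base_budget = sum(components.values())
        provisional_line = base_budget * CROSS_CUTTING_SHARE

        print(f"Component budgets:    {base_budget:>12,.0f} USD")
        print(f"Provisional M&E line: {provisional_line:>12,.0f} USD")
        print(f"Total to plan:        {base_budget + provisional_line:>12,.0f} USD")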

    One option for reducing costs is to rely as much as possible on users (farmers, fishermen, etc.) to collect data (instead of using only "professional" surveyors).

    7.       Conclusion

    Monitoring and evaluation are both important (in the sense that they are two different approaches that do not replace each other) and monitoring is important for evaluation (better evaluations are made with a good monitoring system). So they are two different but complementary approaches.

    Good monitoring data are the basis for good evaluation. The two complement each other: monitoring provides clarity on the progress of implementation and on any adjustments made along the way, while evaluation, in addition to validating the monitoring data, gives them meaning and explanation, yielding timely information for decision-makers.

    Monitoring data enable the design of subsequent actions and policies. With evaluation, the design is adjusted and the monitoring itself can be improved; as one evaluator in the discussion noted, evaluations can provide recommendations to improve the monitoring system.

  • As I review the various contributions to this important discussion, I note how often the problem centers on financial resources. This is often critical, but it needs to be considered in the overall economic context of most host countries. Looking at that overall economic environment, I would describe these countries as economically suppressed: they serve a mostly impoverished population that spends up to 80% of its income or farm production on essential family food needs. This leaves very little discretionary income to purchase other necessary goods and to form a tax base that provides the government with revenue to support public services. No taxes, no services. Most host-country civil services are thus barely able to provide personnel benefits, in terms of salaries, retirement, health care and possibly housing, to their civil officers. That leaves little or no funding for operational costs, such as travel to field locations to conduct an M&E analysis of projects. They therefore have little choice but to rely on reported experiences of the effectiveness of various innovations, which may or may not be accurate, or may even be more propaganda than analysis. Under these financial restrictions, perhaps it is better to assume effectiveness and use scarce financial resources to promote other innovations. Please review the following webpages:

    https://agsci.colostate.edu/smallholderagriculture/financially-suppress…;

    https://agsci.colostate.edu/smallholderagriculture/financially-stalled-…;

    Thank you

  • Hello dear friend.

    How do we change the paradigm? We say that we are committed to sustainable development, meaning that development outcomes must be progressive over time, sustained and resilient to unexpected shocks (such as an economic crisis). What we are therefore looking for in our respective countries is strong political commitment and significant funding, from our own resources, for our economic and social development plans. Particular emphasis must be placed on financing the monitoring and evaluation mechanisms that have been set up.
    To do this, we need to develop a culture of evaluation at the national level, and all stakeholders must agree on this. The question is how to do so, given that we are starting very late and that the few resources we have must drive change through rational management. Let us think about this together; these are the ideas on our minds.

    Thank you very much.

  • The questions asked are very relevant. For my part, I have found that decisions are based more on raw monitoring data, without analysing why a given situation has arisen. This condemns us to endlessly starting over.

    On the added value of evaluation: genuinely consulting the evaluation before making a decision allows one to decide with knowledge of the causes and an understanding of the ins and outs.

    In our developing countries, public administrations generally do almost no evaluation after an intervention. Investments financed from the national budget are not evaluated; they are renewed without any real analysis of whether or not they produce the desired effects. It is imperative to establish a culture of results in our public administrations. This will necessarily lead everyone to evaluate their practices and take corrective measures whenever necessary.

  • Dear Elias, Dear colleagues.

    The questions asked are relevant, but the answers vary according to the context of the experiences. For my part, I have been working in the fields of monitoring and evaluation for about 20 years, with a major part (60-65%) in monitoring and the rest in evaluation.

    In relation to the first question: yes, the first resource for decision-makers is monitoring data. In this respect, various state services (central and decentralised, including projects) are involved in producing these statistics or making estimates, sometimes with a lot of effort and, in some countries, with errors. But the statistics produced must be properly analysed and interpreted to draw useful conclusions for decision-making. This is precisely where there are problems, because many managers think that statistics and data are already an end in themselves. This is not the case at all: statistics and monitoring data are only relevant and useful when they are of good quality, collected and analysed at the right time, and used to produce conclusions and lessons in relation to context and performance. This is important and necessary for the evaluation function that follows.

    In relation to the second and third questions, in view of the above, a robust evaluation must start from existing monitoring information (statistics, interpretations and conclusions). One challenge that evaluation (especially external or independent evaluation) always faces is the limited time available to generate conclusions and lessons, unlike monitoring, which is permanently on the ground. In this situation, the availability of monitoring data is of paramount importance, and it is precisely here that evaluations have difficulty finding evidence on which to base relevant inferences about the different aspects of the object being evaluated. An evaluation should not be blamed if the monitoring data and information are non-existent or of poor quality; it should be blamed if it draws conclusions on aspects that lack evidence, including monitoring data and information. So the evolution of evaluation should go hand in hand with the evolution of monitoring.

    Having said that, my experience is that when an evaluation is evidence-based and triangulated, and policymakers are properly briefed and informed, its findings and lessons are taken on board very well, because policymakers see it as a more comprehensive approach.

    This is my contribution.


  • Thank you very much Jean Marie for your very relevant contribution. The big challenge is to raise awareness among stakeholders for a paradigm shift.

    We are together.

    Ciao


  • Hello everyone,

    Monitoring and evaluation are both important (in the sense that they are two different approaches that do not replace each other) and monitoring is important for evaluation (better evaluations are made with a good monitoring system). So they are two different but complementary approaches.

    To come to the question of the means given to the monitoring system so that it can produce quality data:

    - It is very important to value the data producers and to give them feedback on the use of the data in the evaluation and decision making. This is to give meaning to data collection and to make decision-makers aware of its importance.

    - One option for reducing costs is to rely as much as possible on users (farmers, fishermen, etc.) to collect data (instead of using only "professional" surveyors).

    The last, very important point is that the major challenge is the overall coherence of the system. Motivated and reliable data collectors are needed at the local level, and the data they collect must be at least partly comparable and capable of being aggregated at the national level; otherwise we end up with a mass of local data from which nothing can be drawn at the higher level. This work of articulating scales, which consists of framing the monitoring system without "locking" local data collectors into filling in indicators that they do not understand and that are of no use to them, is very important and constitutes the key skill that a national monitoring officer must have.
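
    As a minimal illustration of this aggregation requirement, here is a short Python sketch in which local records share a common schema (indicator and unit) so that they can be rolled up at the national level. The regions, indicator and figures are hypothetical.

        from collections import defaultdict

        # Hypothetical local monitoring records sharing one schema:
        # (region, indicator, value, unit). Comparability across sites is
        # what makes national aggregation possible.
        local_records = [
            ("Atlantique", "hectares_irrigated", 120, "ha"),
            ("Borgou",     "hectares_irrigated",  95, "ha"),
            ("Zou",        "hectares_irrigated",  60, "ha"),
        ]

        national = defaultdict(int)
        for region, indicator, value, unit in local_records:
            national[(indicator, unit)] += value  # aggregation only works if units match

        for (indicator, unit), total in national.items():
            print(f"National {indicator}: {total} {unit}")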

    There is often a multiplication and overlapping of data collection and processing systems for management, monitoring and evaluation, whereas a single system of shared relevance would be beneficial in many ways (the institutional architecture for M&E needs to be thought through, or rethought).

  • Dear colleagues
    The availability of resources for the functioning of monitoring and evaluation systems must be examined at two levels: human resources and financial resources.
    In terms of human resources, projects and programmes should begin, from the outset, to integrate the transfer of monitoring and evaluation skills to beneficiaries, so as to ensure the continuity of the exercise once the project ends.
    As far as financial resources are concerned, as a colleague said in the discussion, this problem has been solved in most donor frameworks. But it should also be stressed that, beyond implementation, monitoring and evaluating the impact of projects and programmes is also of great importance, and very often we note a lack of financial resources to do this at the end of programmes.
    So we also need to start thinking about how this issue should be taken into account in the design of projects or programmes, for better sustainability.
    Best wishes and happy Women's Day.

    Dinisse SYLVA 
    Local development specialist
    Management of local development programmes and projects
    Project monitoring and evaluation
    Alumni of Corps Africa/Senegal
    Member of the Francophone Network of Emerging Evaluators 

  • Mr Djime,

    Indeed, the problem is no longer at the level of donor-financed projects. Most projects show a successful track record, generally with at least 90% execution and achievement of performance objectives, thanks to an adapted monitoring and evaluation system (recruitment of professionals, financing of the system, etc.).
    The difficulty now is that, when these projects end, there is most often no continuity, because there was neither a strategy for the sustainability of the interventions nor a monitoring and evaluation mechanism (actors and tools), for the following reasons:
    - the lack of a handover between project M&E specialists and state actors;
    - the lack of human resources or qualified specialists in public structures (who, once they reach a certain level of expertise, move to the private sector);
    - but also the lack of policy or financial commitment by the government to finance this system after the project cycle.
    Consequently, the challenge is, on the one hand, how to get state decision-makers to finance interventions (including the budget for monitoring and evaluation) for the benefit of communities and, on the other, how to raise the level of expertise of state agents in monitoring and evaluation (systems and mechanisms, mastery of applications and software, analyses for decision-making and evidence, etc.) and then keep them in the public sector.

    Best regards

  • Dear Elias and Colleagues,

    Thanks for sharing and discussing this important topic. The more we discuss, the more we understand how to address the issues affecting evaluation practice. To begin with: Monitoring and Evaluation, are they different, or are they two sides of the same coin? A perfect combination in theory, but largely mismatched in practice, as Elias posited.

    With an anecdote and some thought-provoking, perhaps controversial views (I hope I get more than one reaction!), I will look at Monitoring and Evaluation, each in its own right, and end with my personal reflection. First, I encourage colleagues to (keep) read(ing) Ten Steps to a Results-Based Monitoring and Evaluation System by Kusek and Rist. Though published in 2004, it still sheds light on the interlinkages of Monitoring and Evaluation. Note that I disagree with some propositions or definitions made in that textbook. But I will quote it:

    "Evaluation is a complement to monitoring in that when a monitoring system sends signals that the efforts are going off track (for example, that the target population is not making use of the services, that costs are accelerating, that there is real resistance to adopting an innovation, and so forth), then good evaluative information can help clarify the realities and trends noted with the monitoring system”. p. 13

    Monitoring as the low-hanging fruit. An anecdote: one decision-maker used to tell me that he preferred quick-and-dirty methods to rigorous, time-consuming evaluation methods. Why? No wonder: it is easy and quick to get an idea of the implemented activities and the ensuing outputs. By the way, monitoring deals with all that is under the control of implementers (inputs, activities and outputs); a discussion for another day. With Monitoring, it is usually a matter of checking the database (these days, we look at visualized dashboards) and being able to tell where a project stands in its implementation and in its progress towards (output/outcome?) targets.
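
    To illustrate how low-hanging this fruit is, here is a minimal Python sketch of the kind of progress check a monitoring dashboard performs over project records; the indicators, targets and actuals are hypothetical.

        # Hypothetical project monitoring database: indicator -> (actual, target)
        monitoring_db = {
            "farmers_trained":       (1_450, 2_000),
            "wells_rehabilitated":   (   38,    40),
            "seed_kits_distributed": (5_200, 5_000),
        }

        # Monitoring answers "where does the project stand?" directly from the
        # data, without the causal analysis an evaluation would require.
        for indicator, (actual, target) in monitoring_db.items():
            progress = actual / target
            print(f"{indicator}: {actual}/{target} ({progress:.0%} of target)")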

    Evaluation as the high-hanging fruit. In the traditional sense, Evaluation tries to establish whether change has taken place, what has driven such change, and how. That is the realm of causality, correlation, association, etc., between what is done and what is eventually achieved. Evaluation is time-consuming, and its results take time. Few decision-makers have time to wait: in no time, their term of office comes to an end, or there is a government reshuffle. Some may no longer be in office by the time Evaluation results are out. Are we still wondering why decision-makers prefer Monitoring evidence?

    My understanding of and experience in M&E, as elaborated in Kusek and Rist (2004), is that well-designed and well-conducted Monitoring feeds into Evaluation, and Evaluation findings show (while the project is still ongoing) what to monitor closely. Good Monitoring gathers and provides, for example, time-series data that are useful for evaluation. Evaluation also informs Monitoring. By the way, I am personally less keen on end-of-project evaluations. That seems antithetical for an evaluation practitioner, right? But the target communities the project was designed for do not benefit from such endline evaluations. Of course, when it is a pilot project, it may be scaled up and the initial target groups reached with an improved project, thanks to lessons drawn from Evaluation. Believe me, I do conduct endline evaluations, but they are less useful than developmental, formative and real-time/rapid evaluations. A topic for another day!

    Both Monitoring and Evaluation make up one single, complementary and cross-fertilizing system. Some colleagues in independent evaluation offices or departments may not like the interlinkage and interdependence of Monitoring and Evaluation, simply because they are labelled 'independent'. This reminds me of the other discussion about independence, neutrality and impartiality in evaluation. Oops, I did not take part in that discussion. I agree that self- and internal evaluation should not be discredited, as Elias argued in his blog. Evaluation insiders understand and know the context that external, independent evaluators sometimes struggle to grasp in making sense of evaluation results. Let's park this for now.

    Last year, there was an online forum (link to the draft report) bringing together youth from various Sahel countries. Through that forum, youthful dreams, aspirations, challenges, opportunities, etc. were discussed and shared, and a huge amount of data was eventually collected through the digital platform. From those youth conversations (an activity reaching hundreds of young people), there was proof not only of a change in the narrative but also of what drives or inhibits change and youth aspirations. A perfect match of monitoring (reaching x number of young people) and evaluation (factors driving or inhibiting desired change). When such conversation data exist, it is less useful to commission a separate evaluation to assess the factors associated with change in the Sahel: just analyze those data, developing, of course, an analytical guide to help in that process. Using monitoring data is of great help to evaluation. There is evidence that senior decision-makers are very supportive of insights from the analysis of the youth discussions. Imagine waiting until the time is ripe for a proper evaluation! Back to the subject.

    All in all, decision-makers are keen on using Monitoring evidence because it is readily available. Monitoring seems straightforward and user-friendly. As long as Evaluation is considered an ivory tower, a sort of rocket science, it will be less useful to decision-makers. The evaluation jargon itself, isn't it problematic, an obstacle to the use of evaluative evidence? My assumptions: decision-makers like using Monitoring evidence because they make decisions like fire-fighters, not minding quick-and-dirty but practical methods. They make less use of evaluative evidence because they don't have time to wait.

    A call to innovative action: real-time, rapid but rigorous evaluations, if we really want evaluative evidence to be used by decision-makers.

    Thank you all. Let's keep learning and finding best ways to bring M&E evidence where it is needed the most: decision-making at all levels.

    Jean Providence Nzabonimpa


  • Dear Mr Djimé

    The availability of resources for the functioning of the monitoring and evaluation system is still a challenge. However, it should be noted that monitoring and evaluation, like communication and coordination, remain cross-cutting across the different components of a project, programme or policy, for efficiency in implementation. Thus, budget planning must always include a provisional line for the cross-cutting components, so that funds are available for their functions. Today, this line is foreseen in many donor frameworks. However, it remains a real challenge for public programmes and projects, which unfortunately do not give much importance to monitoring and evaluation...

  • Good morning dear colleagues.

    What we are saying about Monitoring and Evaluation is very important for the development process and for changing the economic, social and cultural conditions of our States. We have before us the commitments of our States (the SDGs, the African Union...). Unfortunately, monitoring and evaluation systems do not receive adequate funding for their functioning, and this affects the expected results of programmes and projects. So, the question I would like to ask is:

    Based on your experience, what can be done to ensure that monitoring and evaluation activities receive the appropriate resources for their functioning (conducting surveys, data collection and processing, etc.)?

    Thank you.

  • Thank you for the discussion. 

    I have a concern: what are the most effective ways to monitor in an inaccessible area, such as a conflict zone?

  • Good evening dear members of EvalForward

    The question posed by Elias is very interesting.

    Monitoring, being a process that provides regular information, allows decisions to be taken quickly. Good monitoring is essential to the success of a project or policy, since it allows an unforeseen situation or a given constraint to be corrected and rectified rapidly.

    Moreover, the more reliable and relevant the information, the more effective the monitoring.

    In developing countries, the main constraint to monitoring and evaluation is the flow of information from the local to the central level and its consolidation. 

    Indeed, information is often unreliable, which inevitably has an impact on the decisions that are taken.

    As for evaluation, it requires more time, since it is based on research and analysis. It allows decision-makers to review strategies, correct certain actions and sometimes revise objectives if they are deemed too ambitious or unachievable once the policy has been implemented. 


  • Dear Elias,

    I find this discussion pertinent, as I do your point of view on the relevance of this combination, whose ultimate goal is to meet information needs in the decision-making process.

    In my opinion, depending on the project or programme, which may vary in typology and in thematic and/or geographic scale of intervention, the relevance of this combination can be questioned so as to better respond to decision-making needs.

    If we take the example of state or even political programmes, monitoring is the strongest element, given the permanent demand for information to support immediate decisions. However, it should be noted that, at any given moment, the observation or analysis made on the basis of monitoring data in order to decide is nothing other than an "evaluation", an extraordinary one, I would even say, alongside the ordinary evaluations predefined in an M&E system (baseline, midterm, final and impact evaluations). So I think it is a matter of revising the periodicity of the classic evaluation: instead of monitoring and evaluation, we would be in a situation of monitoring-evaluation, that is, a simplified evaluation that is as permanent as the monitoring and runs in parallel with it, with a periodicity defined by the information needs of the decision-making process, while taking into account urgent situations in that process.