RE: Monitoring and evaluation: is this the perfect combination to meet the needs of decision-makers?
Summary of the discussion
Overall, the discussion suggests that, depending on the context, the relevance of this combination can be questioned when it comes to meeting the needs of decision-makers.
Three key questions [addressed in the discussion]:
1. Do decision-makers use monitoring and statistical data or do they rely on evaluation?
Evaluation takes time, and its results take time to arrive. Few decision-makers can afford to wait for them: before long, their term of office ends or a government reshuffle intervenes, and some are no longer in office by the time the evaluation results are published.
Monitoring data is therefore the primary tool for decision support. Decision-makers are willing to use monitoring data because it is readily available and simple to use, regardless of the methods by which it was generated. They draw on evaluative evidence less because it is not generated quickly enough.
Because monitoring is a process that provides regular information, it allows decisions to be made quickly. Good monitoring strongly favours the success of a project or policy, because it allows an unforeseen situation or constraint to be corrected quickly; and the more reliable and relevant the information, the more effective the monitoring. In this respect, various government departments (central and decentralised, including projects) are involved in producing statistics or making estimates, sometimes with great difficulty and, in some countries, with errors.
However, the statistics produced need to be properly analysed and interpreted in order to draw conclusions that are useful for decision-making. This is where problems arise, as many managers treat statistics and data as an end in themselves. Yet statistics and monitoring data are only relevant and useful when they are of good quality, collected and analysed at the right time, and used to produce conclusions and lessons in relation to context and performance. Such analysis is also important and necessary for the evaluation function.
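To make this step concrete, the minimal sketch below shows one way "analysing and interpreting" can look in practice: raw monitoring figures are compared with targets and turned into statements a decision-maker can act on. All indicator names, figures and thresholds are invented for illustration.

# Minimal, illustrative sketch: turning raw monitoring figures into
# interpreted statements (indicator names and thresholds are invented).

# Each record: (indicator, target, actual) for one reporting period.
monitoring_data = [
    ("households reached", 1000, 640),
    ("wells rehabilitated", 50, 48),
    ("training sessions held", 20, 27),
]

def interpret(indicator: str, target: float, actual: float) -> str:
    """Translate a raw figure into a conclusion a decision-maker can use."""
    rate = actual / target
    if rate < 0.8:
        status = "behind target: investigate constraints"
    elif rate <= 1.1:
        status = "broadly on track"
    else:
        status = "above target: check data quality or revise the target"
    return f"{indicator}: {actual}/{target} ({rate:.0%}) -> {status}"

for row in monitoring_data:
    print(interpret(*row))

The point of the sketch is the last column: a figure only becomes decision support once it carries an interpretation in relation to the target and the context.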
2. What added value do decision-makers really see in evaluation?
Evaluation requires more time, as it is backed up by research and analysis. It allows decision-makers to review strategies, correct actions and sometimes revise objectives if these prove too ambitious or unachievable once the policy is implemented.
A robust evaluation must start from existing monitoring information (statistics, interpretations and conclusions). A challenge that evaluation (especially external or independent evaluation) always faces is the limited time available to generate conclusions and lessons, unlike monitoring, which is permanently on the ground. In this situation, the availability of monitoring data is of paramount importance, and it is precisely here that evaluations struggle to find evidence for relevant inferences about the different aspects of the object being evaluated. An evaluation should not be blamed if the monitoring data and information are non-existent or of poor quality; it should be blamed if it draws conclusions on aspects for which evidence, including monitoring data and information, is lacking. Evaluation should therefore evolve in parallel with monitoring.
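One practical way to respect this rule is an explicit evidence-gap check before conclusions are drafted, as in the minimal sketch below: each evaluation question is mapped to the monitoring evidence that supports it, and questions without evidence are reported as gaps rather than concluded on. The questions and source names are invented for illustration.

# Illustrative evidence-gap check: flag evaluation questions that lack
# monitoring evidence, so no conclusions are drawn without support.
# All question texts and evidence sources are invented.

evaluation_questions = {
    "Did yields improve?": ["harvest survey 2022", "harvest survey 2023"],
    "Did incomes rise?": [],  # no monitoring data collected on income
    "Were women reached equally?": ["registration records"],
}

MIN_SOURCES = 1  # assumed minimum evidence required per question

for question, evidence in evaluation_questions.items():
    if len(evidence) >= MIN_SOURCES:
        print(f"OK   {question} (evidence: {', '.join(evidence)})")
    else:
        # Conclusions here would lack evidence; report the gap instead.
        print(f"GAP  {question} -> report as an evidence gap, do not conclude")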
Furthermore, it is very important to value data producers and to give them feedback on how the data is used in evaluation and decision-making, in order to give meaning to data collection and to make decision-makers aware of its importance.
3. How should evaluation evolve to be more responsive to the needs of decision-makers - for example, on reporting times?
The discussion called for innovative action: real-time, rapid but rigorous evaluations, if evaluative evidence is really to be used by decision-makers.
When evaluation is based on evidence and triangulated sources, and when policy-makers are properly briefed and informed, its findings and lessons are well taken into account, because policy-makers see evaluation as a more comprehensive approach.
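Triangulation can stay rigorous even under rapid timelines if the agreement between sources is checked explicitly. The sketch below compares the same indicator as estimated from three independent sources and only treats the finding as robust when they broadly agree; all figures and the tolerance are invented for illustration.

# Minimal triangulation sketch: the same indicator is estimated from
# three independent sources, and a finding is reported as robust only
# when the sources broadly agree. All figures are invented.

from statistics import mean

# Estimated share of farmers adopting a new practice, by source.
estimates = {
    "monitoring reports": 0.62,
    "household survey": 0.58,
    "extension-service records": 0.31,
}

values = list(estimates.values())
spread = max(values) - min(values)
TOLERANCE = 0.10  # assumed acceptable disagreement between sources

print(f"mean estimate: {mean(values):.0%}, spread: {spread:.0%}")
if spread <= TOLERANCE:
    print("sources agree -> finding can be reported as robust")
else:
    print("sources diverge -> investigate before reporting a finding")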
4. Some challenges of monitoring and evaluation
In developing countries, the main constraint on monitoring and evaluation is the flow of information from the local to the central level and its consolidation. Information is often unreliable, which inevitably affects the decisions taken.
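A minimal consolidation routine can at least make unreliability visible instead of hiding it in an aggregate. The sketch below validates district reports before summing them into a central figure and flags districts whose reports are missing or implausible; district names and figures are invented for illustration.

# Illustrative local-to-central consolidation with basic checks:
# district reports are validated before being summed into a national
# figure, so unreliable entries are flagged rather than silently
# aggregated. All names and figures are invented.

district_reports = [
    {"district": "North", "children_vaccinated": 1200},
    {"district": "South", "children_vaccinated": None},  # missing report
    {"district": "East",  "children_vaccinated": -40},   # implausible value
    {"district": "West",  "children_vaccinated": 950},
]

total, flagged = 0, []
for report in district_reports:
    value = report["children_vaccinated"]
    if value is None or value < 0:
        flagged.append(report["district"])  # exclude and follow up locally
    else:
        total += value

print(f"consolidated total: {total}")
print(f"districts needing follow-up: {', '.join(flagged)}")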
Unfortunately, monitoring and evaluation systems often do not receive adequate funding for their operation, and this affects the expected results of programmes and projects.
However, the problem is much more serious in public programmes and projects than in donor-funded ones. Most of the time, donor-funded projects have a successful track record, with at least 90% implementation and achievement of performance objectives, thanks to an appropriate monitoring and evaluation system (recruitment of professionals, funding of the system, etc.). But after the donors leave, there is no continuity, owing to the lack of a strategy for the sustainability of interventions and of the monitoring and evaluation system (actors and tools). The reasons are often (i) the lack of a handover between project M&E specialists and state actors; (ii) the lack of human resources or qualified specialists in public structures; and (iii) the lack of policy or commitment from the state to finance this mechanism after the project cycle.
5. Questions in perspective
How can M&E activities be resourced to function (conducting surveys, collecting and processing data, etc.)?
What are the most effective ways to carry out monitoring in an inaccessible area, such as a conflict zone?
How can we get decision-makers to finance the sustainability of interventions (including the monitoring and evaluation budget) for the benefit of communities? And how can we raise the level of expertise of government agents in monitoring and evaluation, and then retain them in the public sector?
6. Approaches to solutions
The availability of resources for the functioning of monitoring and evaluation mechanisms must be examined at two levels: human resources and financial resources.
At the level of human resources, projects and programmes should integrate the transfer of monitoring and evaluation skills to beneficiaries from the outset, to ensure continuity of the exercise after the project ends.
With regard to financial resources, budget planning should always include a provisional line for cross-cutting components such as monitoring and evaluation, so that funds are available for their operation. Today, such a line is included in several donor frameworks.
One option for reducing costs is to rely as much as possible on users (farmers, fishermen, etc.) to collect data (instead of using only "professional" surveyors).
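Such user-collected data usually needs a light quality filter so that professional effort is spent only where it matters. The sketch below screens user-submitted records with a simple plausibility range and routes only the flagged entries to a surveyor; all names, figures and the range are invented for illustration.

# Illustrative quality filter for user-collected data: daily fish catches
# reported by fishers are screened with a plausibility check, so a small
# team of professional surveyors verifies only the flagged entries.
# All figures and the plausible range are invented.

user_records = [
    {"fisher": "A", "catch_kg": 18.5},
    {"fisher": "B", "catch_kg": 410.0},  # likely a typo or unit error
    {"fisher": "C", "catch_kg": 0.0},
    {"fisher": "D", "catch_kg": 25.2},
]

PLAUSIBLE_KG = (0.5, 120.0)  # assumed plausible daily catch per fisher

accepted = [r for r in user_records
            if PLAUSIBLE_KG[0] <= r["catch_kg"] <= PLAUSIBLE_KG[1]]
to_verify = [r for r in user_records if r not in accepted]

print(f"accepted automatically: {[r['fisher'] for r in accepted]}")
print(f"sent to a surveyor for verification: "
      f"{[r['fisher'] for r in to_verify]}")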
7. Conclusion
Monitoring and evaluation are both important, in the sense that they are two different approaches that do not replace each other; and monitoring matters for evaluation, since better evaluations are made with a good monitoring system. They are therefore two different but complementary approaches.
Good monitoring data is the basis for good evaluation. The two complement each other: monitoring provides clarity on the progress of implementation and on any adjustments made along the way, while evaluation, in addition to validating the monitoring data, gives that data meaning and explanation, providing timely information for decision-makers.
Monitoring data enables the design of subsequent actions and policies. With evaluation, the design can be adjusted and the monitoring itself improved, since evaluators typically provide recommendations to improve monitoring.