RE: Can evaluation alone ensure that the SDGs are achieved? | Eval Forward

Dear all,

It has been more than a month since we started this discussion on the mismatch between monitoring and evaluation, even though these two functions have always been considered complementary and therefore inseparable. My first reaction is surprise that only four contributions have been recorded on this theme. Why such a muted response from our group members?

Beyond this surprise, I have reviewed the three reactions that specifically address the practice of monitoring and evaluation, and I propose to relaunch the debate on this topic so that we can draw some recommendations from it. For the record, and to be clear in my recommendations, I will focus my intervention on the monitoring function, distinguishing it from evaluation practice within any monitoring-evaluation system, because it seems to me that the term 'monitoring-evaluation' barely conceals the existing mismatch between the two functions, which do not receive the same attention either nationally or internationally.

As the first to respond, Natalia argues that theories of change would be more useful if they were developed during the planning or formulation phase of an intervention and served as the foundation of the monitoring-evaluation system. This is the essence of monitoring and evaluation theory as presented in many specialized textbooks.

She also suggests that evaluations could be more useful for learning from an intervention if the theory of change and the evaluation questions were fed by questions formulated by programme teams after analysing the monitoring data.

But isn't that what we are supposed to do? And if so, why is it generally not done that way?

In her contribution, Aurélie acknowledges that evaluation is better developed as a practice than its sister function, monitoring, perhaps because evaluations are carried out primarily when supported by dedicated external funding and are thus tied to an external funder. This is indeed the general pattern, easily observed in the least developed countries. She also asks why the monitoring function has not yet received the same interest from donors, and why monitoring systems are not required as a priority, given how essential this tool is for learning from past actions and improving future ones in good time. She even hints at an answer by referring to a study: countries need to develop a general results-based management culture, which begins, even before monitoring, with results-based planning. But she does not explain why this culture is not yet in place, even though the SDGs were launched four years ago. She concludes by acknowledging that in many institutions, both national and international, monitoring is still largely underestimated and under-invested in, and she suggests that it is up to evaluators, within their respective spheres of influence, to support the emergence of the monitoring function, even if it means setting aside the sacrosanct principle of independence for a time. But she does not show us how evaluators can succeed in bringing out this much-desired monitoring function where large donors and large capacity-building programmes have failed.

The third contribution comes from Diagne, who begins by recognizing that when a monitoring-evaluation system is developed, there is a greater focus on functions and tools than on the field (or scope) and purpose of the system, taking into account the information needs of the funder and other stakeholders. He says that if the main purpose of a monitoring-evaluation system is to accompany the implementation of an intervention in a spirit of constant critical reflection, in order to achieve the results assigned to that intervention and to flag critical implementation conditions, then a review (I would personally say a redesign) of the monitoring-evaluation system is necessary. And he concludes by stressing that development policies do not give enough importance to monitoring and evaluating the SDGs; they merely compile data from programmes implemented with foreign partners to report progress against a particular indicator, which is far from good monitoring and evaluation practice.

At least two contributions (Aurélie and Diagne) recognize that a major overhaul of monitoring and evaluation is needed in the era of results-based management and all its results-based corollaries.

What we can note from all these contributions is that there is unanimity on the interest and importance of strengthening the complementarity between monitoring and evaluation as two mutually reinforcing functions, but that we do not know how to build a monitoring function worthy of the current practice of evaluation. As the saying goes, correctly identifying the causes of a problem is already half its solution. The major cause of the mismatch between monitoring and evaluation is that evaluation has been consolidated by funders and development partners because it addresses their concerns about the performance of the programmes they fund or implement. Monitoring, on the other hand, is a function that chiefly benefits the countries receiving development assistance, and it does not yet seem important to those countries' governments, for several reasons. The resulting lack of external investment in the monitoring function at the national level further fuels the mismatch between the two functions. So if anything can be done to mitigate this mismatch, it is to encourage donors and development partners to invest in strengthening national monitoring-evaluation systems and to run programmes that convince the governments of recipient countries of the interest and importance of such systems.

Let us hope that this contribution will relaunch the debate on this topic...

Mustapha