RE: Can evaluation alone ensure that the SDGs are achieved? | Eval Forward

Hello to the whole community,

I firmly believe that monitoring and evaluation are two distinct functions that must complement each other harmoniously. The former feeds the latter with reliable, good-quality data; the latter, through qualitative analysis of the secondary data provided by the former, improves how those data are interpreted. Together, monitoring and evaluation provide the evidence needed for informed decision-making.

It is true that for a long time the two functions were conflated under the single term "Monitoring & Evaluation", a label under which evaluation was obscured in favour of the monitoring activity alone. It would seem, then, that evaluation has been taking its revenge on monitoring in recent years through its institutionalization, under the impetus of a leadership that has not yet achieved the necessary alchemy between the two inseparable functions.

Take the case of Benin. I would like to share here some results from the evaluation of the implementation of the National Evaluation Policy (PNE) 2012-2021, a policy that aimed to create synergy among stakeholders in order to build an effective national evaluation system through the Institutional Framework for Public Policy Evaluation (CIEPP). The National Evaluation Policy distinguishes between the two functions as follows:

“Evaluation [...] is based on data from monitoring activities as well as information obtained from other sources. As such, evaluation is complementary to the monitoring function and is distinct from the control functions assigned to other state structures and institutions. [...] The monitoring function is carried out in the Ministries by the implementing structures, under the coordination of the Directorates of Programming and Prospective and the Monitoring-Evaluation units. These structures are responsible for working with the Office for the Evaluation of Public Policy and other evaluation structures to provide all the statistical data, information and insights needed for evaluations.”

The organizational measures provided for on pages 32 and 33 of the attached National Evaluation Policy document were therefore taken, measures that clearly reveal the ambition to bring the two functions into symbiosis by creating the synergy among stakeholders needed for the harmonious conduct of participatory evaluations.

Put to the test of the facts, the results-based management movement and budgetary reforms in Benin have instilled a culture of monitoring and evaluation in public administration. But has this culture been reinforced by the implementation of the National Evaluation Policy and, more generally, by the institutionalization of evaluation?

The evaluation regime in departments today shows that the implementation of the National Evaluation Policy has not significantly improved evaluative practices. The programming and funding of evaluation activities, the definition and use of monitoring-evaluation tools (inherently monitoring), and the commissioning of evaluations of sectoral programs or projects are the factors the field data allowed us to analyze. The picture that emerges is that departments focus less on evaluation activities proper than on monitoring-evaluation, which is inherently monitoring.

Resources allocated to evaluation activities in departments have remained relatively stable and generally do not exceed 1.5% of the total budget allocated to the ministry. This reflects the low capacity of departments to prioritize evaluation activities. Under these conditions, evaluative practices cannot be expected to develop to any great extent. This is corroborated by the execution rate of programmed monitoring and evaluation activities, which is often in the order of 65%. Added to this is the fact that the activities actually carried out are predominantly related to monitoring; evaluations of projects or programs are rare. Even the few evaluations carried out in some departments are often done at the behest of the technical and financial partners who make them a requirement.

However, since the adoption by the Council of Ministers of the National Methodological Guide for Evaluation, there has been an increase in evaluation activities in departmental annual work plans, particularly work on theories of change and the programming of some evaluations. These results already point to a real dynamic within departments.

In addition, few departments have a regularly updated, reliable monitoring and evaluation database. The state of the technological infrastructure supporting the information system, and of the communication and dissemination of evaluation results at the departmental level, reflects the state of development of evaluative practices presented above.

Ultimately, the state of evaluative practice at the departmental level is explained by the lack of an operational evaluation program. Without this operationalization tool, the three-year evaluation program, the National Evaluation Policy has not been able to have a substantial effect on the evaluative culture in departments.

Going down to the level of the municipalities, the situation is even more serious: the level of development of monitoring and evaluation activities (inherently monitoring) is very unsatisfactory. The evaluation produced very specific data on this point, and I am happy to share the evaluation report if you are interested.

All this allows me to answer Mustapha's four questions clearly:

  1. Evaluation and monitoring are complementary practices, both necessary for assessing and correcting the performance of a development action, as I said in my first paragraph.
  2. Evaluation adds value to and capitalizes on monitoring data, if and only if the monitoring practice is well structured and its data production system is well managed. The case of Benin, which I have just described briefly, shows that on the monitoring side there is still a great deal of work to be done before the two functions can be properly aligned and together serve as a real decision-making tool.
  3. I believe that leadership needs to be strengthened at all levels of the national monitoring and evaluation system, namely:
  • at the individual level: ministers and other top decision-makers, senior and middle managers (project managers, monitoring-evaluation actors, etc.);
  • at the organisational level: structures (directorates and monitoring-evaluation units);
  • at the process level: training workshops, seminars, review activities, data collection, etc.

There is also a need to strengthen:

  • the technical capabilities/professionalism/skills of the actors,
  • networks of partnerships that allow people to learn and capitalize on each other's experiences (the EvalForward forum is a good example),
  • the review of development planning and of the choice of development indicators, including the internal coherence of the various national planning documents and their consistency with international development agendas (the domestication of the SDGs, or their alignment with national development plans, is a clear example).
  4. At the institutional level, the attention paid to monitoring and to evaluation must be balanced 50-50. The two functions are inseparable and contribute equally to the same goal: the production of evidence for informed decision-making.

Thank you all.

[this is a translation of the original comment in French]