RE: How to Measure the Impact of Monitoring and Evaluation Work | Eval Forward

Highly-esteemed Abubakr,

Thank you for raising this important monitoring & evaluation issue, which touches on one of the most important challenges of any M&E system: its 'social learning' dimension. It was also quite informative to read the contributions submitted to this debate at your suggestion, especially those of Ronald and Zahid.

The situation you depict is similar to what you would find in other African countries. Between 2013 and 2015 I was involved in a very interesting AfDB initiative entitled "Africa 4 Results" and had the chance to visit several Western and Eastern African countries, where I encountered very similar situations.

I do not have all the information needed to make firm claims about your country, but I have the feeling that, in your case, the building of the national M&E system started from the "hardware" part and did not pay attention to the "software" part. I am also inclined to believe that much attention was given to projects, and that the data collected through project monitoring do not feed into national policies. On this point I would join my voice to Ronald's and Zahid's contributions.

That said, we need to acknowledge that the construction of a national M&E system must start with the publication of a general M&E legal framework. This framework should first require the Government to have a medium-term strategic plan for "multi-dimensional" development, to which a results framework is annexed. This national strategic plan must be prepared through a "true" participatory approach and ultimately be endorsed by the Parliament.

At the second level, this national "multi-dimensional" development plan will serve each sector as a reference framework for establishing a medium-term sectoral strategic plan, to which a sectoral results framework is annexed. Each sectoral strategic plan must be approved by the Government and should carry a results framework that links the sectoral strategy to the medium-term national development plan.

At this level, any new project or programme will need a results framework that links it to the sectoral plan. This is the "software" part I mentioned above.

Only then is the "hardware" part of the national M&E system set up, on the basis of a concept note describing the inter-relations between the different levels of the system; the standard form of the M&E units at each level; the data collection procedures and methods; the reporting system and its timing; etc.

With this in place, monitoring data collected at the project level can easily be aggregated at the sectoral level, allowing the sectoral plan to feed back into the national strategic plan.
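Purely as an illustration of this roll-up from project to sector to national level, here is a minimal sketch. The indicator names, sectors and values are hypothetical and do not come from any real national M&E system:

```python
from collections import defaultdict

# Hypothetical project-level monitoring records, each tagged with the sectoral
# and national results-framework indicators it contributes to.
project_records = [
    {"project": "P1", "sector": "Agriculture", "national_outcome": "Food security",
     "indicator": "ha_irrigated", "value": 1200},
    {"project": "P2", "sector": "Agriculture", "national_outcome": "Food security",
     "indicator": "ha_irrigated", "value": 800},
    {"project": "P3", "sector": "Health", "national_outcome": "Human capital",
     "indicator": "clinics_built", "value": 5},
]

# Aggregate project data to the sectoral results framework...
sectoral = defaultdict(float)
for r in project_records:
    sectoral[(r["sector"], r["indicator"])] += r["value"]

# ...and roll sectoral results up to the national results framework.
national = defaultdict(float)
for r in project_records:
    national[(r["national_outcome"], r["indicator"])] += r["value"]

print(dict(sectoral))   # {('Agriculture', 'ha_irrigated'): 2000.0, ('Health', 'clinics_built'): 5.0}
print(dict(national))   # {('Food security', 'ha_irrigated'): 2000.0, ('Human capital', 'clinics_built'): 5.0}
```

The point of the sketch is simply that such aggregation is only possible when project, sectoral and national results frameworks share the same indicators, which is precisely the "software" work described above.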

In the situation you describe, where the system started with the "hardware" part, the majority of senior Government and line staff may feel that M&E is just an additional "administrative" workload imposed from the top, and the lack of conviction in M&E will be very apparent.

Disseminating M&E results is highly recommended, but talking about the "value for money" of M&E may simply be seen as inappropriate. M&E work is a sort of "quality insurance" or "life insurance" for development and, using that metaphor, one can easily admit that having such insurance certainly has a cost, but not having it will certainly cost at least ten times more. This is why I believe "value for money" is not the right concept to apply to an M&E system. I do not want to be too provocative, but I feel that this insistence on "value for money" is just a proxy indicator for a lack of conviction towards M&E work.

Kind regards

Mustapha 

Mustapha Malki, PhD
535 avenue Ampere #5
Laval, QC, Canada