RE: Monitoring and evaluation: is this the perfect combination to meet the needs of decision-makers?

Dear Elias and Colleagues,

Thank you for sharing and discussing this important topic. The more we discuss, the better we understand how to address the issues affecting evaluation practice. To begin with, Monitoring and Evaluation: are they different, or are they two sides of the same coin? A perfect combination in theory, but largely mismatched in practice, as Elias posited.

With an anecdote and some thought-provoking, perhaps controversial, views (I hope I get more than one!), I will look at Monitoring and Evaluation, each in its own right, and end with my personal reflection. First, I encourage colleagues to (keep) read(ing) Ten Steps to a Results-Based Monitoring and Evaluation System by Kusek and Rist. Though published in 2004, it still sheds light on the interlinkages between Monitoring and Evaluation. Note that I disagree with some of the propositions and definitions made in that textbook. Still, I will quote it:

"Evaluation is a complement to monitoring in that when a monitoring system sends signals that the efforts are going off track (for example, that the target population is not making use of the services, that costs are accelerating, that there is real resistance to adopting an innovation, and so forth), then good evaluative information can help clarify the realities and trends noted with the monitoring system”. p. 13

Monitoring as the low-hanging fruit. An anecdote: one decision-maker used to tell me that he prefers quick-and-dirty methods to rigorous, time-consuming evaluation methods. Why? No wonder: it is easy and quick to get an idea of implemented activities and the ensuing outputs. By the way, monitoring deals with all that is under the control of implementers (inputs, activities and outputs). A discussion for another day. With Monitoring, it is usually a matter of checking the database (these days, we look at visualized dashboards) and being able to tell where a project stands in its implementation and its progress towards (output/outcome?) targets.

Evaluation as the high-hanging fruit. In a traditional sense, Evaluation tries to establish whether change has taken place, what has driven that change, and how. That is the realm of causality, correlation, association, etc. between what is done and what is eventually achieved. Evaluation is time-consuming and its results take time. Few decision-makers have time to wait. In no time, their term of office comes to an end, or there is a government reshuffle. Some may no longer be in office by the time Evaluation results are out. Are we still wondering why decision-makers prefer Monitoring evidence?

My understanding of and experience in M&E, as elaborated in Kusek and Rist (2004), is that well-designed and well-conducted Monitoring feeds into Evaluation, and Evaluation findings show (while the project is still ongoing) what to monitor closely. Good Monitoring gathers and provides, for example, time-series data that are useful for evaluation. Evaluation also informs Monitoring. By the way, I am personally less keen on end-of-project evaluations. That seems antithetical for an evaluation practitioner, right? But the target communities a project is designed for do not benefit from such endline evaluations. Of course, when it is a pilot project, it may be scaled up and the initial target groups reached with an improved project, thanks to lessons drawn from Evaluation. Believe me, I do conduct endline evaluations, but they are less useful than developmental, formative, and real-time/rapid evaluations. A topic for another day!

Both Monitoring and Evaluation form one single, complementary, cross-fertilizing system. Some colleagues in independent evaluation offices or departments may not like this interlinkage and interdependence of Monitoring and Evaluation, simply because they are labelled 'independent'. This reminds me of the other discussion about independence, neutrality and impartiality in evaluation. Oops, I did not take part in that discussion. I agree that self- and internal evaluation should not be discredited, as Elias argued in his blog. Evaluation insiders understand and know the context that external, independent evaluators sometimes struggle to grasp when making sense of evaluation results. Let's park this for now.

Last year, there was an online forum (link to the draft report) bringing together youth from various Sahel countries. Through that forum, youthful dreams, aspirations, challenges, opportunities, etc. were shared and discussed. A huge amount of data was eventually collected through the digital platform. From those youth conversations (an activity reaching hundreds of youth), not only was there evidence of change in the narrative but also of what drives or inhibits change and youth aspirations. A perfect match of monitoring (reaching x number of young people) and evaluation (factors driving or inhibiting desired change). When data from such youth conversations exist, it is less useful to conduct a separate evaluation to assess the factors associated with change in the Sahel: just analyze those data, developing, of course, an analytical guide to support that process. Using monitoring data is of great help to evaluation. There is evidence that senior decision-makers are very supportive of the insights from the analysis of those youth discussions. Imagine waiting until the time is ripe for a proper evaluation! Back to the subject.

All in all, decision-makers are keen on using Monitoring evidence because it is readily available. Monitoring seems straightforward and user-friendly. As long as Evaluation is considered an ivory tower, a sort of rocket science, it will be less useful to decision-makers. The evaluation jargon itself, is it not problematic, an obstacle to using evaluative evidence? My assumptions: decision-makers like using Monitoring evidence because they make decisions like fire-fighters, not minding quick-and-dirty but practical methods. They use evaluative evidence less because they do not have time to wait.

A call for innovative action: real-time, rapid but rigorous evaluations, if we really want evaluative evidence to be used by decision-makers.

Thank you all. Let's keep learning and finding the best ways to bring M&E evidence where it is needed the most: decision-making at all levels.

Jean Providence Nzabonimpa