Thank you Eriasafu for this relevant topic. To develop an inclusive MEAL system, a very good starting point is developing an all-inclusive theory of change and capturing a baseline of inclusivity issues. This ensures inclusivity in creating the log frames and work plans, and eventually trickles down to indicators that include both qualitative and quantitative measures; inclusive interventions; disaggregated data collection; data collected and reported through diverse qualitative and quantitative methods; and inclusivity in gathering feedback from the beneficiaries.
The project/program team may fail to understand what inclusivity means if it is introduced only at a later stage of implementation.
Thank you for this conversation. In fact, the same discussion was started on Twitter by Tom Archibald (https://twitter.com/tgarchibald), with very fascinating points coming out. He also shared this: https://t.co/ynI88BlvZp?amp=1
Well, in evaluation, unfortunately, racism is present.
Looking at evaluations conducted in African countries, you realize that most consultancies are awarded to consultants from the global North, even those with less experience or just starting out. The more experienced evaluator from the South is given an opportunity only as a data collector (in some instances), and even then only because of existing protocols, language barriers, or terrain challenges. The same holds for evaluations conducted in the global North: the opportunities still go to the same evaluators, leaving very minimal chances for global South evaluators.
Payment is also unequal. Holding all other factors constant, the consultant from the global North is highly remunerated compared to one from the South. This is in addition to the expenses already incurred in bringing them into the country, including expensive accommodation and DSAs.
Some donor-funded programs and international NGOs bring in consultants from their own countries to conduct evaluations of projects in the global South.
As an evaluator, I once read an evaluation report of a program conducted by a consultant from the global North and was surprised. The report did not highlight any evaluation criteria or methodology. Its contents included complaints about an officer who arrived late and another who fell sick during the evaluation process, and it expressed anger that at one point an FGD interviewee used the word mzungu (mzungu is the Swahili word for a "white person").
On another occasion, an organization simply brought in a photographer from the global North to take pictures for the report, but the person ended up submitting pictures we had taken with our smartphones and shared on the WhatsApp group we had created to communicate while in the field for data collection. To make it worse, he labeled them with his own name.
I could go on and on, but racism in evaluation is present, and it runs deep; those most affected are the evaluators in the global South.
Thank you, Nick Maunder, for bringing this up. The Covid-19 pandemic put every evaluator to the test on how innovative and resilient one can be. My president's directive to work from home reached us while we were in the process of collecting data in the field. This was an outcome harvesting evaluation. We held a replanning meeting overnight and decided to prioritize the focus group interviews (the stories) to avoid being locked down away from our homes. Focus group discussions would have been challenging to conduct via Skype or telephone calls: the storytellers (FGD participants) are mostly community members, who face challenges in accessing data bundles, telephones, the internet, and most other forms of communication.
The rest of the data from the key informants (substantiation) was collected through Skype meetings, telephone calls, and WhatsApp calls. It was not as easy as it sounds, but eventually we were happy with the effort we put in and the data we collected.
Therefore, evaluators need to continually remind themselves of the goal of a particular evaluation, and of how best they can gather the data.
The practice of monitoring cannot be replaced by evaluation; rather, it should feed into evaluation. Monitoring should go hand in hand with the implementation of development programs, projects, and interventions. It is through monitoring that the inputs, outputs, and processes applied are checked, along with the important aspect of timeliness, while the program awaits the evaluation to measure results and draw conclusions.
Therefore, as a monitoring and evaluation consultant, I would say that both the M and the E are critical and complement each other.