Musa K. Sanoe

National M&E Specialist
United Nations Food and Agriculture Organization (FAO)
Liberia

I have been a Monitoring and Evaluation professional for the past 17 years in Liberia, supporting development projects. I worked with five international organizations prior to joining the Food and Agriculture Organization of the United Nations: the American Refugee Committee (now called Alight International), the Norwegian Refugee Council, the Education Development Center, ACDI/VOCA, and Plan International.

My contributions

    • Evaluative Assessment: I have only heard of this recently. While I haven't seen a concrete EA report, judging from what I have read, I find it difficult to accept EA as an important step towards a credible evaluation. Consider a program/project with a sound logframe and an M&E framework in which indicators are clearly defined, with disaggregation methods, data sources, methods of data collection, analysis, etc. all defined before implementation. The project has implemented data quality assessments (DQAs) throughout, staying aware of data quality issues and taking measures to improve the quality of its data. The project/program has also implemented After Action Reviews (AARs) and other reflections to correct gaps.

      With all of this in place (a well-defined M&E framework, consistent DQAs, etc.), I do not see EA as important; it is rather a smart way of filling any loopholes that are likely to be picked up by the evaluation team. I do not think this is the best use of donor funds. I would rather strengthen the M&E system so that it delivers and keeps the project/program evaluative at all times than put resources into conducting an EA and then an evaluation afterwards.

    • This is interesting. To make the role of the evaluation manager meaningful in the evaluation process, proper orientation and clear roles and responsibilities matter. I see many of the discussants emphasizing the relevance of the evaluation manager throughout the process. At what points (intervals) does the evaluation team need to bring in the evaluation manager? This is critical. Again, the independence of the evaluation needs to be protected to get credible results. It would be interesting to bring in the evaluation manager at the beginning and end of every critical step. This can take the form of a debriefing, to allow the evaluation manager to contribute. I have led and participated in an evaluation where the evaluation manager tried to dictate which participants should be sampled. Similarly, the evaluation manager also attempted to mold the minds of the participants. Such actions undermine the independence and the credibility of the evaluation.

  • Disability inclusion in evaluation

    Discussion
    • I think this is an important conversation: the issue of disability inclusion and, by extension, the inclusion of other marginalized groups in evaluation. There is no doubt that inclusion brings everyone on board and gives everyone an opportunity to be heard.

      I think one of the barriers to the inclusion of persons with disabilities, and of other groups such as LGBTIQ people, is the lack of an appropriate communication strategy. In most societies in Africa, for example, LGBTIQ people do not accept being called gay, lesbian, bisexual, etc., just as a person with a disability would not accept being called disabled, for fear of being stigmatized. Using language that is not offensive when drafting our tools would improve their inclusion and participation. Spending more time identifying and strategizing an appropriate method, including defining the right language, would ensure the active participation and inclusion of everyone in the evaluation. Reflecting on the issue of visualization, this could be an important way to draft the tool: instead of asking questions like "Are you a disabled person?" or "Are you gay?", one could ask, "Which of these (emojis) appropriately describes you?"

      In short, to enhance disability inclusion, we need to reflect more deeply on the contexts in which we are conducting our evaluations.

    • Dear Daniel,

      It is great that we generally agree on some points about evaluation. But to your point about "evaluations" commissioned and paid for by the Liberian government that assess donor performance, including that of FAO, in the agriculture sector:

      To the best of my knowledge, our government is doing some form of donor performance assessment. These assessments cover all sectors, including agriculture. The most recent was the Joint Sectoral Portfolio Performance Review, held on 19-29 June 2023. The review takes stock of all interventions in the different sectors in relation to government priorities. The exercise is a holistic approach to evaluating sectoral performance that cuts across donors: https://frontpageafricaonline.com/news/government-of-liberia-collaborates-with-united-nations-and-development-partners-for-sector-portfolio-review/

      In addition, the Ministry of Finance and Development Planning has a system for assessing implementing partners (IPs) during their reaccreditation. Technicians assess a partner's previous interventions as a prerequisite for obtaining accreditation, and even for sectoral clearance.

    • I agree with Silva Ferretti's point that evaluations are not the lengthy reports we write. Unfortunately, long reports remain the main expected product. I think this is because we treat evaluations as a donor requirement rather than as a tool for our own learning and improvement. The moment we move away from seeing evaluations as donor requirements, we will start to be more inclusive and participatory in all our evaluation processes, including developing processes that are more inclusive for all. My emphasis is actually on people who do not understand the meaning of 25% or 40%.

    • I am a new member of this important forum and group of professionals. This is an interesting discussion point: can visual tools help evaluators communicate and engage better?

      I have no doubt that visualizations are important tools for communicating and engaging better, especially with stakeholders. My only concerns are: what kind of visualization, and for which stakeholders? For example, in my country, Liberia, the vast majority of stakeholders, and specifically beneficiaries, are illiterate. Presenting fancy charts/graphs and tables of percentages is meaningless to them and won't communicate anything at all. An evaluation is supposed to promote accountability, and this places an explicit responsibility on us as evaluation practitioners to share findings with our beneficiaries (I mean the illiterate ones). Charts, tables, and other visual aids may not communicate anything substantial to these people; for example, 25% and 40% on a chart or table have no meaning to them. Extra innovation is required to include them among the stakeholders who should participate in the sharing of an evaluation's findings. I have pilot-tested sharing evaluation findings with this group of people without using nice charts and tables.