Jean Providence Nzabonimpa

Monitoring and Evaluation Officer
United Nations Integrated Strategy for the Sahel
Senegal

More about me

Jean Providence Nzabonimpa (PhD) is a social, behavioral, educational, and public health researcher and evaluator, and a development and humanitarian practitioner with 16 years of experience in project design, implementation, performance monitoring, outcome and impact evaluation, social marketing, and applied research. Using behavior change theories and communication as an approach to achieving project outcomes and impact in public health, education, and other social development sectors, and currently keen on the human face of technology, he brings rigorous methodological approaches to development interventions, generating and using evidence for decision-making and impact.

With a specialization in mixed methods research, he innovates methodologically when it comes to impact, behavior change, and user experience research and evaluation. Having conducted more than 30 research and evaluation studies, coupled with a strong background in education, language use, public health, and capacity development, he uses advanced social science evaluative, analytical, communicative, and programmatic knowledge and skills to generate evidence and insights that improve the lives of poor and vulnerable people. Since 2009, he has been an avid user and advocate of ICT in program monitoring and evaluation for real-time access to data and evidence. He is an expert user of and trainer in SPSS and, to a lesser extent, STATA for quantitative data analysis, and in ATLAS.ti and MAXQDA for qualitative data analysis. He is a certified ScrumMaster, Core Humanitarian certified, an ATLAS.ti certified professional trainer, and a certified peer reviewer.

    • Dear Gordon and colleagues,

      Before sharing my two cents, let's consider a lived experience. With a team of four evaluators, I participated in a five-year project evaluation. A couple of colleagues co-designed the evaluation and collected data; we joined forces during the analysis and reporting and ended up with a big report of about 180 pages. I have never met fans of big reports, and I am not one either. To be honest, very few people would spend time reading huge evaluation reports. If an evaluator is unlikely to reread a report they have produced once it is finalized, who else will ever read it? Now, on to recommendations. At the reporting stage, we highlighted changes (or the lack thereof); we pointed out counterintuitive results and insights on indicators or variables of interest. We left it to the project implementation team, who brought on board a policy-maker, to jointly draft actionable recommendations. As you can see, we intentionally eschewed the established practice whereby evaluators always write the recommendations.

      Our role was to make sure all important findings or results were translated into actionable recommendations. We supported the project implementation team to stay as close to the evaluation evidence and insights as possible. How would you scale up a project that has produced this change (for positive findings)? What would you do differently to attain the desired change on this type of indicator (areas for improvement)? Mind you, I don't use the word 'negative' alongside findings. How would you go about getting the desired results here and there? Such questions helped us get to actionable recommendations.

      We ensured the logical flow and empirical linkage of each recommendation with the evaluation results. In the end, the implementation team owned the recommendations while the evaluation team owned the empirical results, and every recommendation was informed by evaluation results. Overall, it was a jointly produced evaluation report. This is something we did for this evaluation, and it has been effective in other evaluations too. With the participation of key stakeholders, evaluation results are relatively easy to sell to decision-makers.

      In my other life as an evaluator, such recommendations are packaged into an Action Tracker (in MS Excel or any other format; see the sketch after the list below) to monitor over time how they are implemented. This is the practice in institutions that are keen on accountability and learning, or that hold their staff and projects accountable for falling short of these standards. For each recommendation, there is a timeline, a person or department responsible, a status (implemented, not implemented, or ongoing), and a way forward (part of the continuous learning). Note that one of the recommendations is about sharing and using evaluation results, which requires extra work after the evaluation report is done: simplify the report into audience-friendly language and formats such as a two-page policy brief, an evaluation brief, or an evaluation brochure based on specific themes that emerged from the evaluation. I have found such a practice very helpful for a couple of reasons:

      (i) evaluators are not the sole players; there are other stakeholders with a better mastery of the programmatic realities;

      (ii) the implementation team has space to align its voice and knowledge with the evaluation results;

      (iii) the end of an evaluation is not, and should not be, the end of the evaluation; hence the need for institutions to track how recommendations from evaluations are implemented, for remedial actions, decision- or policy-making, using evaluation evidence in new interventions, etc.
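      As promised above, here is a minimal sketch of such an Action Tracker as a simple CSV file generated with Python, for colleagues who prefer a lightweight alternative to Excel. The column names and the sample entry are illustrative assumptions rather than a prescribed template; adapt them to your organization's own standards.

          import csv

          # Hypothetical column names; adjust to your own Action Tracker template.
          FIELDS = ["recommendation", "timeline", "responsible", "status", "way_forward"]

          # One illustrative entry; status is one of: implemented, not implemented, ongoing.
          tracker = [
              {
                  "recommendation": "Simplify the report into a two-page policy brief",
                  "timeline": "Q2",
                  "responsible": "Communications unit",
                  "status": "ongoing",
                  "way_forward": "Review the draft with key stakeholders",
              },
          ]

          # Write the tracker to a CSV file that can be opened in Excel for follow-up.
          with open("action_tracker.csv", "w", newline="") as f:
              writer = csv.DictWriter(f, fieldnames=FIELDS)
              writer.writeheader()
              writer.writerows(tracker)

      A column for the review cycle or follow-up date can be added in the same way; the point is simply that each recommendation carries its owner, deadline, and status so it can be tracked over time.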

      Institutionalizing the use of evidence from evaluation takes time. Structural (top-level) changes do not happen overnight, nor do they come out of the blue; there are small but sure steps to initiate change from the bottom. If you have top management fully supporting evidence use, it is a great opportunity not to miss. Otherwise, don't assume; work with the facts and the culture within the organization. Build small alliances and relationships for evidence use, and gradually bring on board more "influential" stakeholders. Highlight the benefits of evidence and how impactful it is for the implementing organization, decision-makers, and the communities.

      Just my two cents.

      Over to colleagues for inputs and comments to this important discussion.

      Jean Providence

    • Dear John,

      Happy 2021 to you and all our colleagues on the platform!

      Thanks for raising a critical and intriguing question worth looking into as evaluators. I am sure I cannot do justice to the important points you have raised, but at least I can share my two cents. I hope colleagues will also keep weighing in for a richer discussion.

      It is true we assume we understand the issues affecting local communities, and we thus design interventions to meet their needs. I completely agree with you: there are important factors unknown at the design stage of development interventions. When little is empirically and theoretically known about a community, little may be done and achieved. Ideally, we need to know the unknowns to design proper interventions and serve the target communities better. Unfortunately, it does not always work like that; it is not linear, even more so in this pandemic-stricken era. We base what we do on what we know, and in the process we learn something new (i.e. evidence) that helps us refine our design and implementation.

      The complexity of our times, worsened by COVID-19, has pushed all evaluators to rethink their evaluation designs and methods. It would be an understatement to point out that we all know the implications of social (I personally prefer physical) distancing. Imagine an intervention whose results chain rests on face-to-face delivery as its underlying assumption for achieving the desired change! Without rethinking its Theory of Change (ToC), the logic underlying such an intervention may not hold water. This scenario alone rightly shows that we need a time-evolving ToC. In my view and professional practice, the answer is in the affirmative: we need a time-evolving, evidence-informed ToC. We use assumptions because we do not have evidence, right?

      Keeping the ToC intact throughout the life of a project assumes that most of its underlying assumptions and logical chain are known in advance and remain constant. This is rarely the case. I believe changing the ToC does no harm; instead, it maximizes what we learn so we can do better and benefit communities. Let's consider this scenario: assume X outputs lead to Y outcomes. Later on, one discovers that factors A and B also contribute to Y, and more significantly than the initially assumed X outputs. Not taking factors A and B into account would undermine the logic of the intervention; it undermines our ability to measure outcomes. I have not used outcome mapping in practice, but the topic under discussion is a great reminder of its usefulness. Few development practitioners would believe flawed 'change' pathways. Instead, I guess, many would believe the story of the failure of the ToC (by the way, I hate using the words fail and failure). Development practitioners' lack of appetite for accommodating other factors in a time-evolving ToC when evidence is available is possibly the cause of such failure. In the end, an evaluation may come up with positive and/or negative results which are counterintuitive, or which cannot be linked to any component of the intervention. It sounds strange, I guess, simply because pieces of evidence emerged and were not incorporated into the logic of the intervention.

      • With the above in mind, a localized project would be a project in full local colours, with different sizes and forms all playing their rightful part. This does not mean being too ambitious (too many colours can blur the vision; just kidding, but never mind, I wear glasses!). A project that discovers new evidence should incorporate it into the learning journey. Such a project is more likely to achieve its desired outcomes. In a time-evolving context, a project with a static ToC is more likely to become irrelevant over time.
      • In my view, a ToC needs to be dynamic or flexible in complex and time-evolving settings. Is there any development context which can be fully static for a while? I guess not. This reminds me of systems theories and complexity theories, without which we would easily fall into the trap of linearity. In my view, there is no harm in starting with assumptions, but when evidence emerges, we should be able to incorporate it into the implementation theory and the program theory which, combined, may constitute the full ToC for development interventions. No longer are projects looked at in silos (I guess we have seen coherence added as a new OECD DAC evaluation criterion!). In my view, there is a need to understand the whole picture (that is, current + future knowns) to benefit the part (that is, current knowns only); understanding a single part alone is less likely to benefit the whole.
      • The challenges with an evolving ToC relate to impact evaluations, mostly Randomized Controlled Trials (RCTs). With an evolving ToC, the RCT components or study arms may become blurred and contamination uncontrollable; in statistical jargon, the unexplained variance will be bigger than necessary. While there are labs for the natural and physical sciences, I believe there are few, if any, reliable social and behavioural science labs. The benefit of knowing how to navigate a complex ToC is that one may learn appropriate lessons and generate less questionable evidence on the impact of development projects.

      I guess I am one of those interested in understanding complexity and its ramifications for the ToC and development evaluation. I am eagerly learning how Big Data can and will shed light on the usually complex development picture, breaking the linearity silos. As we increasingly need a mix of methods to understand and measure the impact of, or change resulting from, development interventions, the same applies to the ToC. If linear, the ToC may eventually betray the context in which an intervention takes place. If multilinear or curvilinear and time-evolving, the ToC is more likely to represent the real but changing picture of the local communities.

      I would like to end with a quotation:

      “Twenty-first century policymakers in the UK face a daunting array of challenges: an ageing society, the promises and threats for employment and wealth creation from artificial intelligence, obesity and public health, climate change and the need to sustain our natural environment, and many more. What these kinds of policy [and development intervention] challenges have in common is complexity.” Source: Magenta Book 2020

      All evolves in a complex context which needs to be acknowledged as such and accommodated into our development interventions.

      Once again, thank you John and colleagues for bringing and discussing this important topic.

      Stay well and safe.

      Jean Providence

    • Dear OUEDRAOGO and colleagues,

      I like the topic under discussion very much. Let's consider a scenario: imagine the left hand is in conflict with the right hand, or one hand is duplicating what the other is doing. Outcome: the whole body suffers. If this were to happen in development interventions, and indeed it is unfortunately happening, it would be counterproductive and self-defeating.

      Thanks, Serdar, for sharing your reflection which, when followed, has proven effective in addressing duplication in development, waste of resources, and negative effects on the lives and livelihoods of communities.

      I would like to share my two cents:

      1. Creating and working in technical or thematic working groups to review and support one another. I have found this effective. For example, I encourage development partners to plan and conduct a multi-stakeholder, multi-project evaluation in a community rather than each doing it on their own. When done in silos, evaluation requires more time and extra resources from all stakeholders, including community members. When done jointly by multiple stakeholders, it saves resources for all, adds credibility and a sense of ownership and belonging among all actors, makes it easier to advocate for the use of jointly generated evaluation results, and informs coordinated programming and improved development outcomes. This is where accountability comes in, to raise awareness not only among development actors but also among communities. Anyone involved in misaligning, and therefore misusing, limited resources should be held to account.

      2. Exchange and sharing platforms for learning and dissemination of results/evidence (a slight extension of the above point): In this media-focused era, no single development actor wants to lag behind. Each wants to be at the high table to showcase what they are doing (this seems natural and fine to me when done with integrity). When partner Y is invited to a sharing forum by partner X, it can be encouraged to do the same in the future. Some development actors wrongly think that by holding information to themselves they will gain a competitive advantage over others. There is plenty of evidence that development organizations that are open and share lessons benefit more, and eventually become a powerful source of evidence about what works, or about how to redress what does not work. They thus attract opportunities for funding and partnerships.

      3. On a personal, and possibly political, note, I have seen these conflicting and duplicative development interventions as somehow reflecting a lack of, or limited, leadership for sustainable development. Good governance can make a difference. It is common wisdom that most (if not all) development interventions are interconnected, interdependent, and mutually enriching; colleagues have clearly pointed this out. A very good lesson is the COVID-19 pandemic: it has proved difficult for social, educational, economic, agricultural, and other interventions to strive for results when health is under threat. I guess no single development sector or actor can navigate the current development landscape alone and expect sustainable results. The same applies within a single sector.

      In addition to the development forums and guidelines mentioned by colleagues, I believe community participation in the design and monitoring of projects through accountability practices can contribute to eventually addressing this serious challenge.

      Stay safe and well in these crazy times!

      With kind regards to all,

      Jean

      The African Capacity Building Foundation

    • Hello Judith,

      Thanks for sharing this topic to get reflections and insights from other countries. Below are my two cents (being Rwandan but practising M&E elsewhere):

      I usually use a car dashboard to illustrate the twinned nature of Monitoring and Evaluation. A functional Monitoring system feeds into the Evaluation system; a functional Evaluation system in turn feeds back into Monitoring processes.

      A functional dashboard is the control panel for tracking the progress and condition of the car. The driver needs to keep tracking or checking progress to reach the destination. Imagine driving a car without a dashboard! Strange, risky, accident-prone, etc.

      The driver uses the same dashboard to evaluate and decide when to see the mechanic, or when to stop by a petrol station to refuel or top up the tyre pressure. Sometimes, the driver (i.e. the project manager) can take corrective measures themselves, drawing on their experience and knowledge of the car system (i.e. the project). This is equivalent to using monitoring data or process evaluation to fix issues. Using monitoring results, the driver (or project manager) may learn a lesson here and there to keep the car (or the project) on the right track.

      But in the end, there are technical issues beyond the driver's (or the project/program manager's) control. In such a case, the driver needs to service the car or seek technical expertise for informed guidance. When it is beyond the driver's control, we are talking about change (at the outcome or impact level). At this level, we need fresher eyes to add a new perspective to the way we have been seeing the condition of our car. We need evaluation to be on the safer side: more objective, closer to the desired outcome.

      A Monitoring system is about low-hanging fruit, which is why most organizations and countries alike find it easy to set up. Evaluation is technically demanding, and it is the ultimate goal of proper monitoring. We monitor to ensure we achieve process results (under our control); we evaluate to prove or disprove that we reached the expected change-level results (beyond our control). Monitoring is limited to "vanity indicators" (a term from a colleague on social media) such as numbers trained, kilograms distributed, etc. Without an Evaluation system, what works or does not work cannot be logically and objectively identified with evidence, and true change cannot be rewarded by scaling up or replicating successful projects. Without an Evaluation system, we fail or succeed without knowing it, and we cannot take pride in it.

      Having a Monitoring system is like having institutional capacity, or meeting institutional requirements, so that we are able to report to xyz. But having an Evaluation system is like having human capacity: the expertise required to navigate a complex development landscape so that what works is kept. What does this mean for M&E in practice? Let me save that for another day.

      Looking forward to more reflections from other countries.

      With kind regards,

      Jean Providence