Jean Providence Nzabonimpa

Regional Evaluation Officer
United Nations World Food Programme
South Africa

Jean Providence Nzabonimpa (PhD) is a social, behavioral, educational, and public health researcher and evaluator, and a development and humanitarian practitioner with 16 years of experience in project design, implementation, performance monitoring, outcome and impact evaluation, social marketing, and applied research. Using behavior change theories and communication to achieve project outcomes and impact in public health, education, and other social development sectors, and with a current interest in the human face of technology, he brings rigorous methodological approaches to development interventions, generating and using evidence for decision-making and impact.

With a specialization in mixed methods research, he innovates methodologically in impact, behavior change, and user experience research and evaluation. With more than 30 research and evaluation studies, coupled with a strong background in education, language use, public health, and capacity development, he uses advanced social science evaluative, analytical, communicative, and programmatic knowledge and skills to generate evidence and insights that impact the lives of poor and vulnerable people. Since 2009, he has been an avid user and advocate of ICT in program monitoring and evaluation for real-time access to data and evidence. He is an expert user and trainer in data analysis, using SPSS (expert level) and, to a lesser extent, STATA for quantitative data analysis, and ATLAS.ti and MAXQDA for qualitative data analysis. He is a Certified ScrumMaster, Core Humanitarian certified, an ATLAS.ti certified professional trainer, and a certified peer reviewer.

My contributions

    • Colleagues, massive thanks for going the extra mile to provide additional and new perspectives to this discussion. These include sequential, concurrent, and parallel mixed methods (MM) designs. Some analyses are performed separately, while others bring data analysis from one method strand to corroborate trends or results emanating from the other method strand.

      One of the latest contributions includes these key points:

      “The evaluators will […] perform data triangulation by cross-referencing the survey data with the findings from the qualitative research and the document review or any other method used. […] Sometimes a finding from the qualitative research will be accompanied by the quantitative data from the survey” Jackie.

       “Mixed methods is great, but the extent of using mixed methods and sequencing should be based on program and evaluation circumstances, otherwise instead of answering evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once i.e., open ended surveys, KIIs, community reflection meetings, observations, document reviews etc. in addition to quantitative methods may not be that smart.” Gordon.

      Lal: Thanks for sharing the two projects: "a billion-dollar bridge to link up an island with the mainland in an affluent Northern European country while the second is a multi-million-dollar highway in an African country". This is an excellent example of what can go wrong in the poor design of projects and the inappropriate evaluation of such projects. Are there any written reports/references to share? This seems to be a good source of insights to enrich our discussions and, importantly, our professional evaluation practice using mixed methods. I very much like the point you made: "the reductive approach made quality and quantity work against project goals". Linking to the projects used for illustration, you summarized it very well: "the emergency food supplies to a disaster area cannot reasonably meet the same standards of quality or quantity, and they would have to be adjusted to make the supply adequate under those circumstances".

      Olivier: you rightly argue that sequential exploratory designs are appropriate: "you cannot measure what you don't conceive well, so a qualitative exploration is always necessary before any measurement attempt". But you also acknowledge that "there is also room for qualitative approaches after a quantification effort". You are right about that: in some cases, a survey may yield results that appear odd, and one way to make sense of them is to "zoom in" on that particular issue through a few additional qualitative interviews.

      Gordon: Mea culpa, I should have specified that the discussion is about the evaluation of a programme, project, or any other humanitarian or development intervention. You rightly emphasize the complexity that underlies programmes: “programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know)”. One argument you made seems contradictory: “when something is too complicated or complex, simplicity is the best strategy!” Some more details would add context and help readers make sense of the point you raised. Equally, who, between the evaluator and the programme team, should decide the methods to be used?

      While I would like to request all colleagues to read all the contributions, Jackie’s submission stands out: it is full of practical tips and tricks used in mixed methods.

      Jackie: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how “all evaluation questions can be answered using a mixed method approach”? In your view, the data collection tools are developed in parallel, or concurrently, and you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same question. On sampling, would you clarify whether you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why, and how? Would there be any problem if purposive sampling were applied to a quantitative evaluation?

      Except for a few examples, most of the contributions so far are more theoretical and hypothetical than practical, lived experiences. I think what can help all of us as evaluators is practical tips and tricks, including evaluation reports or publications that used mixed methods (MM). Please go ahead and share practical examples and references on:

      • MM evaluation design stage
      • MM data collection instruments
      • MM sampling
      • MM data collection
      • MM data analysis
      • MM results interpretation, reporting, and dissemination

       

      Looking forward to more contributions.

    • This discussion is interesting and intriguing, especially given the multidisciplinary backgrounds of the contributors. I will abbreviate Mixed Methods as MM in this discussion. Without pre-empting further ideas and fresh perspectives colleagues are willing to share, allow me to request further clarification for our shared learning. This is not limited to colleagues whose names are mentioned; it’s an open discussion. Feel free to share the link on other platforms or networks as well.

      Consider these viewpoints before delving into further interrogations. Keep reading; the icing on the cake comes after:

      “Successful cases [of MM in evaluation] occur when the integration process is well-defined or when methods are applied sequentially (e.g., conducting focus groups to define survey questions or selecting cases based on a survey for in-depth interviews).” Cristian Maneiro.

      “five purposes for mixed-method evaluations: triangulation, complementarity, development, initiation, and expansion (also summarized in this paper)” shared by Anne Kepple. I encourage all MM practitioners and fans to read this article.

      “A good plumber uses several tools, when and as necessary, and doesn't ask himself what type of plumbing requires only one tool... Likewise, a good evaluator needs to know how to use a toolbox, with several tools in it, not just a wrench” Olivier Cossée.

      “The evaluation also analyzed and explained the quantitative results with information from qualitative methods, which not only allowed characterizing the intervention, educational policy and funding, but also led to more relevant policy recommendations” Maria Pia Cebrian.

      Further queries:

      • Cristian: Thanks for sharing your experience and preference for the exploratory sequential design, where qualitative methods precede quantitative methods. A follow-up question: what if an MM evaluation begins with a survey and ends with qualitative interviews or focus group discussions – an explanatory sequential design? By the way, has anyone used or seen in action any explanatory sequential design? Are there such MM evaluation designs? Let's keep retrieving insights from experiences and the various resources written on MM evaluation design, and share them.
      • Cristian has also raised an excellent point worth taking into account. Some publications show that all non-numerical data are qualitative, e.g., pictures, maps, videos, etc. What about those data types? Has anyone got experience mixing numerical/quantitative data with pictorial, spatial, and video data? If yes, please share. Also feel free to contribute insights on how you deal with such non-numerical data.
      • Emilia, you made my day (actually my flight)! I was airborne while reading colleagues’ contributions. Wow, thanks Emilia. You raised a point which reminded me that when 1+1=2 in MM, it's a loss. In MM, 1+1 should equal 3; if not, it's a loss, reductionistic. By the way, it's a double loss. On the one hand, see this article, which cogently argues that 1+1 should be 3 in mixed methods. The second loss is that the author of the article, Michael Fetters, passed away a few weeks ago, and like-minded scholars (Creswell, J. W., & Johnson, R. B., 2023) paid tribute to him. May his soul rest in eternal peace!
      • Emilia, I enjoyed reading your contribution. In the existing literature (remind me to share it at some point), there is mention of MM when qualitative and quantitative methods are mixed. In other instances, where methods of the same paradigm (say, qualitative) are used, this has been termed a multimethod or multiple-methods approach.
      • “And then - going a bit beyond that: couldn’t we consider the mix of ‘colonizers’ with ‘indigenous’ approaches also ‘mixed methods’?” Aha ... in the upcoming African Evaluation Journal, there is a special issue on Addressing Knowledge Asymmetries. Possibly this point would be a great topic for further mixed methodological interrogation. In practice, do we have examples whereby western methodologies (e.g., surveys) are mixed with oral or pictorial methods from the global south? I am on standby to hear more from you and other colleagues.
      • Lal, you are spot on. Would you exemplify how thinking or operating in silos applies when conducting MM evaluation?
      • Margrieth, well put. Our academic background determines to a large extent what we embrace in our professional practice. How do we bridge this gap? In mixed methods, it is encouraged to have 'researcher triangulation'. If I am a number cruncher, I should ideally work with a qualitative researcher, an anthropologist for example, to complement each other, bringing together our strengths to offset gaps in our academic training or professional practice. How would you suggest such a researcher or evaluator triangulation be implemented? Anyone with practical examples? Please share.
      • Pia: Impressive, the variety of data sources, collection tools, and analyses performed! Thanks for sharing the published article. The article is a good example of how selective or biased researchers or evaluators might be, following their academic background, as mentioned in Margrieth's contribution. This article is fully quantitative, with no mention of qualitative methods (unless I missed it in my quick scan of the article). Could you check the original publication in Spanish to help us learn more about how data from interviews and focus group discussions were used alongside this survey? Thanks in advance.
      • Margrieth made it clear that the choice of quantitative or qualitative methods, or both, is in principle determined by our professional background. The tendency of evaluators coming from professions such as economics, engineering, or similar is to use quantitative methods, while evaluators from the arts or humanities use qualitative methods. I couldn't agree more. What about evaluators whose training prepared them to be number crunchers but whose professional practice re-oriented them towards more qualitative methods, and vice versa? I am a living example, but not stuck in any school of thought.
      • Olivier: This describes an exploratory sequential design very well. What about scenarios whereby an evaluation begins with quantitative methods and, when the results are out, there are some counter-intuitive findings to understand and make sense of? Are there cases you might have seen in the professional conduct of evaluation where quantitative methods PRECEDED qualitative methods (i.e., an explanatory sequential design)? Spot on, Olivier! There is no lab for social beings as there is in the natural sciences.

      Happy learning together!

    • Dear evaluators and colleagues,

      Thanks so much to those of you who took active part in this discussion, replying to my follow up questions and comments, and to all the others who read the contributions for learning!

      The discussion was rich and insightful, and drew attention to the rationale for applying MM as well as to some persisting challenges and gaps in the practical application of Mixed Methods.

      Bottom line, Mixed Methods are surely here to stay. However, on the one hand, there are innovative and revolutionary tools, including Big Data, artificial intelligence, and machine learning, which have started to dictate how to gather, process, and display data. On the other hand, there are methodological gaps to fill. As evaluators, we have a role to play in ensuring MM is not merely mentioned in TORs and applied superficially, but used appropriately in both theory and practice.

      I am going to share a summary of the discussion with some personal methodological reflections soon, so please stay tuned!

      JP

    • Great topic, great discussions!

      Evaluation and communication are two sides of the same coin, trying to achieve similar goals (disseminating evaluation evidence for use in decision-making). That said, they require different skillsets. That's not a big deal.

      On to the topic. Assume we as evaluators are all teachers. We prepare lessons, ready to teach, I mean to facilitate the learning process. Shall we fold our arms after the preparation and finalization of the lesson? Not at all. I am not alone, I guess, in believing that the teacher will follow through even after teaching, facilitating a learning process. Building off the previous lesson, the teacher will usually recap before starting a new one. Interestingly, it seems our evaluations should inform subsequent evaluations as well!

      The teacher scenario also applies here, at least in my school of evaluation practice. The essence of evaluating is not about producing reports, or reporting results. Then what? For whom, and why, are such evaluation results reported? Not for filing, not for ticking the box. It would be heart-breaking if we as teachers, after investing time and resources to prepare class notes and guidance, found that our students never use them. Would anyone be motivated to prepare notes and guidance for the next lesson? Very few would. As passionate and professional as we are (or should be) as evaluators, we are change agents. By our ethical and professional standards, we should never rest satisfied with reporting evaluation results without following through to ensure the evidence is used as much as possible. Utility is, after all, one of the core evaluation principles.

      To the good questions you raised, my two cents:

      • Each evaluation has (or should have) a plan for dissemination and communication (or a campaign plan for the use of evaluation evidence). This needs to be part of the overall evaluation budget. Evaluators need to keep advocating for the dissemination of evaluation results in different formats and for different types of audiences, even after evaluations are completed, sometimes a year or more later.

      • If there are people who understand evaluation results well, the evaluator is one of them. Alongside other stakeholders who participated in the evaluation process, he or she should be part of the communication process to avoid any misconstruing of messages and meaning by external communicators. Communicators (some organizations have specific roles, such as communication for development specialists) are experts who know the tricks of the trade. They are our allies.

      Happy reading of the contributions from colleagues.

      Jean Providence

    • Dear Elias and Colleagues,

      Thanks for sharing and discussing this important topic. The more we discuss, the more we understand how to address the issues affecting evaluation practice. To begin with, Monitoring and Evaluation: are they different, or are they two sides of the same coin? A perfect combination in theory, but largely mismatched in practice, as Elias posited.

      With an anecdote and some thought-provoking, or controversial, views (I hope I get more than one!), I will look at Monitoring and Evaluation, each in its own right, and end with my personal reflection. First, I encourage colleagues to (keep) read(ing) Ten Steps to a Results-Based Monitoring and Evaluation System by Kusek and Rist. Though published in 2004, it still sheds light on the interlinkages of Monitoring and Evaluation. Note that I disagree with some propositions or definitions made in that textbook. But I will quote it:

      "Evaluation is a complement to monitoring in that when a monitoring system sends signals that the efforts are going off track (for example, that the target population is not making use of the services, that costs are accelerating, that there is real resistance to adopting an innovation, and so forth), then good evaluative information can help clarify the realities and trends noted with the monitoring system”. p. 13

      Monitoring as the low-hanging fruit. An anecdote: one decision-maker used to tell me that he prefers quick and dirty methods to rigorous, time-consuming evaluation methods. Why? Because it is easy and quick to get an idea of implemented activities and ensuing outputs. By the way, monitoring deals with all that is under the control of implementers (inputs, activities, and outputs), a discussion for another day. With Monitoring, it is usually a matter of checking the database (these days, we look at visualized dashboards) and being able to tell where a project stands in its implementation and progress towards (output/outcome?) targets.

      Evaluation as the high-hanging fruit: In a traditional sense, Evaluation tries to establish whether change has taken place, what has driven such change, and how. That’s the realm of causality, correlation, association, etc. between what is done and what is eventually achieved. Evaluation is time-consuming and its results take time. Few decision-makers have time to wait. In no time, their term of office comes to an end, or there is a government reshuffle. Some may no longer be in office by the time Evaluation results are out. Are we still wondering why decision-makers prefer Monitoring evidence?

      My understanding of and experience in M&E, as elaborated in Kusek and Rist (2004), is that well-designed and well-conducted Monitoring feeds into Evaluation, and Evaluation findings show (while the project is still ongoing) what to monitor closely. Good Monitoring gathers and provides, for example, time-series data useful for evaluation. Evaluation also informs Monitoring. By the way, I am personally less keen on end-of-project evaluations. It seems antithetical for an evaluation practitioner, right? It is because the target communities the project is designed for do not benefit from such endline evaluations. Of course, when it is a pilot project, it may be scaled up and the initial target groups reached with an improved project, thanks to lessons drawn from Evaluation. Believe me, I do conduct endline evaluations, but they are less useful than developmental, formative, and real-time/rapid evaluations. A topic for another day!

      Both Monitoring and Evaluation make one single, complementary, and cross-fertilizing system. Some colleagues in independent evaluation offices or departments may not like the interlinkage and interdependence of Monitoring and Evaluation, simply because they are labelled 'independent'. This reminds me of the other discussion about independence, neutrality, and impartiality in evaluation. Oops, I did not take part in that discussion. I agree that self- and internal evaluation should not be discredited, as Elias argued in his blog. Evaluation insiders understand and know the context that external, independent evaluators sometimes struggle to grasp in order to make sense of evaluation results. Let’s park this for now.

      Last year, there was an online forum (link to the draft report), bringing together youth from various Sahel countries. Through that forum, youthful dreams, aspirations, challenges, opportunities, etc. were discussed and shared. A huge amount of data was eventually collected through the digital platform. From those youth conversations (an activity reaching hundreds of youth), not only was there proof of change in the narrative, but also of what drives or inhibits change and youth aspirations. A perfect match of monitoring (reaching x number of young people) and evaluation (factors driving or inhibiting desired change). When there are data from such youth conversations, it is less useful to conduct an evaluation to assess factors associated with change in the Sahel. Just analyze those data; of course, develop an analytical guide to help in that process. Using monitoring data is of great help to evaluation. There is evidence that senior decision-makers are very supportive of insights from the analysis done on the youth discussions. Imagine waiting till the time is ripe for a proper evaluation! For the curious, a minimal sketch of what such an analytical guide might look like follows; then back to the subject.
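      Here is that minimal, hypothetical sketch in Python: it codes forum messages against a simple analytical guide (theme to keywords) and counts how often each theme appears. The themes, keywords, function names, and sample messages are all illustrative assumptions of mine, not taken from the actual Sahel forum data or any particular platform.

      # Hypothetical sketch only: coding youth forum messages against a simple
      # analytical guide (theme -> keywords) and counting theme frequencies.
      # Themes, keywords, and sample messages are illustrative, not real data.
      from collections import Counter

      ANALYTICAL_GUIDE = {
          "aspirations": ["dream", "future", "hope", "goal"],
          "drivers_of_change": ["training", "mentor", "support", "opportunity"],
          "inhibitors_of_change": ["unemployment", "insecurity", "drought", "cost"],
      }

      def code_message(message):
          """Return every theme whose keywords appear in the message."""
          text = message.lower()
          return {theme for theme, keywords in ANALYTICAL_GUIDE.items()
                  if any(word in text for word in keywords)}

      def theme_frequencies(messages):
          """Count how often each theme appears across all messages."""
          counts = Counter()
          for message in messages:
              counts.update(code_message(message))
          return counts

      if __name__ == "__main__":
          sample = [
              "My dream is to start a business, but unemployment is high.",
              "The training gave me hope and a mentor to guide me.",
          ]
          for theme, count in theme_frequencies(sample).most_common():
              print(theme, count)

      In a real analysis, the guide would of course be developed with the programme team and refined against the data; the point is simply that monitoring data plus a transparent coding guide can already yield evaluative insight.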

      All in all, decision-makers are keen on using Monitoring evidence as it is readily available. Monitoring seems straightforward and user-friendly. As long as Evaluation is considered an ivory tower, a sort of rocket science, it will be less useful for decision-makers. The evaluation jargon itself, isn't it problematic, an obstacle to using evaluative evidence? My assumptions: decision-makers like using Monitoring evidence because they make decisions like fire-fighters, not minding quick and dirty but practical methods. They use evaluative evidence less because they don't have time to wait.

      A call to innovative actions: real-time, rapid but rigorous evaluations, if we really want evaluative evidence to be used by decision-makers.

      Thank you all. Let's keep learning and finding best ways to bring M&E evidence where it is needed the most: decision-making at all levels.

      Jean Providence Nzabonimpa

       

    • Dear Gordon and colleagues,

      Before sharing my two cents, let's consider a lived experience. With a team of four evaluators, I participated in a five-year project evaluation. As evaluators, a couple of colleagues co-designed the evaluation and collected data. We joined forces during the analysis and reporting and ended up with a big report of about 180 pages. I have never seen fans of big reports, and I am not a fan either. To be honest, very few people would spend time reading huge evaluation reports. If an evaluator is unlikely to read (once finalized) a report they have produced, who else will ever read it? On to recommendations. At the reporting stage, we highlighted changes (or the lack thereof); we pointed out counterintuitive results and insights on indicators or variables of interest. We left it to the project implementation team, who brought on board a policy-maker, to jointly draft actionable recommendations. As you can see, we intentionally eschewed the established practice of evaluators writing the recommendations all the time.

      Our role was to make sure all important findings or results were translated into actionable recommendations. We supported the project implementation team to remain as close to the evaluation evidence and insights as possible. How would you scale up a project that has produced this change (for positive findings)? What would you do differently to attain the desired change on this type of indicator (areas for improvement)? Mind you, I don't use the word 'negative' alongside findings. How would you go about getting the desired results here and there? Such questions helped us get to actionable recommendations.

      We ensured the logical flow and empirical linkage of each recommendation with the evaluation results. In the end, the implementation team owned the recommendations while the evaluation team owned the empirical results. Evaluation results informed each recommendation. Overall, it was a jointly produced evaluation report. This is something we did for this evaluation, and it has been effective in other evaluations. With the participation of key stakeholders, evaluation results are relatively easy to sell to decision-makers.

      In my other life as an evaluator, such recommendations are packaged into an Action Tracker (in MS Excel or any other format) to monitor over time how they are implemented. This is the practice in institutions that are keen on accountability and learning, or that hold their staff and projects accountable for falling short of these standards. For each recommendation, there is a timeline, a person or department responsible, a status (implemented, not implemented, or ongoing), and a way forward (part of the continuous learning). Note that one of the recommendations is about sharing and using the evaluation results, which requires extra work after the evaluation report is done: simplify the report into audience-friendly language and formats such as a two-page policy brief, an evaluation brief, or an evaluation brochure based on specific themes that emerged from the evaluation. I have found such a practice very helpful for a couple of reasons (a minimal sketch of such a tracker follows the list below):

      (i) evaluators are not the sole players, there are other stakeholders with better mastery of the programmatic realities

      (ii) the implementation team has space to align their voices and knowledge with the evaluation results

      (iii) the end of an evaluation is not, and should not be, the end of the evaluation; hence the need for institutions to track how recommendations from the evaluation are implemented, for remedial actions, decision- or policy-making, the use of evaluation evidence in new interventions, etc.
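      As referenced above, here is a minimal, hypothetical sketch in Python of how such an Action Tracker could be structured and exported to a spreadsheet-friendly file. The field names, sample entry, and file name are illustrative assumptions, not an organizational standard; in practice this is simply a worksheet with one row per recommendation.

      # Hypothetical sketch only: an Action Tracker with one row per evaluation
      # recommendation (owner, timeline, status, way forward), exported to CSV
      # so it can be opened and updated in Excel. Field names are illustrative.
      import csv
      from dataclasses import dataclass, asdict, fields

      @dataclass
      class Recommendation:
          recommendation: str   # the actionable recommendation itself
          responsible: str      # person or department accountable
          timeline: str         # e.g. "Q3 2021"
          status: str           # "implemented", "not implemented", or "ongoing"
          way_forward: str      # next step, part of continuous learning

      def export_tracker(items, path):
          """Write the tracker to a CSV file, one row per recommendation."""
          with open(path, "w", newline="", encoding="utf-8") as handle:
              writer = csv.DictWriter(handle, fieldnames=[f.name for f in fields(Recommendation)])
              writer.writeheader()
              writer.writerows(asdict(item) for item in items)

      if __name__ == "__main__":
          tracker = [
              Recommendation(
                  recommendation="Share a two-page evaluation brief with decision-makers",
                  responsible="M&E unit",
                  timeline="Q3 2021",
                  status="ongoing",
                  way_forward="Present at the next programme review meeting",
              ),
          ]
          export_tracker(tracker, "action_tracker.csv")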

      Institutionalizing the use of evidence from evaluations takes time. Structural (top-level) changes do not happen overnight, nor do they come out of the blue; there are small but sure steps to initiate changes from the bottom. If you have top management fully supporting evidence use, it is a great opportunity not to miss. Otherwise, don't assume; work with the facts and the culture within the organization. Build small alliances and relationships for evidence use, and gradually bring on board more "influential" stakeholders. Highlight the benefits of evidence and how impactful it is for the implementing organization, decision-makers, and the communities.

      Just my two cents.

      Over to colleagues for inputs and comments to this important discussion.

      Jean Providence

    • Dear John,

      Happy 2021 to you and all our colleagues on the platform!

      Thanks for raising a critical and intriguing question worth looking into as evaluators. I am sure I cannot do justice to the important points you have raised, but at least I can share my two cents. I hope colleagues will also keep weighing in for a richer discussion.

      It is true that we assume we understand the issues affecting local communities, and we thus design interventions to meet their needs. I completely agree with you: there are important factors unknown at the design stage of development interventions. When little is empirically and theoretically known about a community, little may be done and achieved. Ideally, we need to know the unknowns to design proper interventions and better serve the target communities. Unfortunately, it does not always work like that; it is not linear, even more so in the pandemic-stricken era. We base what we do on what we know. In that process, we learn something new (i.e. evidence) which helps us redefine our design and implementation. The complexity of our times, worsened by COVID-19, has pushed all evaluators to rethink their evaluation designs and methods. It would be an understatement to point out that we all know the implications of social (I personally prefer physical) distancing. Imagine an intervention designed with a face-to-face results chain as the underlying assumption for achieving the desired change! Without rethinking its Theory of Change (ToC), the logic underlying such an intervention may not hold water. This scenario may apply and rightly proves we need a time-evolving ToC. In my view and professional practice, my answer is in the affirmative: we need a time-evolving, evidence-informed ToC. We use assumptions because we do not have evidence, right?

      Keeping the ToC intact throughout the life of a project assumes most of its underlying assumptions and logical chain are known in advance and remain constant. This is rarely the case. I believe that changing the ToC does no harm; instead, it maximizes what we learn to do better and to benefit communities. Let’s consider this scenario: assume X outputs lead to Y outcomes. Later on, one discovers that factors A and B also contribute to Y, and more significantly than the initially assumed X outputs. Not taking factors A and B into account would undermine the logic of the intervention; it undermines our ability to measure outcomes. I have not used outcome mapping in practice, but the topic under discussion is a great reminder of its usefulness. Few development practitioners would believe flawed ‘change’ pathways. Instead, I guess, many would believe the story of the failure of the ToC (by the way, I hate using the words fail and failure). Development practitioners’ lack of appetite to accommodate other factors in a time-evolving ToC when evidence is available is possibly the cause of such failure. In the end, the evaluation may come up with positive and/or negative results which are counterintuitive, or which cannot be linked to any component of the intervention. It sounds strange, I guess, simply because pieces of evidence emerged and were not incorporated into the logic of the intervention.

      • With the above, a localized project would be a project with full local colours, of different sizes and forms, all coming in to play their rightful part. This does not mean being too ambitious (too many colours can blur the vision; just kidding, but never mind, I wear glasses!). A project which discovers new evidence should incorporate it into the learning journey. Such a project is more likely to achieve its desired outcomes. In view of a time-evolving context, a project with a static ToC is more likely to become irrelevant over time.
      • In my view, a ToC needs to be dynamic or flexible in complex and time-evolving settings. Is there any development context which can remain fully static for long? I guess not. This reminds me of systems theories and complexity theories, without which we would easily fall into the trap of linearity. In my view, there is no harm in starting with assumptions, but when evidence emerges, we should be able to incorporate the new evidence into the implementation theory and program theory which, combined, may constitute the full ToC for development interventions. No longer are projects looked at in silos (I guess we have seen coherence added as a new OECD DAC evaluation criterion!). In my view, there is a need to understand the whole picture (that is, current + future knowns) to benefit the part (that is, current knowns only). But understanding the single part will be less likely to benefit the whole.
      • The challenges with an evolving ToC relate to impact evaluations, mostly Randomized Control Trials. With an evolving ToC, the RCT components or study arms will get blurred and contamination will become uncontrollable. In statistical jargon, the unexplained variance will be bigger than necessary. While there are labs for the natural and physical sciences, I believe there are few, if any, reliable social and behavioural science labs. The benefit of knowing how to navigate a complex ToC is that one may learn appropriate lessons and generate less questionable evidence on the impact of development projects.

      I guess I am one of those interested in understanding complexity and its ramifications for the ToC and development evaluation. I am eagerly learning how Big Data can and will shed light on the usually complex development picture, breaking the linearity silos. As we increasingly need a mix of methods to understand and measure the impact of, or change resulting from, development interventions, the same applies to the ToC. Linear, the ToC may eventually betray the context in which an intervention takes place. Multilinear or curvilinear and time-evolving, the ToC is more likely to represent the real but changing picture of the local communities.

      I would like to end with a quotation:

      “Twenty-first century policymakers in the UK face a daunting array of challenges: an ageing society, the promises and threats for employment and wealth creation from artificial intelligence, obesity and public health, climate change and the need to sustain our natural environment, and many more. What these kinds of policy [and development intervention] challenges have in common is complexity.” Source: Magenta Book 2020

      All evolves in a complex context which needs to be acknowledged as such and accommodated into our development interventions.

      Once again, thank you John and colleagues for bringing and discussing this important topic.

      Stay well and safe.

      Jean Providence

    • Dear OUEDRAOGO and colleagues,

      I very much like the topic under discussion. Let's consider a scenario: imagine the left hand is conflicting with the right hand, or one hand is duplicating what the other hand is doing. The outcome: the whole body would suffer. If this were to happen in development interventions, and indeed it is unfortunately happening, it would be counterproductive and self-defeating.

      Thanks, Serdar, for sharing your reflection which, when followed, has proven effective in addressing duplication in development, the waste of resources, and negative effects on the lives and livelihoods of communities.

      I would like to share my two cents:

      1. Creating and working in technical or thematic working groups for review and mutual support. I have found this effective. For example, I encourage development partners to plan and conduct a multi-stakeholder, multi-project evaluation in a community rather than each doing it on their own. When done in silos, this requires more time and extra resources from all stakeholders, including community members. When done by multiple stakeholders, it saves resources for all. It adds credibility and a sense of ownership and belonging among all actors. It becomes easier to advocate for the use of jointly generated evaluation results. It informs coordinated programming and improved development outcomes. This is where accountability comes in, to raise awareness not only among development actors but also among communities. Anyone involved in misaligning, and therefore misusing, limited resources should be held to account.

      2. Exchange and sharing platforms for learning and the dissemination of results/evidence (slightly an extension of the above point): In this media-focused era, no single development actor would like to lag behind. Each wants to be at the high table to showcase what they are doing (this seems natural and okay to me when done with integrity). By being invited to a sharing forum by partner x, partner y can be encouraged to do the same in the future. Some development actors wrongly think that by holding information to themselves, they will have a competitive advantage over others. There is plenty of evidence that development organizations that are open and share lessons benefit more, and eventually become a powerful source of evidence about what works or about how to redress what does not work. They thus attract opportunities for funding and partnerships.

      3. On a personal, possibly political, note, I have seen these conflicting and duplicative development interventions somehow reflecting a lack of, or limited, leadership for sustainable development. Good governance can make a difference. It is common wisdom that most (if not all) development interventions are interconnected, interdependent, and mutually enriching. Colleagues have clearly pointed this out. A very good lesson is this COVID-19 pandemic: it has proved difficult for social, educational, economic, agricultural, and other interventions to strive for results when health is under threat. I guess no single development sector or actor can navigate the current development landscape alone and expect sustainable results. The same applies within the same sector.

      In addition to the development forums and guidelines mentioned by colleagues, I believe community participation in the design and monitoring of projects through accountability practices can contribute to eventually addressing this serious challenge.

      Stay safe and well in these crazy times!

      With kind regards to all,

      Jean

      The African Capacity Building Foundation

    • Hello Judith,

      Thanks for sharing this topic to get reflections and insights from other countries. Below are my two cents (being Rwandan but practising M&E elsewhere):

      I usually use a car dashboard to illustrate the twinned nature of Monitoring and Evaluation. A functional Monitoring system feeds into the Evaluation system. A functional Evaluation system, in turn, feeds into Monitoring processes.

      A functional dashboard is the control panel for tracking progress and the condition of the car. The driver needs to keep tracking or checking progress to reach the destination. Imagine driving a car without a dashboard! Strange, risky, accident-prone, etc.

      The driver uses the same dashboard to evaluate and decide when to see the mechanic, or to stop by a petrol station to refuel or top up the tyre pressure. Sometimes, the driver (i.e. the project manager) can take corrective measures themselves, drawing on their experience and knowledge of the car system (i.e. the project). This is equivalent to using monitoring data or a process evaluation to fix issues. Using monitoring results, the driver (or project manager) may learn a lesson here and there to keep the car (or the project) on the right track.

      But in the end, there are technical issues beyond the driver's (or the project/program manager's) control. In such a case, the driver needs to service the car or seek technical expertise for informed guidance. When it is beyond the driver's control, we are talking about change (at the outcome or impact level). At this level, we need fresher eyes to add a new perspective to the way we have been seeing the condition of our car. We need evaluation to be on the safer side, more objective, and closer to the desired outcome.

      A Monitoring system is about the low-hanging fruit; that is why most organizations and countries alike find it easy to set up. Evaluation is technically demanding, and it is the ultimate goal of proper monitoring. We monitor to ensure we achieve process results (under our control). We evaluate to prove or disprove that we reached expected change-level results (beyond our control). Monitoring is limited to "vanity indicators" (a term from a colleague on social media) such as numbers trained, kgs distributed, etc. Without an Evaluation system, what works or does not work cannot logically and objectively be identified with evidence. True change would not be rewarded by scaling up or replicating successful projects, etc. Without an Evaluation system, we fail or succeed without knowing it, and we can't be proud of it.

      Having a Monitoring system is like having institutional capacity or meeting institutional requirements so that we are able to report to xyz. But having an Evaluation system is like having human capacity, the expertise required to navigate a complex development landscape so that what works is kept. What does this mean for M&E in practice? Let me save that for another day.

      Looking forward to more reflections from other countries.

      With kind regards,

      Jean Providence