Janvier Mwitirehe

Evaluation Researcher
Horizon of Excellence Ltd
Rwanda

More about me

Having served as a Community Development Facilitator, Community Supervisor, Auditor, Project Director of Administration and Finance, and Financial Analyst, Mr Janvier Mwitirehe has made considerable efforts to become an expert in monitoring and evaluation. He is the founder of Horizon of Excellence Ltd, a research, consulting and training company created in 2011. Through this work, Janvier is recognized for his role in M&E capacity development in Rwanda.
Mr Mwitirehe is an active member of the Rwanda Monitoring and Evaluation Organization (RMEO) and other M&E platforms. In addition, Janvier is a part-time researcher in statistics for policy development and a part-time trainer in planning, monitoring and evaluation (PM&E), approved by the Rwanda Management Institute (RMI) since January 2020. In this capacity, he developed a Strategic PM&E training package that was endorsed by RMI management.
His main areas of intervention include the following:
- Use of statistics and big data to build an effective M&E system: to address the issue of statistical illiteracy among M&E professionals, Janvier has been a strong proponent of the use of statistics in M&E. He has also developed a prototype of an effective M&E system, which informed his doctoral studies on “Leveraging Big Data to Build an Effective Monitoring, Evaluation and Learning (MEL) System of a Water Utility Company”.
- Data mining, analysis and visualization: Janvier has mastered a range of data mining techniques and is proficient in MS Office, Power BI, SurveyCTO, R, Python, Stata, EViews, Epi Info and SPSS, with statistical modeling and analytical skills. Using advanced Excel, GIS, Power BI and Tableau, he can develop interactive performance dashboards.
- Designing experimental and quasi-experimental impact evaluations: Janvier created an Evaluation Research Team skilled in randomized controlled trials, propensity score matching, difference-in-differences, instrumental variables and regression discontinuity designs.
- Doing research in Performance, Impact and Efficiency (PIE) evaluation: Janvier has so far published two papers, “Robust Impact Evaluation Experiments in Rwanda” and “Évaluation de l’Efficacité et Efficience du Système de Santé Rwandais”. In addition, Janvier is completing a third paper, “Integration of Big Data in Water Monitoring, Evaluation and Learning System: Case Study of Water and Sanitation Corporation (WASAC) Ltd in Rwanda”.
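As a small illustration of the quasi-experimental toolbox above, the difference-in-differences estimator compares the before/after change in a treated group with the change in a comparison group. A minimal sketch, using made-up numbers rather than data from any actual evaluation:

```python
# Difference-in-differences (DiD): the treatment effect is the change in the
# treated group's outcome minus the change in the comparison group's outcome.
# All numbers below are illustrative, not from a real evaluation.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD = (treated post - treated pre) - (control post - control pre)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical average outcomes (e.g., a household income index)
effect = did_estimate(treat_pre=50.0, treat_post=62.0,
                      ctrl_pre=48.0, ctrl_post=54.0)
print(effect)  # 6.0
```

In practice the comparison is run as a regression on micro data, with covariates and appropriate standard errors; the sketch only shows the core subtraction behind the design.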

    • Dear John, Thank you for your feedback and very good follow-up questions. 

      The answer to both questions is YES. In addition, I would like to point out that, from USAID's perspective, interventions (activities or projects) are designed within both local assumptions and evidence, and that assumptions are one of the components of the Theory of Change (TOC). For those who may need the background, let me first describe the USAID development context and the CLA framework, and then explain how a TOC may be reviewed, with examples:

      USAID Development Context

      For USAID to operate in any country, it first defines a strategic plan (called a Country Development Cooperation Strategy, or CDCS), aligning it with partner country priorities. The CDCS lays its foundation on country risk analysis and consultations with different government and partner institutions. The CDCS articulates a country development goal, with sector Development Objectives (DOs) supporting that goal and Intermediate Results falling under the DOs. Every DO is described along with its respective risks and assumptions. Note that the Performance Management Plan (PMP) is designed along with the strategic plan.

      In line with the strategy, one or more projects are designed to contribute to the achievement of the Development Objectives. A "project" refers to a set of complementary implementing mechanisms, or "activities," over an established timeline and budget, intended to achieve a discrete development result that is often aligned with an Intermediate Result in the CDCS. Each project has an associated project MEL plan.

      Finally, there is the activity, which is the level of implementation. USAID implements its strategies and projects through activity design and implementation. An activity carries out an intervention or set of interventions, typically through an implementing mechanism such as a contract, assistance program, or partnership. Each activity has an associated activity MEL plan.

      Put simply, any activity sits within a project, which in turn responds to a development goal. Activity MEL plans feed into the project MEL plan, which in turn feeds into the PMP.
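      The strategy-project-activity nesting described here can be sketched as a simple data structure (the field names are illustrative, not an official USAID data model):

```python
# Illustrative sketch of the USAID results hierarchy: a strategy (CDCS)
# contains projects, which contain activities; each level has a MEL plan.
# Field names are made up for illustration.

strategy = {
    "level": "CDCS",
    "mel": "PMP",
    "projects": [
        {
            "level": "Project",
            "mel": "Project MEL Plan",
            "activities": [
                {"level": "Activity", "mel": "Activity MEL Plan"},
            ],
        },
    ],
}

def mel_chain(strategy):
    """List MEL plans in reporting order: activities feed projects feed the PMP."""
    chain = []
    for project in strategy["projects"]:
        for activity in project["activities"]:
            chain.append(activity["mel"])
        chain.append(project["mel"])
    chain.append(strategy["mel"])
    return chain

print(mel_chain(strategy))  # ['Activity MEL Plan', 'Project MEL Plan', 'PMP']
```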

      CLA Framework

      To cope with an evolving environment, USAID integrates Collaborating, Learning, and Adapting (CLA) into its Program Cycle to ensure that interventions are coordinated with others, grounded in a strong evidence base, and iteratively adapted to remain relevant throughout implementation. The CLA framework consists of managing adaptively through continuous learning.

      USAID definitions:

      • Collaborating is the process of strategically identifying key internal and external stakeholders and deciding how best to work with them in order to add value, fill gaps, and avoid duplication while working towards a shared goal.
      • Learning is the intentional process of generating, capturing, sharing, and analyzing information and knowledge from a wide range of sources to inform decisions and adapt programs to be more effective.
      • Adapting is an intentional approach to reflecting on learning, and making decisions and iterative adjustments in response to new information and changes in context.

      Pausing and reflecting on a regular basis helps identify what is working and what needs adapting, and it allows USAID to consider the impact of changes in the operating environment or context. Examples of pause-and-reflect opportunities include portfolio reviews, learning events, team meetings, communities of practice, learning networks, etc.

      TOC review

      In the USAID context, the underlying logic of a project or activity is captured in the TOC. A strong TOC is a narrative that summarizes the context, identifies points of leverage for change, specifies the needed outcomes, describes the interventions expected to realize those outcomes, and states the associated assumptions.

      The process of developing the TOC should be participatory, involving broad engagement with local stakeholders and a series of dynamic critical thinking exercises to examine the body of evidence, draw out different viewpoints, and reach consensus on the best possible approach given the available information.

      Therefore, TOC design is always based on local evidence and assumptions, and given this rigorous process, the TOC is less likely to be poorly understood. Should this nevertheless happen, or should assumptions change, regular pause-and-reflect actions will help surface the gap, and adaptive management should follow. For instance, for food security activities, partners are required to develop a Theory of Change (TOC) for their activities and to review it "whenever there is new evidence, or when there are changes in the context that affect assumptions or hypothesized pathways of change" and, at a minimum, annually.

      There are examples where a TOC has been reviewed in practice.

       

      In conclusion, USAID has different options for adapting its interventions: at the activity, project or strategy level. Through collaboration, USAID may also co-create; for this to be possible, its learning function should be strengthened.

      Thanks.

      Janvier M

       

    • Dear all,

      The reality is that we operate in a fast-evolving environment that needs to be considered when implementing our programs or projects. As colleagues pointed out, the constraints brought about by COVID-19, for example, should push us to adapt in order to successfully achieve the intended results.

      This can be done, for example, by introducing what USAID calls the "collaborating, learning and adapting (CLA) framework" [CLA Tool Kit Landing | USAID Learning Lab], which involves a set of practices integrated into the program cycle to ensure that programs are coordinated with others, grounded in a strong evidence base, and iteratively adapted to remain relevant throughout implementation. With this, the critical assumptions central to a TOC must be periodically tested, which is a central feature of assumption-based planning, and if they are no longer valid, adaptive management steps are employed in response.

      It is true that this adds implementation complexity, which in turn requires new and still-evolving complexity-responsive evaluation methods. Under such conditions, there is a need to integrate data science* into MEL activities.

       

      Note: Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. It is related to data mining, machine learning and big data (https://en.wikipedia.org/wiki/Data_science).

       

      Best,

      Janvier

    • Dear All,

      It is true that Rwanda has been doing well, considering the efforts put into managing for accountability across all sectors of activity. Referring to the car dashboard illustration, the effectiveness of the evaluation system should be viewed through a holistic perspective on the M&E system. In fact, monitoring and evaluation both aim at the same thing: ensuring the effective and efficient attainment of goals, objectives and mission.

      Evaluation can complement, confirm or contradict the results of monitoring. Being more systematic and rigorous, evaluation can provide more credible explanations and clarity [in the eyes of stakeholders] that, for various reasons, cannot be provided by monitoring itself. Therefore, if the evaluation system fails as a component of the M&E system, the whole system has already failed.

      A well-crafted (effective) M&E system should:

      In the absence of a national M&E system, the country misuses financial resources and misses learning opportunities. A national evaluation system would limit the number of evaluations to those that are really worthwhile. It would also put in place a national strategy for learning from evaluation reports to inform future program designs and policies.

      Moreover, my research has identified four challenges that current M&E systems face because they have not yet taken advantage of big data, at a time when the world has entered a digital era characterized by information growing at exponential rates: inaccurate performance measures; inability to make reliable predictions to inform future planning and new program designs; delayed implementation of corrective actions recommended by evaluations; and the inability of some users to understand and use monitoring and evaluation reports. [This may seem strange to many, but I can provide more insights if need be.]

      In this regard, I would also like to highlight that Rwanda developed and adopted a National Data Revolution Policy in 2017, which provides that big data analytics should be used in monitoring development progress, insightful research activities, etc. The key take-away is that boosting both the monitoring and the evaluation functions requires national commitment to install well-functioning systems and to build capacities; otherwise, the issues of national accountability and ownership will persist.

      Finally, there is a need to take advantage of the commitment and willingness of the Government of Rwanda and the M&E community to revisit the completeness of the national MEL guidelines [developed last year, and for which we provided inputs] and to enforce them. Then we will be done.

       

      Best,

      Janvier