Lewis N. KISUKU

Practitioner of Results-Based Monitoring and Evaluation (RBM)
LIR Consulting Cabinet
Democratic Republic of the Congo

More than 15 years of experience in humanitarian assistance, with short- and long-term missions in similar geopolitical contexts. Extensive, hands-on experience in the management and implementation of effective M&E systems to assess progress and record results achieved, in monitoring subcontractors, and in providing technical assistance on performance management, theories of change, and monitoring and evaluation plans for projects funded by USAID, GAC, ECHO and BMZ. Solid experience in managing emergency, transitional and development projects, with a focus on assistance to displaced populations, food security, conflict resolution, WASH and education. I have benefited from several international trainings through the German Academy for International Development, with a focus on results-based M&E, food security and managing for development results. I have a good knowledge of the geographic, political and socio-cultural contexts of the Central African Republic, Burundi, Rwanda and DR Congo in general, and a good command of the provinces of the resilience zone in particular (Goma, Bukavu, Tanganyika, Kasaï and Ituri).

My contributions

    • 1. How is this shift in responsibility being managed? What institutions are involved – government, universities, NGOs, private consultancy companies or individuals? 

      • First of all, in my country there is no body or institution with the mandate or responsibility to monitor and evaluate the actions of the government, NGOs or other international institutions, as there is in South Africa (Monitoring and Evaluation, Office of the Public Service Commission of South Africa) or in Colombia (SINERGIA). In the Democratic Republic of Congo, the Ministry of Planning is responsible for planning and programming the country's economic and social development policy. It has 9 directorates, including one for "studies and planning", as well as more than 7 committees and 4 cells, one of which is in charge of projects and public procurement and another of control and monitoring; but their mandate and role have never been clarified and are still not fulfilled.
      • None of our universities or training institutions offers a degree course in Monitoring and Evaluation, except for our Higher Institute for Rural Development, which offers a "Licence" (Master 2) programme with an option in "Regional Planning"; this is where a few courses related to monitoring and evaluation are taught. The conclusion is that there is a glaring lack of skills in monitoring and evaluation.
      • Most independent evaluations are carried out by individuals: a single consultant, a duo, or the head of a study centre or association. Their profiles are (i) often former employees of INGOs (who have become independent consultants, or who are still in service but act as freelancers in their spare time); (ii) sometimes university professors or assistants acting in an autonomous capacity, without the direct involvement of their respective universities. Major World Bank, ADB and some UN agency projects, like the INGOs, very often call on the first profile. Currently, even World Bank projects use this first profile, because the second is too theoretical and its reports are very long (though well written) and often submitted late, their authors being too busy with other priorities.
      • Evaluations within and/or during the course of a project are often the work of one or more staff employed by the project or NGO in question (which raises the question of the real independence of staff working under a hierarchical authority that is very often expatriate).

      2. How far is this responsibility being taken? Is it still confined to data collection and analysis, or does it include greater responsibility in the management of the evaluation?

      • For the first profile of monitoring and evaluation professionals, their responsibility is very often limited to data collection and/or to supporting international consultants. There is no real transfer of skills, nor any decision-making authority upstream or downstream of the evaluation. By way of illustration: once the report has been written by the consultant and validated by the NGO's administrative hierarchy, even if the approaches or opinions of the consultant and the project's own monitoring and evaluation staff diverge, it is very rare for the local staff to be proved right.
      • For the second profile, they very often enjoy an initial credit of confidence and are therefore frequently called upon for baseline or final evaluations. They are given free rein in the design and planning of survey tools, and very often they have full control over the drafting of the final reports. Sometimes, however, the data are under the authority of some, while the interpretation and analysis of the data and the writing of the report are left to others.

      3. How is this work financed? Are costs borne solely by governments or do donors contribute with funds previously allocated for donor staff or consultants?

      • 95% of the costs of evaluations are borne by donors. With accountability now in vogue, the existence of a monitoring system or accountability mechanism has become one of the conditions or criteria for project selection. The real challenge remains budget allocation: normally 3%-7% of the total project budget should be set aside for monitoring and evaluation (see the illustrative sketch below), but few projects respect this proportion, especially with regard to staff capacity building in monitoring and evaluation.
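      As a purely illustrative sketch (my own, in Python; the function name and the USD 1,000,000 example are hypothetical, only the 3%-7% rule of thumb comes from the text above), this is what the rule implies in concrete figures:

          def me_budget_range(total_budget: float) -> tuple[float, float]:
              """Return the (minimum, maximum) M&E allocation for a project,
              using the commonly cited 3%-7% rule of thumb."""
              return total_budget * 0.03, total_budget * 0.07

          # Hypothetical example: a USD 1,000,000 project.
          low, high = me_budget_range(1_000_000)
          print(f"M&E allocation: between {low:,.0f} and {high:,.0f} USD")
          # -> M&E allocation: between 30,000 and 70,000 USD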

      4. What are the difficulties met? 

      (i) Lack of trained personnel. The Covid-19 pandemic has accentuated this gap: with the restriction of movement, monitoring and evaluation professionals have had to fall back on either local resources or technology, and both are sorely lacking in my country.

      (ii) Lack of funding. As the organisational culture of monitoring and evaluation is still in its infancy in many organisations, staff in monitoring and evaluation departments often have to shake things up to get the attention of budget decision-makers. As a direct consequence, field visits are reduced to the strict minimum, and project-supported training is almost non-existent.

      (iii) Peer pressure leading to optimistic reports. Owing to this lack of culture and training, many colleagues perceive monitoring and evaluation staff as policemen and/or pessimists who only see the glass as half empty. Unfortunately, if the hierarchy is not open to criticism, this attitude can exacerbate interpersonal tension and even put the moral and/or physical integrity of some officers at risk (I myself was a victim in Tanganyika).

      (iv) Other problems: insecurity or instability in work areas. By way of illustration: the death of the Italian ambassador in eastern DR Congo during a field monitoring mission.

       

      [Note that this contribution was originally submitted in French]

         

    • What can we do to improve food security data?

      Discussion
      • Data Quality Management and Surveys

        The General Policy for Data Quality Management in Agriculture provides for the development of a Code of Good Practice in Surveys to harmonize approaches during the design and production phases. Here, the term "survey" refers to any activity aimed at collecting or acquiring data for statistical purposes. This includes censuses, sample surveys, and the production of statistics from administrative records. The creation and maintenance of an administrative file for statistical purposes does not fall into this category; only the exploitation of such a file for statistical purposes belongs to the field of surveys.

        The collection of good data collection and analysis practices is one of the internal mechanisms contributing to the quality of processes, itself a prerequisite for the quality of products and services. Since 2016, GIZ has distributed practical guides to staff while assigning responsibilities: on the one hand, it initiated an institutional mechanism for data collection; on the other, an audit (counter-verification) of the quality of the data collected by the team responsible for producing the data. However, GIZ's desire to reach a broader audience of customers, users and partners led to the development of the Policy Statement on Quality in Surveys, which consists of the less specialized sections of the Code of Good Practice in Surveys.

         

        The definition of quality

        Like many statistical organizations, the Institute defines the quality of a product by all the characteristics that influence its capacity to satisfy a given need and to allow a planned use. GIZ uses six dimensions as quality criteria: relevance, reliability and objectivity, comparability, timeliness, intelligibility and accessibility. Table 1 summarizes the definition and general quality assurance guidelines for each of these dimensions. During the conduct of a survey project, the dimensions are targets to be met in order to ensure the quality of the statistical information resulting from the survey; after its completion, they are the criteria against which the quality of the statistical information produced is evaluated. Throughout implementation, decisions about procedures and their execution must take all the dimensions of quality into account. The quality assurance of a survey product therefore depends on the work done by the implementation team and all the staff involved, at every stage of the survey.
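
        As a minimal illustration (my own sketch in Python, not a GIZ tool), the six dimensions named above can be treated as a review checklist applied to any survey product. The dimension names come from the text; the survey name and ratings are hypothetical.

            # The six quality dimensions named in the text above.
            QUALITY_DIMENSIONS = [
                "relevance",
                "reliability and objectivity",
                "comparability",
                "timeliness",
                "intelligibility",
                "accessibility",
            ]

            def review_survey(name: str, ratings: dict[str, bool]) -> None:
                """Print a met/not-met checklist covering every dimension."""
                print(f"Quality review: {name}")
                for dim in QUALITY_DIMENSIONS:
                    status = "met" if ratings.get(dim, False) else "NOT met"
                    print(f"  - {dim}: {status}")

            # Hypothetical example: a food security survey whose results
            # were published late, so timeliness is not met.
            review_survey(
                "Annual food security survey",
                {
                    "relevance": True,
                    "reliability and objectivity": True,
                    "comparability": True,
                    "timeliness": False,
                    "intelligibility": True,
                    "accessibility": True,
                },
            )

        Used this way, the same list serves both purposes described above: as targets during the conduct of the survey, and as evaluation criteria once it is completed.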