Thanks to Malika for bringing up this issue of rapid evaluation, which shows once again the difficulties we often encounter in the practical implementation of certain theoretical notions.
My point of view on the issue is that of an institutional actor, not a consultant. In Benin we have started to conduct rapid evaluations, a new concept to which we were exposed in South Africa through the "Twende Mbele" Programme (a cooperation programme on evaluation that we initiated with South Africa and Uganda to strengthen our national monitoring-evaluation systems through the sharing of experience and the development of collaborative tools).
It is within the framework of this programme that we developed a specific methodological guide on this type of evaluation and simultaneously undertook four rapid evaluations: three concern public interventions, and the fourth relates to the effects of COVID-19 on the informal sector.
First of all, it must be said that the major difference between rapid and traditional evaluation lies in the time and resource constraints that characterize rapid evaluation. In Benin, for example, a normal evaluation (excluding impact assessments, which can take up to 5 years) takes on average 9 months to a year, or even longer, due to many factors related to procedures (notably administration, procurement, and institutional management, especially when there are many stakeholders), and sometimes to the data collection and analysis phase, which is often lengthy. Rapid evaluation therefore calls for new processes to shorten the pre- and post-data-collection phases. With the adaptation we made in Benin, the overall time for a rapid evaluation was estimated at 12 weeks maximum. This implies a data collection period of two weeks to a month at most, to leave time for the initial activities, organising data collection, analysing results, writing a draft report, obtaining observations and finalising the report. The experience underway will be a reality check on whether this is achievable.
In terms of tools, the gap in time and scope can be filled by using rapid methods:
- Collection with groups (rather than individuals), workshops,
- Use of routine data or of other evaluations,
- Teamwork to carry out different steps at the same time (collection and processing).
Furthermore, we believe that to save time effectively, it is preferable for a rapid evaluation to be carried out by an internal team, since this is the only option that does not require a procurement procedure. It does, however, require that appropriate organizational mechanisms be put in place, for example:
- a project organisation chart for the team,
- the organisation of the weekly working time to be devoted to the evaluation mission and strict delivery deadlines,
- support measures for the team, etc.
In addition, as the evaluation team did not reside in the communities, we identified focal points in the data collection settings to facilitate contact with community members and save time.
Malika was not specific enough for us to propose solutions adapted to her context, but from the little I have gathered, here are a few ideas I can share for the moment. I could provide more factual elements once we have learned from the experience currently underway at the Benin Public Policy Evaluation Office.
Thank you and good luck to all of you.
[the original contribution is available on the French page]
My sincere thanks to you, dear Mustafa,
I hope we will have the opportunity to meet soon. In the meantime, I am sharing the attached evaluation report of the PNE, which provides more details and factual data on the current state of evaluative practice in Benin.
Cordial greetings to the whole community.
Elias A. K. SEGLA
Presidency, Republic of Benin
Bureau of Public Policy Evaluation and Government Action Analysis
08 BP 1165 Cotonou - Benin
Hello to all the community,
I firmly believe that monitoring and evaluation are two distinct functions that must complement each other harmoniously. The former feeds the latter with reliable, quality data; the latter, through qualitative analysis of the secondary data the former provides, improves their interpretation. Together, monitoring and evaluation provide evidence for informed decision-making.
It is true that for a long time the two functions were conflated under the term "Monitoring & Evaluation", through which evaluation was obscured to the sole benefit of monitoring. It would seem, then, that evaluation has been taking its revenge on monitoring in recent years through its institutionalization, under the impetus of a leadership that has not yet achieved the necessary alchemy between these two inseparable functions.
Take the case of Benin, from which I would like to share some results of the evaluation of the implementation of the National Evaluation Policy (PNE) 2012-2021, a policy that aimed to create synergy between stakeholders in order to build an effective national evaluation system through the Institutional Framework for Public Policy Evaluation (CIEPP). The National Evaluation Policy distinguishes between the two functions by stating:
“Evaluation [...] is based on data from monitoring activities as well as information obtained from other sources. As such, evaluation is complementary to the monitoring function and is specifically different from the control functions assigned to other state structures and institutions. [...] The monitoring function is carried out in the Ministries by the enforcement structures under the coordination of the Directorates of Programming and Prospective and the Monitoring-Evaluation units. These structures are responsible for working with the Office for the Evaluation of Public Policy and other evaluation structures to provide all the statistical data, information and insights needed for evaluations.”
The organizational measures provided for on pages 32 and 33 of the attached National Evaluation Policy document were therefore taken, measures that clearly reveal the ambition to bring the two functions into symbiosis by creating the synergy between stakeholders needed for the harmonious conduct of participatory evaluations.
Put to the test of the facts, the results-based management movement and budgetary reforms in Benin have instilled a culture of monitoring and evaluation in the public administration. But has this culture been reinforced by the implementation of the National Evaluation Policy, and more generally by the institutionalization of evaluation?
The state of evaluation in departments today shows that the implementation of the National Evaluation Policy has not had a significant impact on evaluative practice. The programming and funding of evaluation activities, the definition and use of monitoring-evaluation tools (inherently monitoring tools), and the commissioning of evaluations of sectoral programs or projects are the factors the field data allowed us to analyze. As a result, departments focus less on evaluation activities than on monitoring ones.
Resources allocated to evaluation activities in departments have remained relatively stable and generally do not exceed 1.5% of the ministry's total budget. This reflects the low capacity of departments to prioritize evaluation activities. Under these conditions, evaluative practices cannot be expected to develop to any great extent. This is corroborated by the execution rate of programmed monitoring and evaluation activities, which is often around 65%. Added to this is the fact that the activities carried out are predominantly monitoring-related. Evaluations of projects or programs are rare; sometimes the few evaluations carried out in certain departments are done at the behest of technical and financial partners who make them a requirement.
However, since the adoption by the Council of Ministers of the National Methodological Assessment Guide, there has been an increase in evaluation activities in departmental annual work plans, particularly around the theory of change and the programming of some evaluations. These results already point to an emerging dynamic in departments.
In addition, few departments have a regularly updated, reliable monitoring and evaluation database. The state of the technological infrastructure supporting the information system, and of the communication and dissemination of evaluation results at the departmental level, reflects the state of development of evaluative practices described above.
In the end, the state of evaluative practice at the departmental level is explained by the lack of an operational evaluation program. Without this operationalization tool, the three-year evaluation program, the National Evaluation Policy has not been able to have a substantial effect on evaluative culture in departments.
At the level of the municipalities, the situation is even more serious, because the level of development of monitoring and evaluation activities (inherently monitoring activities) is very unsatisfactory. The evaluation provided very specific data on this; I am happy to share the evaluation report if you are interested.
All this allows me to answer Mustapha's four questions clearly:
There is also a need to strengthen:
Thank you all.
[this is a translation of the original comment in French]
In Benin, we have not yet addressed the evaluation of capacity development.
Capacity development is something we are working on. We are currently in partnership with the Center for Sociology Studies and Political Science of the University of Abomey-Calavi (CESPo-UAC) to develop a Certificate in Analysis and Evaluation of Public Policies, a three-week certifying course for evaluation actors who wish to strengthen their capacities in this area. In addition, the Journées Beninoise de l’Evaluation give us an opportunity to train (in one day) government actors, NGOs and local authorities on different themes. Beyond that, we organize training seminars of three to five days for these same actors. This year, for example, we will train (over 5 days) the managers of the planning and monitoring-evaluation services of the 77 communes of Benin on developing or reconstructing the theory of change of their Communal Development Plans (their strategic planning documents). We did the same the previous year for government and NGO actors.
But we have never undertaken to evaluate these capacity developments. We will get there gradually.
Good evening dear members,
Georgette's concerns in Burkina Faso are extremely relevant. We have not yet found the answer in Benin, but we have initiated an activity that we hope will help us begin to provide relevant answers: an assessment of the gender-responsiveness of the national monitoring and evaluation system. This study was conducted in Benin, South Africa and Uganda, with the diagnosis focusing on the national evaluation policies and national monitoring and evaluation systems of the three countries. The results allowed us to adopt an improvement action plan including, among other things, the definition of national indicators by sector to evaluate the gender dimension, as well as the revision of our national evaluation policy to incorporate norms and standards for taking gender into account in all our evaluations.
This certainly does not answer Georgette's questions, but it at least shows that the concern is shared.
Specialist in Governance and Public Management
Presidency of the Republic of Benin
Bureau of Public Policy Evaluation and Government Action Analysis
Palais de la Marina
01 BP 2028 Cotonou - Bénin