Evaluation trends and dynamics have an influence on our work as evaluators. But what exactly are those dynamics?
Like a plumber faced with a leaky tap, we may not always have a full picture of the situation going in. In the field of evaluation, we often lack the information we need about the context in which we must carry out our work. This is true despite the fact that evaluators encourage others to use empirically grounded approaches to programme design.
To address the need to understand what is going on in the field of evaluation and the relationship between the evaluation agenda and our practice, the Center for Learning on Evaluation and Results (CLEAR-Anglophone Africa) and the Center for Research on Evaluation, Science and Technology (CREST) worked together to build a database of evaluation research and reports.
The African Evaluation Database (AfrED) covers 12 African countries over a period of 10 years and provides the opportunity to uncover patterns of evaluation practice. Ultimately, it can help us to understand which interventions are needed to influence learning and innovation in this burgeoning domain. The database is publicly available, though not all papers can be downloaded, due to copyright restrictions.
In general, the picture that has emerged is of a more vibrant and active field than we had anticipated. We found over 9,000 research articles based on evaluation results in the region, which challenged previous anecdotal impressions of a general lack of peer-reviewed research. These articles were mostly published in sectoral journals covering education, health, agriculture, and development and, surprisingly, only to a very limited extent in evaluation-related journals.
Because the journal articles alone were so numerous, only 2,635 evaluation reports themselves could be included in the database. These evaluations span a wide range of sectors, donors, and countries. The collection of evaluation reports is not exhaustive: owing to resource and time constraints, several existing repositories of evaluations could not be included.
This landscape view allowed us to dig into some questions and anecdotal impressions that have been negatively influencing the evaluation field in the region.
First, there is an impression that evaluators in Africa are largely parachuted in from the North. The issue is complex, and the completed evaluations and evaluation studies in the database can provide only part of the picture. Even so, we found that a disproportionate number of evaluators from the global North are indeed leading evaluation teams and research initiatives around evaluations in Africa. Further research is needed to understand the implications for the relevance and appropriateness of these evaluations, as well as for the ways in which evaluation results are used by stakeholders. Given that evaluations in the region continue to be led by donors and multilateral organizations, commissioning organizations should be pressed to adhere to the principles of the Paris Declaration and to ensure that contextually appropriate knowledge and skills are given priority in procurement and management processes.
Second, M&E in the region is overwhelmingly dominated by monitoring. Evidence from the database suggests that, in fact, even evaluation practice focuses heavily on whether sufficient progress is being made towards pre-determined results, and less on overall programme effectiveness and strategic planning. Even though evaluation practice is growing in the region, the purpose of evaluation is still very closely tied to monitoring. For evaluation to play its role in strengthening development effectiveness, all stakeholders need to work towards reaching consensus on the purpose of evaluations and evaluation systems.
Finally, a common allegation in the sector is that, since evaluation is an emerging profession, the quality of evaluations can vary significantly. The lack of consistent, high-quality evaluation reports means that evaluations are not always suitable for effective use. While the question of use cannot be answered by the database alone, results from this cache of research did show a lack of consistency in quality and in fulfilment of reporting guidelines. Different stakeholders also hold different views on how quality should be determined. This makes it particularly important to place high value on local evaluation capacities, thus ensuring that the sector responds to the demands of diversity and context.
The construction of this database has provided us with a view of the evaluation landscape that will help us, as evaluators, managers, commissioners, and other stakeholders, to “walk the talk.” Just as we insist that programme design be grounded in the best available evidence, we have the same obligation in planning and conducting evaluations.
Now that we have more information at our fingertips about the methods, approaches and governance structures of evaluation processes, it is time to focus further research on building the capacities needed to ensure that evaluation can meet its full potential to strengthen development in the region.
The 12 countries covered by AfrED are Botswana, Ethiopia, Ghana, Kenya, Namibia, Nigeria, Rwanda, South Africa, Tanzania, Uganda, Zambia, and Zimbabwe.