Dear colleagues, and thank you, Jean, for provoking this discussion.
Please bear with me while I bring a drop of academic content to the discussion, hoping we can expand it a bit.
What are mixed methods, after all? I think the quantitative vs. qualitative debate is quite reductionist; and honestly, after all these decades, I cannot believe we are still discussing whether RCTs are the gold standard.
I would like to bring in an approach that caught my attention, presented by Professor Paul Shaffer from Trent University (Canada). His approach focuses on mixed methods for impact assessment – but I understand it can be extrapolated to other types of studies, such as outcome assessment. What I like about his proposal is that it goes beyond, and deeper than, the quantitative + qualitative debate.
In his view, the categories that supposedly differentiate quantitative and qualitative approaches are collapsing. For example: (i) qualitative data is often quantified; (ii) large qualitative studies can allow for generalization (even though scale/generalization is supposedly a characteristic of quantitative studies); and (iii) inductive and deductive inferences are almost always both present.
In light of that, what are “mixed methods”?
What “mixed methods” means is combining approaches that bring robustness to your design – different perspectives/angles from which to look at the same object. Depending on the questions you want to answer and what you want to test, ‘mixed methods’ for impact assessment could mean combining two or more quantitative methods. Equally, different qualitative methods could be combined to improve the robustness of an evaluation or piece of research – and this would also be called ‘mixed methods’.
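Purely as an illustration of that last point – a minimal sketch of my own, in Python with invented data, and not an example taken from Professor Shaffer's material – here are two quantitative methods applied to the same simulated dataset: a naive difference in means and a regression adjustment. Comparing the two estimates is itself part of the 'mix'.

```python
# Illustrative sketch only: invented data, hypothetical variable names.
import numpy as np

rng = np.random.default_rng(42)
n = 500
income = rng.normal(50, 10, n)            # hypothetical covariate
treated = rng.binomial(1, 0.5, n)         # hypothetical programme participation
outcome = 2.0 * treated + 0.3 * income + rng.normal(0, 5, n)

# Method 1: simple difference in means between treated and comparison groups.
diff_in_means = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Method 2: regression adjustment controlling for the covariate (OLS via least squares).
X = np.column_stack([np.ones(n), treated, income])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
reg_estimate = beta[1]

print(f"Difference in means:          {diff_in_means:.2f}")
print(f"Regression-adjusted estimate: {reg_estimate:.2f}")
# If the two estimates diverge, that divergence is informative in itself:
# looking at the same object from two angles adds robustness to the design.
```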
And then, going a bit beyond that: couldn’t we also consider the mix of “colonizers’” and “indigenous” approaches to be “mixed methods”?
Thank you for all the responses and interesting and insightful inputs.
What motivated me to write this post was the impression that most lessons learned identified in evaluation reports are either not lessons or are poorly formulated and rarely used; in other words, evaluators are generating (but are we?) knowledge that is not really serving any purpose. Also, I had the impression that behind this issue lies the lack of a shared understanding of the concept and of the processes to identify and capture this specific type of evidence.
So, how do we identify real and useful lessons learned?
I will try to summarize the key points raised:
1. The diversity of responses makes it clear that, as evaluators, we still do not have a shared understanding of what lessons learned are. Many times, lessons seem to be there just to tick a box among the report’s requirements.
2. What are the key elements of lessons? Lessons should:
- be formulated based on experience and on evidence, and on change that affects people’s lives;
- be observable (or have been observed);
- reflect the perspective of different stakeholders (therefore, observed from different angles);
- reflect challenges faced by stakeholders in different positions (i.e. also donors may have something to learn!);
- be something new [that represents] valuable knowledge and/or a way of doing things;
- reflect what went well and also what did not go well;
- be able to improve the intervention;
- be specific and actionable (while adaptable to the context), so that they can be put into practice.
I really like ALNAP's approach of synthesizing lessons that should be learned. From my perspective, a lesson is only really learned if you do things differently next time. Otherwise, it’s not yet learned! This is why I tend to call the ones we identify during evaluations simply “lessons”.
3. How do we collect lessons learned? The collection process should:
- be systematic and result from consultations with different stakeholders;
- include what (content), for whom (stakeholders) and how (context!) – a minimal record structure along these lines is sketched after this list;
- be clear on who the target audience is (e.g. operational staff, staff involved in strategic operations, policy makers, etc.);
- take into account power dynamics (donors, implementers, beneficiaries etc);
- consider the practical consequences (e.g. is it possible to adapt?);
- include operational systems/feedback mechanisms to ensure that the findings will be discussed and implemented when that is agreed;
- balance rigour and practicality.
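To make the 'what / for whom / how' idea more tangible, here is a minimal, hypothetical sketch of what a structured lesson record could capture. The field names are my own invention for illustration; they are not an agreed standard or any organization's template.

```python
# Hypothetical structure for capturing a lesson; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class LessonRecord:
    what: str                  # the lesson itself (content), grounded in evidence
    evidence: list[str]        # sources: findings, consultations, observations
    stakeholders: list[str]    # whose perspectives it reflects (including donors)
    context: str               # the conditions under which it was observed
    target_audience: str       # e.g. "operational staff", "policy makers"
    actionable_change: str     # what should be done differently next time
    adaptation_notes: str = "" # practical consequences: can it be adapted elsewhere?
    follow_up_owner: str = ""  # who is accountable for discussing/implementing it

# Example entry (invented):
lesson = LessonRecord(
    what="Community-led translation of survey tools improved data quality",
    evidence=["pilot debrief notes", "enumerator feedback session"],
    stakeholders=["enumerators", "community members", "implementing team"],
    context="multilingual settlement, short data-collection window",
    target_audience="operational staff designing future surveys",
    actionable_change="budget time and resources for community-led translation",
    follow_up_owner="M&E focal point",
)
print(lesson.target_audience)
```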
4. My final question was: how do we (try to) guarantee that the lessons will actually be “learned” – i.e. used, put into practice? (Here I am using the interesting concept from ALNAP, that lessons are formulated to be learned.) Some tips shared were:
- associate lessons with strategies for putting them into practice, including incentives;
- lessons should inform or be aligned with the recommendations;
- recommendations should be reflected in the "management" response; and
- management should be accountable for the implementation.
It’s good to see that other organizations and colleagues are interested and that some resources are available. I hope this can help us improve our practices. I have compiled below the approaches and tools that were recommended, and the examples and resources that were shared.
Thank you!
Kind regards,
Emilia
***
Approaches or tools recommended as effective for capturing and processing lessons learned:
Study paper by the UNEP evaluation office: “Lessons Learned from Evaluation: A Platform for Sharing Knowledge” co-authored by Catrina (2007) (link here)
The evaluation report of the “Projet de prestation de services participatifs de la Tunisie pour la reintegration”, prepared for the Union Tunisienne de Solidarité Social, with a good example of capturing lessons learned (in French): booklet utss.pdf (evalforward.org). You can find the lessons learned from page 38 onwards.
ABC de la COVID-19. Prevención, Vigilancia y Atención de la Salud en las Comunidades Indígenas y Afromexicanas (Bertha Dimas Huacuz, INPI, 2020), accessible on the webpage of the National Institute of Indigenous Peoples (INPI-MX): https://lnkd.in/gpv3wgu (book cover) / https://lnkd.in/gG5wpVE (book text). It consolidates lessons derived from the pandemic for the development of regions and municipalities, compiled from various community development initiatives in Mexico – resource in Spanish.
A qualitative study on three different themes following the 2015 earthquake in Nepal, led by World Vision International in Nepal in collaboration with 7 different international agencies working on the earthquake response. DEC Collective Learning Initiative Report, Nepal.pdf
Thank you so much for your interesting comments, insights and resources!
While going through all the comments and resources provided, I have another question:
Assuming that (i) we are capturing actual/real lessons, (ii) based on experience/evidence, (iii) actionable, that they (iv) reflect a diversity of perspectives and (v) are applicable in other contexts (some of the characteristics of lessons I take from your contributions), how do we (try to) guarantee that they will actually be “learned”? (Here I am using the interesting concept from ALNAP, brought up by Jennifer Doherty, that lessons are formulated TO BE learned.)
In other words, which strategies work to support putting lessons into practice?
I drafted this message a few days ago but was not able to send it; I think the moment – after our colleague asked for concrete experiences – is now very appropriate to share it. So here are some lessons from personal experience.
I learned these during four intensive months of work in northern Uganda with refugee communities (mostly from South Sudan), developing and administering a survey – a few years ago, while working as an independent consultant.
[For a bit of context, this process involved the community in all steps of the survey: design/piloting, translation into 4 languages (by community members), selection and training of non-professional enumerators (community members), administration of the survey, and feedback/participatory analysis.]
1. You also have a culture; yours is also ‘a culture’. In the eyes of the other, you are the stranger. I particularly ‘discovered’ myself as Latin American during these months in Uganda (note: Brazilians don’t really identify with Latin American stereotypes, nor with the label of ‘latino’ – even if we are seen as such and in reality share so much culture with all other Latin Americans. Please also note that, even if Latin American and having lived most of my life in Brazil, I am a white, middle-class woman who had access to higher education and whose culture is very close to Western/European – this is where I speak from, and how I am perceived).
2. Be prepared to recognize that you made a mistake, and to act when something happens. In one situation I felt the need to go to the house of each of my team members (around 12 in total) and have an individual conversation. This was after one very difficult meeting, in the middle of a lot of stress and time pressure. It was all sorted out, but it took a lot of energy to make sure everything was kept on track and that the trust (built over weeks of work and intensive dedication) was not broken.
3. Be open (be curious!), be patient, and always be respectful. Have some reality checks: I had talks with my driver that helped me understand the culture in which I was immersed. And if necessary, take a day or two off to breathe in the middle of culturally difficult situations – and talk to experienced colleagues. Better to step back for a couple of days than to have to fix things later.
4. And a lesson from something that went very well: be mindful of and respectful towards dress codes. Women were open to receiving me in their houses and talking to me because I dressed respectfully – they literally told me that they appreciated that I did not wear trousers, but longer skirts and a modest blouse. (I believe that through this and other attitudes I was able to show respect and build trust. After one focus group discussion, the women sang for me and ‘baptized’ me with a name in their language.)
I hope this adds a bit of concreteness to the discussion 😊
Emilia Bretan
Evaluation Manager, FAO
Dear colleagues,
Thank you so much for your active participation and engagement in this discussion. A few words on the latest contributions:
Nea-Mari, thank you for the information and links on the Finnish SDG M&E work at the subnational level. Inspiring examples for other countries and cities! Thanks, Esosa, for highlighting the critical role of evaluation in evidence-based development programmes. And Mark, yes, acknowledging the limitations of our studies/evaluations is always good practice – thanks for highlighting this point.
This discussion comes to a close for now, but there will be more opportunities ahead to further exchange ideas and knowledge on supporting progress towards the SDGs through evaluation.
Wishing you all the best, and stay tuned for future updates.
Emilia
Emilia Bretan
Evaluation Manager, FAO
Dear colleagues,
Thanks for contributing to a lively discussion.
I will comment on a few points, not with the intention of exhausting the conversation (nor of fully summarizing it!), but hoping to provoke some additional reflection.
1. We should go beyond the focus on measuring contribution or progress towards the SDGs: there is a range of dedicated studies/evaluations and indicators (including proxy indicators) that also contribute to understanding development progress. The SDGs do not sit in isolation, and there can be several pathways leading in the same direction. Dorothy and John Akwetey articulated this topic particularly well, but it is present in various contributions. They also emphasize the significance of evaluations at the national, institutional and subnational levels, beyond large-scale SDG evaluations.
The study on evaluation evidence shared by Mark Engelbert, which used impact evaluations as a key input, seems to speak to this last point.
Along the same lines is the work that the Global SDG Synthesis Coalition is conducting. A synthesis can be used either as an alternative to an SDG-focused evaluation or as part of a larger study. The syntheses follow a systematic and transparent approach to identifying, collating and appraising the quality of individual evaluations, and then synthesizing findings and lessons from bodies of evaluative evidence. The approach includes evidence gap maps and other tools, including a rigorous process (and corresponding framework) for including or excluding studies.
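To give a flavour of what such tools involve – and this is a toy illustration of my own, not the Coalition's actual framework; all records, fields and criteria below are hypothetical – an evidence gap map is essentially a cross-tabulation of the studies that pass explicit inclusion criteria against the dimensions of interest, so that empty cells reveal where evidence is missing.

```python
# Toy illustration only: hypothetical records and inclusion criteria,
# not the Global SDG Synthesis Coalition's actual framework.
from collections import Counter

evaluations = [
    {"title": "Irrigation programme evaluation", "sdg": "SDG 2",  "region": "East Africa",
     "has_counterfactual": True,  "quality": "high"},
    {"title": "School feeding review",           "sdg": "SDG 2",  "region": "South Asia",
     "has_counterfactual": False, "quality": "medium"},
    {"title": "Marine protected areas study",    "sdg": "SDG 14", "region": "Pacific",
     "has_counterfactual": True,  "quality": "low"},
]

def include(study):
    """Hypothetical inclusion rule: credible design and at least medium quality."""
    return study["has_counterfactual"] and study["quality"] in {"high", "medium"}

included = [s for s in evaluations if include(s)]

# A minimal "evidence gap map": counts of included studies by SDG and region.
gap_map = Counter((s["sdg"], s["region"]) for s in included)
for (sdg, region), count in sorted(gap_map.items()):
    print(f"{sdg:7} | {region:12} | {count} study(ies)")
# Combinations with no entries are the evidence gaps.
```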
2. The challenges in evaluating the SDGs encountered by most countries and development actors, and shared through different lenses by Ram Khanal, Lovemore Mupeta and Hadera Gebru, include limited resources, insufficient data, lack of appropriate evaluation techniques and complex, interlinked targets. In light of these challenges, we should (i) consider/search for other approaches (synthesis is one of them) rather than launching ourselves into potentially daunting evaluations, (ii) start small and (iii) scope wisely for studies that can be useful. Engaging country-based professionals (evaluators and implementers from different sectors) in the process could help raise awareness and build evaluative capacity.
Unfortunately, major political unrest and challenges can result in a complete setback for any attempt to evaluate progress towards the Sustainable Development Goals (SDGs), as exemplified by the situation in Ethiopia, where the post-COVID crises and civil war have undermined all developmental progress.
3. The subnational level (the local level, in particular) is another recognized challenge shared by many. Nea-Mari, I am curious to hear a couple of examples of what Finland has been doing at the local level – which types of digital solutions have you adopted for the M&E of SDG progress? I am also positively surprised by the influence of the evaluations on parliamentary elections and on the planning of the new government programme. What would you say, Nea-Mari, are the key elements that make these evaluations powerful in Finland?
4. Examples of reports: Pelagia Monou, Fabandian Fofana, I wonder whether the reports of the evaluations you have been involved in are public and whether you could share the links with us? Pelagia, were you able to go beyond the number of projects and the budget to tap into contributions or results? Fabandian, did you measure contributions to the SDGs at the local level? Who was involved, and how?
5. And last but not least (though on a bit of a side note), a comment about the finding of the 3ie report shared by Mark that evaluation work on the “Planet” SDGs (SDGs 6 and 12 to 15) has been neglected. The report notes that very little (impact) evaluation research was found covering SDGs 12 (Responsible Consumption and Production), 14 (Life Below Water) and 15 (Life on Land). While I have my own hypothesis to explain this finding, I wonder whether Stefano D’Errico, Ram Khanal and other colleagues with expertise in the environmental sector would like to chip in on the reasons? 😊
Still a long way to go: Chris, Olivier and Lal remind us that the post-Agenda 2030 framework is rapidly approaching!
Thanks all for contributing!!
Warm regards
Emilia