Dr. Dorothy Lucks is the Executive Director of SDF Global Pty Ltd, with 25 years of experience in sustainable development.
Dr. Lucks is a credentialled evaluator with a PhD in Sustainable Development. She is a Fellow of the Australasian Evaluation Society, served as Secretary of the International Organisation for Cooperation in Evaluation (IOCE), was a management team member of EvalPartners, and was an inaugural Co-Chair of the EVALSDGs Network, a network of policy makers, institutions and practitioners who advocate for the evaluability of the performance indicators of the Sustainable Development Goals (SDGs) and support processes to integrate evaluation into national and global review systems.
Dr. Lucks has independently evaluated development policies, programmes and projects of international organizations such as FAO, IFAD, UNHCR, the Asian Development Bank and the World Bank in over 30 countries. She has acted as an Evaluation Team Leader for MOPAN III (Multilateral Organization Performance Assessment Network), a performance assessment process conducted for a consortium of key donors. She has expertise in design and implementation as well as evaluation, and has conducted a wide range of thematic evaluations. She is strongly focused on innovation and sees the SDGs as an opportunity and a global driving force for transformation.
DOROTHY LUCKS
EXECUTIVE DIRECTOR, SDF GLOBAL PTY LTD

Thanks Amy and others for this interesting thread.
We have been involved in many evaluability assessments (EAs) for different organisations: international financing institutions, UN agencies, NGOs and the private sector. I agree with Rick that complexity, rather than the size of the investment, is the most critical factor in an EA's value. Institutions with a clear mandate, established operational procedures, and often a ready menu of performance indicators and guidelines usually do not require an EA.
The most useful EAs we have been engaged in have been for complex developmental projects where the expected outcomes may be emergent, calling for process indicators as well as output and outcome indicators. EAs have also been valuable where there is limited M&E capacity within the implementation team and the team is unsure how to measure what is outlined in the design. So the key considerations are the incremental value of the EA and its cost relative to the benefit; two recent examples are below.
The first was a very complex natural resource management programme, covering policy, institutional and physical results, that had reached its final years. The implementation team realised that they did not know how to measure all of the final outcomes: they had assumed that an impact assessment team would produce all the data required, but they did not have the budget for the extent of data gathering involved. We did a (very belated) EA and found that the team needed to reconstruct a range of raw implementation data to enable tracking of outcomes - a huge job. If they had had an EA, and capacity development, earlier in the programme, they would have been in a much stronger position and the costs of resolving these issues would have been much lower.
The second was a complex youth and Indigenous project, close to commencement, where a culturally sensitive approach to indicators and monitoring processes was required. That EA was carried out in a highly participatory (and inexpensive) way, designed to engage participants in safe and appropriate methods of recording data that would demonstrate levels of progress and learning, feeding back into improved design for later stages of implementation. The early time invested in the EA reaped huge benefits for both programme outcomes and evaluation.
I also like the idea of decision-making nodes for determining whether an EA is required. Thanks again for all the points raised.