DOROTHY LUCKS

EXECUTIVE DIRECTOR
SDF GLOBAL PTY LTD
Australia

Dr. Dorothy Lucks is the Executive Director of SDF Global Pty Ltd and has worked in sustainable development and evaluation for the last 25 years.

Dr. Lucks is a credentialled evaluator with a PhD in Sustainable Development. She is a Fellow of the Australasian Evaluation Society, has served as Secretary of the International Organisation for Cooperation in Evaluation (IOCE) and as a management team member of EvalPartners, and was an inaugural Co-Chair of the EVALSDGs Network, a network of policy makers, institutions and practitioners who advocate for the evaluability of the Sustainable Development Goals (SDGs) performance indicators and support processes to integrate evaluation into national and global review systems.

Dr. Lucks has independently evaluated development policies, programmes and projects of international organizations such as FAO, IFAD, UNHCR, the Asian Development Bank and the World Bank in over 30 countries. She has acted as Evaluation Team Leader for MOPAN III (Multilateral Organisation Performance Assessment Network), which conducts performance assessments for a consortium of key donors. She has expertise in design and implementation as well as evaluation and has conducted a wide range of thematic evaluations. She is strongly focused on innovation and sees the SDGs as an opportunity and a global driving force for transformation.

 

My contributions

    • Thanks Amy and others for this interesting thread.

      We have been involved in many EAs for different organisations: international financing institutions, UN agencies, NGOs and the private sector. I agree with Rick that complexity, rather than the size of the investment, is most critical in terms of an EA's value. Institutions with a clear mandate and operational procedures, and often a menu of performance indicators and guidelines, usually do not require an EA.

      The most useful ones that we have been engaged with have been for complex, developmental projects, where the expected outcomes may be emergent, with process as well as output and outcome indicators. Another useful application of EAs has been where there is limited M&E capacity within the implementation team and they are unsure how to measure what is outlined in the design. So what matters is the incremental value of the EA, and also the cost-to-benefit of the investment - two recent examples below.

      One was a very complex natural resource management programme, covering policy, institutional and physical results, that had reached its final years. The implementation team realised that they did not know how to measure all of the final outcomes - they had assumed that an impact assessment team would produce all the data required, but did not have the budget for the extent of data gathering involved. We did a (very belated) EA and found that the team needed to reconstruct a range of raw implementation data to enable tracking of outcomes - a huge job. If they had had an EA, and capacity development, earlier in the programme, they would have been in a much stronger position and the costs of solving the issues would have been much lower.

      Another was a complex youth and indigenous project - close to commencement - where a culturally sensitive approach to indicators and monitoring processes was required. That EA was carried out in a very participatory (and inexpensive) way, designed to engage participants in safe and appropriate ways of recording data that would demonstrate levels of progress and learning and feed back into improving the design of later stages of implementation. The early time invested in the EA reaped huge benefits for both programme outcomes and evaluation.

      I also like the idea of the decision-making nodes for whether an EA is required or not. Thanks again for all the points raised.

  • How are we progressing in SDG evaluation?

    Discussion
    • Dear Emilia

      Thank you for bringing up this discussion. I have been reading the contributions with interest and would like to add another perspective. The question you raise about delving into concrete evaluation practices got me thinking about the depth and breadth of practice related to the SDGs.

      In my work as an external evaluation reviewer for several different organizations - in meta-evaluations, institutional-level evaluations across national and multilateral organizations, and in evaluation syntheses - I am involved in, or read deeply, at least a hundred evaluation reports a year. The responses so far to this thread provide some really good practice examples that are at the pinnacle of SDG evaluation, and these are hugely valuable, but we also don’t want to miss the less visible practice that is also contributing to the SDGs.

      To illustrate this, we can think of an atoll, an iceberg or a mountain range: the tips are visible, but underneath there are masses that connect to the peaks. This led me to consider three key points, though there are undoubtedly more.

      Beyond the SDGs

      The SDGs do not sit in isolation. They were crafted as part of the 2030 Agenda for Sustainable Development – Transforming Our World. The SDGs are only flags on the pathway to a bigger summit, and they are not the only processes that contribute to sustainable development – but they do help to provide focus. The tendency to focus on the few “SDG evaluations” overlooks the increasing number of evaluations within countries and organizations that relate to strategic work arising from national and institutional responses to the 2030 Agenda commitments. These responses are reflected in national development plans, institutional change, shifts to multi-sectoral approaches to the SDGs and more participatory approaches, to name a few influences on evaluation work. An example is the Multilateral Organisation Performance Assessment Network (MOPAN), which has incorporated assessment of the extent to which a multilateral organization, funded by the 22 member countries of MOPAN, has shifted its strategy and systems to align its mandate with the 2030 Agenda. These assessments are used by the organizations to consider strategic and systematic improvements in line with the 2030 Agenda and other global commitments.

      Below the SDG indicators

      As countries and organizations shift, countries like Nepal and Ghana, and many more, plus organizations at all levels, have integrated the SDG indicators into their plans and acknowledged other important factors of culture and country, leading to suites of indicators relevant to different contexts. As Ram says so clearly, these are now normal practice, and therefore we can look beyond and more deeply. The VNRs are only one part of the process, and the effect of the SDG indicators is only one part of the visibility of what is being done towards SDG achievement. Some evaluations I read are clearly linked to the SDG response but may barely mention a specific SDG; together, however, they generate a body of evaluative work that is valuable in progressing the 2030 Agenda.

      Wider than the evaluation sector

      The work done through the National Evaluation Capacities conferences and other evaluation capacity development initiatives has built evaluation capacity that has expanded and flowed down into other national and sub-national systems and local contexts. If we subscribe to the principle that evaluations are designed to support accountability and learning for better design, implementation, performance and outcomes that lead to progress towards a more sustainable future, then the many evaluations being carried out at all levels are contributing to the 2030 Agenda results.

      A realistic view

      The above points are not made to be idealistic and claim that the evaluation sector is making good progress in evaluation related to the SDGs. There are many crevasses, cracks and fault lines; some areas are hidden and others are crumbling. I, like so many others, am disappointed that more is not being done. But let’s not be short-sighted, think only in terms of large-scale SDG evaluations, and miss the mass of other valuable work that is going on.

      With kind regards

      Dorothy Lucks 

      Executive Director, SDF Global

  • Disability inclusion in evaluation

    Discussion
    • Dear Silva

      That is beautifully put, and points to the integral value, and values, of an evaluator. I often view our role as both facilitator and translator: understanding the language of context, culture and experience, and translating it into the language of technical theories, institutions, resources and decision-making, in the hope of strengthening the connection, understanding and positive flow between them and facilitating the patterns and solutions that emerge.

      Thank you for taking the time to make such a great explanation.

      Kind regards 

      Dorothy Lucks

    • Dear Mauro

      You raise a good point. There is usually feedback prior to finalization of the evaluation report. Often this is mainly from the internal stakeholders of the initiative (policy, program, process, project) being evaluated and from the commissioner of the evaluation. This is extremely useful and helps to ensure that reports are of good quality and that recommendations are crafted to be implementable. Unfortunately, the stakeholders for the evaluation content are often not the decision-makers for resource allocation or future strategic actions. Consequently, while there is a formal feedback process, the decision-makers often do not engage with the evaluation until after it is complete. For instance, we are currently evaluating a rural health service. There are important findings and the stakeholders are highly engaged in the process. But the decision on whether the service will be continued is central, and it is likely to be made for political reasons rather than on the evaluation findings. Evaluation needs to gain a higher profile within the main planning ministries in order to influence other ministries to take decisions based on evidence rather than politics. We are still a long way from this situation, but the shift to evaluation policy briefs is a good move that gives ministerial policy officers the tools to properly inform decision-makers.

      Kind regards

      Dorothy Lucks

    • Dear Isha and all

      Well said. I agree. With all the new tools that we have in our hands there is opportunity for evaluation to be more vibrant, less bureaucratic and ultimately more useful!

      Kind regards

      Dorothy

  • I recently raised a discussion with the EvalForward Community on the growing disconnect between youth and the agricultural sector, and how we can learn from evaluations to make strides forward in the area. The EvalForward discussion was stimulated through four key questions: Are evaluations relating to young people in agriculture making a difference or not? Do we actually see a shift in the way young people are included in agriculture? Are there lessons learned in relation to young people that we should be incorporating in all initiatives? What are the pitfalls and challenges?

    Participants across many countries and contexts shared their experiences and perspectives.

    • Dear all who have engaged with this discussion,

      It is good to see the level of passion for the inclusion of young people in agriculture and I have been following each point with interest. But if I may probe a little deeper, I think Lal is getting closest to the point that I have been making.

      The lessons learned in evaluations regarding young people in agriculture are valid, important and, at least in my experience, continually reinvented with each evaluation, including the very good synthesis carried out by IFAD of over ten years of projects. Yet, where do these lessons go?

      Do we actually see a shift in the way young people are included in agriculture? I agree that we cannot classify all youth as the same: in every country there are young people who are naturally drawn to working on the land and are passionate and skilled. They are valuable and inherent in any agricultural community.

      Yet rural communities tend to be shrinking as other young people leave for other opportunities. Many will not find those opportunities and end up unemployed in cities. If we are serious about the SDGs, particularly 2, 8 and 11, can all these learnings, from all these evaluations, not be scaled up to drive a re-investment of youth in agriculture? The above-mentioned IFAD synthesis makes this point: it says it is now time to scale up efforts based on ten years of learning.

      So my real question is: are evaluations making a difference or not? And if not, how can they be made to have greater effect?

      Kind regards 

      Dorothy Lucks

    • Dear EvalForward members,

      Thank you all for posting many and insightful contributions to this discussion. 

      As pointed out, sustainable engagement of young people in agriculture faces several challenges. Many of these are similar across different country contexts, especially those related to enduring negative perceptions of the sector and a political and enabling environment that is not conducive to youth employment and entrepreneurship in agriculture. The cases of successful engagement and business ventures highlighted shed light on possible opportunities to counter the trend of a rising average age among farmers and agripreneurs.

      I would like to encourage you all to step up consideration of youth engagement in your evaluations and recommendations, in order to contribute to moving forward on youth-appropriate strategies in support of SDG 2 – end hunger, achieve food security and improved nutrition, and promote sustainable agriculture.

      Please keep sharing lessons and evaluative knowledge you may have through this platform!

      Best regards,

      Dorothy

  • Gender and evaluation of food security

    Discussion