Steven Lam

Independent Research and Evaluation Consultant
Canada

I currently work at the International Livestock Research Institute, where I am developing an international research agenda focused on testing and evaluating solutions to address global One Health challenges, including zoonoses, antimicrobial resistance, and food and water safety. I completed postdoctoral training in science policy at the Public Health Agency of Canada (2022-2023; funded by Mitacs) and doctoral training in public health at the University of Guelph (2017-2022; funded by CIHR).

I also work as a research and evaluation consultant, both independently and with various consulting firms. Over my career, I have completed 22 consulting projects on health and other development topics across Canada and worldwide. My particular focus is integrating equity and climate change considerations into program design and assessment, an area I explored during my doctoral research and am now extending in my ongoing research and practice. I am dedicated to high-quality evaluation and hold the Credentialed Evaluator designation from the Canadian Evaluation Society.

My contributions

    • Hi Amy and all,

      To answer question 2, on how to facilitate the use of evaluability assessments: I find it helpful to do an evaluability assessment without calling it an evaluability assessment, given the politics of "evaluability." I conceptualize it as an activity to prepare programs for evaluation rather than to determine "evaluability." This means making sure that the linkages in the program theory of change are logical, that the proposed outcomes are plausible, and so on. Framed this way, evaluability assessment is more of a process integrated within program planning and evaluation generally, and as such it does not often lead to a stand-alone output.

      A few years ago, my colleague and I reviewed evaluability frameworks and case studies; our review might provide more insight into other gaps and opportunities:

      Lam S, Skinner K. (2021). The use of evaluability assessments in improving future evaluations: A scoping review of 10 years of literature (2008-2018). American Journal of Evaluation, 42(4), 523-540.

      Best,

      Steven

    • How have your evaluation methods captured the impact of development projects on the environment or climate change? 

      This is a timely question: although the importance of integrating climate considerations into development programs is increasingly recognized, how such programs account for climate change is often overlooked in evaluation.

      A good starting point for meaningfully capturing the impact of development programs on climate change is to first ‘mainstream’, or integrate, climate considerations throughout the evaluation itself. In a 2021 paper published in Global Food Security (https://doi.org/10.1016/j.gfs.2021.100509), we shared a framework with guiding questions for different evaluation components (a small sketch of how these questions might be tallied across reports follows the list):

      Evaluation scope 

      a. Does the introduction of the evaluation acknowledge a climate change issue(s)? 
      b. Does the evaluation include an objective/question/criterion specific to the assessment of climate change adaptation, mitigation, and/or impacts? 

      Evaluation approach 

      a. Is climate change adaptation, mitigation, and/or impacts mentioned in the evaluation theory, methodology, methods, and/or analysis? 

      Evaluation results 

      a. Does the findings section provide information on climate change adaptation, mitigation, and/or impacts?
      b. Does the conclusion provide information on climate change adaptation, mitigation, and/or impacts?
      c. Are there specific recommendations to address climate change adaptation, mitigation, and/or impacts?
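
      As a minimal illustrative sketch (not a tool from the paper), the guiding questions above could be encoded as a simple checklist and tallied across a set of reviewed evaluation reports; the data structures, function name, and scoring below are illustrative assumptions only:

      # Illustrative sketch only: the framework's guiding questions encoded as
      # a checklist, with a helper that computes, per evaluation component,
      # the share of "yes" answers across reviewed reports.
      FRAMEWORK = {
          "Evaluation scope": [
              "Introduction acknowledges a climate change issue(s)?",
              "Objective/question/criterion specific to climate change "
              "adaptation, mitigation, and/or impacts?",
          ],
          "Evaluation approach": [
              "Adaptation, mitigation, and/or impacts mentioned in the "
              "evaluation theory, methodology, methods, and/or analysis?",
          ],
          "Evaluation results": [
              "Findings cover adaptation, mitigation, and/or impacts?",
              "Conclusion covers adaptation, mitigation, and/or impacts?",
              "Specific recommendations to address adaptation, mitigation, "
              "and/or impacts?",
          ],
      }

      def component_shares(reviews):
          """reviews: one dict per report, mirroring FRAMEWORK with a
          True/False answer per guiding question."""
          shares = {}
          for component, questions in FRAMEWORK.items():
              answers = [a for review in reviews for a in review[component]]
              shares[component] = sum(answers) / len(answers)
          return shares

      # Two hypothetical report reviews, answered question by question.
      reviews = [
          {"Evaluation scope": [True, False],
           "Evaluation approach": [True],
           "Evaluation results": [True, False, False]},
          {"Evaluation scope": [True, True],
           "Evaluation approach": [False],
           "Evaluation results": [False, False, True]},
      ]
      print(component_shares(reviews))
      # e.g. {'Evaluation scope': 0.75, 'Evaluation approach': 0.5,
      #       'Evaluation results': 0.3333333333333333}

      A tally like this makes gaps visible at a glance, for example a portfolio of evaluations that names climate change in scope but rarely carries it through to findings and recommendations.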

      What indicators have you found to be most effective in measuring improvements or changes in the environment/climate change, as well as contributions to improved mitigation and adaptation? Emission levels? Resilience measures? Climate finance raised? Insurance products made available? Or others?

      In the above-mentioned study, we also applied the framework to examine evaluations of UN agencies working in food and agriculture (e.g. FAO, WFP, IFAD, UNICEF, UNEP, UNDP) and found many different approaches and indicators used. For example, IFAD defined a new adaptation criterion in an updated evaluation manual (2016) as: “The contribution of the project to reducing the negative impacts of climate change through dedicated adaptation or risk reduction measures”. IFAD also offered core questions to guide the evaluation such as: “To what extent did the program demonstrate awareness and analysis of current (climate) risks?”. 

      It is important to note that climate mainstreaming in program planning and evaluation is not keeping up with the urgent need for climate action. In a paper currently in press in WIREs Climate Change titled “Greener through gender: What climate mainstreaming can learn from gender mainstreaming” (doi: 10.1002/wcc.887), we leverage lessons from gender mainstreaming to accelerate progress in climate mainstreaming, drawing on a review of mainstreaming practices from the UN agencies mentioned above (stay tuned!).

      Steven 

    • Hi all,

      This discussion reminds me of debates between qualitative and quantitative research. Qualitative research assumes that the position of the researcher, as the primary research instrument, shapes all aspects of the research. Quantitative research is perceived to be neutral and impartial, despite the fact that the researcher still chooses which questions to ask, whom to ask, where to look, and so on.

      Rather than striving for a neutrality that does not really exist in evaluation, I think it is more fruitful to be aware of how the identities, experiences, and interests of evaluators and clients are intertwined in the evaluation. When designing the evaluation, ask: Whose interests does the evaluation serve? Whom are we (not) asking? In what ways do we influence the evaluation process? Will the data be convincing? This awareness can lead to planning that results in stronger, more credible evaluations.

    • Hi Serge and all,

      Yes, I try to integrate these themes into all evaluations. Clients are often very open to learning about ‘for whom’ their programs work. This information helps them know whether their program supports different groups of people.

      In terms of the environment, there tends to be some hesitancy at first, as the link between program activities and environmental implications can be fuzzy. It may turn out that there are no implications, but asking about the environment provides a starting point for discussion.

      As Silva noted, there have been many efforts to promote the measurement of social impacts. The UN system typically does this through a human rights/gender equality lens (see the UNEG Ethical Guidelines of 2008 and the UN-SWAP, adopted in 2012). Many UN agencies also outline this need in their evaluation policies.

      Similarly, there are many guidelines for mainstreaming environmental and climate change considerations into programs and policies (UNDP did a stocktake in 2010). UN agencies typically speak to this theme in evaluation guidance documents.

      While it would help if terms of reference (TORs) asked for, and budgeted for, questions around the social and environmental impacts of programming, I agree with Silva that we should advocate for these elements when they are not there.

      A challenge I initially faced was, “well, how do we do this?” I’m currently finishing up my dissertation focused on answering this question. Examining previous evaluations of food security programs, I’m finding lots of evidence showing us how, why, and in what contexts we should integrate these themes.

      We should engage with methodological developments from the literature and try them out. Ask questions such as: how do different groups experience this program? And how has climate change affected people's experiences? Share your process and learnings.

      Evaluations have a role to play in promoting equity and environmental sustainability, and we must embrace it.

      Steven

    • Recently I have been grappling with a similar set of questions, so thank you, Carlos, for posing them. Drawing on my experience facilitating several ToC workshops, I would say ToC is a useful approach to evaluation. Its value is realized mainly in its process: bringing participants from diverse disciplines and sectors together, co-mapping systems change, identifying areas where the program might influence change pathways, and highlighting priority areas for monitoring.

      Some important context, though, is that many of these participants had never heard of ToC before (and it doesn't help that ToC does not translate well into different languages), so some of the value might be attributable to its novelty. Anyway, while other planning tools might also have been appropriate, I find ToCs particularly helpful for programs that have multiple interacting components, diverse stakeholder perspectives, and uncertainty in outcomes, which are characteristic of many food security initiatives today.