Amy Jersild

PhD Candidate and evaluation consultant
Western Michigan University
United States of America

Nearly 25 years of experience in the international development sector as an evaluator, manager, technical advisor and educator working in partnership with donors, governments and civil society organizations in Asia, Africa and the Middle East on development effectiveness and quality programming and policies. Deeply interested in co-creative and evaluative processes that support self-determination and development that is sustainable for both people and planet. MA degree in sustainable development, and PhD candidate in interdisciplinary evaluation studies at Western Michigan University. 

Amy's development evaluation experience includes serving as:
• Doctoral Candidate, Western Michigan University, Interdisciplinary PhD Program in Evaluation, with completion of the degree anticipated in 2024. Research interests include meta-evaluation and the professionalization and internationalization of evaluation.
• Assistant Professor at SIT Graduate Institute, designing and teaching graduate-level theory and practice-based courses on evaluation in Washington DC, India, and Jordan.
• Independent evaluator since 1997, advising international agencies and donors on social development issues and programming. Clients include FCDO, ILO, USDOL, UNDP, CGIAR, the Rockefeller Foundation, and the Adaptation Fund.
• Internal evaluator as Deputy Director of Quality Programming in Bangkok (2008-2012), leading an international team effort to develop an M&E framework for Hewlett-Packard’s flagship global entrepreneurship education program.

Active member of EvalPartners (member of EVALSDG group)
Active member of AEA (member of the International Working Group)

Managerial experience in directly negotiating with and reporting to bilateral donors (USPRM, CIDA, SDC, GIZ, AusAID), multilateral agencies (UNICEF, World Bank), and corporate partners (HP, Adidas, Deutsche Bank); in coordinating with military bodies (KFOR, Kosovo); and in partnering with civil society organizations (Cambodia, Laos, Thailand, China, and India).

In-country work and living experience in Bangladesh, Cambodia, China, Japan, Kosovo, Lao PDR, Thailand, and USA; additional work experience in Egypt, Ethiopia, India, Israel, Jordan, Kenya, Nepal, Philippines, Sri Lanka, Turkey, Uganda, and Vietnam.

Mandarin Chinese proficiency; basic to intermediate skills in French, Khmer, Lao, Spanish and Thai.
 

My contributions

    • Thank you all for an interesting and engaging dialogue on evaluability assessments. Please check back soon for an excellent summary of our discussion drafted by Gaia Gullotta of CGIAR. It will be provided in English, Spanish and French.

      Cheers! 

    • Many thanks, Rick, for your comments. Such historical data on past ratios would be interesting to examine. And yes, budget size may be one of the items on a checklist considered as a proxy for complexity, but I agree it should not be the only one, for the reason you pointed out. Your suggestion about depicting nodes in a network makes sense to me. More numerous possible causal linkages and sources of data would then result in a higher score, which might in turn lead to a “yes” decision on an EA.

      Perhaps such a checklist might also include a follow-on set of items that initially explore the four primary areas depicted in the jigsaw diagram you shared below - https://mande.co.uk/wp-content/uploads/2022/05/Austria-diagram.png (institutional and physical context, intervention design, stakeholder demand, and data availability). Such a checklist would then not only guide the decision on whether to conduct an EA but also help focus it on priority areas, making it a more cost-effective exercise.

      I’d be interested to hear from others on this forum who manage evaluations/EAs. How do you decide in your organization whether or not to conduct an EA? And how are decisions made about how to focus one?

      Regards, Amy

    • Thank you all for your participation. There has been a lot of discussion on the pros and cons of EAs, with strong perspectives on either side of the debate. As a group we have varied experience: some of us have implemented EAs and some have not; some have read EA reports and some have not. Our views range from seeing EAs as an unnecessary use of scarce M&E resources to identifying specific benefits for their use in planning and maximizing the outcome of an evaluation.

      We will wrap up this discussion by September 10th. Before then, I’d like to invite more reflection on when to implement an EA and when not to, the question of both cost-benefit and perceived benefit to stakeholders, relating to questions 1 and 2 above. I would suggest that EAs need to be proportionate to the cost of a subsequent evaluation, both as a good use of financial resources and for stakeholder buy-in. Does anyone have thoughts to contribute, either in terms of actual cost ratios or on organizational policy about when and how EAs should be implemented? I know of some UN agencies that have made EAs mandatory for programs with budgets over a specified amount. It seems to me that in addition to a checklist for implementing an EA, which lays out important concepts to think about and address, a checklist for deciding whether to implement an EA could also be useful, setting out what to consider in judging whether one is applicable and feasible.

      Kind regards, Amy

    • Hi all,

      I agree with the argument that the rigid application of a tool, whatever it may be, is unlikely to result in a positive outcome. This may be the rigid application of theories of change, an overused approach that has become synonymous with “doing” evaluation yet is still not used to its full potential in most evaluation reports I read; or the overvaluing of RCTs based on ideological interests; or the rigid application of the OECD-DAC criteria based on an expected paradigm. Within our field there are expected pathways to what counts as “knowledge”, and these contribute to this rigidity, particularly when tools are applied in a mechanistic way; such overuse can indeed perpetuate the bureaucratic nature of our established systems. I fully agreed with the points raised by Dahler-Larsen and Raimondo at EES in Copenhagen several years ago.

      Yet I would also argue that a tool such as an evaluability assessment should not be dismissed on this argument alone. A more useful line of inquiry may be to think about when and how EAs could be most useful. In my experience, EAs can in effect be a tool for breaking with mechanistic evaluation and bureaucratic systems (and yes, an attempt to break management’s capture of evaluation) by better defining a meaningful and useful focus for an evaluation, or by supporting a decision, based on the EA’s findings, not to do an evaluation at all. I think the challenge is at the organizational level, with the inevitable interest in standardizing and creating norms for EA use across complex realities.

      Regards, Amy

    • Many thanks, Jindra, for sharing your experience and the useful links below. I read through your EA checklist for ex-ante evaluations with interest. Your experience of very few programs having sufficient data resonates. I’d be interested in any reflections you have on stakeholder reception and use of EA results (question 3 above).

      Warm regards, Amy

    • Dear all,
       
      Thank you for your active participation and feedback! I am reading and reflecting on all the comments as they come in. I'll respond to Dreni-Mi and Daniel now, and I look forward to continued discussion.
       
      Dear Dreni-Mi,
       
      Many thanks for your posting. You’ve given a sound overview of the various phases of an evaluability assessment, a rationale for its implementation, and its benefits. Several sets of stages for evaluability assessment are mapped out in the evaluation literature, all somewhat related but stressing different aspects or values, and with slightly different categorizations. Wholey’s 8 steps come to mind, as well as Trevisan and Walser’s 4 steps. Your emphasis on cost-effectiveness and preparing for a quality evaluation resonates, I think, across all the approaches.
       
      One of our lessons from conducting evaluability assessments at CGIAR this past year was the value of a framework (checklist) and also the need for flexibility in its implementation. The jigsaw view of EAs that Rick Davies shared in another posting is, I think, an especially helpful way to think about EAs as a means of bringing together the various pieces into an approach best suited to a given context. Clearly defining objectives for an EA and responding to specific needs leads to more effective use of the framework, bringing more flexibility and nuance to the process.
       
      From the key stages you’ve outlined, what have you found the most challenging to implement? And what is your experience with use of evaluability assessment results?
       
      Kind regards,
      Amy
       
      Dear Daniel,
       
      Many thanks for your comments. I’ll respond to a few. I fully agree: evaluators should have a seat at the design table. In my experience, their participation and ability to facilitate evaluative thinking among colleagues usually lead to greater evaluability for an intervention. It can also facilitate the development of sound monitoring and evaluation planning, and the capacity to use and learn from the data these processes generate. Such a role broadens what is typically understood, in some circles anyway, about what evaluators are and what they do, which I welcome as support for the further professionalization of our field.
       
      In the 3rd edition of his Evaluation Thesaurus, Michael Scriven refers to the philosopher Karl Popper’s concept of “falsifiability” when discussing evaluability. This concept relates to the idea that there should always be a capacity for a theory, hypothesis, or statement to be proven wrong. For an evaluand (Scriven’s term for what is to be evaluated: a program, project, personnel, policy, etc.) to be evaluable, then, I understand broadly that it would be deemed falsifiable to the extent that it is designed, developed, or constructed in a way that allows evidence to be generated as “proof” of its value.
       
      The religious connotation in Scriven’s reference to a first commandment is certainly intended to underscore the importance of the concept. Evaluability as “the first commandment in accountability” strikes me as something that is owed: a justification, and ultimately a responsibility. Scriven notes that low evaluability carries a high price in terms of the cost borne. “You can’t learn by trial and error if there’s no clear way to identify the errors.” And “It is not enough that one be able to explain how one spent the money, but it is also expected that one be able to justify this in terms of the achieved results” (p. 1).
       
      I think Scriven provides further discussion of evaluability in his 4th edition. I’m traveling and don’t have access to my hard-copy library back home. Perhaps others can point to further references on this.
       
      Thoughts?
       
      Best,
      Amy

    • Thank you, Svetlana, for the opportunity to participate in this discussion. I respond to two of your questions below.

      Do you think the Guidelines respond to the challenges of evaluating quality of science and research in process and performance evaluations?

      The Guidelines appear to respond to the challenges of evaluating the quality of science and research in process and performance evaluations through a flexible and well-researched framework. I am not sure whether a single evaluation criterion captures the essence of research and development. I think the answer will be found in reflecting on the Guidelines' application in upcoming and varied evaluative exercises at CGIAR, as well as in reflecting on previous organizational experience. This may involve identifying how it is interpreted in different contexts, and whether further development of the recommended criteria should be considered for a possible second version of the Guidelines.

      How can CGIAR support the roll-out of the Guidelines with the evaluation community and like-minded organizations?

      I agree with others that workshops and/or training on the Guidelines could be a means for rolling out the Guidelines and engaging with the evaluation community. Emphasizing its flexibility and fostering reflection on its use in different organizational contexts would be productive.

      In line with my response to the first question above, I would suggest a meta-evaluative exercise be conducted once there is more organizational experience in applying the Guidelines. There would be obvious value for CGIAR, possibly leading to an improved second version. It would also be of great value to the evaluation community, with CGIAR taking an important role in facilitating continued learning through the use of meta-evaluation, which the evaluation theorist Michael Scriven has called both an important scientific and moral endeavor for the evaluation field.

      At Western Michigan University, we are engaged in a synthesis review of meta-evaluation practice over a 50-year period. We have found many examples of meta-evaluation of evaluation systems in different contexts. We assumed very little meta-evaluation was being done and were surprised to find plenty of interesting examples in both the grey and academic literature. Documenting such meta-evaluative work would further strengthen the Guidelines and their applicability, as well as add significant value to continued engagement with the international evaluation community.