Nearly 25 years of experience in the international development sector as an evaluator, manager, technical advisor, and educator, working in partnership with donors, governments, and civil society organizations in Asia, Africa, and the Middle East on development effectiveness and on quality programming and policies. Deeply interested in co-creative and evaluative processes that support self-determination and development that is sustainable for both people and planet. Holds an MA in sustainable development and is a PhD candidate in interdisciplinary evaluation studies at Western Michigan University.
Amy's development evaluation experience includes serving as:
• Doctoral candidate in Western Michigan University's Interdisciplinary PhD Program in Evaluation, with completion anticipated in 2024. Research interests include meta-evaluation and the professionalization and internationalization of evaluation;
• Assistant Professor at SIT Graduate Institute, designing and teaching graduate-level theory and practice-based courses on evaluation in Washington DC, India, and Jordan;
• Independent evaluator since 1997, advising international agencies and donors on social development issues and programming. Clients include FCDO, ILO, USDOL, UNDP, CGIAR, the Rockefeller Foundation, and the Adaptation Fund;
• Internal evaluator as Deputy Director of Quality Programming (2008-2012) in Bangkok, leading an international team effort to develop an M&E framework for Hewlett-Packard's flagship global entrepreneurship education program.
Active member of EvalPartners (member of EVALSDG group)
Active member of AEA (member of the International Working Group)
Managerial experience in directly negotiating with and reporting to bilateral donors (USPRM, CIDA, SDC, GIZ, AusAID), multilateral agencies (UNICEF, World Bank), and corporate donors (HP, Adidas, Deutsche Bank); in coordinating with military bodies (KFOR in Kosovo); and in partnering with civil society organizations (Cambodia, Laos, Thailand, China, and India).
In-country work and living experience in Bangladesh, Cambodia, China, Japan, Kosovo, Lao PDR, Thailand, and USA; additional work experience in Egypt, Ethiopia, India, Israel, Jordan, Kenya, Nepal, Philippines, Sri Lanka, Turkey, Uganda, and Vietnam.
Mandarin Chinese proficiency; basic to intermediate skills in French, Khmer, Lao, Spanish and Thai.
Amy Jersild
PhD Candidate and evaluation consultant
Western Michigan University
United States of America
Amy Jersild
PhD Candidate and evaluation consultant, Western Michigan University

Thank you all for an interesting and engaging dialogue on evaluability assessments. Please check back soon for an excellent summary of our discussion drafted by Gaia Gullotta of CGIAR. It will be provided in English, Spanish, and French.
Cheers!
Amy Jersild
PhD Candidate and evaluation consultant, Western Michigan University

Many thanks, Rick, for your comments. Such historical data on past ratios would be interesting to examine. And yes, budget size may be one of the items on a checklist considered as a proxy for complexity, but I agree it should not be the only one in depicting complexity, for the reason you pointed out. Your suggestion about depicting nodes in a network makes sense to me. The more numerous the possible causal linkages and sources of data, the higher the score, which may then lead to a “yes” decision on an EA.
Perhaps such a checklist might also include a follow-on set of items that initially explore the four primary areas depicted in the jigsaw diagram you shared below (https://mande.co.uk/wp-content/uploads/2022/05/Austria-diagram.png): institutional and physical context, intervention design, stakeholder demand, and data availability. Such a checklist may then not only guide the decision on whether to conduct an EA, but also help focus the EA on its priority areas, making it a more cost-effective exercise.
I’d be interested to hear from others on this forum who manage evaluations/EAs. How do you decide in your organization whether or not to conduct an EA? And how are decisions made about how to focus an EA?
Regards, Amy
Amy Jersild
PhD Candidate and evaluation consultant, Western Michigan University

Thank you all for your participation. There’s been a lot of discussion on the pros and cons of EAs, with strong perspectives on either side of the debate. As a group we have varied experiences with EAs: some of us have implemented them and some have not; some of us have read EA reports and some have not. And our perspectives range from seeing EAs as an unnecessary use of scarce M&E resources to identifying specific benefits for their use in planning and maximizing the outcome of an evaluation.

We will wrap up this discussion by September 10th. Before then, I’d like to invite more reflection on when to implement an EA and when not to - the question of both cost-benefit and perceived benefit to stakeholders, relating to questions 1 and 2 above. I would suggest that EAs need to be proportionate to the cost of a subsequent evaluation, both as good use of financial resources and for stakeholder buy-in. Does anyone have thoughts to contribute on this, either in terms of actual cost ratios or on organizational policy for when and how EAs should be implemented? I know of some UN agencies that have made EAs mandatory for programs with budgets over a specified amount. It seems to me that in addition to a checklist for implementing an EA, which provides important concepts to think about and address, a checklist for deciding whether to implement an EA could also be useful, setting out what to consider in determining whether one is applicable and feasible.
Kind regards, Amy
Amy Jersild
PhD Candidate and evaluation consultant, Western Michigan University

Hi all,
I agree with the argument that the rigid application of a tool, whatever it may be, is unlikely to result in a positive outcome. This may be the rigid application of theories of change, an overused approach that has become synonymous with “doing” evaluation yet is still not used to its full potential in most evaluation reports I read. Or the overvaluing of RCTs based on ideological interests. Or the rigid application of the OECD-DAC criteria based on an expected paradigm. There are expected pathways to what “knowledge” is to be within our field that contribute to this rigidity, particularly when tools are applied in a mechanistic way, and their overuse can indeed perpetuate the bureaucratic nature of our established systems. I fully agree with the points raised by Dahler-Larsen and Raimondo at the EES conference in Copenhagen several years ago.
Yet I would also argue that no tool, including the evaluability assessment, should be dismissed on this basis. A more useful line of inquiry may be to think about when and how EAs could be most useful. In my experience, EAs can in effect be a tool for breaking with mechanistic evaluation and bureaucratic systems - and yes, an attempt to break management's capture of evaluation - by better defining a meaningful and useful focus for an evaluation, or by informing a decision not to evaluate at all based on the EA's findings. I think the challenge lies at the organizational level, with the inevitable interest in standardizing and creating norms for EA use across complex realities.
Regards, Amy
Amy Jersild
PhD Candidate and evaluation consultant, Western Michigan University

Many thanks, Jindra, for sharing your experience and the useful links below. I read through your EA checklist for ex-ante evaluations with interest. Your experience of very few programs having sufficient data resonates. I’d be interested in any reflections you may have on stakeholder reception and use of EA results, based on your experience (question 3 above).
Warm regards, Amy