Aurelie Larmoyer

Senior Evaluation Officer
WFP
Italy

Senior Evaluation Specialist in the UN system (FAO, IAEA, WFP)
Food security analysis and humanitarian assistance programme management in the field with international NGOs (1999–2003)

My contributions

  • One aspect of this investment has included developing the capacity to more efficiently and effectively mine the evidence contained in its portfolio of evaluations. This has naturally entailed looking to artificial intelligence (AI)-based options to automate text extraction.

    Seeing the high interest in AI, I am sharing some information on what we are aiming to do, along with reflections from our experience as our AI project gets started.

    Why is AI of interest? 

    Evidence-based decision-making is central to many multilateral organizations such as WFP. Evaluation is a key provider of credible evidence, generated by independent teams and backed by solid

    • Dear Muriel,

      I agree AI brings so much potential for evaluation, and many questions, all at once! 

      In the Office of Evaluation of WFP, as we have looked to become more responsive to colleagues’ needs for evidence, the recent advancements in artificial intelligence (AI) came as an obvious avenue to explore. I am therefore happy to share some of the experience and thoughts we have accumulated as we have started connecting with this field. 

      Our starting point for looking into AI was recognizing that we were limited in our capacity to make the most of the wealth of knowledge contained across our evaluations to address our colleagues’ learning needs. This was mainly because manually locating and extracting evidence on a given topic of interest, to synthesize or summarize it for them, takes so much time and effort. 

      So, we are working on developing an AI-powered solution to automate evidence search using Natural Language Processing (NLP) tools, allowing us to query our evidence with questions in natural language, a little like we do in any web search engine. Then, making the most of recent technology leaps in the field of generative AI, such as ChatGPT, the solution could also deliver text newly generated from the extracted passages, such as summaries of insights. 
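
      To make this concrete, here is a minimal sketch of what such a natural-language search could look like, using the open-source sentence-transformers library. The model name, sample passages and query are illustrative assumptions for the example, not our actual setup.

      ```python
      # Minimal sketch: semantic search over evaluation passages with embeddings.
      from sentence_transformers import SentenceTransformer, util

      # Small general-purpose embedding model (an assumption for this example).
      model = SentenceTransformer("all-MiniLM-L6-v2")

      # In practice, these would be passages extracted from the evaluation portfolio.
      passages = [
          "School feeding improved attendance rates in drought-affected districts.",
          "Cash-based transfers reduced negative coping strategies among households.",
          "Limited monitoring data constrained the analysis of nutrition outcomes.",
      ]

      # Embed the corpus once; embed each incoming question at query time.
      corpus_embeddings = model.encode(passages, convert_to_tensor=True)
      query = "What evidence do we have on school feeding and attendance?"
      query_embedding = model.encode(query, convert_to_tensor=True)

      # Rank passages by cosine similarity and keep the best matches.
      hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
      top_passages = [passages[hit["corpus_id"]] for hit in hits]
      for text in top_passages:
          print(text)

      # A generative model (e.g. ChatGPT via an API) could then be prompted with
      # these retrieved passages to produce a newly generated summary of insights.
      ```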

      We also expect that automating text retrieval will have additional benefits, such as helping to tag documents automatically and more systematically than humans can, to support analytics and reporting. AI will also give us an opportunity to direct relevant evidence to audiences based on their function, interests and location, much as Spotify or Netflix do. 
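
      As an illustration of the automatic tagging idea, the sketch below uses a zero-shot classification pipeline from the Hugging Face transformers library. The tag list, sample text and confidence threshold are hypothetical, chosen only for the example.

      ```python
      # Minimal sketch: automatic document tagging via zero-shot classification.
      from transformers import pipeline

      classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

      # A hypothetical evaluation summary and candidate tags (not an official taxonomy).
      summary = (
          "The evaluation found that cash-based transfers improved household food "
          "security but faced delivery delays in remote areas."
      )
      candidate_tags = ["food security", "cash transfers", "nutrition", "gender", "logistics"]

      # multi_label=True scores each tag independently, so a document can receive several.
      result = classifier(summary, candidate_tags, multi_label=True)

      # Keep the tags whose score clears a (tunable) confidence threshold.
      tags = [label for label, score in zip(result["labels"], result["scores"]) if score > 0.5]
      print(tags)  # e.g. ['cash transfers', 'food security']
      ```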

      Once we have a solution that performs well in the search results it returns, we hope it may then be replicable to serve other, similar needs.

      Beyond these uses that we are specifically exploring in the WFP Office of Evaluation, I see other benefits of AI for evaluation, such as:

      • Automating processes routinely conducted in evaluations, such as synthesizing existing evidence to generate brief summaries that could feed into evaluations as secondary data.
      • Improving access to knowledge and guidance, and facilitating the curation of evidence for reporting, e.g. in annual reporting exercises. 
      • Facilitating the generation of syntheses and identification of patterns from evaluation or review-type exercises.
      • Improving editing through automated text review tools to help enhance language.

      I hope these inputs are useful, and I look forward to hearing the experiences of others, as we are all learning as we go: this field is full of promise and risk, and it surely moves us out of our comfort zones.

      Best

      Aurelie

  • Disability inclusion in evaluation

    Discussion
    • Dear Judith,

      Thanks for tabling this important topic.

      Just wanted to share a resource that you may find useful for some of the questions you raise, which we are all grappling with as best we can these days. Below is a recording of an exchange session that the Office of Evaluation of WFP organized last May to discuss practical strategies for making evaluations inclusive.

      Sway (office.com)

      Hope this brings useful insights.

      Best,

      Aurelie

  • The COVID-19 pandemic has exposed and/or exacerbated many pre-existing issues worldwide. For evaluators too: since March 2020, additional questions have arisen as to how to ensure that evaluations offer useful contributions to their intended users. Driven by the need to continue supporting learning and accountability, evaluations have adopted new ways of working and have largely turned virtual. Do we know how this has affected the utility of our work? While we acknowledge the new limitations posed by the pandemic, we may also need to address this question.

    What has evaluation done until now to promote its
    • Dear Malika,

      Thank you for raising such an important question. I find it interesting in two respects:

      First, because it raises the question of how we can capture the immediate (or medium-term) effects of the COVID-19 situation on our realities. Many evaluators are grappling with this question. Some colleagues in the UN system have worked to draw up some general directions in this respect. For instance, the recent publication from the ILO Office of Evaluation (https://www.ilo.org/wcmsp5/groups/public/---ed_mas/---eval/documents/publication/wcms_757541.pdf) might offer inspiration, as it lists, in an annex, typical evaluation questions that match the need for collecting specific information relevant to COVID-19.

      I also find your question interesting because it asks how to make evaluations rapid, which we had many reasons to aim for even prior to the pandemic, and on which there is therefore past experience to build. And if our colleague Jennifer is right in underlining that evaluation does not easily lend itself to fast reaction, I think there are ways to expedite processes to meet the need for timeliness. I can share the following learning points on what worked when I aimed to conduct evaluations rapidly. First, focus: it makes a difference when someone’s time is devoted entirely to the task, while multitasking takes away the precious focus you need to get where you want fast. Second, aim for a good-enough plan: we often go round in circles preparing our evaluations and invest a lot of time in back-and-forth exchanges over them; taking a straighter line can help, starting with a rough scoping and then testing and refining your focus and approach as you go along. Third, compensate for any cut corners by engaging a few select stakeholders with strategic knowledge as a sounding board along the way.

      Of course, the COVID-19 situation complicates these rules of thumb, in particular when engagement needs to be virtual; my last piece of advice is therefore to get savvy with modern technologies for engaging by virtual means. As you report, this situation might last, so it may be worth investing in such new competences.

      Best, Aurelie

    • Dear Mustapha,

      Thank you for your post, which brings up many important topics indeed!

      To take up only a few, I would start by loudly asserting the view that monitoring and evaluation are by no means mutually exclusive and are unquestionably complementary.

      It may be that Evaluation has developed well as a practice, more so than its sister function, Monitoring. Still, a study we have done (on which we recently shared preliminary results here: https://www.evalforward.org/blog/evaluation-agriculture) did show that in many developing countries, evaluations are mostly done when supported by dedicated external funding: an indication that the bigger sister is not yet that sustainably established… 

      Your post does raise a big question that concerns me too: why has the Monitoring function not yet attracted the same donor interest? Why are monitoring systems not a number-one requirement of all donors, considering how essential they are as tools to learn from past actions and improve future ones in a timely way? As our study also revealed, before promoting evaluation, countries need to establish Results-Based Management, which starts, even before monitoring, with planning for results. 

      It is a fact that in many institutions, from the national to the international level, monitoring is still heavily underrated and underinvested in. Maybe one way forward would be to start by identifying who has a stake in ensuring that the ‘M’ fulfils its function of identifying what works and what does not, why, and under what circumstances. In this respect, we evaluators could take a role in supporting the emergence of this function within our respective spheres of influence, putting aside our sacred independence cap for a while… Would other evaluators agree?

      All the best to all,

      Aurelie

  • How are these applied by institutions concerned with agriculture, and are officials working in this sector equipped with adequate capacities and resources to carry out evaluation? 

    Knowledge seems to be scarce on these questions. Hence, the FAO Office of Evaluation and EvalForward started exploring the dynamics of evaluation within Ministries of Agriculture and their relations with other institutions that play a role.

    While the study is ongoing, the Francophone International Forum on Evaluation - FIFE2019, held in Ouagadougou from 11 to 15 November, provided an opportunity to exchange ideas on these questions with Conference participants. In a Round Table with

  • The African Evaluation Association’s 9th International Conference will be held in Abidjan, Côte d’Ivoire, from 11 to 15 March 2019. The Conference theme is Accelerating Africa’s Development: Strengthening National Evaluation Ecosystems. It aims to expand “Made in Africa” evaluation approaches and to support knowledge sharing, capacity development and networking opportunities among a wide range of organisations and individuals working on evaluation.

    EVAL-ForwARD will actively promote and support one of 12 work strands of the Conference, titled “Improving Agriculture and Food Security through Evaluation”.  

    Those interested in contributing to the conference may propose papers, roundtables, posters, exhibitions and workshops.

  • Developmental evaluation

    Discussion
    • Dear Mustapha,

      Thank you for a contribution that raises an interesting issue for the development of this Community of Practice.

      I share your hope that EVAL-ForwARD will serve practitioners and promote evaluations that are useful for refining development interventions. On the other hand, I would be more nuanced about the place of more theoretical contributions in our exchanges, which I do not believe should be restricted to an academic community: on the contrary, our exchange platform plays an important role precisely because it makes it possible to build bridges between academics and practitioners. Of course, we do not all have the same time to digest the more abstract inputs, but the opportunity is there.

      As for your substantive question on developmental evaluation, you raise an interesting point, one which applies to so many other concepts: that of differences of interpretation. How many times, when reading an evaluation journal, have I told myself that the author did not have the same understanding as me of a definition or an approach...

      If I may share, in my turn, what I believe characterizes Developmental Evaluation (DE) compared with more ‘traditional’ evaluation or M&E, my interpretation is that DE brings particular value in cases where the subject to be evaluated is still too difficult to pin down (e.g., because it is complex or innovative) to allow an evaluation on the basis of already-formulated indicators or models. The added value of DE would thus be to accompany the intervention while it develops and to test its effectiveness against indicators that the evaluator can develop as the intervention is invented, so as to provide real-time feedback despite the constraints linked to uncertainty. So it seems to me that there is a real place for this approach, which I perceive as more exploratory, perhaps less mechanical, than approaches based on theories of change known ex ante; in particular because the interventions we evaluate are often placed in contexts involving many factors, and often seek to propose innovative solutions.

      I hope that this interpretation will enrich the set of contributions on this subject and that the whole, although of a somewhat theoretical nature, can feed the reflections and practices of the members of this network.

      Best regards,
      Aurelie