Anna Maria Augustyn

International consultant

Experienced international consultant in evaluation, agriculture, environment, research, technology and innovation. Over my 15+ year career, I have collaborated with various clients on the efficient acquisition and delivery of financial instruments targeting investments in the food, agriculture and environment sectors around the world. These have included international agencies (such as the EU, IFAD and FAO), non-governmental organizations and private sector companies. As an evaluator, I have gained experience working with rural communities and decision makers at all administrative levels. Addressing development challenges by building stakeholder capacities, MEL and participatory approaches are at the heart of my practice. I like to seek out-of-the-box solutions to complex problems and to connect the dots.

My contributions

    • Dear Muriel and Colleagues,

      Thank you for the questions and insights. I'd like to share some experiences I gained while working on a large database (thousands of projects) from which a portfolio had to be extracted for an impact evaluation. The methodology utilized machine learning algorithms, a branch of AI.

      The approach was two-fold: 1) a machine learning algorithm was developed by experts, and 2) a semi-manual search was performed. In the first case, the portfolio turned out to be smaller than expected, but the projects were very precise and on the topic of interest. Yet the portfolio was too small to produce robust statistics. In the second approach, the portfolio was much bigger, but many projects had to be removed from the dataset as they were only marginally related to the topic of interest. Expert guidance was needed to define the keywords and refine the portfolio, and a programming expert was needed to develop a customized application. Subsequent activities using language-based processing of the projects and available evidence on the web (web scraping, including social media) proved very fruitful.
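      The semi-manual, keyword-based search described above can be illustrated with a minimal sketch. The field names, keyword list and ranking step here are purely illustrative assumptions, not the actual application or data from the study:

```python
# Illustrative sketch of keyword-based portfolio extraction (not the actual tool).
# Assumes projects are records with "title" and "description" fields; the expert-
# defined keyword list is hypothetical.

def extract_portfolio(projects, keywords, min_hits=1):
    """Return projects whose text matches at least min_hits expert keywords."""
    selected = []
    for project in projects:
        text = (project.get("title", "") + " " + project.get("description", "")).lower()
        hits = sum(1 for kw in keywords if kw.lower() in text)
        if hits >= min_hits:
            selected.append((project, hits))
    # Rank by keyword hits so marginal matches surface at the bottom of the list,
    # where an expert can review and remove them, as described above.
    selected.sort(key=lambda pair: pair[1], reverse=True)
    return [project for project, _ in selected]

# Hypothetical example records
projects = [
    {"title": "Climate-smart irrigation", "description": "drip irrigation for smallholders"},
    {"title": "Road construction", "description": "rural road engineering"},
]
portfolio = extract_portfolio(projects, ["climate", "irrigation"])
# portfolio contains only the climate-smart irrigation project
```

      A broader portfolio (the second approach above) corresponds to a low `min_hits` threshold with many keywords; tightening the threshold trades recall for precision, which mirrors the trade-off between the two approaches described.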

      The following methodological challenges could be observed:

      • Language bias - the approach is more effective where English dominates (in project reporting, media and other communications) and in countries that actively use it in daily life. Semantic complexity, which can differ greatly across languages, requires different algorithms, some more sophisticated than others.
      • Project jargon - this can vary greatly from project to project, and some buzzwords are used interchangeably. Different donors also use different wording in their agendas, which needs to be reflected when designing the algorithms. A project can be classified as climate-related yet be much more focused on construction engineering, water, waste, etc., which also affects how the machine works with the semantics.
      • Availability of data on the web - data is likely to be more abundant for younger projects than for older ones. It can also be disproportionate, depending on the content each project produces and shares.
      • Black-box phenomenon - at some point, evaluators may lose control over the algorithms, which can pose challenges to security and governance.
      • Database architecture - these considerations should already be made when developing datasets and databases for reporting purposes during project implementation. The structure and content of a database, including errors such as typos, are of paramount importance for the efficiency of working with AI.
      • Costs - as open-source software can pose security challenges, it may be helpful to invest in customized app development and support from IT experts.

      To conclude, I found AI very useful where large datasets and portfolios were available for analysis and where web data was abundant. It can help greatly, but at the same time it requires good quality assurance as well as dedicated expertise.

      I am concerned about privacy and security in using AI. It is already difficult to harmonize approaches in international cooperation, especially with projects from different donors and legal systems at the international, national or even institutional level. But we should give it a try!

      Best wishes,

      Anna Maria Augustyn


  • A ToC typically records the causal linkages between a chain of activities, results, outcomes and impacts, together with their underlying assumptions. This type of framework is often developed at the design stage of an intervention and followed throughout its implementation.

    As projects and programmes are implemented in real life, with all its complexity, the ToC needs to be reviewed on a regular basis. This can be particularly important when interventions come up against unexpected challenges, such as the outbreak of COVID-19, civil unrest, price fluctuations or natural disasters. Evaluators can incorporate any such factors into the ToC review process and

    • Dear EvalForward Community,

      Many thanks to the contributors to the discussion so far: Tom Archibald, Brian Belcher, Harriet Maria Matsaer, Hayat Askar, Moussa Coulibaly, Silva Ferretti, Seda Kojoyan, Nelson Godfried Agyemang, Nasser Samih Quadous and Alan Ferguson.

      I’m happy to see many interesting experiences and insights into the topic of ToC review. Thank you also for the useful document resources, video and links.

      The following main points emerged from the discussion:

      1.    There is a shared understanding that the ToC review is a beneficial exercise for projects, as it helps to better capture the underlying assumptions and identify the rationale behind the success or failure of specific interventions. Organizations use these reviews as a learning tool that can help improve project implementation or the design of follow-up projects.

      2.    As projects and programmes operate within complex systems, so does the ToC. There is a visible challenge in reconciling the linear approach to measuring progress with the systems thinking that better captures this complexity. Against this background, donors' evaluation policies usually tend to prefer a more focused and fragmented picture of project implementation.

      3.    ToC reviews are often performed within project/programme evaluations; however, there are practical issues with implementing the suggested changes. As the logframe or other frameworks supporting project implementation are rather fixed at the project's outset, it is difficult to introduce changes in its course.

      Concerning my last point, I have a further question to consider:

      Are you familiar with any evaluation cases where the ToC review resulted in changing the expected outcome or impact indicators of a project? How difficult was it to introduce those changes?

      I welcome your comments and any further links or documents.

      Best wishes,

    • Dear Seda and Colleagues,

      Thank you for this interesting discussion topic. The TAPE tool developed by FAO looks very promising, and it would be interesting to see whether it could be embedded into evaluations of projects or programmes targeting agroecological transitions.

      The tool is very much rooted in systems thinking and, because of this, it offers an alternative to mainstream evaluation approaches. It may be helpful to investigate how it could be integrated with popular frameworks such as the OECD DAC criteria, Theory of Change, Intervention Logic and Logframe approaches. In my view, it would be best to use the suggested indicators already at the project design stage.

      On the other hand, the proposed indicators could be helpful for facilitation purposes in rural communities. For instance, when I worked with project beneficiaries on developing indicators in a participatory manner, I sometimes missed a good background template of indicators to inspire them. The TAPE tool could be a good resource for this.

      In my experience, farmers and rural stakeholders often do not see a clear difference between agroecology and other system-type approaches, such as conservation or carbon farming. The TAPE tool could be used to articulate the similarities and differences more clearly.

      Best wishes,

      Anna Maria Augustyn

  • MEL provides a helpful framework and tools to accompany the implementation of targeted interventions, with a view to improving agricultural sustainability.

    The MEL system fosters continuous evaluation and learning, which enables the adaptive management of transformational projects. It requires a systematic effort to measure implementation progress and simultaneously enhance continuous and real-time learning among those involved, be they farmers and other rural dwellers, civil society representatives, researchers, policymakers or evaluation practitioners. It builds on a variety of tools, approaches and indicators to assess results, integrate lessons and improve impact. It thus supports improvements in project performance.[1]

    Yet, the

    • Dear David and Colleagues,

      Thank you very much for the interesting discussion topic. Below, I would like to share some insights from my practice as an evaluation consultant and researcher. I have worked on a number of assignments from the local to the global level, involving surveys with farmers and other rural people in diverse geographic contexts. My most recent project focused on the capacities of evaluation stakeholders in multi-actor projects targeting agricultural innovation. I have a strong background in sociology and psychology, which also shapes my approach to surveying.

      1. Striking a balance between depth and length of assessment: 
      • How can the burden on smallholder farmers be reduced during M&E assessments?

      This usually depends on the context. For instance, I have interviewed farmers who were very interested in chatting with me, both about the survey questions and unrelated topics. It is important to recognize their needs and the issues they face, which may often be different from what we expect as evaluators. Some people are more or less busy, introverted or extroverted, and this can also affect their eagerness to engage in the task. I normally strive for a balance between their needs and mine. At times, one may need to compromise by skipping some questions in the survey. This could be reflected at an earlier stage - evaluation design - when decisions are made on direct and proxy indicators.

      • What are the best ways to incentivize farmers to take part in the survey (e.g. non-monetary incentives, participation in survey tailoring, in presentation of results)?

      It can be helpful to ask what their evaluation needs are: a problem they want to solve, with which evaluation and data could help. These may be quite different from what the evaluators intend, so one should try to negotiate and optimize the evaluation design. It is helpful to engage farmers in defining the scope of the evaluation, relevant questions and indicators. For instance, I once ran a workshop where participants were presented with a list of possible indicators and could rate those they considered most relevant. The result was quite different from what the evaluators anticipated. Non-monetary incentives are also helpful. I remember bringing a box of fine chocolate from my home city to the farmers with whom I stayed during the survey work. They helped me identify other survey participants (snowballing) and at the end also gave me eggs from their farm to take home. Concerning monetary incentives, I always fear the Hawthorne effect, i.e. the increased performance of respondents under the pressure of being studied and rewarded.

      2.    Making findings from M&E assessments useful to farmers:
      • Based on your experience, what could be the most effective ways to communicate results from the sustainability assessment to farmers (e.g. field visits and peer learning, technical information workshop)? What kind of communication materials (e.g. briefs, leaflets, others) are most appropriate to support knowledge sharing events?

      Definitely, P2P learning is very useful. This way, people can exchange with each other using the same language. As evaluators, we often tend to communicate differently from farmers, hence skilled facilitation is usually a better option than a top-down presentation of results. It is good to combine a facilitated discussion, a field trip and an informal get-together. In addition, various dissemination channels can be helpful, such as radio, videos or leaflets. In my experience, visual communication is quite effective. I remember evaluating a project where farmers had trouble recognizing grapevine diseases that already existed in their area. They did not know the actual names of the diseases, but pictures helped them recognize them.

      • Do you have experience in comparing results among farmers in a participatory way? What method have you used to do this? Was it effective?

      I remember a visioning exercise where evaluation results were presented and further elaborated. It was a mid-term project evaluation in which people who had earlier been interviewed (farmers and other rural community members) participated in the event, and some contributed their stories. Based on this, a visioning exercise was run by external facilitators, intended to help improve the project and plan other activities for the community's future. Various methods were used, including the facilitator's toolbox of sticky notes, flipcharts and others.

      • How can the results be used for non-formal education of farmers (e.g. to raise awareness and/or build capacity on ways to increase farm sustainability)?

      In principle, the evaluation results need to be translated into the farmers' language. Then they can be used in many ways through capacity-building activities. In my experience, forms of P2P and experiential learning are most effective in maximizing the uptake of evaluation results at the farm level. Sometimes the broader enabling environment of the evaluation also needs to be considered; for instance, farmers may lack incentives to change their practices despite increased awareness of an issue. It is important to choose the right means of communication, which can differ across countries and regions and depends on the literacy of farmers and their community leaders.

      With best wishes from Budapest,

      Anna Maria Augustyn


      LIAISON2020 | Optimising interactive innovation