How has the COVID-19 pandemic affected evaluation design?

©FAO/Antonello Proto

Evaluation in times of Covid-19


The COVID-19 pandemic and subsequent restrictions on travel and face-to-face meetings have had a major impact on evaluation design.

Prior to the pandemic, travel and face-to-face meetings with stakeholders gave evaluation managers a degree of flexibility and control over evaluations. Since then, managers have had to adjust evaluation design to ensure remote oversight and quality control. I recently conducted a complex evaluation without travel or face-to-face meetings with stakeholders and evaluation team members. Since the restrictions are likely to continue, I would like to share my experience.

About the evaluation

The evaluation looked at FAO’s Technical Cooperation Programme. This programme allows FAO to draw on its regular programme resources to respond to the emergency needs of member countries. It is implemented in developing member countries in all regions. Our evaluation covered both technical and operational aspects of the programme, including relevance, effectiveness, efficiency, fund allocation and distribution, governance and management.

Team composition

I kept the core team small to minimize the costs and risks of remote communication across different time zones. The core team consisted of an Evaluation Manager, a Team Leader and an Evaluation Analyst. In addition, a small team of analysts conducted quantitative analyses during the inception phase, and eight local consultants conducted 11 country case studies during the main evaluation phase.

Adapting the evaluation design

Ensuring consistency among the data collected in different countries and regions was a major challenge. We gave greater weight to desk analyses. Virtual interviews were set up as systematically as possible in terms of the selection of countries, projects and stakeholders in each region to avoid sample bias. Questions were formulated as consistently as possible across countries and regions.

The evaluation was conducted following these steps:

  • Comprehensive document review during the inception phase.
  • In-depth data analysis by a team of analysts during inception.
  • Systematic surveys during the main evaluation phase (one survey for internal stakeholders and another for external stakeholders, sent to all member countries).
  • Country case studies by local consultants during the main phase.
  • Virtual interviews (one-to-one and group) during the inception and main evaluation phases.
  • Report writing.

What worked: advance organization and structured design

We collected data through desk reviews as much as possible before the virtual interviews. We then validated and built on the data through the interviews.

Key questions were sent to the stakeholders before the interviews on Zoom. This worked well, in particular with high-level officials. They gathered information internally before the interview, and this enabled us to manage our time more efficiently. Follow-up questions were addressed during an additional Zoom meeting or via e-mail with relevant staff.

Case studies were structured to allow each local consultant to ask the same set of questions of stakeholders and to provide the core evaluation team with the same set of information in a consistent format for consolidation. Consultants were briefed via Zoom before conducting the study. Individual briefings worked well because the experience and capacity of the consultants varied. 

We applied the same criteria in selecting countries and projects for closer examination in each region. Selection was done in consultation with regional partners. We set the number of case studies in each region in proportion to the size of its project portfolio, as sketched below.
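As a minimal illustration of this kind of proportional allocation (the regions and portfolio figures below are invented for the example, not taken from the evaluation), the calculation might look like this:

    # Hypothetical illustration: allocate case studies in proportion to
    # regional portfolio size. All figures are invented for this sketch.
    portfolio = {
        "Region A": 240,   # number of projects in the regional portfolio
        "Region B": 180,
        "Region C": 120,
        "Region D": 60,
    }

    total_cases = 11                      # total country case studies to assign
    total_projects = sum(portfolio.values())

    # Proportional share per region, rounded, with at least one case each.
    # Rounding may require a manual adjustment so the total still equals 11.
    allocation = {
        region: max(1, round(total_cases * size / total_projects))
        for region, size in portfolio.items()
    }
    print(allocation)

In practice the final numbers would also be adjusted in consultation with regional partners, as noted above.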

We kept the software tools for virtual group meetings relatively simple, because the level of computer literacy varied among stakeholders and many senior government officials were not familiar with such tools. We used Zoom, Microsoft Teams and Skype for Business to conduct video calls, and sometimes shared screens from our side.

As the evaluation gave greater weight to desk analyses, we were able to produce separate documents from their results. The consistent design applied in different countries and regions made it easier to undertake cross-country and cross-regional analyses.

Capacities and skills to be built

Relying on national capacities is crucial. As virtual communication without travel becomes the new normal, ensuring the quality of field data collection and sound data analysis is essential for a successful evaluation. It is more critical than ever to strengthen national capacity in data collection, validation and analysis, as well as the capacity for self-evaluation. Using quantitative data helps to mitigate the risk of inconsistent interpretation of qualitative data collected in different locations by different people. Evaluators should therefore strengthen their skills in sampling and quantitative data analysis.

In summary

A fully virtual evaluation is possible. Structured design is the key. There will be more opportunities for collaborating with local partners and applying more rigorous data analysis.