RE: How to evaluate science, technology and innovation in a R4D context? New guidelines offer some solutions

These reflections are based on my experience as a co-Principal Investigator in the Interim Evaluation of Project REG-019-18, the Nudging for Good project.

The project entails a research partnership between the International Food Policy Research Institute (IFPRI), Pennsylvania State University/Food and Agriculture Organization (FAO), the University of Ghana, the Thai Nguyen National Hospital, Thai Nguyen University of Pharmacy and Medicine, and the National Institute of Nutrition in Viet Nam. The team spans a range of disciplines, including epidemiology, nutrition, economics, and machine learning, and combines this expertise with cutting-edge experience in Artificial Intelligence (AI) technology.

The research partnership was founded on IFPRI's food systems experience, which has shown that the timely provision of information can effectively address the knowledge constraints that influence dietary choices. IFPRI also leads the research and takes on the responsibilities of data analysis and reporting on the results. Pennsylvania State University/FAO was tasked with extending its existing AI platform with additional functionality for dietary assessment and with the capability to nudge adolescents towards improved dietary practices. The country teams – the University of Ghana, the Thai Nguyen National Hospital, Thai Nguyen University of Pharmacy and Medicine, and the National Institute of Nutrition in Viet Nam – are responsible for the in-country validation and feasibility testing of the AI-based technology.

The research entails developing, validating, and testing the feasibility of AI-based technology that allows for accurate diagnostics of food intake. It is based on the hypothesis that food consumption and diet-related behaviours will improve if adolescents are provided with tailored information that addresses their knowledge-related barriers to healthy food choices.

Given the nuances of this research partnership and the objectives of the evaluation, we adopted Relevance and Effectiveness from the OECD/DAC evaluation criteria and slightly redefined them to align with the Research Fairness Initiative (RFI). Why the RFI?

Lavery & IJsselmuiden (2018) and other scholars highlighted the fact that structural disparities like unequal access to research funding among researchers and research institutions and differences in institutional capacity capable of supporting research partnerships shape the ethical character of research, presenting significant challenges to fair and equitable research partnerships between high-income countries (HICs) and low and middle-income countries (LMICs). 

In response to these challenges, the Research Fairness Initiative (RFI) was created and pilot-tested with leading research institutions around the world to develop research and innovation system capacities in LMIC institutions through research collaboration and partnerships with HIC institutions (COHRED, 2018c). As a reporting system and learning platform, the RFI increases understanding and sharing of innovations and best practices, while improving the fairness, efficiency, and impact of research collaborations with institutions in LMICs (COHRED, 2018c). The RFI is thus geared towards supporting improved management of research partnerships, creating standards for fairness and collaboration between institutions and research partners, and building stronger global research systems capable of supporting health, equity, and development in LMICs (COHRED, 2018a). Reporting on research fairness has also been positively associated with opportunities to measure the relationship between the quality of research partnerships and the impact of the research itself, creating a platform for program planning, design, management, and evaluation that could have a significant impact on the ethics and management of research programs (Lavery & IJsselmuiden, 2018).

Lavery & IJsselmuiden (2018) emphasized that evaluative efforts of research fairness, therefore, need to clarify and articulate the factors influencing fairness in research partnerships, apply a methodology capable of operationalizing the concept of research fairness and through the collection of systematic empirical evidence, demonstrate how research partnerships add value for participating organizations.

Based on the above premises, and having read through the CGIAR QoR4D Evaluation Guidelines, my reflections are as follows:

  1. Reflecting on the evaluation questions we used for the Nudging for Good project, I find that the three key evaluation questions recommended in the Guidelines are appropriate for evaluating the Quality of Science (QoS).
  2. The four interlinked dimensions – Research Design, Inputs, Processes, and Outputs – are clear and useful, since they capture a more exploratory, less standardized way of doing academic evaluation: evaluative inquiry.
  3. Training and development, as well as closer engagement among the relevant stakeholders, could be an appropriate starting point for CGIAR to support the roll-out of the Guidelines.