I enter this rich discussion from the point of view of my experience managing the ongoing evaluation of the CGIAR GENDER (Generating Evidence and New Directions for Equitable Results) Platform, which is being coordinated by IAES. From this vantage point, I explore questions 2, 3 and 1 in some detail, beginning with an overview of the evaluation context and design, and capping with one key takeaway from applying the guidelines.
Background
The guidelines present four interlinked dimensions (research design, inputs, processes and outputs) that consider the many variables in the delivery and uptake of high-quality research, framed by the QoR4D frame of reference and the OECD DAC criteria. Their application is by no means linear. The ongoing GENDER Platform evaluation served as a test case. The evaluation aims to assess the Platform's progress, document lessons learned, and provide forward-looking recommendations as it transitions to an expanded mandate as an impact Platform.
Although the evaluation was not framed around an explicit "quality of science" (QoS) criterion, the guidelines were a useful toolbox for situating QoS in an agricultural research for development (AR4D) context while answering the central evaluation questions under five DAC evaluation criteria: relevance, effectiveness, efficiency, coherence, and sustainability. The Platform evaluation, conducted by a multidisciplinary team led by an evaluator, integrated participatory, theory-driven, utilisation-focused and feminist approaches and deployed mixed methods in data collection.
By way of context, the GENDER Platform synthesizes and amplifies research, builds capacity, and sets directions to enable CGIAR to have an impact on gender equality, opportunities for youth, and social inclusion in agriculture and food systems. The Platform is organized around three interconnected modules (Evidence, Methods and Alliances). The guidelines were applied to the Evidence Module, which aims to improve the quantity and quality of gender-related evidence.
Mechanics
In terms of the evaluation design, in line with the inception report, the evaluation team developed sub-evaluation matrices that addressed the impact pathways and results frameworks of the Platform's modules. These sub-matrices fed into an overarching parent evaluation matrix. The matrices, the overarching matrix and other outputs were reviewed by a team of external peer reviewers, including some members of IAES's evaluation reference group, and by the Platform team to strengthen their validity. The reviews informed subsequent revisions of the documents.
The four QoS dimensions were integral to evaluating the Evidence module: each dimension was mapped to the focal evaluation criteria. The subject matter experts who led the Evidence module assessment applied this mapping systematically, assessing the module in a nested manner. Each of the three module assessments then fed into the overarching Platform evaluation in a synergistic manner.
Takeaway
From this test case, one of several takeaways is that the convergence of different lenses is pivotal in applying the guidelines. The multidisciplinary evaluation team benefited from both an "evaluator lens", being led by an evaluator, and a "researcher lens", with subject matter experts who were (gender) researchers leading the assessment of the Evidence module. In applying the guidelines, the team straddled both perspectives to unpack the central evaluation questions mapped to the four QoS dimensions. Although multidisciplinary evaluation teams may not be feasible in every context, such multidisciplinarity can prove valuable when applying the guidelines. It is essential, however, that such teams invest sufficient time in capacity sharing and cross-learning to shorten the learning curve needed to converge on effectively assessing QoS, whether as a standalone criterion or mainstreamed across the standard OECD DAC criteria as was done in this case. In either case, the guidelines (and derivative user-friendly products) can serve as a ready-to-use resource.
High-quality research can be as challenging to assess as it is to deliver. Researchers, program managers, and other actors may also find the guidelines useful as a framing tool for thinking through evaluator perspectives at the formative and/or summative stages of the research or programming value chain, supporting more targeted implementation and programming strategies. Applying the guidelines in process and performance evaluations across different contexts and portfolios will reveal insights to further strengthen and refine the tool.
Finally, the GENDER Platform evaluation report and the Evidence module assessment that detail the application of the guidelines are soon to be released by IAES.