RE: How to evaluate science, technology and innovation in an R4D context? New guidelines offer some solutions

Thank you to all the contributors, both those new to the Guidelines document and those already familiar with it or with its various supporting knowledge products. Here is a summary of the discussion, structured around core themes.

Reflections on the Guidelines: content

Most participants agreed that the new Guidelines offer solutions for evaluating the quality of science in an R4D context, describing them with terms such as well-researched, useful, clear, adaptable and flexible. A couple of contributors emphasized the importance of flexibility in seeking a middle ground, and the potential for applying the Guidelines in other organizations. Another contributor praised the Guidelines for providing an interesting conceptual framework, a flexible guide, and a compendium of methods and questions that would also be useful in other evaluation contexts.

The value of a designated evaluation criterion of Quality of Science 

There was also consensus that the four QoS evaluation dimensions (design, input, process and output) were clear and useful, with well-defined indicators, especially when using a mixed-methods approach. One contributor noted that the dimensions capture a more exploratory, less standardized way of doing evaluations at the R4D nexus, enriching the depth of evaluative inquiry. Another contributor emphasized the building and leveraging of partnerships under the ‘processes’ dimension. A further contributor was excited about using the framework to design a bespoke evaluation system for her department. In addition, the three key evaluation questions recommended for evaluating QoS were considered appropriate for R4D projects.

In the context of the ongoing evaluation of the CGIAR GENDER Platform, a contributor noted the usefulness of the Guidelines as a toolbox in an Agricultural Research for Development (AR4D) context, helping to situate QoS while assessing the key questions under the five DAC evaluation criteria (relevance, effectiveness, efficiency, coherence and sustainability). A key lesson from the evaluation team in applying the Guidelines was that they straddled both the evaluator’s lens and the researcher’s lens, drawing on subject-matter experts to unpack the central evaluation questions mapped along the four QoS evaluation dimensions.

Several contributors requested clarity on whether the Guidelines are useful for evaluating development projects. The Guidelines were developed for evaluating R4D, on the premise that co-designed research would be implemented in partnership with development stakeholders who would then be in a position to scale innovations for development impact. While framed around R4D interventions, we consider the Guidelines flexible enough to be adapted for evaluating development projects with science or research elements: the four dimensions for evaluating QoS provide the scope to bring these elements out. A recent CGIAR workshop discussed the retrospective application of the Guidelines to evaluations of development interventions through two case studies: AVACLIM, a project implemented by FAO, and the Feed-the-Future AVCD-Kenya project led by ILRI. Both case studies showcased the wide applicability of the Guidelines.

Several contributors also raised the importance of evaluating impact. While the scope of work of CGIAR’s Independent Evaluation Function does not extend to evaluating impact, the Guidelines consider the possibility of assessing impact towards the SDGs and beyond (see Figure 6). Likewise, in other contexts and organizations there may be wider windows for integrating a focus on impacts. The new Guidelines could be deployed three to five years after the finalization of an intervention to assess progress in the uptake of technologies.

Echoing the 2022 discussion, some contributions highlighted an inclusive or beneficiary focus in evaluations, with emphasis on communities, who might also be important stakeholders in research and innovation. In a development or R4D intervention, a stakeholder analysis makes it possible to identify beneficiaries as key stakeholders, and the ‘process’ and ‘outputs’ dimensions would allow a nuanced view of their participation in, and benefits from, successful research and development activities.

Facilitating Learning from Implementation and Uptake of the Guidelines

Contributors raised issues related to the roll-out or use of the Guidelines, including:

  • Whether the single evaluation criterion of quality of science sufficiently captures the essence of research and development;
  • The usefulness of further clarifying the differences between process and performance evaluations;
  • The need to include assumptions, specifically those that have to hold for the outputs to be taken up by the client; 
  • The importance of internal and external coherence;
  • The need to define appropriate inclusion and exclusion criteria when designing research evaluations;
  • The importance of defining the research context, which is given priority in the revised IDRC RQ+.

Several suggestions were made on how CGIAR can support the roll-out of the Guidelines with the evaluation community and like-minded organizations. Contributors also highlighted the need to build capacity to use the new Guidelines, including through training sessions and workshops, online resources (webinars, collaborative platforms), mentoring of partners, and piloting of the Guidelines in case studies and upcoming evaluations. In particular, capacity development of relevant stakeholders to understand and use the Guidelines would support wider use and further engagement with the evaluation community.

One contributor suggested conducting a meta-evaluation (perhaps a synthesis) of the usefulness of the Guidelines once CGIAR has used them to evaluate the current portfolio of projects. Remarkably, this is already being done retrospectively with the previous portfolio of 12 major programs (implemented from 2012 to 2021), with notable improvements in the clarity and definition of outcomes. Further application of the Guidelines in process and performance evaluations across different contexts and portfolios will yield more insights to strengthen and refine this tool.