RE: How to evaluate science, technology and innovation in an R4D context? New guidelines offer some solutions

Thank you, Svetlana, for the opportunity to participate in this discussion. I respond to two of your questions below.

Do you think the Guidelines respond to the challenges of evaluating quality of science and research in process and performance evaluations?

The Guidelines do appear to respond to these challenges by offering a flexible and well-researched framework. I am not sure, however, whether a single evaluation criterion can capture the essence of research and development. I think the answer will be found by reflecting on the framework's application in the varied evaluative exercises upcoming at CGIAR, as well as on previous organizational experience. This may involve identifying how the criterion is interpreted in different contexts, and whether further development of the recommended criteria should be considered for a possible second version of the Guidelines.

How can CGIAR support the roll-out of the Guidelines with the evaluation community and like-minded organizations?

I agree with others that workshops and/or training could be a means of rolling out the Guidelines and engaging with the evaluation community. Emphasizing the Guidelines' flexibility and fostering reflection on their use in different organizational contexts would be productive.

In line with my response to the first question above, I would suggest conducting a meta-evaluative exercise once there is more organizational experience in applying the Guidelines. There would be obvious value for CGIAR, possibly leading to an improved second version. It would also be of great value to the evaluation community, with CGIAR playing an important role in facilitating continued learning through meta-evaluation, which the evaluation theorist Michael Scriven has called both an important scientific and moral endeavor for the evaluation field.

At Western Michigan University, we are engaged in a synthesis review of meta-evaluation practice over a 50-year period. We have found many examples of meta-evaluation of evaluation systems in different contexts. We had assumed very little meta-evaluation was being done and were surprised to find plenty of interesting examples in both the grey and the academic literature. Documenting such meta-evaluative work would further strengthen the Guidelines and their applicability, as well as add significant value to continued engagement with the international evaluation community.