RE: How to evaluate science, technology and innovation in a R4D context? New guidelines offer some solutions
Dear all,
I appreciate the CGIAR Evaluation Guidelines as a reference framework providing insights, tools and guidance on how to evaluate the quality of science, including in the context of development projects with scientific and research components. This is specifically my perspective, as an evaluator of development projects that may include research components or the development of scientific tools to enhance project effectiveness in the agricultural sector. I should preface that I have not analyzed the guidelines and related documents in depth, and that I am external to CGIAR. Nevertheless, I believe the guidelines are an important contribution.
In the absence of similar guidelines for evaluating the quality of research and science, I realize that my past analysis was somewhat scattered across the six OECD/DAC criteria, even though it encompassed most of the dimensions included in the guidelines. Under the criterion of relevance, for example, I analyzed the rationale and added value of the scientific products and the quality of their design, as well as the extent of “co-design” with local stakeholders, which the guidelines frame as “legitimacy” within the Quality of Science (QoS) criterion. Under efficiency, I analyzed the adequacy of research inputs, the timely delivery of research outputs, the internal synergies between research activities and other project components, and the cost-efficiency of the scientific products. Most of the analysis focused on the effectiveness and usefulness of the scientific tools developed, and on the potential sustainability of research results. It was more challenging to analyze “scientific credibility” in the absence of subject-matter experts within the evaluation team; this concept was assessed mostly on the basis of stakeholders’ perceptions, gathered through qualitative data collection tools. Furthermore, scientific validation of research and scientific tools is unlikely to be achieved within the common project duration of three years, so evaluations may be conducted before scientific validation occurs. The guidelines’ four dimensions are clear and useful as a common thread for developing evaluation questions. I would only place more emphasis on concepts such as the “utility” of the scientific tools developed, from the perspective of the project’s final beneficiaries; the “uptake” of scientific outputs by the stakeholders involved; and the “benefits” stemming from the research and/or scientific tools developed. In the framework of development projects, scientific components are usually quite isolated from other project activities, with few internal synergies.
In addition, uptake of scientific outputs and replication of results are often an issue. I think this is something to address explicitly through appropriate evaluative questions. For example, the QoS evaluation questions (section 3.2) do not focus enough on these aspects. EQ3 asks how research outputs contribute to advancing science, but not how they contribute to development objectives, how research findings are applied on the ground or in policy development, or what impact the delivered outputs have. In my opinion, these aspects deserve increased focus, along with practical tools and guidance. Beyond these observations, which are based on my initial reading of the guidelines, I will pilot their application in upcoming evaluations whenever the evaluand includes research components. This will help fine-tune the operationalization of the guidelines through practical experience.