Natascia Palmieri

Evaluation consultant
Freelance
Italy

I am a social anthropologist with extensive experience (22 years) in the field of development cooperation and 10 years of experience in monitoring and evaluation. I have consolidated skills in i) formative and summative mid-term, terminal, and impact evaluations (thematic and strategic evaluations, project and programme evaluations); ii) design and implementation of project-related M&E frameworks; iii) capacity development on Monitoring, Evaluation, Accountability and Learning (MEAL) issues; and iv) ex-ante evaluation of project proposals. These assignments have been carried out on behalf of the Food and Agriculture Organization of the United Nations (FAO), the European Commission (DG DEVCO, DG NEAR - ENPI and ENI CBCMED), the French Development Agency (AFD), the International Labour Organization (ILO), the Italian Ministry of Foreign Affairs and the Italian Agency for Development Cooperation (AICS), several NGOs (Ricerca e Cooperazione, Mani Tese, Volontariato Internazionale per lo Sviluppo, among others) and consultancies (ARS Progetti, ADE, Eptisa, Agristudio).
I have been both team leader and team member of several evaluation assignments in Sub-Saharan Africa, the Middle East and North Africa, South Asia and Latin America.

My contributions

    • Dear all,

      I appreciate the CGIAR Evaluation Guidelines as a reference framework providing insights, tools and guidance on how to evaluate the quality of science, including in the context of development projects with scientific and research components. This is specifically my perspective, as an evaluator of development projects that may include research components or the development of scientific tools to enhance project effectiveness in the agricultural sector. I should preface this by saying that I have not analyzed the guidelines and related documents in depth and that I am external to the CGIAR. Nevertheless, I believe the guidelines are an important contribution.

      In the absence of similar guidelines for evaluating the quality of research and science, I realize that my past analysis was somewhat scattered across the six OECD/DAC criteria, even though it encompassed most of the dimensions included in the guidelines. Under the criterion of relevance, for example, I analyzed the rationale and added value of the scientific products and the quality of their design, as well as the extent of “co-design” with local stakeholders, which the guidelines frame as “legitimacy” within the QoS criterion. Under efficiency, I analyzed the adequacy of research inputs, the timely delivery of research outputs, the internal synergies between research activities and other project components, and the cost-efficiency of the scientific products. Most of the analysis focused on the effectiveness and usefulness of the scientific tools developed, and on the potential sustainability of research results. It was more challenging to analyze “scientific credibility” in the absence of subject-matter experts within the evaluation team; this concept was analyzed mostly on the basis of stakeholders’ perceptions, gathered through qualitative data collection tools. Furthermore, scientific validation of research and scientific tools is unlikely to be achieved within the common project duration of three years, so evaluations may be conducted before scientific validation occurs.

      The guidelines’ four dimensions are clear enough and useful as a common thread for developing evaluation questions. I would only focus more on concepts such as the “utility” of the scientific tools developed, from the perspective of the project’s final beneficiaries; the “uptake” of the scientific outputs delivered by the stakeholders involved; and the “benefits” stemming from the research and/or scientific tools developed. In the framework of development projects, scientific components are usually quite isolated from other project activities, with few internal synergies.
      In addition, the uptake of scientific outputs and the replication of results are often an issue. I think this is something to address clearly through appropriate evaluative questions. For example, the QoS evaluation questions (section 3.2) do not focus enough on these aspects. EQ3 focuses on how research outputs contribute to advancing science, but not on how they contribute to development objectives, how research findings are applied on the ground or in policy development, or what the impact of the outputs delivered is; in my opinion, these aspects deserve increased focus and practical tools and guidance. These observations are based on my initial reading of the guidelines; I will pilot their application in upcoming evaluations where the evaluand includes research components, which will help fine-tune the operationalization of the guidelines through practical experience.