
Ibtissem Jouini

Senior Evaluation Manager

Before joining CGIAR, Ibtissem Jouini served as a Regional Evaluations Specialist at the Food and Agriculture Organization of the United Nations (FAO), where she managed decentralized evaluations and designed capacity development activities within the FAO Office of Evaluation (OED). Prior to this role, she led multiple independent evaluations and studies. Her professional journey also includes substantial contributions to international development organizations and foundations, where she served as M&E or Programme Manager for various national, regional, and global projects. She actively participates in several evaluation associations: she leads the working group on the professionalization of evaluation at the Asociación Ibérica de Profesionales Evaluadores (APROEVAL), and she is a member of the European Evaluation Society (EES) and the MENA Evaluators Network (EvalMena).

My contributions

    • Dear Colleagues,

      Thank you for sharing your thoughts. Your insights have highlighted key points about this topic. Our Evaluation Function is committed to continuing this discussion. Currently, we are conducting a study to identify the conditions under which different evaluation management approaches are likely to succeed and to explore collaboration strategies between independent offices to overcome management challenges. We will soon launch an online survey to map evaluation practices across independent evaluation offices and plan to bring this discussion topic to the upcoming European Evaluation Conference.

      I enjoyed reading your contributions and I have summarized your key points here:

      Lal Manavado, you emphasized the importance of effective evaluation management in facilitating evaluators' work, ensuring relevant data collection, and fostering collaboration. You also highlighted how managers can provide holistic guidance to enhance the evaluation process, especially when evaluators need extensive background information.

      Gebril Mahjoub Osman, you underscored the necessity of preserving the independence of evaluation teams. Your argument against the active participation of evaluation managers, due to potential biases and conflicts of interest, suggests a role in facilitation and support rather than direct involvement.

      Vicente Plata, you stressed the value of effective communication, such as initial and final meetings between evaluation managers and teams, to provide contextual insights and refine conclusions. Your point that evaluations must balance data with an understanding of the project's broader impact on the actors involved is well-taken.

      Cristian Maneiro, you recommended that evaluation managers be supported by evaluation analysts to manage workloads effectively. You also noted that certain evaluation approaches, such as Developmental Evaluation, have a more formative focus; in these cases, the evaluation manager's involvement as an integral part of the program being evaluated is essential. This approach fosters greater ownership and promotes internal learning within the organization.

      Hadera Gebru, you supported the idea that the roles and responsibilities of evaluation managers should be clearly defined and communicated. Your advocacy for strategic involvement to ensure high-quality and credible evaluation results, while preserving the independence of evaluators, is, in my view, valuable.

      Adéléké Oguniyi, I agree with you on the importance of the inception phase as a foundation for successful evaluations. Your emphasis on the need for clear communication, expectation-setting, and collaborative planning between evaluation managers and external evaluators is key to the process.

      Anne Clémence Owen, you discussed the dual role of evaluation managers in supporting the evaluation process and promoting learning. Your point that managers' involvement should be clearly defined from the design stage to ensure alignment with evaluation goals and organizational requirements is important.

      Musa K. Sanoe, you noted the significance of proper orientation and clear role definitions for evaluation managers. Your emphasis on the need for strategic involvement of managers at critical steps to maintain the evaluation’s credibility and independence is very insightful.

      In conclusion, these collective insights underscore the value of balancing the involvement and independence of evaluation managers. Thank you all for your valuable contributions to this important discussion. Please let me know if you would like to be involved further in this project. Don’t hesitate to write to me at: i.jouini@cgiar.org.

      Best regards, 


  • What type of evaluator are you?

  • A common dilemma is striking a balance between team members who understand the technical subject and those who have expertise in evaluation methodologies and processes. In most evaluations, resources are too scarce to hire large teams that cover all fields. Finding people who are expert in at least one discipline, have the right language skills, and are skilled evaluators is challenging.

    Here are some principles for creating a good working relationship between experts and evaluators in development evaluations, based on my own experience as an evaluator and on informal conversations with colleagues, notably at the last European Evaluation Conference.

    • Bringing together evidence from evaluations that covered, in part or exclusively, the Quality of Science (QoS) can be a powerful way to shape changes in the development agenda. Evidence synthesis is a trustworthy method, but it is certainly not simple to implement, given the variety of evidence available and the differing evaluation criteria, approaches, focus, contexts, and so on.

      The QoS theme was one of the major topics covered by the analysis under the framework of the Synthesis of Learning from a Decade of CGIAR Research Programs (2021).[i] One challenge the synthesis team faced was to find the analytical framework that best reflected the variety of evidence from the two phases of CRP implementation (2011-2016 and 2017-2019) while meeting the synthesis objectives.

      To further explain: we had to determine how information should be categorized so that it could serve as a reference indicating the focus, scales, concepts, and related terms and definitions, based on the original objectives of the synthesis and on the mapping of the analyses forming the 43-document corpus of the synthesis. The QoS-related levels of inquiry were converted into two main questions and four subthemes.

      The two main questions for the Quality of Science (QoS) and Quality of Research for Development (QoR4D) theme are:

      1. How has QoS evolved between the two CGIAR Research Program (CRP) phases along three dimensions: inputs, outputs, and processes?

      2. To what extent has QoS evolved along two of the four QoR4D elements: legitimacy and credibility?

      The results were structured around four subthemes: (1) QoS: research inputs; (2) QoS: quality of research outputs; (3) QoS: research management/process; and (4) QoR4D elements: legitimacy and credibility. Across these topics, a set of cross-cutting themes was covered: gender, climate change/environment, capacity building, external partnerships, and youth.

      Four key issues were addressed in the analysis of findings: (1) patterns and trends between the two phases of CRPs related to the quality of science (QoS) and research for development, achievement of sustainable development outcomes, and management and governance; (2) systemwide issues affecting CRP achievements; (3) recommendations for the future orientation of CGIAR research and innovation; and (4) key evidence gaps and needs for future evaluations.

      A narrative synthesis approach was used to summarize and analyze learning from a decade of CGIAR Research Programs, employing secondary data from 47 existing evaluations and reviews. External evaluations were systematically coded and analyzed using a standardized analytical framework. A bibliometric trend analysis was carried out for the QoS theme, and findings were triangulated against earlier syntheses and validated by members of the Independent Science for Development Council (ISDC), CRP leaders, and expert peer reviewers.


      [i] Report: CAS Secretariat (CGIAR Advisory Services Shared Secretariat). (2021). Synthesis of Learning from a Decade of CGIAR Research Programs. Rome: CAS Secretariat Evaluation Function. https://cas.cgiar.org/evaluation/publications/2021-Synthesis