Cristian Maneiro

Senior Consultant
Maestral International, PLAN Eval
Uruguay

Evaluation specialist with over ten years of experience, both conducting and commissioning evaluations for UN agencies and international organizations. Regional work experience in Latin American and African countries. MA in Sociology and a postgraduate diploma in Public Policy and Evaluation, along with several short courses in development economics, research methods and software tools. Native Spanish speaker, fluent in English and Portuguese, with a working knowledge of French.

My contributions

    • Hello Colleagues,

      Greetings from Uruguay!

      Thank you, Ibtissem, for bringing up this intriguing topic. Having experienced both sides of the evaluation process (commissioning evaluations for WFP and conducting them as an independent consultant for UNICEF and UNFPA), I completely agree that the Evaluation Manager plays a pivotal role and bears significant responsibility for ensuring the quality of the evaluation results, which ultimately determines their usefulness.

      Building on what other colleagues have already mentioned, I'd like to offer a couple of additional points that haven't been raised yet about support for the EM and the EM's role in the evaluation:

      Ideally, the EM shouldn't shoulder the entire burden alone. It's advantageous for them to be supported by at least one Evaluation Analyst. This team composition mimics the structure of an external evaluation team and facilitates smoother communication and coordination. Evaluation Analysts can handle bilateral meetings with data analysts or other external evaluation team members, allowing the EM to focus on overseeing the calendar, meeting deadlines, and making high-level decisions in consultation with the team leader.

      Furthermore, it's important to acknowledge that when discussing evaluation independence, we often assume we're referring to external evaluations. However, certain evaluation approaches (e.g., Developmental Evaluation) emphasize a more formative focus. In these cases, the Evaluation Manager's involvement as an integral part of the program being evaluated is essential. This approach fosters greater ownership and promotes internal learning within the organization.

      Thanks and best regards,

      Cristian

    • Greetings, colleagues,

      Thank you, Muriel, for bringing up this topic, and I appreciate all the contributors. I believe that AI holds great promise for evaluators, and it's crucial for us to be aware of its possibilities. Personally, the prospect of conducting fast and interactive quantitative analysis without needing expertise in code-based software (e.g., R or Python) would be a game-changer for professionals like myself with a background in the human sciences.

      Additionally, the capability of summarizing extensive raw texts, such as interviews or focus group discussion transcripts, and facilitating accurate analysis of key points, has the potential to save a significant amount of time. However, it's essential to highlight that the evaluator's experience, prior knowledge of the field, insights from stakeholders, and a sense of the evaluation's purpose will continue to be crucial and valued.

      Moreover, ethical dilemmas and decisions on how to present results won't be solved by AI, no matter how powerful it becomes.

      I would love to see examples of AI used in both quantitative and qualitative approaches.
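
      In the meantime, here is a minimal sketch of the qualitative side in Python. It is purely illustrative, not a tested workflow: the transcript.txt file is a placeholder, and the model is just one common option from the Hugging Face transformers library.

        import textwrap
        from transformers import pipeline

        # Load a general-purpose summarization model (illustrative choice).
        summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

        # Read a focus group transcript (placeholder file name) and split it
        # into chunks small enough for the model's input limit.
        with open("transcript.txt", encoding="utf-8") as f:
            transcript = f.read()
        chunks = textwrap.wrap(transcript, width=3000)

        # Summarize each chunk, then join the partial summaries.
        summaries = [summarizer(c, max_length=120, min_length=30)[0]["summary_text"]
                     for c in chunks]
        print("\n".join(summaries))

      The chunk-then-summarize step matters in practice, since transcripts are usually far longer than what a single model call can process; the partial summaries would still need the evaluator's judgment to interpret.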

    • Dear Colleagues:

      Greetings from Uruguay!

      I believe that the discussion brought up by Jean is very relevant. Mixed methods are undoubtedly a powerful strategy for addressing an evaluation object from different angles, and they are almost standard practice in the evaluation Terms of Reference (ToRs) issued today, whether by UN agencies or others.

      However, I agree that sometimes the term becomes a cliché and is used without considering whether a mixed methods strategy is genuinely the most appropriate. It is assumed that different techniques (typically Key Informant Interviews and surveys) will provide complementary information, but commissioners often lack a clear idea of how this information will be integrated and triangulated. In my view, successful cases occur when the integration process is well-defined or when methods are applied sequentially (e.g., conducting focus groups to define survey questions, or selecting cases from a survey for in-depth interviews); a rough sketch of the latter pattern follows.
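
      A minimal Python sketch of sequential case selection, purely for illustration (the survey.csv file and its column names are assumptions, and five cases per tail is an arbitrary choice):

        import pandas as pd

        # Illustrative only: file and column names are assumptions.
        df = pd.read_csv("survey.csv")

        # Rank respondents by an outcome score and shortlist extreme cases
        # (highest and lowest scorers) for follow-up in-depth interviews.
        ranked = df.sort_values("outcome_score")
        shortlist = pd.concat([ranked.head(5), ranked.tail(5)])
        print(shortlist[["respondent_id", "outcome_score"]])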

      Furthermore, I believe that with current technological developments, mixed methods have new potential. It's no longer just the typical combination of Key Informant Interviews and Focus Group Discussions with surveys; it can now include big data analysis using machine learning, sentiment analysis, and more. A simple sentiment analysis sketch follows below.
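
      As one illustration, sentiment analysis of open-ended survey responses can be done with NLTK's VADER analyzer (for English text). This is a sketch under assumptions: the responses.csv file and its "comment" column are placeholders of my own.

        import pandas as pd
        from nltk.sentiment import SentimentIntensityAnalyzer
        # One-time setup: nltk.download("vader_lexicon")

        # Illustrative only: file and column names are assumptions.
        df = pd.read_csv("responses.csv")

        # VADER's compound score runs from -1 (most negative) to +1 (most
        # positive), giving a quick quantitative read on qualitative text.
        sia = SentimentIntensityAnalyzer()
        df["sentiment"] = df["comment"].apply(
            lambda t: sia.polarity_scores(t)["compound"])
        print(df["sentiment"].describe())

      Scores like these are best treated as a triangulation input alongside the evaluator's own reading of the responses, not as a substitute for it.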