Adéléké Oguniyi

MERL Expert
Togo

Having completed a Master's degree in Project Management with a focus on Monitoring, Evaluation, Accountability and Learning, I have 12 years of international and domestic experience in the research, monitoring, evaluation and learning of development programs. As a practitioner, I have rich experience in the design, planning and implementation of MEAL frameworks adapted to different programs and contexts. I am able to grasp difficult concepts, readily apply them across differing social, cultural and cognitive domains, and communicate the results in clear, well-grounded discussion. I have proven experience working closely with a broad range of stakeholders and writing evidence-based reports in both English and French. Moreover, I am well versed in complex conceptual frameworks and excel in the management of extensive sets of quantitative and qualitative data. To date, I have managed several evaluations funded by institutions such as USAID, the Millennium Challenge Corporation (MCC), Global Affairs Canada, NORAD, UNICEF, the EU, SDC and more.
I am passionate about contributing to the social and economic development of community members.

My contributions

    • Dear Thierno! This is a good topic for discussion.

      The responsibility for drafting "good recommendations" does not lie solely with the evaluator; the commissioner also bears a serious share of it. In my professional experience, drafting recommendations is the most complex and delicate part of writing an evaluation report. In practice, when I examine an evaluation report, the 'recommendations' chapter is the one I always read first, even before the executive summary. Through the recommendations, the evaluator demonstrates: (i) their mastery of the subject, (ii) their expertise in the sector/area being evaluated, (iii) all their analytical and writing skills and (iv) all their powers of persuasion (yes!). That is what a quality evaluation report is all about, at least in my opinion.

      I published a short article on the subject on LinkedIn: Formulating recommendations (https://www.linkedin.com/pulse/recommandations-dune-evaluation-quelles-…).

      Thanks again for this subject, which is still very topical.

      Adéléké.

      [Original contribution in French]

    • Dear Ibtissem, I echo your thoughts: the quality and utility of an evaluation are greatly influenced by the professionalism, expertise, and practical experience of the designated evaluation management team.

      The role and responsibilities of the evaluation management team, as well as the extent of their involvement, are contingent upon the various stages or phases of the evaluation process. 

      I would like to offer my contribution from the perspective of the inception phase. In the complex landscape of project evaluation, the inception phase serves as a compass, guiding the journey toward success. It marks the start of a collaborative, participatory learning journey between an organization and the independent consultants engaged for the evaluation, and it is where clarity, expectations, and mutual understanding are established. The inception phase is not merely a formality but a strategic investment in the success of the collaboration: a rigorous inception phase helps maximize external consultants' contributions from day one and lays a strong foundation for the partnership. This is where success is mapped and the course is charted. Starting off on the right foot leads to a more efficient evaluation and increases the likelihood of meeting the evaluation objectives. This is where the evaluation management team and the external evaluators agree on the dos and don'ts by:

      • reviewing the project's background and context
      • clarifying evaluation objectives and scope
      • setting expectations for stakeholders' engagement
      • discussing approaches and methodologies
      • discussing deliverable formats and timelines
      • clarifying reporting and communication protocols
      • providing access to project documentation
      • discussing ethical considerations and confidentiality
      • establishing a feedback mechanism
      • gauging and addressing individual needs
      • ...

      As external evaluators, we find that collaboration with evaluation managers significantly improves the relevance and utility of evidence for decision-making. Their expertise, stakeholder engagement initiatives, adaptability, and quality assurance efforts help ensure that evaluations are carried out efficiently and provide actionable insights that guide decisions. It requires an open mindset on both the "evaluation demand" and the "evaluation supply" sides.

    • Great discussion topic, Yosi.
      I'd like to share some thoughts based on my involvement with the regional BOAD-funded project "Promoting Climate Smart Agriculture in West Africa".
      The project seeks to strengthen people's resilience to the adverse effects of climate change and to increase production while contributing to mitigation through carbon sequestration. The project's environmental benefits include: (i) sustainable land management and reduced expansion of agricultural land at the expense of forest lands; (ii) contribution to the mitigation of GHG emissions through carbon sequestration; (iii) improved capacity of actors to implement climate-resilient practices; etc.

      The project's results framework includes relevant indicators, all of them effective; here are a few of them:

      • Percentage of the target population with livelihoods resilient to climate change
      • Rate of improvement of yields to support food security and improve the living conditions of beneficiaries
      • Types of household income sources generated under climate change scenarios
      • Number of beneficiaries (F/M) informed about climate risk issues through the actions of meteorological services
      • Number and type of risk reduction actions or strategies introduced at local level
      • Level of technical capacity of regional, national and local institutions to promote climate-resilient best practices in a CSA approach
      • Number of community plans or policies improved or implemented that incorporate the CSA approach.

      To read more about the project: https://www.adaptation-fund.org/project/promoting-climate-smart-agriculture-west-africa-benin-burkina-faso-ghana-niger-togo/

    • As we embrace the era of Artificial Intelligence (AI), evaluators have a unique opportunity to leverage this technology to enhance their professional activities in several ways:

      1. Data Analysis and Interpretation: AI tools can significantly improve data analysis by processing large datasets efficiently and identifying patterns or trends that might be overlooked by human analysts. Evaluators can use AI algorithms to analyze complex data sets from evaluations, surveys, or other sources, enabling more robust and insightful conclusions (see the first sketch after this list).
      2. Predictive Modeling: AI techniques such as machine learning can be employed to develop predictive models for evaluating the potential outcomes of interventions or policies. By training models on historical data, evaluators can forecast future impacts with greater accuracy, aiding decision-making processes (second sketch below).
      3. Natural Language Processing (NLP): NLP algorithms enable evaluators to analyze and understand unstructured textual data such as reports, reviews, or social media feedback. This capability can facilitate sentiment analysis, thematic coding, and synthesis of qualitative data, providing deeper insights into program effectiveness and stakeholder perspectives (third sketch below).
      4. Automation of Routine Tasks: AI can automate repetitive tasks such as data cleaning, report generation, or scheduling, freeing up evaluators' time to focus on more strategic and analytical aspects of their work. By streamlining workflows, evaluators can increase productivity and efficiency (fourth sketch below).
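
      To make the first capability concrete, here is a minimal sketch in Python (pandas and scikit-learn) of clustering survey records to surface groupings that a manual read might miss. The file name, indicator columns, and cluster count are illustrative assumptions, not drawn from any specific evaluation:

          import pandas as pd
          from sklearn.preprocessing import StandardScaler
          from sklearn.cluster import KMeans

          # Hypothetical survey export with numeric indicator columns.
          df = pd.read_csv("survey_results.csv")
          features = ["income", "yield_kg_ha", "dietary_diversity_score"]

          complete = df[features].dropna()              # keep rows with full data
          X = StandardScaler().fit_transform(complete)  # put columns on one scale
          labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

          df.loc[complete.index, "cluster"] = labels
          print(df.groupby("cluster")[features].mean()) # profile each cluster

      Interpreting the per-cluster profiles against the program's theory of change remains the evaluator's job; the algorithm only proposes the grouping.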
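
      For the second capability, a minimal predictive-modeling sketch: a logistic regression trained on historical outcome data to forecast which participants are likely to reach a target. The data file, feature columns, and 0/1 outcome label are hypothetical:

          import pandas as pd
          from sklearn.model_selection import train_test_split
          from sklearn.linear_model import LogisticRegression
          from sklearn.metrics import classification_report

          # Hypothetical records from past cohorts of a similar intervention.
          hist = pd.read_csv("historical_outcomes.csv")
          X = hist[["baseline_income", "training_hours", "plot_size_ha"]]
          y = hist["achieved_food_security"]            # 0/1 outcome label

          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
          model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

          # Held-out performance indicates how much to trust the forecasts.
          print(classification_report(y_te, model.predict(X_te)))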
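
      For the third capability, a minimal NLP sketch combining off-the-shelf sentiment scoring (NLTK's VADER) with simple keyword-based thematic tagging of open-ended feedback; the theme keywords and example comments are purely illustrative:

          import nltk
          from nltk.sentiment import SentimentIntensityAnalyzer

          nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
          sia = SentimentIntensityAnalyzer()

          # Illustrative theme-keyword map; a real study would use a codebook.
          THEMES = {
              "training": ["training", "workshop", "skills"],
              "access":   ["distance", "transport", "access"],
          }

          comments = [
              "The training workshops were excellent and very practical.",
              "Transport costs made it hard to access the distribution site.",
          ]

          for text in comments:
              score = sia.polarity_scores(text)["compound"]  # -1 to +1
              tags = [t for t, kws in THEMES.items()
                      if any(k in text.lower() for k in kws)]
              print(f"{score:+.2f} {tags} {text}")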
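
      And for the fourth capability, a minimal automation sketch: a reusable cleaning step that deduplicates records, normalizes a text field, and drops implausible values before analysis. All file names, column names, and validity rules are placeholders for a real codebook:

          import pandas as pd

          def clean(df: pd.DataFrame) -> pd.DataFrame:
              df = df.drop_duplicates(subset="household_id")  # one row per household
              df["district"] = df["district"].str.strip().str.title()
              df["age"] = pd.to_numeric(df["age"], errors="coerce")  # junk -> NaN
              return df[df["age"].between(0, 110)].reset_index(drop=True)

          raw = pd.read_csv("raw_survey.csv")  # hypothetical raw export
          clean(raw).to_csv("survey_clean.csv", index=False)

      Scripting this step once means every new data delivery is cleaned the same way, which frees time for the analytical work described above.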

      To harness AI effectively, evaluators should consider the following strategies:

      1. Continuous Learning and Adaptation: Stay informed about advancements in AI technologies and their applications in evaluation practice. Invest in training programs or workshops to build proficiency in using AI tools and techniques relevant to evaluation.
      2. Collaboration with Data Scientists and Technologists: Foster interdisciplinary collaborations with experts in AI, data science, and technology. By partnering with professionals skilled in AI development and implementation, evaluators can co-design innovative solutions tailored to specific evaluation challenges.
      3. Ethical Considerations and Bias Mitigation: Be mindful of ethical issues related to AI, such as data privacy, algorithmic bias, and transparency. Ensure that AI-driven evaluations adhere to ethical guidelines and principles, and actively address biases to maintain credibility and fairness.
      4. Effective Communication of AI Insights: Develop skills in translating AI-generated insights into actionable recommendations for stakeholders. Communicate the limitations and uncertainties associated with AI-based analyses transparently, fostering trust and understanding among diverse audiences.

      In addition to technical proficiency in AI, evaluators should cultivate a range of complementary skills to remain competitive and meet the evolving expectations of the field:

      1. Critical Thinking and Interpretation: Sharpen analytical skills to critically evaluate AI-generated outputs and contextualize findings within broader evaluation frameworks.
      2. Interdisciplinary Collaboration: Cultivate the ability to collaborate effectively with stakeholders from diverse backgrounds, including technologists, policymakers, and program implementers, to ensure that AI-driven evaluations address key priorities and perspectives.
      3. Adaptability and Agility: Embrace a growth mindset and be willing to adapt to changing technological landscapes and evaluation methodologies. Stay agile in response to emerging challenges and opportunities presented by AI advancements.
      4. Communication and Storytelling: Hone communication skills to effectively convey complex AI-driven insights to non-technical audiences. Develop the ability to craft compelling narratives that highlight the significance of evaluation findings and their implications for decision-making.