Malika Bounfour

President
Association Ayur pour le Développement de la femme Rurale
Morocco

I am a professional of, and passionate about, food production and plant protection. I am also an experienced project manager with a track record of working for government and private institutions in Morocco, as well as for development agencies (FAO).
I hold an agricultural engineering degree from Morocco and a PhD in Entomology from WSU (USA), and I am skilled in gender approaches and policy evaluation.

My job and my volunteering activities have led me to work on and implement projects on women's economic empowerment and gender issues. This is currently my main area of intervention.
You can also find me on LinkedIn:
https://www.linkedin.com/in/malika-bounfour-79277140/

My contributions

    • Greetings to all

      Thank you for this topic as methodology usually dictates the quality of the report.

      Here are some of my takes on mixed methods, illustrated with examples:

      1. When you want to measure equity and equality, quantitative methods should prevail and be supported by qualitative methods to gain insights into the whys. Example: what percentage of funding goes to gender; how many girls are in school; how many miles of rural roads… followed by policy analysis.
      2. Measuring the impact of an intervention requires measuring how many people were reached, as well as the behaviour that led to speeding up or losing the impact.

       

      Q2, on mixed-methods evaluation: are these instruments developed at the same time or one after another? How do they interact?

      The ideal situation is to have all the instruments ready beforehand. However, some qualitative instruments may leave room for “improvement”. For example, in key informant interviews or focus groups, open-ended questions help improve qualitative data gathering in case unexpected results are found/observed.

      Here is a reference from the World Bank that describes situations where quantitative and qualitative methods are used. It is about impact evaluation, but I find it valuable for most study/research situations.

      Impact Evaluation in Practice

      Best regards

      Malika

    • Thank you all for the very interesting contributions and insights. We seem to agree that reporting is the first step in communicating about results. Most of the time, reporting is technical, with data and results on project outcomes, along with recommendations and lessons learned. The commissioner then validates, communicates the results to all stakeholders and develops a communication plan for a larger audience.

      Here are some highlights from participants:

      Esosa Tiven Orhue suggests creating harmony between the two elements for programme/project implementation by all stakeholders. This is possible if communication about results is included at the design stage of the intervention.

      For John, there is “plenty of reporting, but little communicating”. In addition, John suggests that “no one was to have a hand in project preparation and design until they have done at least five years of M&E.” The UNEP document shared by John has two lessons learned related to our discussion: 1) lack of ownership and shared vision, due to insufficient stakeholder consultation during design, leads to poor project design; and 2) inefficient project management includes “Inadequate dissemination and outreach due to poor use of available dissemination methods”.

      Most of the time, communication about project results and evaluation targets the stakeholders consulted during the design and implementation phases. These are usually the immediate implementing partners (the sphere of influence). Thus, the sphere of interest is usually excluded, leading to no change or, if change does occur, to change that is not documented. This results (as John said) in the loss of past experience and the risk of repeating the same mistakes.

      Lal agrees with John, while Silva adds that “if we stick to conventional evaluation formats, we might make minor improvements but always miss out on the potential of evaluations, in the broader sense.” I couldn't agree more with Silva, since I see evaluators as change makers.

      Finally, Gordon suggests that communication about the evaluation and its results should be budgeted as part of the overall project and implemented by the commissioners and project managers.

      If we agree that stakeholders include direct project/programme implementing partners (the sphere of influence) as well as the impacted population (intended and unintended beneficiaries), then Esosa's, John's and Silva's suggestions should be considered for successful implementation.

      In summary, the debate about whether ‘development aid works’ has been going on for at least a decade now. When mapping outcomes, we need to think of the change we want and therefore communicate with the population at the design, implementation and closing stages, and give them insights into the evaluation results. This will empower them and give them the tools to implement the programme/project. Consequently, at the next programme design, they will bring their perspective on lessons learned from previous programmes, thus avoiding repeated mistakes. This should help avoid unnecessary activities and foster programme implementation.

      I wish you all a good end of the week.

      Malika

       

      Links

      1. Lessons Learned from Evaluation: 

      https://wedocs.unep.org/bitstream/handle/20.500.11822/184/UNEP_Evaluati…

      2. A Comparative Study of Evaluation Policies and Practices in Development Agencies

      https://www.afd.fr/sites/afd/files/imported-files/01-VA-notes-methodolo…

       

    • Thank you all for your great contributions.

      Most contributors suggest that evaluators should be involved in communicating about results, at least by providing recommendations on key messages and tools (e.g. Norbert TCHOUAFFE TCHIADJE and Karsten Weitzenegger). Messages and recommendations are mainly directed to intervention partners and decision-makers (e.g. Aparajita Suman and Mohammed Al-Mussaabi). Key messages should be fine-tuned by the evaluator (e.g. Aparajita Suman, Karsten Weitzenegger and Jean Providence Nzabonimpa).

      Emile Nounagnon HOUNGBO suggests that “Stakeholders, including project managers, have more trust in the evaluator's technical findings and statements”. This puts the quality of the evaluation at the forefront and positions the evaluator as a communicator who validates the intervention results and recommendations. I believe that if we expand this idea to the general public, the recommendations of a development project will have a better chance of being implemented.

      Most suggest that a specific communication budget should be allocated, to be managed by the evaluation entity (e.g. Ekaterina Sediakina Rivière). This would provide flexibility in setting priorities according to the type of intervention, the targeted audience and the type of messages.

      Jean Providence Nzabonimpa describes evaluators as change agents. As such, we need to go beyond submitting reports and contribute to the successful implementation of recommendations. 

      In summary, evaluators should be involved in communication campaigns for recommendations. A specific budget needs to be allocated and managed by evaluation units, which should also make provision for public communication of evaluation results and recommendations in the terms of reference.

      The justification for the above is that any intervention affects both intended and unintended beneficiaries. Therefore, in my opinion, communicating and organizing communication campaigns are justified. Thus, in addition to decision-makers, it is necessary to inform and educate the beneficiaries (intended and unintended) about evaluation results and recommendations. This should guarantee implementation of the recommendations at scale.

      Key messages should be developed by evaluators who should also suggest the tools and languages since they know and understand the intervention, its results, and the audience.

      Malika

  • What type of evaluator are you?

    Discussion
  • How to define and identify lessons learned?

    Discussion
    • Thank you very much, Seda, for this interesting subject and for sharing the TAPE guideline.

      I find the guideline very thorough and detailed. The tool also gives a definition of agroecology. I appreciated that bees and pollinators were considered.

      From a technical point of view, I would make a few personal additions:

      1. I would combine the cells scored 0 and 1 for crops (page 17): if one crop covers 80% of the land, it is "monoculture";

      2. For exposure to pesticides, I think the evaluator should look into the pesticides stocked on the farm or in the territory. Both quantities and storage methods impact the environment and health. Therefore, waste management should be included;

      3. Soil microfauna is an indicator of healthy soil and should be considered.

      From my experience, farmers and technicians mostly consider water-use efficiency, pesticide and fertilizer application (type and quantity), crop diversity and soil management.

      I contributed to a proposal for a project evaluation where we added women's empowerment (decision-making), youth employment and traditional knowledge. TAPE can be used as a reference for separating areas/farms in transition to agroecology from areas/farms where agroecology is fully implemented.

      With my best regards

      Malika 

  • Drawing from the wide array of experiences and views shared, here are some of the key aspects I retained, along with some personal reflections.

    What exactly do we mean by independence?

    The discussion raised clear questions of definition. What does independence mean and what does it entail? A 2016 UNDP report [1] defines evaluation independence as a “twofold concept and refers to formal independence on the one hand and substantial independence on the other. Formal independence means structural freedom from control over the conduct and substantial independence can be described as the objective scientific assessment of a subject free from […]”

    • Dear members

      Thank you all for your insights and contributions. The discussion brought together different experiences/views but most seem to agree on the core principles of the question.

      Before going any further, I will explain my perspective:

      Even in laboratory experiments where all conditions are controlled, scientists allow themselves a margin of error, though they try to make it as small as possible. Therefore, I am not talking about being 100% sure of the results when it comes to interventions involving humans (complicated beings).

      My takeaway from the discussion is that we all strive to be “objective and inclusive” as much as we can. The latter expresses our “confidence interval” and “degrees of freedom”.

      The discussion brought in a wide array of subjects pertinent to independence/impartiality/neutrality of evaluation. From discussing the concepts to suggesting work methodologies, contributors enriched the discussion.

      Different contributions brought up important factors that may influence the independence, neutrality and impartiality of evaluators. Mr. Jean de Dieu Bizimana and Mr. Abubakar Muhammad Moki raised the influence that the evaluation commissioner and the terms of reference have on these concepts. Dr Emile HOUNGBO brings up the financial dependence of the evaluator, especially if the organization/team financing the evaluation is also responsible for implementing the intervention. Mr Richard Tinsley observes that even when funds are available, evaluators may lack neutrality in order to secure future assignments; he gave the example of farmer organizations that do not play their intended role but are still pushed on smallholders. From Mr Lasha Khonelidze's perspective, the “open-mindedness” of the evaluator is important in bringing in diverse points of view, but it is just as important to ensure that the evaluation is useful to end users (who are to be defined in the ToR).

      Mr. Sébastien Galéa suggests working on norms/standards at the level of programme management in the field. He also brings up the importance of peer-to-peer information and experience exchange around the world (e.g. EvalForward). He graciously shared a document whose title clearly indicates that the aim of evaluation is better results in the future, either through subsequent interventions or through adjustments to the evaluated intervention. The paper also explains independence/impartiality from the ADB's perspective and how this organization has worked on these principles. In my view, the Weiss paper shared by Mrs Umi Hanik comes in as a complement. Weiss analyzes program development, implementation and evaluation; the main idea is that programs are decided within a political environment and, since evaluation is meant to guide decision-making, it also shares the pressure from the political participants in the programme. Thus, for a program participant, public acceptance is more important than program relevance. This, I think, is where evaluation independence/impartiality/neutrality come into play.

      Abubakar Muhammad Moki added that some companies recruit evaluators they know, which is confirmed by Mrs Isha Miranda, who added that this practice impacts the quality of evaluation, leading to a decrease in the quality of evaluation reports (an evidence-based argument) :) . Mr. Olivier Cossee added that recruited evaluators should be “reasonably neutral”. This, I believe, puts pressure on the evaluation commissioner to verify/check for “reasonably” neutral evaluators and introduces another variable: to what extent is the evaluator reasonably neutral? (Can we refer to behavior studies?) For Mrs Silva Ferretti, the evaluator's individual choices are influenced by her/his culture, and thus it is difficult to be “really” inclusive/neutral. The podcast shared by Mrs Una Carmel Murray gives an example of including all participants in order to dilute the subjectivity of the researcher. Also, Mr Abado Ekpo suggests taking the time to integrate and understand the logic of the different actors in order to conduct an objective evaluation. In addition, Mr Steven Lam and Mr Richard Tinsley discuss the importance of the methodology in bringing in all participants' interests. Mr Lal Manavado summarized the reflection in terms of accountability to fund providers, politicians or social groups. My view is to be accountable to the project objectives: were they achieved or not? If not, why not? If achieved, for whom?

      Mr. Khalid El Harizi added that the availability of data/information at the start of the evaluation, as well as the evaluators' capability to synthesize the data, are important. It should be noted, however, that even when data are available, they may not be easily accessible to evaluators. This is confirmed by Mr Ram Chandra Khanal, who brought up the point that lack of time and limited access to information on stakeholders impact data collection.

      This discussion clearly raised the issue of term definition. As previously stated, end users need to be defined. Also, Mrs. Svetlana Negroustoueva asked for examples to contextualize the term ‘independence’. In addition, Mr. Thierno Diouf raised the importance of defining all the terms discussed from the perspective of all stakeholders and evaluators. These definitions should be made clear in guides, standards and norms.

      Mr. Diagne Bassirou talks about a loss of quality and depth of analysis with “too much” objectivity, since the evaluator may not know the socio-demographic conditions of the area. In my perspective, and as Mr El Harizi stated, there are data/information available (or that should be available) at the start, and the commissioner should make these available to the evaluation team. My experience is that there is always an inception meeting where these issues are discussed and cleared. The ability to analyze these data/information is a matter of the evaluator's competence, not of his/her independence or impartiality.

      In summary, it is possible to achieve a relevant degree of impartiality/neutrality in evaluation, provided that the terms of reference are clear, data are available and the independence of the evaluator is ensured through sufficient funding and administrative independence. The evaluator also needs to work on himself/herself in terms of beliefs, culture and biases. Methodological approaches could help correct possible biases.

      Program funders, as well as program managers and evaluators, are accountable for the changes brought about by interventions. Could we link this reflection to the social costs and benefits of development interventions?

      Finally, this is probably an “open-ended question”. Therefore, let's keep the discussion open.

      Some exchanged links:

      https://www.sfu.ca/~palys/Weiss-1973-WherePoliticsAndEvaluationMeet.pdf

      https://www.ecgnet.org/sites/default/files/evaluation-for-better-results.pdf

      https://disastersdecon.podbean.com/e/s6e2-researcher-positionality

      https://agsci.colostate.edu/smallholderagriculture/request-for-information-basic-business-parameters/

    • Dear all, 
      What we measure reflects our values and what matters to us. These are notes from the article below, which I came back to share with you as a complement to my previous message. The article also suggests that impacts on subgroups need to be reflected in the measurements, which is in accordance with the recommendations of our meeting participants.

      https://ssir.org/articles/entry/data_in_collective_impact_focusing_on_w…

      Have a good day
      Malika
       

    • Hello everyone and thank you for this subject and for sharing your thoughts and approaches. 

      This parallels the discussions held last week by the francophone evaluation network at FIFE2021.

      That said, I also believe that it is more an issue of methodology to bring out these aspects. Organisations usually set their policies to take environment and gender into account in programming and evaluation.

      I would also like to share that, as a side event to International Mountain Day, we organized a meeting around "the olive tree, the mountain and the environment". The first and main recommendation from participants was to consider multi-level social and landscape planning and analysis when talking about rural settings. This is because mountain conditions or results (almost) disappear when only the rural facet is considered for development or evaluation.

      Thus, one approach is to consider intersectional analysis for evaluation.

      Best regards 

      Malika 

    • Hello everyone,

      I have tried to answer each question and my answers are below. They are based on some of my experience with small farmers.

      1. Striking a balance between depth and length of assessment: 

      Small farmers are very busy because they have to find alternative / complementary sources of income. In addition, social time is important (marriages, tea time, football for the young, carpet making for women...).

      Thus, the assessment time should fit within their schedule. I suggest short questionnaires that are meaningful to them, which implies that the programme should take their actual needs into account rather than being driven 100% by organizational needs.

      • How can the burden on smallholder farmers be reduced during M&E assessments?
      • What are the best ways to incentivize farmers to take part in the survey (e.g. non-monetary incentives, participation in survey tailoring, in presentation of results)?
      1. Make it a social time and talk about what is meaningful to them (e.g. cereals in mountainous areas). The usually preferred time is the afternoon, e.g. plan the assessment during tea time and work with focus groups. If a questionnaire is preferred, it will take more time for the evaluator, because she/he will have to adjust to each farmer;
      2. Allow women to bring toddlers or small infants (up to 5 years old);
      3. Give away written information on the programme. They will keep it and show it to their school-going children;
      4. Plan on lunch or afternoon tea with snacks.

      2. Making findings from M&E assessments useful to farmers: 

      - As with the assessment, plan information workshops in a "between seasons" period to avoid getting in the way of "actual" work;

      - Provide leaflets, audios, videos, pictures;

      - Allow for Q&A sessions.

      • Do you have experience in comparing results among farmers in a participatory way? What method have you used to do this? Was it effective?
      • How can the results be used for non-formal education of farmers (e.g. to raise awareness and/or build capacity on ways to increase farm sustainability)?

      1. Comparing results among farmers is effective in demonstrating results and getting farmers to adopt new techniques. I used a treatment/non-treatment comparison, where the non-treatment group consisted of farmers who had not joined the programme. Once the results were obvious, they asked to be included.

      2. Results could be used in the non-formal education of farmers through exchange visits among peers, audios and videos distributed via instant messaging, and result presentations during extension workers' field visits.

      Malika Bounfour

       

    • Hello Community

      First of all, I would like to thank you for your contributions, for describing your very rich respective experiences and for proposing approaches and tools to ensure that the rapid evaluation process meets expectations.

      The responses highlighted common problems, namely the time and resources needed for a rapid evaluation of the effects of an intervention or of COVID-19. These resources often exceed what was planned.

      Slow communication between partners and the limited availability of archived and analyzed data were also raised as major factors that slow down the implementation of rapid measures.

      As for the proposed solutions and tools, Jennifer Mutua advised paying attention to unexpected factors when estimating the budget for the evaluation. Carolina Turano suggested ways to improve communication between partners to make the process agile. Elias SEGLA proposed data collection tools and suggested that the rapid evaluation team should be internal, with its own organisation chart and modus operandi. Nayeli Almanza's response describes a rapid data collection methodology to measure the impact of COVID-19 on migrant populations, and Aurélie Larmoyer gives practical suggestions on how individuals and teams can work to improve the timeliness of protocols and reactions, which can be crucial for the intervention.

      To summarize, I think that: 1) it is necessary to work on the communication time between partners during the development of the approach, as well as on feedback and reaction; 2) it is necessary to use modern tools, especially virtual means and mobile phones; 3) the budget issue remains, and could require innovative work from the team to optimise results within the budget allocated.

      Finally, I hope that this issue of rapid evaluation will receive the attention it deserves and will be developed further, especially during this COVID-19 crisis.

      Thanking you once again for your answers and shared references, I remain at your disposal for further exchanges on this issue.

      Yours sincerely

      Malika

      [This comment was originally posted in French]