Jackie (Jacqueline) Yiptong Avila

Program Evaluator/Survey Methodologist
Independent Consultant
Canada

Jackie Yiptong Avila is a Program Evaluator with over ten years of experience in conducting evaluations using a mixed methods approach. Previously, she worked at Statistics Canada as a Survey Statistician and Methodologist, a career that spanned over 30 years. She has extensive experience in designing and managing household, socio-economic and enterprise surveys, as well as customer satisfaction surveys. She has worked in West Africa, the Middle East, Haiti, Mauritius and Canada. Her experience in program evaluation includes:

·       Familiarity with UN, USAID and USDA evaluation policies and guidelines.

·       Evaluation of USAID Feed the Future food security and nutrition programs; USDA Food for Progress agriculture value chain programs; UNDP women and youth empowerment and micro-enterprise programs; and UNFPA and USAID programs in humanitarian settings.

Jackie was a staff member of the World Bank International Program for Development Evaluation Training (IPDET). She provides training and workshop facilitation in survey methodology and in monitoring and evaluation. She is a member of the Canadian Evaluation Society, the American Evaluation Association and the Canadian Association of International Development Practitioners, and is a lifetime member of the International Development Evaluation Association (IDEAS). She is fluent in English, French and Spanish and enjoys working with emerging young evaluators.

My contributions

    • Dear Jean,

      Thank you for bringing up this topic. I also wish to thank Renata Mirulla for her good work in administering this forum. I am throwing my two cents' worth into this discussion, as I have used the mixed methods approach in most, if not all, of my evaluation work. In fact, it was mandatory that I use mixed methods in the evaluation of the USAID- and USDA-funded projects. You can find the reports, organized by theme, on this page (see EVALUATIONS IN THE DEC: https://dec.usaid.gov/dec/content/evaluations.aspx).

      Please find below my reply to your questions. At the bottom of this document, I show how I have used the mixed methods approach in the evaluation of two Food for Progress Projects in The Gambia and Senegal. Please do not hesitate to contact me if you have any questions.

      Kind regards,

      Jackie

      1. In the evaluation design stage – What types of evaluation questions necessitate(d) mixed methods? What are the (dis)advantages of not having separate qualitative or quantitative evaluation questions?

      All evaluation questions can be answered using a mixed methods approach. When designing the Evaluation Design Matrix or Framework, the evaluator identifies, for each evaluation question, the informant(s), the data collection method to be used and the corresponding method of analysis. For example, for a quantitative survey the method will be a statistical one, such as descriptive or inferential analysis; for the qualitative research methods, content analysis and thematic analysis can be performed.
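
      To make the distinction concrete, here is a minimal sketch in Python of the two kinds of statistical analysis mentioned above; the dataset and variable names are invented for illustration, not taken from any actual evaluation:

      ```python
      # Descriptive vs. inferential analysis on a toy survey dataset
      import pandas as pd
      from scipy import stats

      # One row per respondent (hypothetical data)
      df = pd.DataFrame({
          "region":        ["North", "North", "South", "South", "South", "North"],
          "meals_per_day": [2, 3, 1, 2, 2, 3],
      })

      # Descriptive analysis: summarise the indicator, overall and by region
      print(df["meals_per_day"].describe())
      print(df.groupby("region")["meals_per_day"].mean())

      # Inferential analysis: test whether mean meals per day differs by region
      north = df.loc[df["region"] == "North", "meals_per_day"]
      south = df.loc[df["region"] == "South", "meals_per_day"]
      t_stat, p_value = stats.ttest_ind(north, south, equal_var=False)
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
      ```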

      2. When developing data collection instruments for mixed methods evaluation – Are these instruments developed at the same time or one after another? How do they interact?

      Yes, in parallel. There is ONE Evaluation Design Matrix using a mixed methods approach; there is still ONE evaluation, not two. Both methods will attempt to answer the same questions. The evaluation may decide to obtain information for a particular question using only one method, for example the qualitative method for evaluation questions pertaining to RELEVANCE.

      3. During sampling – Is sampling done differently or does it use the same sampling frame for each methodological strand? How and why?

      We identify the informants and make a list for each type.

      The quantitative survey will not cover all the informants targeted by the evaluation. Surveys are usually conducted for large populations, for example the program's beneficiaries, and a probabilistic sample of the population units is selected. A sampling frame is needed, i.e. a list of the survey population (list frame) or an area frame, as in the case of a household survey. Note that the population units are not always people; they can be schools or farms, for example.

      More than one survey can be conducted in an evaluation, for example a household survey, a survey of farmers, a survey of input suppliers and a client satisfaction survey of services received from, say, a lending or microfinance institution, depending on the program activities. It all depends on what we are trying to find out.

      Note that the quantitative surveys provide the data that can be used to calculate the performance indicators, as well as the characteristics of the target population and the prevalence of a situation or behaviour, e.g. the number or percentage of farmers who do not have a certain type of equipment, or the number of households eating fewer than three meals a day. The data are weighted to the population of interest, and estimates are produced for the entire population or for subsets of it, for example gender and age groups (demographic variables).
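
      As a minimal illustration of this weighting step (the numbers and field names below are made up, not taken from any actual survey), each respondent carries a design weight equal to the number of population units he or she represents, and weighted totals and percentages are then produced for the whole population or for subgroups:

      ```python
      import pandas as pd

      # Hypothetical respondents with design weights (population units represented)
      df = pd.DataFrame({
          "gender":        ["F", "M", "F", "M", "F"],
          "has_equipment": [0, 1, 0, 0, 1],   # 1 = owns the equipment
          "weight":        [120.0, 95.0, 110.0, 130.0, 90.0],
      })

      # Weighted estimate of the number of farmers without the equipment
      total_without = (df["weight"] * (1 - df["has_equipment"])).sum()
      pct_without = 100 * total_without / df["weight"].sum()
      print(f"Estimated farmers without equipment: {total_without:.0f} ({pct_without:.1f}%)")

      # The same weighted percentage for a demographic subgroup (by gender)
      for gender, g in df.groupby("gender"):
          pct = 100 * (g["weight"] * (1 - g["has_equipment"])).sum() / g["weight"].sum()
          print(f"{gender}: {pct:.1f}% without equipment")
      ```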

      The qualitative data collection will target stakeholders, for example government officials, program officers and suppliers, for Key Informant or semi-structured interviews. Focus group discussions are conducted with a sample of the larger population of interest or of stakeholders, for example farmers or community health officers. In qualitative research, purposive sampling is the technique used to select a specific group of individuals or units for analysis. Participants are chosen “on purpose,” not randomly; the technique is also known as judgmental or selective sampling. The information gathered cannot be generalised to the entire population. The main goal of purposive sampling is to identify the cases, individuals or communities best suited to help answer the research or evaluation questions.

      Note that there is no correct or universally recognized method for calculating a sample size for purposive sampling, whereas in quantitative surveys there are formulas to determine the sample size that yields the desired reliability level for the estimates.
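
      For instance, one commonly used formula for estimating a proportion is Cochran's, sketched below with illustrative numbers (a 95% confidence level, a ±5% margin of error and a finite population correction; none of these figures refer to a real survey):

      ```python
      import math

      def sample_size(p=0.5, margin=0.05, z=1.96, population=None):
          """Cochran's formula for estimating a proportion.
          p: expected proportion (0.5 is the conservative choice);
          margin: desired margin of error;
          z: z-score for the chosen confidence level (1.96 for 95%)."""
          n0 = (z ** 2) * p * (1 - p) / margin ** 2
          if population is not None:
              n0 = n0 / (1 + (n0 - 1) / population)  # finite population correction
          return math.ceil(n0)

      # e.g. estimating a percentage among 5,000 farmers, within ±5% at 95% confidence
      print(sample_size(p=0.5, margin=0.05, z=1.96, population=5000))  # -> 357
      ```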

      4. During data collection – How and why are data collected (concurrently or sequentially)?

      Data is collected concurrently since there is one deadline and a single evaluation report to submit.

      Qualitative data collection is usually performed by one person (I like to add a note-taker or to record the interviews, with the permission of the informant, for quality assurance). Surveys are carried out by teams of trained enumerators, which makes the process quite expensive. These days, data collection is usually performed using tablets instead of paper questionnaires. The survey data must be edited (cleaned) before they are analysed. Surveys can also be conducted by phone or online, depending on the type of informants, but the response rate is lower than for face-to-face interviews.
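
      To give a flavour of what editing (cleaning) can involve, here is a small sketch in which automated range checks flag implausible values before analysis; the fields and bounds are purely illustrative:

      ```python
      import pandas as pd

      # Hypothetical raw survey records collected on tablets
      raw = pd.DataFrame({
          "age":            [34, 129, 45, 27],
          "household_size": [6, 4, 0, 5],
          "meals_per_day":  [2, 3, 9, 2],
      })

      # Edit rules: plausible ranges for each field (illustrative bounds only)
      rules = {"age": (10, 110), "household_size": (1, 30), "meals_per_day": (0, 6)}

      for field, (lo, hi) in rules.items():
          bad = ~raw[field].between(lo, hi)
          if bad.any():
              print(f"{field}: {bad.sum()} record(s) fail the range check")
              raw.loc[bad, field] = float("nan")  # set to missing for follow-up or imputation
      ```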

      5. During data analysis – Are data analysed together/separately? Either way, how and what dictates which analytical approach?

      The data are analysed separately. The evaluators then perform data triangulation by cross-referencing the survey data with the findings from the qualitative research, the document review and any other method used.

      6. During the interpretation and reporting of results – How are results presented, discussed and/or reported?

      There is only one Evaluation Report, with the quantitative data accompanied by a narrative and by explanation/confirmation/justification of findings from the qualitative research and secondary sources. Sometimes a finding from the qualitative research will be accompanied by the quantitative data from the survey. For example, in a focus group discussion, farmers may report that they cannot buy fertilisers because they are too expensive. The survey can ask the same question and provide the percentage of farmers unable to purchase fertilisers; in addition, the survey can tell whether this issue exists in all geographical areas. In-depth qualitative interviews can provide other reasons why they cannot buy fertilisers.
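
      A minimal sketch of that kind of quantification (with invented responses): the survey question is cross-tabulated by geographical area to see whether the issue raised in the focus groups is localised or widespread:

      ```python
      import pandas as pd

      # Hypothetical survey responses to the question raised in the focus groups
      df = pd.DataFrame({
          "area":               ["East", "East", "West", "West", "West", "East"],
          "can_buy_fertiliser": ["No", "No", "Yes", "No", "Yes", "No"],
      })

      # Percentage of farmers able/unable to buy fertiliser, by geographical area
      table = pd.crosstab(df["area"], df["can_buy_fertiliser"], normalize="index") * 100
      print(table.round(1))  # shows whether the issue exists in all areas
      ```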

      The mixed methods approach allows the evaluator to take advantage of the respective strengths of the qualitative and quantitative methods in collecting and analysing information to answer the research/evaluation questions.

      Examples of evaluations using the mixed methods approach:

      Mid-Term Evaluation Millet Business Services Project

      Qualitative:

      • Key Informant Interviews (KIIs) of Implementer Staff and Partners
      • Focus Group Discussions (FGDs) of Millet Producers and Managers of Processing Units
      • Field observations

      Quantitative (four surveys were conducted):

      • Survey of Millet Producers
      • Survey of Producer Organizations
      • Survey of Processing Units
      • Survey of Trained Processing Unit Staff

      Other:

      • Literature and Program Document reviews

      End-Line Evaluation of the SeneGambia Cashew Value Chain Enhancement Project

      Qualitative:

      • Key Informant Interviews (KIIs) of:
        • Managers of Tree Nurseries
        • Cashew Traders
        • Processors
        • Local Cashew Facilitators
        • Farmer Associations
      • Focus Group Discussions (FGDs) of Cashew Producers
      • Field observations

      Quantitative:

      • Survey of Cashew Producers
      • Survey of Processing Units or Centers
      • Survey of Trained Processors

      Other:

      • Literature and Program Document reviews
      • Review of IRD and CEPII monitoring data, monthly/quarterly/annual reports, thematic reports, case studies, and staff interviews

       

    • Dear Jean,

      In response to your follow-up to my contribution to this discussion [1], I would first like to thank Malika Bounfour and Marlene Roefs for sharing two very valuable documents.

      Please refer to page 6 of the “Mixed methods paper by WUR_WECR_Oxfam_0.pdf” for the definition of the convergent parallel design. This is the approach I use, and what I meant when I mentioned that the data collection tools are developed in parallel, or concurrently. As for the ONE Evaluation Design Matrix, this follows what is described as the embedded design, also on page 6.

      With regard to sampling methods, I invite my colleagues to check this Statistics Canada link; I worked for this national statistical agency for over 30 years as a survey methodologist.

      https://www150.statcan.gc.ca/n1/edu/power-pouvoir/toc-tdm/5214718-eng.htm

      In particular, for the distinction between probability or probabilistic sampling (used in quantitative surveys) and non-probabilistic sampling, such as purposive sampling (used in qualitative data collection), please refer to Section 3.2, Sampling: https://www150.statcan.gc.ca/n1/edu/power-pouvoir/ch13/prob/5214899-eng.htm

      You will note in Section 3.2.3 why purposive sampling is being considered by some for use in quantitative surveys. This is not the practice in official statistical agencies. The section also explains why non-probability sampling should be used with extra caution.

      I hope this is helpful. I note the following on page 13 of the “Mixed methods paper by WUR_WECR_Oxfam_0.pdf” shared by Marlene: “Combining information from quantitative and qualitative research gives a more comprehensive picture of a programme’s contribution to various types of (social) change”. I fully agree with this statement.

      Thank you for bringing up this topic on EvalForward which, as you mentioned, has raised a lot of interest in the group. I personally favour the mixed methods approach. In fact, in my experience, findings from a mixed methods evaluation receive less “push-back” from my clients. It is hard to argue with or oppose the findings when triangulation shows the same results from different sources.

      Kindest regards,

      Jackie

      P.S. I have provided this link, EVALUATIONS IN THE DEC: https://dec.usaid.gov/dec/content/evaluations.aspx, where you will find hundreds of examples of evaluations that have used mixed methods.

       

      [1] Jackie: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how “all evaluation questions can be answered using a mixed method approach”? In your view, the data collection tools are developed in parallel, or concurrently. And you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same question. For sampling, would you clarify how you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why and how? Would there be any problem if purposive sampling is applied for a quantitative evaluation?

       

  • What type of evaluator are you?

    Discussion
    • Dear Colleagues,

      I am happy that John has brought up the ToC for discussion. I have often wondered why evaluations review/mention the ToC at the inception phase but seem to ignore it during the analysis, and seldom assess the validity or robustness of the ToC and its accompanying assumptions. Instead, evaluations focus on the OECD criteria; rarely have I seen the validity of the ToC addressed in evaluation reports. Have you?

      I do not think that ToCs are cast in stone, hence I like to think that evaluations can/should show data that: 1. confirm that the assumptions are valid; 2. prove or disprove that the expected outcomes and results are realistic; 3. identify which ones will be realised; and 4. for those that will not happen, indicate what can be done/changed in the intervention so that the results are achieved, or else tell us what will happen if the intervention is carried on as is.

      I would like to suggest an article by John Mayne, who sadly passed away last December. In it, John discusses the criteria for a robust ToC and a tool for carrying out analysis of ToCs, in both ex-ante and ex-post settings. Please see https://www.researchgate.net/publication/321510354_Theory_of_Change_Analysis_Building_Robust_Theories_of_Change

      Kind regards,

      Jackie

    • Thank you, Carlos, for bringing up another important topic in the forum and for the document links. I also thank the colleagues who shared their experience and comments.

      Theory of Change, Logical Framework (Logframe) and Results Chains are all methodologies for planning, measuring and evaluating programs. Each has a visual representation, in the form of a matrix, of what happens, or is expected to happen, as the result of the program or project, or any initiative for that matter. As a past facilitator at IPDET (the International Program for Development Evaluation Training, financed by the World Bank) and in my practice, I have found that Theory of Change is a name or title that is not easily identified as a methodology; it is a term that can bring dread to the mind. I would rather use Logic Model, a term some use interchangeably to describe the ToC. However, Logic Model is also a synonym for Results Chain (https://www.betterevaluation.org/en/search/site/result%20chain) and for Program Theory/Theory of Change (https://www.betterevaluation.org/en/rainbow_framework/define/develop_programme_theory).

      In my view, Theory of Change is a more powerful tool than the two others mentioned, since direct links between activities, outputs and outcomes (short-, mid- and long-term, the latter often referred to as impact) must be established and shown in the matrix. Furthermore, the ToC is not complete without assumptions. It is not a one-time matrix but must/can be reviewed and modified over time. During evaluation, the assumptions must be verified; if they do not hold, or if activities were modified during the course of the program, the matrix has to be revised accordingly.

      The requirements of the ToC foster in-depth reflection on what the program is trying to achieve. A difficulty often encountered is deciding what is an output and what is an outcome. I have found the Kellogg document a very useful guide; it uses the term Logic Model: https://www.bttop.org/sites/default/files/public/W.K.%20Kellogg%20LogicModel.pdf. Semantics are important in developing the matrix, and active verbs such as “increased” that denote change help make the distinction between output and outcome.

      I also find that filling in the Activities column generates discussions that often show that stakeholders were not aware, or had different views, of what was actually happening during program implementation. Follow-up discussions would often turn to whether the activities will trigger behaviour change among the program beneficiaries and have spill-over effects and results in the community or the overall target population. I find that the ToC matrix facilitates the identification of indicators that are more meaningful for measuring performance and results.

      I agree that the ToC must be developed in a participatory manner. However, at the time of evaluation, it may not exist, or the one available may be poor or confusing. In these cases, after an initial document review and discussions with the program staff, I design the matrix, or modify the existing one, and circulate it. It helps me understand the program and formulate my requests for clarification. Since the matrix is simple to read (I like left to right), it usually receives attention and feedback. We end up with all involved having the same understanding of the program and its expected achievements. I have seen evaluation questions revised as the result of this exercise.

      It would be interesting to find out how many IPDETers who practise evaluation utilize the ToC in their work. The ToC is at the core of IPDET; see The Road to Results, Morra Imas and Rist, 2009, the textbook for this training: https://openknowledge.worldbank.org/bitstream/handle/10986/2699/52678.pdf?sequence=1&isAllowed=y

       

    • Dear Colleagues,

      I wish to thank you for taking the time to join this discussion and to share your experience and web links to very informative documents. Please allow me to summarize some of your comments and share my reflections following this discussion on women in agriculture, which is certainly an important topic, as demonstrated by your interest and contributions.

      Several of you have pointed out the challenges faced by women; these include lack of access to land ownership, lack of financing, and the burden of chores and household responsibilities. More important still is women's lack of voice in decision making, which can be due to cultural and societal norms and to the perception that women are illiterate and hence cannot contribute to decision making. Furthermore, technology is perceived as a male domain.

      It was also noted that while evaluations found that agricultural production by women beneficiaries increased as a result of their participation in agricultural activities, there was less evidence to suggest that they were individually diversifying their agricultural products and breaking into agri-business and self-employment as expected. This is to say that women continue to practice subsistence farming, which is not going to move them and their families out of poverty.

      It appears that we have yet to find ways for women to develop formal and informal support and innovation networks with others, and ways for women to exercise decision-making power in intra-household discussions with their spouses and extended family, especially when culture limits this kind of interaction. Not least is how we get men to support women, including their spouses, to innovate and move from subsistence farming to entrepreneurship. Should we say, moving women from being invisible to being strong actors along the agriculture value chains?

      I also note with interest from your contributions that urban farming, especially rooftop farming, is now being practiced. I have not yet seen it in my work, and it would be interesting to see what data exist for this type of activity. Sadly, you have noted that monitoring systems are not always in place to measure the results of agriculture programs on women's performance beyond increased productivity. Furthermore, some of you find that program managers still think that M&E exercises are expensive and require significant effort; hence the lack of efficient M&E systems.

      The Oxford Dictionary provides the following definition of empowerment:

      To empower somebody (to do something) is to give someone more control over their own life or the situation they are in, as in “The movement actively empowered women and gave them confidence in themselves.”

      This will become all the more necessary as we try to meet the challenges of the SDGs, since statistics tell us that an increasing number of households are headed by females (for a summary of World Bank data, please see http://www.factfish.com/statistic/female%20headed%20households). Women are often left in charge because their spouses have left to wage war and/or returned maimed, or left to work in the cities; or because the women never married, are widowed, or the man has simply deserted the family.

      Thank you again for your contributions. I hope that we will have more opportunities to discuss this topic in the future and that you will be reporting that women and marginalized groups are moving from subsistence farming to engagement in agricultural market expansion.

      Jackie Yiptong Avila, BSc, MBA, DPE

      International Consultant

      Program Evaluator; Survey Methodologist

      Ottawa, Canada

    • Dear Dorothy,

      Thank you for having brought up Youth in Agriculture in this forum. The remarkable response indicates how important this topic is for evaluators and others. I have had the opportunity to evaluate several agriculture programs in Senegal and The Gambia. I would like to share my reflections, findings and the recommendations that I have made, which I hope answer your initial questions: Are evaluations making a difference or not? If not, how can they be made to have greater effect?

      There seems to be general agreement that the negative perception of agriculture, which describes farmers as illiterate and farm work as back-breaking, deters young people. I think these are only part of a larger number of reasons. What can be done to de-stigmatize farming? I believe that changing the language and concepts around small-scale farming is a first step. Should we not:

      • Think of the farmer as an agricultural entrepreneur, a businessman or businesswoman able to provide for his or her family, rather than an illiterate poor person doing a back-breaking job?
      • Treat the farm as a family business and not merely an entity for survival?

      Colleagues have mentioned educated and uneducated Youth. I believe that an uneducated young person with some numeracy and literacy skills can become a successful entrepreneur. Let’s not stigmatize the “uneducated” Youth in rural areas; doing so will only perpetuate the negative perception of farmers and farming.

      In the discussion, it was noted that there are initiatives encouraging the Youth to enter agribusiness. These young people are not necessarily from rural communities. Hence two further groups:

      1. Outsiders, and
      2. Children of farmers/Youth in rural areas.

      The outsiders, as in the AgriHack Talent initiative, are already entrepreneurs, start-ups, companies, members of the country's diaspora, etc.; i.e. they are investors in the agricultural sector (thank you, Pamela White, for the link https://www.cta.int/en/youth). They are presumably educated, with technology and resources obtained on their own or as beneficiaries of some program. The questions I would like to ask are:

      • Are they going to build capacity among the local farmers and Youth or are they expecting cheap labour?
      • Do they have a good understanding of the rural and farming community to be able to collaborate with the rural community? Will they be ready to learn from locals and adopt traditional agricultural practices that bring results?
      • Are they truly going to make a positive difference for the local Youth or are they going to be the masters who dictate?

      For the children of farmers, the challenges are many. Land ownership is an important issue. Farmers in my studies did not have title to their land, and we collected incidences of abuse. For example, once a farmer starts to have success, as in the cashew sector, which can be quite lucrative, a “cousin” who lives in the city and is presumably now “wealthy” arrives and lays claim to the ancestral land farmed by the “pauvre paysan, son cousin” [the poor peasant, his cousin]. Young people are justifiably upset and discouraged to see their fathers mistreated, and they have few recourses against this “injustice”. One of the recommendations made was that there be a push for land ownership in the country and, where the law already allows for it, that farmers be taught and supported in obtaining title to their parcels of land through a farmers' association and/or the aid program.

      Someone mentioned the “claustrophobic” environment of farms. Indeed, the lack of roads and difficult access to towns are an issue. This situation limits the sale of crops, and in fact, in many villages, we found that the farmers are at the mercy of buyers. For lack of transportation, the farmer has no choice but to sell to those who come into the village with their own truck, car, motorcycle, etc., and, of course, at the price set by the buyers: an unfair practice which will discourage the Youth from farming.

      I disagree that young people are leaving for the city just because of the big city lights, the discotheques and the “fun” life. Once they leave, the appeal of not returning to the village is great. Can we blame them if they do not return to a village where there is no electricity and no running water? Rural development is fundamental if we wish the Youth to remain on agricultural land.

      How do we get the young people to start thinking of the farm as a business?

      I was deeply saddened to hear of a compound in The Gambia that saw 26 of its youth leave for the north; they are believed to have all died in the Mediterranean Sea. They were young people who had attended school, but the lack of opportunities led them to take the risk of leaving home in the hope of a better future. I was deeply saddened because growing cashew trees could be a lucrative business in this country, and maybe, had they known that this sector had so much to offer, they would not have left their village but would have cultivated the land instead. Unfortunately, an academic education is seen as a path to a white-collar job. This is a common problem. Here in Canada, we have a shortage of tradespeople; for a long time, our children were encouraged to get a university degree, for example in electrical engineering, while a college diploma to become an electrician was not viewed in the same light. This is reversing now, as an entrepreneurial electrician often makes more income than an engineer.

      Older farmers are also selling their land, as their children have gone away for higher education and other professions. However, university graduates in agriculture in this country are hired by large food-producing companies and in research; this may not yet be as frequent in developing countries.

      We should not forget that the agriculture sector offers jobs all along the value chain; tilling the land is not the sole occupation. These jobs usually require a certain level of education.

      It was suggested in our evaluation reports that young, literate family members be included in the Farmer Field Schools (FFS), which targeted the farmers only. The evaluations found that the training, which included business practices and accounting, was not very successful, since the farmers were too often illiterate.

      I understand that the recommendations we made were taken into account in the planning of the next phase of the programs. I hope to have the opportunity to evaluate these “enhancements”. I strongly suggest that, as with gender, we treat Youth as a cross-cutting theme in evaluations of agriculture programs. Let us not forget the young girls and women who farm. I have found that agriculture programs will make a head count of female beneficiaries, but few initiatives adapt activities to match the needs and accommodate the timetables of the women. Absenteeism and drop-out rates at FFS were higher for females than for males.

      It would be good for evaluators to share their findings and recommendations. Should we evaluators have a common set, a repertoire, of recommendations that promote practices proven to bring positive results? Of course, to be applied where relevant and appropriate to the context!

      Jackie Yiptong Avila
      Program evaluator / Survey methodologist
      Canada