Jackie Yiptong Avila is a Program Evaluator with over ten years of experience conducting evaluations using a mixed methods approach. Previously, she worked at Statistics Canada as a Survey Statistician and Methodologist, a career that spanned over 30 years. She has extensive experience in designing and managing household, socio-economic and enterprise surveys, as well as customer satisfaction surveys, and has worked in West Africa, the Middle East, Haiti, Mauritius and Canada. Her experience in program evaluation includes:
· Familiarity with the UN, USAID and USDA Evaluation Policy and Guidelines.
· Evaluation of USAID Feed the Future food security and nutrition programs; the USDA Food for Progress agriculture value chains program; UNDP women and youth empowerment and micro-enterprise programs; and UNFPA and USAID programs in humanitarian settings.
Jackie was a staff member of the World Bank International Program for Development Evaluation Training (IPDET). She provides training and workshop facilitation in survey methodology and in monitoring and evaluation. She is a member of the Canadian Evaluation Society, the American Evaluation Association and the Canadian Association of International Development Practitioners, and is a lifetime member of the International Development Evaluation Association (IDEAS). She is fluent in English, French and Spanish and enjoys working with emerging young evaluators.
Jackie (Jacqueline) Yiptong Avila
Program Evaluator/Survey Methodologist, Independent Consultant

Dear Jean,
Thank you for bringing up this topic. I also wish to thank Renata Mirulla for her good work in administering this forum. I am throwing my two cents' worth into this discussion, as I have used the mixed methods approach in most, if not all, of my evaluation work. In fact, it was mandatory that I use mixed methods in the evaluation of the USAID- and USDA-funded projects. You can find the reports by theme on this page (see EVALUATIONS IN THE DEC: https://dec.usaid.gov/dec/content/evaluations.aspx).
Please find below my replies to your questions. At the bottom of this document, I show how I have used the mixed methods approach in the evaluation of two Food for Progress projects in The Gambia and Senegal. Please do not hesitate to contact me if you have any questions.
Kind regards,
Jackie
1. In the evaluation design stage – What types of evaluation questions necessitate(d) mixed methods? What are the (dis)advantages of not having separate qualitative or quantitative evaluation questions?
All evaluation questions can be answered using a mixed methods approach. When designing the Evaluation Design Matrix (or framework), for each evaluation question the evaluator identifies the informant(s) and the data collection method(s) to be used, together with the corresponding method of analysis. For example, for a quantitative survey the method of analysis will be statistical, such as descriptive or inferential analysis; for the qualitative research method, content analysis and thematic analysis can be performed.
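To make this concrete, here is a minimal sketch of one row of such a matrix expressed as a simple data structure; the question, informants and methods shown are invented purely for illustration, not taken from any of the evaluations mentioned here.

```python
# A minimal, hypothetical sketch of one row of an Evaluation Design Matrix.
# The question, informants and methods are invented for illustration only.
design_matrix = [
    {
        "evaluation_question": "To what extent did the project improve household food security?",
        "informants": ["beneficiary households", "programme officers"],
        "data_collection": {
            "quantitative": "household survey (probability sample)",
            "qualitative": "key informant interviews, focus group discussions",
        },
        "analysis": {
            "quantitative": "descriptive and inferential statistics",
            "qualitative": "thematic / content analysis",
        },
    },
]

# Each additional evaluation question becomes another row of the same matrix,
# keeping a single design for a single evaluation.
for row in design_matrix:
    print(row["evaluation_question"])
    print("  Quantitative:", row["data_collection"]["quantitative"])
    print("  Qualitative: ", row["data_collection"]["qualitative"])
```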
2. When developing data collection instruments for mixed methods evaluation – Are these instruments developed at the same time or one after another? How do they interact?
Yes, in parallel. There is ONE Evaluation Design Matrix using a mixed methods approach; there is still ONE evaluation, not two. Both methods will attempt to answer the same question. The evaluation team may decide to obtain information for a particular question using only one method, for example the qualitative method for evaluation questions pertaining to RELEVANCE.
3. During sampling – Is sampling done differently or does it use the same sampling frame for each methodical strand? How and why?
We identify the informants and draw up a list for each type.
The quantitative survey will not cover all the informants targeted by the evaluation. Surveys are usually conducted for large populations, for example the program's beneficiaries, and a probabilistic sample of population units is selected. A sampling frame is needed, i.e. a list of the survey population (list frame) or an area frame, as in the case of a household survey. Note that the population units are not always people; they can be schools or farms, for example. More than one survey can be conducted in an evaluation, for example a household survey, a survey of farmers, a survey of input providers and a client satisfaction survey on services received from, say, a lending or microfinancing institution, depending on the program activities. It all depends on what we are trying to find out.

Note that the quantitative surveys provide the data used to calculate the performance indicators, as well as the characteristics of the target population and the prevalence of a situation or behaviour, e.g. the percentage of farmers who do not have a certain type of equipment, or the number of households that eat fewer than three meals a day. The data are weighted to the population of interest, and estimates are produced for the entire population or for subsets of the population, for example by gender and age group (demographic variables).
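As a small illustration of how design weights turn sample responses into population estimates, here is a sketch with made-up respondents and weights (none of the figures come from an actual survey); it estimates the share of farmers lacking a piece of equipment, overall and by sex.

```python
import pandas as pd

# Hypothetical respondent-level survey records; 'weight' is the design weight
# (inverse of the selection probability), so weighted totals estimate the population.
sample = pd.DataFrame({
    "sex":           ["F", "F", "M", "M", "M", "F"],
    "has_equipment": [0,   1,   0,   0,   1,   1],
    "weight":        [120.0, 95.0, 110.0, 130.0, 100.0, 105.0],
})

def weighted_share_lacking(df, flag_col="has_equipment", weight_col="weight"):
    """Weighted proportion of respondents who do NOT have the equipment (flag == 0)."""
    lacking = df.loc[df[flag_col] == 0, weight_col].sum()
    return lacking / df[weight_col].sum()

# Estimate for the whole target population ...
print("Overall share lacking equipment:", round(weighted_share_lacking(sample), 3))

# ... and for subsets of the population, e.g. by sex, as described above.
for sex, group in sample.groupby("sex"):
    print(sex, round(weighted_share_lacking(group), 3))
```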
Qualitative data collection targets stakeholders, for example government officials, program officers and suppliers, for key informant or semi-structured interviews. Focus group discussions are conducted with a sample of the larger population of interest or of stakeholders, for example farmers or community health officers. In qualitative research, purposive sampling is the technique used to select a specific group of individuals or units for analysis: participants are chosen "on purpose," not randomly. It is also known as judgmental or selective sampling. The information gathered cannot be generalised to the entire population. The main goal of purposive sampling is to identify the cases, individuals or communities best suited to help answer the research or evaluation questions.
Note that there is no correct or universally recognised method for calculating a sample size for purposive sampling, whereas for quantitative surveys there are formulas to determine the sample size that yields the desired reliability of the estimates.
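One commonly used formula of this kind, for estimating a proportion with a given margin of error, is sketched below; the confidence level, margin of error and population size are illustrative only, not drawn from any of the evaluations discussed here.

```python
import math

def sample_size_proportion(margin_of_error, p=0.5, z=1.96, population=None):
    """Classic sample-size formula for estimating a proportion:
    n0 = z^2 * p * (1 - p) / e^2, with an optional finite population correction.
    p = 0.5 gives the most conservative (largest) sample size."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)   # finite population correction
    return math.ceil(n0)

# Illustrative numbers: 95% confidence (z = 1.96), +/- 5 percentage points,
# and a hypothetical list frame of 8,000 beneficiary farmers.
print(sample_size_proportion(0.05))                    # about 385 for a very large population
print(sample_size_proportion(0.05, population=8000))   # smaller once the correction is applied
```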
4. During data collection – How and why are data collected (concurrently or sequentially)?
Data are collected concurrently, since there is one deadline and a single evaluation report to submit.
Qualitative data collection is usually performed by one person (I like to add a note taker, or to record the interviews with the informant's permission, for quality assurance). Surveys are carried out by teams of trained enumerators, which makes the process quite expensive. These days, data collection is usually done on tablets instead of paper questionnaires. The survey data must be edited (cleaned) before they are analysed. Surveys can also be conducted by phone or online, depending on the type of informant, but the response rate is lower than with face-to-face interviews.
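By way of illustration of the editing (cleaning) step, here is a minimal sketch; the field names, values and rules are all invented, and in a real survey flagged records would normally be followed up with the field team or imputed rather than simply dropped.

```python
import pandas as pd

# Hypothetical raw survey records exported from the tablets.
raw = pd.DataFrame({
    "respondent_id": [101, 102, 102, 103],
    "age":           [34, 251, 251, 29],       # 251 is an obvious capture error
    "meals_per_day": [3, 2, 2, None],          # missing key item for respondent 103
})

clean = (
    raw.drop_duplicates(subset="respondent_id")     # remove duplicate submissions
       .query("(age > 0) & (age < 110)")            # drop records with out-of-range ages
       .dropna(subset=["meals_per_day"])            # drop records missing a key item
)
print(clean)
```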
5. During data analysis – Are data analysed together/separately? Either way, how and what dictates which analytical approach?
The data are analysed separately. The evaluators then perform data triangulation by cross-referencing the survey data with the findings from the qualitative research, the document review and any other method used.
6. During the interpretation and reporting of results – How are results presented, discussed and/or reported?
There is only one evaluation report, with the quantitative data accompanied by a narrative and by explanation, confirmation or justification of findings from the qualitative research and secondary sources. Sometimes a finding from the qualitative research will be accompanied by the quantitative data from the survey. For example, in a focus group discussion farmers reported that they cannot buy fertiliser because it is too expensive. The survey can ask the same question and provide the percentage of farmers unable to purchase fertiliser, but in addition the survey can tell whether this issue exists in all geographical areas. In-depth qualitative interviews can provide other reasons why they cannot buy fertiliser.
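Staying with the fertiliser example, once the survey records carry design weights it is straightforward to produce the weighted share by geographical area; the region names and figures below are invented purely to show the mechanics.

```python
import pandas as pd

# Hypothetical farmer survey records with design weights.
farmers = pd.DataFrame({
    "region":        ["North", "North", "South", "South", "South"],
    "cannot_afford": [1, 0, 1, 1, 0],           # reported being unable to buy fertiliser
    "weight":        [80.0, 120.0, 60.0, 90.0, 150.0],
})

# Weighted share unable to afford fertiliser, overall and by region, to check
# whether the issue raised in the focus groups appears everywhere or only locally.
farmers["weighted_flag"] = farmers["cannot_afford"] * farmers["weight"]
overall = farmers["weighted_flag"].sum() / farmers["weight"].sum()

by_region = farmers.groupby("region")[["weighted_flag", "weight"]].sum()
by_region["share_cannot_afford"] = by_region["weighted_flag"] / by_region["weight"]

print(round(overall, 3))
print(by_region["share_cannot_afford"].round(3))
```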
The mixed methods approach makes it possible to take advantage of the respective strengths of the qualitative and quantitative methods in collecting and analysing information to answer the research/evaluation questions.
Examples of evaluations using the mixed methods approach:
Mid-Term Evaluation Millet Business Services Project
Four surveys were conducted:
Jackie (Jacqueline) Yiptong Avila
Program Evaluator/Survey Methodologist, Independent Consultant

Dear Jean,
In response to your follow-up to my contribution to this discussion [1], I would first like to thank Malika Bounfour and Marlene Roefs for sharing two very valuable documents.
Please refer to the definition of the convergent parallel design on page 6 of Mixed methods paper by WUR_WECR_Oxfam_0.pdf. This is the approach I use when I mention that the data collection tools are developed in parallel, or concurrently. As for the ONE Evaluation Design Matrix, this follows what is described as the embedded design, also on page 6.
With regard to sampling methods, I invite my colleagues to check this Statistics Canada link; I worked at this national statistical agency for over 30 years as a survey methodologist.
https://www150.statcan.gc.ca/n1/edu/power-pouvoir/toc-tdm/5214718-eng.htm
In particular, for the distinction between probability (or probabilistic) sampling, used in quantitative surveys, and non-probability sampling such as purposive sampling, used in qualitative data collection, please refer to Section 3.2, Sampling: https://www150.statcan.gc.ca/n1/edu/power-pouvoir/ch13/prob/5214899-eng.htm
You will note in Section 3.2.3 why purposive sampling is being considered by some for use in quantitative surveys; this is not the practice in official statistical agencies. It is also explained why non-probability sampling should be used with extra caution.
I hope that this is helpful. I note the following on page 13 of Mixed methods paper by WUR_WECR_Oxfam_0.pdf shared by Marlene: "Combining information from quantitative and qualitative research gives a more comprehensive picture of a programme's contribution to various types of (social) change." I fully agree with this statement.
Thank you for bringing up this topic on EvalForward which, as you mentioned, has raised a lot of interest in the group. I personally favour the mixed methods approach. In fact, in my experience, findings from a mixed methods evaluation receive less "push-back" from my clients: it is hard to dispute the findings when triangulation shows the same results from different sources.
Kindest regards,
Jackie
P.S. I have provided this link, EVALUATIONS IN THE DEC: https://dec.usaid.gov/dec/content/evaluations.aspx, where you will find hundreds of examples of evaluations that have used mixed methods.
[1] Jackie: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how "all evaluation questions can be answered using a mixed method approach"? In your view, the data collection tools are developed in parallel, or concurrently, and you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same question. For sampling, would you clarify how you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why and how? Would there be any problem if purposive sampling is applied for a quantitative evaluation?