RE: How are mixed methods used in programme evaluation?

Dear Jean,

In response to your follow-up to my contribution to this discussion [1], I would first like to thank Malika Bounfour and Marlene Roefs for sharing two very valuable documents.

Please refer to page 6 of the document Mixed methods paper by WUR_WECR_Oxfam_0.pdf for the definition of the convergent parallel design. This is the approach I had in mind when I mentioned that the data collection tools are developed in parallel, or concurrently. As for the ONE Evaluation Design Matrix, this follows what is described as the embedded design, also on page 6.
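To make this concrete, here is a minimal sketch in Python of what I mean by ONE Evaluation Design Matrix: every evaluation question is mapped to both a quantitative and a qualitative data collection tool, developed concurrently. The questions and tools below are purely illustrative examples of mine, not taken from the paper.

```python
# Illustrative sketch of ONE Evaluation Design Matrix: each evaluation
# question is answered by both quantitative and qualitative tools,
# developed in parallel (convergent parallel / embedded design).
# The questions and tools are hypothetical examples.
design_matrix = {
    "Did household incomes increase?": {
        "quantitative": "household survey (probability sample)",
        "qualitative": "focus group discussions (purposive sample)",
    },
    "Why did some farmers not adopt the new practices?": {
        "quantitative": "adoption module in the same household survey",
        "qualitative": "key informant interviews",
    },
}

for question, tools in design_matrix.items():
    print(question)
    for method, tool in tools.items():
        print(f"  {method}: {tool}")
```

The point of keeping a single matrix is visible in the structure itself: both methods hang off the same question, so they attempt to answer it jointly rather than in separate, parallel studies.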

With regard to sampling methods, I invite my colleagues to consult this Statistics Canada resource; I have worked at this national statistical agency for over 30 years as a survey methodologist.

https://www150.statcan.gc.ca/n1/edu/power-pouvoir/toc-tdm/5214718-eng.htm

In particular, for the distinction between probability (probabilistic) sampling, used in quantitative surveys, and non-probability sampling, such as the purposive sampling used in qualitative data collection, please refer to Section 3.2, Sampling: https://www150.statcan.gc.ca/n1/edu/power-pouvoir/ch13/prob/5214899-eng.htm

You will note in Section 3.2.3 why purposive sampling is considered by some for use in quantitative surveys; this is not the practice in official statistical agencies. The section also explains why non-probability sampling should be used with extra caution.
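For readers who like to see the distinction in code, here is a minimal Python sketch contrasting the two approaches; the household frame and poverty scores are entirely hypothetical. The key point is that under simple random sampling every unit has a known inclusion probability, whereas a purposive selection is made by judgment and does not support the same design-based inference.

```python
import random

# Hypothetical frame of 1,000 households, each with an illustrative
# poverty score between 0 and 1 (not real data).
random.seed(42)
frame = [{"id": i, "poverty_score": random.uniform(0, 1)} for i in range(1000)]

# Probability sampling: simple random sampling without replacement.
# Every household has a known, equal inclusion probability (n/N),
# which is what justifies design-based inference to the whole frame.
n = 50
srs_sample = random.sample(frame, n)
inclusion_prob = n / len(frame)  # 0.05, known in advance for every unit

# Purposive (non-probability) sampling: units chosen by judgment,
# e.g. the 50 poorest households for in-depth qualitative interviews.
# Inclusion probabilities are unknown (or zero) for most units, so
# survey weights and confidence intervals cannot be justified the
# same way — hence the extra caution in quantitative surveys.
purposive_sample = sorted(frame, key=lambda h: h["poverty_score"], reverse=True)[:n]

print(f"SRS inclusion probability: {inclusion_prob:.2%}")
print(f"SRS sample mean score: "
      f"{sum(h['poverty_score'] for h in srs_sample) / n:.2f}")
print(f"Purposive sample mean score: "
      f"{sum(h['poverty_score'] for h in purposive_sample) / n:.2f}")
```

Running this shows the purposive sample's mean score sitting far above the random sample's, a simple demonstration of why a purposive selection cannot stand in for a probability sample when estimating population quantities.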

I hope that this is helpful. I note the following on page 13 of Mixed methods paper by WUR_WECR_Oxfam_0.pdf, shared by Marlene: “Combining information from quantitative and qualitative research gives a more comprehensive picture of a programme’s contribution to various types of (social) change.” I fully agree with this statement.

Thank you for bringing up this topic on EvalForward, which, as you mentioned, has raised a lot of interest in the group. I personally favour the mixed methods approach. In fact, in my experience, findings from a mixed methods evaluation receive less “push-back” from my clients: it is hard to oppose the findings when triangulation shows the same results from different sources.

Kindest regards,

Jackie

P.S. I have provided this link to Evaluations in the DEC: https://dec.usaid.gov/dec/content/evaluations.aspx where you will find hundreds of examples of evaluations that have used mixed methods.

 

[1] Jean: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how “all evaluation questions can be answered using a mixed method approach”? In your view, the data collection tools are developed in parallel, or concurrently. And you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same question. For sampling, would you clarify how you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why, and how? Would there be any problem if purposive sampling were applied for a quantitative evaluation?