RE: How are mixed methods used in programme evaluation?

This discussion is interesting and intriguing, especially given the multidisciplinary backgrounds of the contributors. I will abbreviate mixed methods as MM throughout. Without pre-empting further ideas and fresh perspectives colleagues are willing to share, allow me to request further clarification for our shared learning. This is not limited to colleagues whose names are mentioned; it is an open discussion. Feel free to share the link with other platforms or networks as well.

Consider these viewpoints before delving into further interrogations. Keep reading; the icing on the cake comes after:

“Successful cases [of MM in evaluation] occur when the integration process is well-defined or when methods are applied sequentially (e.g., conducting focus groups to define survey questions or selecting cases based on a survey for in-depth interviews).” Cristian Maneiro.

“Five purposes for mixed-method evaluations: triangulation, complementarity, development, initiation, and expansion (also summarized in this paper)” shared by Anne Kepple. I encourage all MM practitioners and fans to read this article.

“A good plumber uses several tools, when and as necessary, and doesn't ask himself what type of plumbing requires only one tool... Likewise, a good evaluator needs to know how to use a toolbox, with several tools in it, not just a wrench” Olivier Cossée.

“The evaluation also analyzed and explained the quantitative results with information from qualitative methods, which not only allowed characterizing the intervention, educational policy and funding, but also led to more relevant policy recommendations” Maria Pia Cebrian.

Further queries:

  • Cristian: Thanks for sharing your experience and preference for an exploratory sequential design, where qualitative methods precede quantitative methods. A follow-up question: what if an MM evaluation begins with a survey and ends with qualitative interviews or focus group discussions, i.e., an explanatory sequential design? By the way, has anyone used, or seen in action, an explanatory sequential design? Are there such MM evaluation designs? Let's keep retrieving insights from experience and from the various resources written on MM evaluation design, and share them.
  • Cristian has also raised an excellent point worth taking into account. Some publications hold that all non-numerical data are qualitative: pictures, maps, videos, etc. What about those data types? Has anyone got experience mixing numerical/quantitative data with pictorial, spatial and video data? If yes, please share. Do not hesitate to contribute insights on how you deal with such non-numerical data.
  • Emilia, you made my day (actually my flight)! I was airborne while reading colleagues’ contributions. Wow, thanks Emilia. You raised a point which reminded me that in MM, 1+1 should equal 3; when 1+1 merely equals 2, it is a loss, reductionistic. By the way, it is a double loss. The first is captured in this article, which cogently argues that 1+1 should be 3 in mixed methods. The second is that the author of that article, Michael Fetters, passed away a few weeks ago, and like-minded scholars (Creswell, J. W., & Johnson, R. B., 2023) paid tribute to him. May his soul rest in eternal peace!
  • Emilia, I enjoyed reading your contribution. The existing literature (remind me to share it at some point) speaks of MM when qualitative and quantitative methods are mixed. Where methods of the same paradigm (say, qualitative) are combined, the design has instead been termed multimethod or multiple approaches.
  • “And then, going a bit beyond that: couldn’t we consider the mix of ‘colonizers’ with ‘indigenous’ approaches also ‘mixed methods’?” Aha ... in the upcoming African Evaluation Journal, there is a special issue on Addressing Knowledge Asymmetries. This point could be a great topic for further mixed methodological interrogation. In practice, do we have examples whereby Western methodologies (e.g., surveys) are mixed with oral or pictorial methods from the Global South? I am on standby to hear more from you and other colleagues.
  • Lal, you are spot on. Could you give an example of how thinking or operating in silos plays out when conducting an MM evaluation?
  • Margrieth, well put. Our academic background determines to a large extent what we embrace in our professional practice. How do we bridge this gap? Mixed methods encourages 'researcher triangulation': if I am a number cruncher, I should ideally work with a qualitative researcher, an anthropologist for example, so that we complement each other, bringing together our strengths to offset gaps in our academic training or professional practice. How would you suggest such researcher or evaluator triangulation be implemented? Anyone with practical examples? Please share.
  • Pia: Impressive, the variety of data collection tools used and analyses performed! Thanks for sharing the published article. It is a good example of how selective or biased researchers or evaluators can be, following their academic background, as mentioned in Margrieth's contribution: the article is fully quantitative, with no mention of qualitative methods (unless I missed it in my quick scan). Could you check the original publication in Spanish to help us learn how data from interviews and focus group discussions were used alongside the survey? Thanks in advance.
  • Margrieth made it clear that the choice of quantitative or qualitative methods, or both, is in principle determined by our professional background: evaluators coming from professions such as economics or engineering tend to use quantitative methods, while evaluators from the arts or humanities use qualitative methods. I could not agree more. But what about evaluators whose training prepared them to be number crunchers yet whose professional practice re-oriented them toward more qualitative methods, and vice versa? I am a living example, though not stuck in any school of thought.
  • Olivier: This describes very well an exploratory sequential design. What about scenarios where an evaluation begins with quantitative methods and, once the results are out, there are counter-intuitive findings to understand and make sense of? Are there cases you might have seen in the professional conduct of evaluation where quantitative methods PRECEDED qualitative methods, i.e., an explanatory sequential design? Spot on, Olivier! There is no laboratory for social beings as there is in the natural sciences.

Happy learning together!