How are mixed methods used in programme evaluation?
Evaluation in different development and humanitarian settings requires varied methods to capture multiple voices and multifaceted trends. Indeed, even purists in quantitative methods have started incorporating qualitative methods into randomized controlled trials (RCTs), having previously excluded them or suspected them of being less rigorous.
Complexities in development programmes are a great opportunity to rethink evaluation methods. This, among other things, has led to mixed methods in evaluation. Mixed-method evaluators combine at least one quantitative method with at least one qualitative method, helping to broaden findings and to understand how and in what context outcomes and impacts are achieved. (Note that using, say, in-depth interviews and focus group discussions is not mixed-methods evaluation; it merely combines methods from the same family and worldview.)
I have seen numerous evaluation terms of reference and protocols that mention mixed methods. Sounds great, right? Sadly, it is too often a cliché in terms of reference and evaluation reports: mixed methods are mentioned here and there, and overused as a yardstick of all that matters in evaluation.
Bamberger recommends that evaluators not limit mixed methods to data collection. Rather, he argues for using mixed methods even in forming evaluation teams, and also at the stage of formulating evaluation questions. Have you ever thought about using mixed methods in testing or generating hypotheses, and in sampling for both qualitative and quantitative methods? What about collecting and analysing both types of data, or presenting and discussing results? When qualitative and quantitative methods, data, and results are not methodologically integrated, the result is essentially two studies or two evaluations, not a single evaluation.
I would be grateful if you could provide some links to evaluation reports and publications where mixed methods are used. Importantly, I would appreciate it if you could share specific, practical experiences and lessons on how qualitative methods interact (or have interacted) with quantitative methods:
1. In the evaluation design stage – What types of evaluation questions necessitate(d) mixed methods? What are the (dis)advantages of not having separate qualitative or quantitative evaluation questions?
2. When developing data collection instruments for mixed methods evaluation – Are these instruments developed at the same time or one after another? How do they interact?
3. During sampling – Is sampling done differently, or does each methodological strand use the same sampling frame? How and why?
4. During data collection – How and why are data collected (concurrently or sequentially)?
5. During data analysis – Are data analysed together or separately? Either way, what dictates the choice of analytical approach, and how?
6. During the interpretation and reporting of results – How are results presented, discussed and/or reported?
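To make question 3 above concrete, here is a minimal, purely illustrative sketch (all names, numbers and data are hypothetical, not drawn from any real evaluation) of one common arrangement: a sequential design in which the quantitative and qualitative strands share a single sampling frame, and survey results are used to purposively select extreme cases for follow-up interviews.

```python
import random

# Hypothetical sampling frame shared by both strands:
# every household in the frame is eligible for the survey.
frame = [{"id": i, "village": f"V{i % 5}"} for i in range(200)]

random.seed(42)  # for a reproducible illustration

# Quantitative strand: simple random sample for the survey.
survey_sample = random.sample(frame, 60)

# Pretend survey results: a food-security score per household.
for hh in survey_sample:
    hh["score"] = random.randint(0, 100)

# Qualitative strand (sequential design): purposively select
# extreme cases from the SAME frame for in-depth interviews,
# so the two strands can be integrated at the analysis stage.
ranked = sorted(survey_sample, key=lambda hh: hh["score"])
interviewees = ranked[:5] + ranked[-5:]  # lowest and highest scorers

print(len(survey_sample), len(interviewees))
```

Because the interviewees are a subset of the surveyed households, their qualitative accounts can later be linked case-by-case to their quantitative scores, which is one way the two strands are integrated rather than run as two parallel studies.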
I’m looking forward to learning from and with you all!
References:
Using Mixed Methods in Monitoring and Evaluation: Experiences from International Development, The World Bank, 2010
Designing and Conducting Mixed Methods Research, J.W. Creswell and V.L. Plano Clark, SAGE, 2017
Introduction to Mixed Methods in Impact Evaluation, M. Bamberger, 2012
Using Mixed Methods to Strengthen Process and Impact Evaluation, M. Bamberger, 2022