RE: How are mixed methods used in programme evaluation?
Colleagues, many thanks for going the extra mile to provide additional and new perspectives on this discussion. These include sequential, concurrent, and parallel mixed methods (MM) designs. In some designs the analyses are performed separately, while in others the data analysis from one method strand is brought in to corroborate trends or results emerging from the other strand.
Among the latest contributions are these key points:
“The evaluators will […] perform data triangulation by cross-referencing the survey data with the findings from the qualitative research and the document review or any other method used. […] Sometimes a finding from the qualitative research will be accompanied by the quantitative data from the survey” (Jackie).
“Mixed methods is great, but the extent of using mixed methods and sequencing should be based on program and evaluation circumstances, otherwise instead of answering evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once i.e., open ended surveys, KIIs, community reflection meetings, observations, document reviews etc. in addition to quantitative methods may not be that smart.” (Gordon).
Lal: Thanks for sharing the two projects: "a billion-dollar bridge to link up an island with the mainland in an affluent Northern European country while the second is a multi-million-dollar highway in an African country". These are excellent examples of what can go wrong when projects are poorly designed and then inappropriately evaluated. Are there any written reports or references to share? They seem to be a good source of insights to enrich our discussion and, importantly, our professional evaluation practice using mixed methods. I very much like the point you made: "the reductive approach made quality and quantity work against project goals". Linking it to the projects used for illustration, you summarized it very well: "the emergency food supplies to a disaster area cannot reasonably meet the same standards of quality or quantity, and they would have to be adjusted to make the supply adequate under those circumstances".
Olivier: you rightly argue that sequential exploratory designs are appropriate: "you cannot measure what you don't conceive well, so a qualitative exploration is always necessary before any measurement attempt". You also acknowledge that "there is also room for qualitative approaches after a quantification effort". You are right about that: in some cases a survey may yield results that appear odd, and one way to make sense of them is to "zoom in" on that particular issue through a few additional qualitative interviews.
Gordon: Mea culpa, I should have specified that the discussion is about the evaluation of a programme, project or any other humanitarian or development intervention. You rightly emphasize the complexity that underlies programmes: “programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know)”. One argument you made seems contradictory, though: “when something is too complicated or complex, simplicity is the best strategy!” Some more details would add context and help readers make sense of the point you raise. Equally, who should decide which methods to use: the evaluator or the programme team?
While I would encourage all colleagues to read every contribution, Jackie’s submission stands out: it is full of practical tips and tricks used in mixed methods.
Jackie: Thanks so much for taking the time to provide insightful comments. As we think about our evaluation practice, could you explain how “all evaluation questions can be answered using a mixed method approach”? In your view, the data collection tools are developed in parallel, or concurrently, and you argue that there is ONE Evaluation Design Matrix, hence both methods attempt to answer the same questions. On sampling, would you clarify whether you used probabilistic or non-probabilistic sampling, or at least describe for readers which one you applied, why and how? Would there be any problem if purposive sampling were applied for a quantitative evaluation?
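To help readers picture the difference, below is a small, purely illustrative Python sketch of my own (a toy example, not drawn from Jackie’s evaluation or any real dataset). It contrasts a simple random (probability) sample, which supports generalisable quantitative estimates, with a purposive (non-probability) sample, which can bias those estimates when the deliberately chosen cases differ systematically from the rest of the population:

```python
# Illustrative toy example only: hypothetical beneficiary data, not from any real evaluation.
import random
import statistics

random.seed(1)

# Hypothetical population of 1,000 beneficiaries with an outcome score (0-100).
# Assume "easy-to-reach" beneficiaries tend to have higher outcomes.
population = []
for i in range(1000):
    easy_to_reach = random.random() < 0.3
    score = random.gauss(70 if easy_to_reach else 55, 10)
    population.append({"id": i, "easy_to_reach": easy_to_reach, "score": score})

# Probabilistic sampling: a simple random sample of 100 beneficiaries.
random_sample = random.sample(population, 100)

# Purposive (non-probability) sampling: deliberately selecting accessible cases,
# as an evaluator might when time or budget is tight.
purposive_sample = [p for p in population if p["easy_to_reach"]][:100]

print("True population mean:  ", round(statistics.mean(p["score"] for p in population), 1))
print("Random sample estimate:", round(statistics.mean(p["score"] for p in random_sample), 1))
print("Purposive 'estimate':  ", round(statistics.mean(p["score"] for p in purposive_sample), 1))
```

In this toy data the purposive “estimate” overstates the true mean because the easy-to-reach beneficiaries happen to do better. That is one reason purposive sampling is usually reserved for the qualitative strand, while the quantitative strand relies on a probability sample whenever generalisation is intended.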
Apart from a few examples, most of the contributions so far are more theoretical and hypothetical than practical, lived experience. I think what can help all of us as evaluators are practical hints and tricks, including evaluation reports or publications that have used mixed methods (MM). Please go ahead and share practical examples and references on:
MM evaluation design stage
MM data collection instruments
MM sampling
MM data collection
MM data analysis
MM results interpretation, reporting, and dissemination
Looking forward to more contributions.