RE: How are mixed methods used in programme evaluation?
Dear Jean and colleagues,
Thanks for clarifying that the discussion is not limited to programs but also covers projects and any humanitarian or development intervention. A very informative and rich discussion; I am learning a lot in the process!
When I say "when something is too complicated or complex, simplicity is the best strategy" in the context of evaluations, I mean we do not need to use an array of methodologies and data sources for an evaluation to be complexity-aware. We can keep the data, both quantitative and qualitative, lean and focused on the evaluation objectives and questions. Using complexity-aware evaluation approaches such as Outcome Harvesting (OH), Process Tracing, Contribution Analysis, Social Network Analysis (SNA), etc. does not necessarily mean several quantitative and qualitative data collection methods have to be applied. For example, in OH you can use document review and key informant interviews (KIIs) to develop outcome descriptors, then do a survey and KIIs during substantiation.

I have used SNA and KIIs to evaluate change in relationships among actors in a market system, and SNA followed by in-depth interviews in a social impact study of a rural youth entrepreneurship development program. In essence, you can keep the data collection methods to three (the three-legged stool or triangle concept) and still achieve your evaluation objectives with lean and sharp data. A lot has been written on overcoming complexity with simplicity in different spheres of life, management, leadership, etc.
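For colleagues curious what a lean SNA component can look like in practice, here is a minimal sketch in Python using the networkx library. The actor names and tie data are hypothetical, invented purely for illustration, and the two metrics shown (density and degree centrality) are just one of many ways to summarise change in a relationship network between baseline and endline:

```python
# Minimal SNA sketch: comparing baseline vs. endline actor networks.
# Assumes networkx is installed (pip install networkx).
# Actors and ties below are hypothetical, for illustration only.
import networkx as nx

# Each edge is a reported working relationship between two market actors,
# e.g. collected through a short roster survey or KIIs.
baseline_ties = [("Trader A", "Farmer Coop"), ("Trader A", "Input Supplier"),
                 ("Farmer Coop", "Extension Officer")]
endline_ties = [("Trader A", "Farmer Coop"), ("Trader A", "Input Supplier"),
                ("Farmer Coop", "Extension Officer"), ("Farmer Coop", "Bank"),
                ("Input Supplier", "Extension Officer"), ("Trader B", "Farmer Coop")]

g0 = nx.Graph(baseline_ties)
g1 = nx.Graph(endline_ties)

# Network density: how connected the system is overall.
print(f"Density: {nx.density(g0):.2f} -> {nx.density(g1):.2f}")

# Degree centrality: which actors gained (or lost) connections.
c0, c1 = nx.degree_centrality(g0), nx.degree_centrality(g1)
for actor in sorted(set(c0) | set(c1)):
    print(f"{actor}: {c0.get(actor, 0):.2f} -> {c1.get(actor, 0):.2f}")
```

Of course, the metrics only show that relationships changed; the follow-up KIIs or in-depth interviews are what explain why they changed, which is exactly where keeping the toolkit lean pays off.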
On the issue of who decides on the methodology, the evaluator or the program team: from my experience, a monitoring, evaluation and learning (MEL) plan is very clear on the measurements and evaluation methods, and MEL plans are developed by the program team. Evaluators are asked to propose an evaluation methodology in their technical proposals to serve two purposes: to assess their technical competence and to identify the best fit with the evaluation plan. Typically, the evaluator and program team will consultatively agree on the best-fit methodology during the inception phase of the evaluation, and this forms part of the inception report, which is normally signed off by the program team.
My thoughts.
Gordon