I am a Monitoring, Evaluation, and Learning (MEL) expert with over 15 years of related work experience gained in international and national Non-Governmental Organizations (NGOs) and governments in East and West Africa. My MEL experience spans management, advisory, consultancy, and volunteer roles.
My contributions
I recently used a survey of evaluators to explore the concept of evaluation use, how evaluation practitioners view it and how this translates into their work – in other words, how evaluators are reporting and supporting evaluation use and influence.
Evaluation use and utilization: an outline
Michael Quinn Patton’s utilization-focused evaluation (UFE) approach is based on the principle that an evaluation should be judged on its usefulness to its intended users. This requires evaluations to be planned and conducted in ways that increase the use of the findings and of the process itself to inform and influence decisions.
Gordon Wanzare
Monitoring, Evaluation, & Learning Expert
Dear Jean and colleagues,
Thanks for clarifying that the discussion is not limited only to programs but also includes projects and any other humanitarian or development intervention. This is a very informative and rich discussion, and I am learning a lot in the process!
When I say "when something is too complicated or complex, simplicity is the best strategy" in the context of evaluations, I mean we do not need an array of methodologies and data sources for an evaluation to be complexity-aware. We can keep the data, both quantitative and qualitative, lean and focused on the evaluation objectives and questions. Using complexity-aware evaluation approaches such as Outcome Harvesting (OH), Process Tracing, Contribution Analysis, or Social Network Analysis (SNA) does not necessarily mean several quantitative and qualitative data collection methods have to be applied. In OH, for example, you can use document review and key informant interviews (KIIs) to develop outcome descriptors, then do a survey and KIIs during substantiation. I have used SNA and KIIs to evaluate change in relationships among actors in a market system, and SNA followed by in-depth interviews in a social impact study of a rural youth entrepreneurship development program. In essence, you can keep the data collection methods to three (the three-legged stool or triangle concept) and still achieve your evaluation objectives with lean and sharp data. A lot has been written on overcoming complexity with simplicity in different spheres of life, management, leadership, etc.
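To make the SNA-plus-interviews pairing a little more concrete, here is a minimal sketch of the quantitative half, assuming Python and the networkx library; the actor names and ties are invented for illustration and are not from the studies described above.

```python
# Hypothetical sketch: comparing actor relationships at baseline and endline
# with Social Network Analysis (SNA). Ties might come from KIIs or monitoring data.
import networkx as nx

baseline = nx.Graph([("farmer_coop", "aggregator"),
                     ("aggregator", "processor")])
endline = nx.Graph([("farmer_coop", "aggregator"),
                    ("aggregator", "processor"),
                    ("farmer_coop", "processor"),
                    ("processor", "retailer")])

for label, g in [("baseline", baseline), ("endline", endline)]:
    centrality = nx.degree_centrality(g)  # how connected each actor is
    print(label, "density:", round(nx.density(g), 2))
    print(label, "centrality:", {k: round(v, 2) for k, v in centrality.items()})
```

The quantitative shifts this surfaces (new ties, changed centrality) then become prompts for the in-depth interviews that explain why the relationships changed, keeping the overall method mix lean.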
On the issue of who decides on the methodology, the evaluator or the program team: from my experience, a MEL plan is very clear on the measurements and evaluation methods, and MEL plans are developed by the program team. Evaluators are asked to propose an evaluation methodology in their technical proposals, which serves two purposes: to assess their technical competence and to identify the best fit with the evaluation plan. Typically, the evaluator and program team then consultatively agree on the best-fit methodology during the inception phase of the evaluation, and this forms part of the inception report, which is normally signed off by the program team.
My thoughts.
Gordon
Gordon Wanzare
Monitoring, Evaluation, & Learning Expert
Greetings to all!
Great discussion question from Jean and very insightful contributions!
First, I think Jean's question is very specific: how mixed methods are used not just in evaluations generally but in PROGRAM evaluations, right? We know that a program consists of two or more projects, i.e. a collection of projects. Therefore, programs are rarely simple (where most things are known) but potentially complicated (where we know what we don't know) or complex (where we don't know what we don't know). The Oxford English Dictionary tells us that a method is a particular procedure for accomplishing or approaching something, and tools are used within procedures. I am from the school of thought that believes that when something is too complicated or complex, simplicity is the best strategy!
Depending on the context, program design, program evaluation plan, and evaluation objectives and questions, the evaluator and program team can agree on the best method(s) to achieve the evaluation objectives and comprehensively answer the evaluation questions. I like what happens in the medical field: in hospitals, except in some emergency situations, a patient will go through triage, clinical assessment and history-taking by the doctor, laboratory examination, radiology, etc., and the doctor then triangulates these information sources to arrive at a diagnosis, prognosis, and treatment/management plan. Based on circumstances and resources, judgements are made on whether all of these information sources are essential or not.
Mixed methods are great, but the extent of their use and their sequencing should be based on program and evaluation circumstances; otherwise, instead of answering the evaluation questions of a complex or complicated program, we end up with data constipation. Using all sorts of qualitative methods at once, i.e. open-ended surveys, key informant interviews (KIIs), community reflection meetings, observations, document reviews, etc., in addition to quantitative methods, may not be that smart.
In any case, perhaps the individual projects within the program have already been comprehensively evaluated and their contributions to program goals documented, and something simple, like a review, is all that is necessary at the program level.
When complicated or complex, keep it simple. Lean data.
My thoughts.
Thanks.
Gordon