Rick Davies
Evaluation Consultant
At the risk of creating a too-rigid conceptualisation, I suggest it could be useful to ask these questions when considering the relevance of an EA (a rough scoring sketch follows the list):
1. Institutional and physical context: How numerous and diverse are the locations?
2. Intervention design: How numerous and diverse are the interventions, and their interconnections?
3. Stakeholder demand: How numerous and diverse are the funders, implementing partners and beneficiary groups, and their interconnections?
4. Data availability: How numerous and diverse are the M&E systems and their interconnections?
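One way to read these four questions together is as a rough complexity screen: count the distinct elements under each heading and treat higher totals as stronger grounds for commissioning an EA. Here is a minimal sketch of that idea in Python; the dimension names, counts and threshold are all hypothetical illustrations, not a tested instrument:

```python
# Hypothetical element counts for one programme, one per question above.
programme = {
    "locations": 12,            # institutional and physical context
    "interventions": 5,         # intervention design
    "stakeholder_groups": 8,    # funders, partners, beneficiary groups
    "m_and_e_systems": 3,       # data availability
}

# A crude screen: more elements means more possible interconnections,
# and hence a stronger case for an evaluability assessment.
complexity_score = sum(programme.values())
print(f"complexity score: {complexity_score}")

EA_THRESHOLD = 20  # illustrative cut-off only
if complexity_score > EA_THRESHOLD:
    print("Many elements and possible interconnections: an EA looks worthwhile.")
else:
    print("Relatively simple programme: a lighter-touch review may suffice.")
```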
Rick Davies
Evaluation Consultant
Responding to Amy's post.
1. Yes, EAs need to be proportionate to the likely costs of an evaluation. I agree 100%. Some historical research on past ratios should now be possible, to give us ball-park ideas of the range and median values (see the first sketch after point 2).
2. Yes, a checklist on the need for an EA is at least worth thinking about. One pointer in this direction is the comment that "some UN agencies that have developed an approach of making EAs mandatory for programs with large budgets over a specified amount". Here I think budget size is in effect being used as a proxy for programme complexity. But the correlation is not straightforward: a large, simple programme may not need an EA, i.e. one with a few clear objectives and interventions. Would an immunisation programme fit this description? As a general rule, though, I think large-budget programmes do tend to be more complex in terms of objectives, interventions, partners, stakeholders and geographic locations. Consider these as nodes in a network: the more of them there are, the more numerous are the possible and actual causal linkages, and the types and sources of information about what is happening (see the second sketch below). If a checklist of EA need is to be developed, perhaps these sources of complexity could be items to be considered?
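On point 1, a minimal sketch of the kind of summary such historical research might produce, using entirely hypothetical EA-cost-to-evaluation-cost ratios (the figures below are illustrative, not real data):

```python
from statistics import median

# Hypothetical ratios of EA cost to subsequent evaluation cost,
# one per past programme (illustrative figures only, not real data).
ratios = [0.02, 0.05, 0.08, 0.10, 0.15, 0.25]

print(f"range:  {min(ratios):.2f} to {max(ratios):.2f}")
print(f"median: {median(ratios):.2f}")
```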
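On point 2, the network point can be made concrete: among n nodes there are n(n-1)/2 possible pairwise links, so the number of possible causal linkages grows much faster than the number of elements itself. A minimal sketch, with hypothetical node counts:

```python
def possible_linkages(n: int) -> int:
    """Possible pairwise causal links among n nodes: n choose 2."""
    return n * (n - 1) // 2

# Hypothetical totals of objectives, interventions, partners,
# stakeholders and locations for increasingly large programmes.
for nodes in (4, 8, 16, 32):
    print(f"{nodes:>2} elements -> {possible_linkages(nodes):>3} possible linkages")
```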
Rick Davies
Evaluation Consultant
Hi all
I would just like to add to Svetlana's important point: "it is important to note that all contexts are not created equal and level of unquestionable preparedness for an evaluation cannot be assumed in any type and size of intervention in any context. In the evolving contexts, some aspects may not be prioritized in time and to meet the needs of everyone involved".
I think this is especially the case with stakeholders' interests in an evaluation: what they want to know and what they are concerned about. These are very likely to change over time.
Rick Davies
Evaluation Consultant
In response to Silva's comments below (also sent to the email server).
"Hi Silva
Let me address some of your points
1. "All projects can and should be evaluated"
Yes, but WHEN? Delays recommended by an EA can give time for data, theory and stakeholder concerns to be resolved, and thus lead to a more useful evaluation.
Yes, but HOW? Approaches need to be proportionate to resources, capacities and context. EAs can help here.
2. Re "Some project designs are manifestly unevaluable, and some M&E frameworks are manifestly inadequate at first glance." and your comment that.. "We are confusing the project documentation with the reality of the work."
It would be an extreme position to argue that there will be no instances of good practice (however defined) on the ground in such circumstances (i.e. where there are poor designs and poor M&E frameworks).
But it would be equally extreme to argue the other way: that the state of design and data availability has no relationship at all to outcomes on the ground. If you take that position, the decision to evaluate is effectively a gamble with someone's time and resources.
At some stage someone has to decide how much money to spend, when, and how. EAs can help inform those decisions.
3. Re ""Can the program be evaluated with a standard toolbox" (which is what evaluability risks becoming) "
I would like to see some evidence for this claim.
As counter-evidence, at least of intention, I refer you to the diagram in the Austrian Development Agency EA guide, and to its reference to the jigsaw nature of an EA, in the sense of having to fit different needs and capacities together rather than following any blueprint.
regards, rick