©FAO/Luis Tato

Lessons learned from managing Impact Evaluations on development interventions

Impact evaluations have come to play an increasingly prominent role in development interventions, including in agriculture and rural development.

Impact evaluations, and particularly randomized controlled trials, are a credible way to measure the benefits of such programs. They can also help answer questions about intervention design, improving the design of interventions over time. One of the main challenges in conducting impact evaluations in development settings is establishing a counterfactual: the state of the world that participants would have experienced in the absence of the intervention. Since the counterfactual can never be observed directly, a control group has to be constructed to mimic it. From the research perspective, the control group can be randomly chosen from a large set of potential beneficiaries, although sometimes other methods of selecting control groups must be used.
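
To make that concrete, here is a minimal sketch of drawing a control group at random from a pool of eligible beneficiaries. It is purely illustrative: the household IDs and the 50/50 split are assumptions for the example, not details of any particular program.

```python
# Illustrative sketch only: constructing a control group by randomly
# assigning a pool of eligible beneficiaries to treatment and control.
import random

random.seed(2024)  # fix the seed so the assignment is reproducible and auditable

# Hypothetical pool of 400 eligible households
eligible_households = [f"HH-{i:04d}" for i in range(1, 401)]

# Shuffle the pool, then split it down the middle
shuffled = random.sample(eligible_households, k=len(eligible_households))
midpoint = len(shuffled) // 2
treatment_group = sorted(shuffled[:midpoint])  # receives the intervention
control_group = sorted(shuffled[midpoint:])    # mimics the counterfactual

print(len(treatment_group), len(control_group))  # 200 200
```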

In the rest of this post, I discuss some thoughts about making impact evaluations work better for both researchers and program implementers.

Lessons related to program implementation

In conducting both randomized and non-randomized impact evaluations over time, I’ve found that there are several things one can do to make such evaluations mutually beneficial for researchers, program implementers, and donors. First, it is most useful when the evaluation is designed in consultation with the implementers before the project begins. A series of meetings and conversations is particularly important for defining the research questions to be studied. To be successful, the research questions should link the interests of the research team with questions the implementing partner has about how to make its programming more effective. If randomizing, it is important to settle the level of randomization (individual, farmer group, village) and what is being randomized; for example, it is certainly possible to make the control group “business as usual.” In that case, the research question relates to adding (or subtracting) services relative to the way the implementer normally operates, but it is no less valid, especially if it is of research interest.
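
As a simple illustration of randomizing at a level above the individual, the sketch below assigns whole villages to treatment and control, so that every household inherits its village’s arm. The village names and counts are hypothetical.

```python
# Illustrative sketch: randomizing at the cluster (village) level rather
# than the individual level. Village names and group sizes are made up.
import random

random.seed(7)

villages = [f"village_{v:02d}" for v in range(1, 21)]

# Draw half the villages into treatment; the rest default to control
assignment = dict.fromkeys(random.sample(villages, k=10), "treatment")
for v in villages:
    assignment.setdefault(v, "control")

# Every household inherits its village's arm, so within-village spillovers
# do not blur the treatment/control contrast.
def arm_for(household_village: str) -> str:
    return assignment[household_village]

print(arm_for("village_03"))
```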

Second, it is important for researchers to understand more than just the implementation details; they also need to understand how adhering to a research protocol might affect implementation. For example, randomization can affect the way that interventions must be planned. Adjustments can be made, such as randomized rollouts, but adhering to the research design can affect both implementation plans and even implementation costs.
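
Under a randomized rollout, for instance, every community eventually receives the intervention, but the order is randomized, so communities scheduled for later phases serve as controls in the meantime. A minimal sketch, with hypothetical communities and a three-phase schedule:

```python
# Illustrative sketch: a randomized rollout (phase-in) design in which
# all communities are eventually treated, but the order is randomized.
# The phase count and community names are assumptions for the example.
import random

random.seed(11)

communities = [f"community_{c}" for c in "ABCDEFGHIJKL"]
random.shuffle(communities)  # randomize the rollout order

n_phases = 3
phase_size = len(communities) // n_phases
rollout = {
    phase + 1: communities[phase * phase_size:(phase + 1) * phase_size]
    for phase in range(n_phases)
}

# Communities in phases 2 and 3 act as controls while phase 1 is treated
for phase, members in rollout.items():
    print(f"Year {phase}: {members}")
```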

From the perspective of project implementation, it is important to understand that once a research protocol has been agreed upon, it cannot be broken without first discussing the change with the research team. Impact evaluation research requires both that treatment take place largely as planned and that the control group be kept out of any direct implementation, at least while it remains a control group. If the treatment cannot take place as initially planned, it is important to inform the researchers as soon as possible so they can assess the implications of those changes for the research. If the project delivers services to the control group, measured impacts will be diluted, so it is in the best interest of implementers to adhere to the protocol.
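
A small, purely illustrative simulation shows why: if some share of the control group receives services anyway, the estimated impact shrinks roughly in proportion to that share. All of the numbers below are invented for the example.

```python
# Illustrative simulation of why serving the control group shrinks the
# measured impact. Sample size, true effect, and contamination rate are
# all made-up numbers.
import random

random.seed(3)

n = 5000
true_effect = 10.0    # true gain from the intervention
contamination = 0.30  # share of control units that receive services anyway

treatment = [50 + random.gauss(0, 5) + true_effect for _ in range(n)]
control = [
    50 + random.gauss(0, 5) + (true_effect if random.random() < contamination else 0.0)
    for _ in range(n)
]

estimated = sum(treatment) / n - sum(control) / n
print(f"true effect: {true_effect:.1f}, estimated effect: {estimated:.1f}")
# The difference in means recovers roughly true_effect * (1 - contamination),
# i.e. about 7 here instead of 10.
```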

How can impact evaluations work with large programs?

Many programs are quite large and take place over multiple years; consequently, they have several related goals and are expected to benefit a large number of people. In designing an ex ante impact evaluation of such a project, one wants not just to understand overall impacts, but also to understand which parts of the project work towards its objectives and which do not. However, an impact evaluation of the entire project is not likely to provide convincing evidence about those mechanisms, as impact evaluations are best at identifying average effects. Moreover, such estimates may not be of substantial interest to researchers, particularly those trying to work on unanswered questions in the literature.

For these reasons, a better approach is to pick out components of the program about which implementers have questions on the most effective design, and then to build the research around those components in a first phase of the program. The findings can then feed into later phases of the same program, improving its effectiveness. Here, mixed methods can be particularly effective: quantitative research determines how much important outcomes change, and well-designed qualitative research explains why.