RE: What works in improving food security and nutrition in very poor communities? | Eval Forward

Dear Mr. Molloy,

I have read with great attention your contribution on the evaluation of CASU in Zambia. I must congratulate your department for such an achievement. However, I have two points to make here.

You mention at the start of your contribution that the Conservation Agriculture project was "targeting over 300,000 smallholder farmers"; that is the entire population of the project. You also mention that "the main focus of the evaluation was to assess the extent to which conservation agriculture has been sustainably adopted by Zambian beneficiary farmers … also sought to assess what outcomes were evident (positive and/or negative) from the project’s activities, and what were the impacts on food security, income, and soil health".

The first point I want to raise concerns the adoption study that you highlighted in your message. Though I do not have all the details about how the study was conducted or what results it achieved, I would like to use this opportunity to share some experience with adoption studies, a kind of outcome evaluation that also asks whether outcomes are sustained over time. Everett Rogers, one of the gurus of technology adoption by farmers, instructs us not to measure the adoption rate only once, at a single point in time. Adoption studies require an understanding of the technology adoption process among farmers in order to design appropriate protocols for studying it. I have seen many adoption studies report high adoption rates at the end of a project, while very few farmers were still using the technology 5-10 years after the project ended. This is because what looks like adoption to researchers is often just experimentation to farmers; real adoption, for farmers, comes well after the project has ended.

The second point I want to raise concerns the household survey undertaken by the University of Zambia and the sample size used by the research team. Besides the other activities conducted within this evaluation (among them focus groups with 650 beneficiary farmers), you mention "a household-level impact assessment survey to collect quantitative data amongst a sample of over 300 farmers, in order to assess progress against the baseline survey".

Nobody can deny that a survey is only truly valuable when it is reliable and representative of the entire population of the project's beneficiaries. This is why determining an appropriate sample size, with robust internal and external validity, is so important: it allows the research team to infer and extrapolate the results obtained from the sample to the entire population of the project's beneficiaries.

Using a correct survey sample size is crucial for any research, and a project evaluation is research. An overly large sample wastes precious resources such as time and money, while an overly small sample, though it can yield sound results within the sample itself (strong internal validity), will certainly not allow inference and extrapolation of its results to the entire project population (weak external validity).

So the sample size should not be determined by how much the research team can handle, but by how accurate the survey data ought to be; in other words, by how closely the research team wants the results obtained from the sample to match those of the entire project population.

In statistics, two measures affect the accuracy of the data and are of great importance for the sample size: (1) the margin of error, most often set at 5%; and (2) the confidence level, most often set at 95%. Based on these two measures, and given the population size, the research team can calculate how many respondents (people who fully complete the survey questionnaire) it actually needs; that is the survey sample. Beyond this, the research team must anticipate the response rate – that is, the number of "really exploitable" questionnaires – and therefore administer additional questionnaires beyond the calculated sample, so that enough completed questionnaires remain to exploit. The standard table gives an idea of the sample size for a project population of 300,000 individuals: for example, if we target 380-390 "exploitable" questionnaires, we allow 20-25% more questionnaires so that the survey is not put at risk of weak robustness.
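As an illustration, the sketch below applies the standard Cochran sample-size formula with a finite-population correction, using the 5% margin of error and 95% confidence level mentioned above and the most conservative proportion assumption (p = 0.5); the function name and the 20% non-response buffer are assumptions for illustration, not figures taken from the evaluation itself.

```python
import math
from statistics import NormalDist

def sample_size(population, margin=0.05, confidence=0.95, p=0.5):
    """Cochran's sample-size formula with a finite-population correction."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95% confidence
    n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)                # finite-population correction
    return math.ceil(n)

n = sample_size(300_000)          # -> 384 exploitable questionnaires needed
with_buffer = math.ceil(n * 1.2)  # allow ~20% extra for non-response -> 461
print(n, with_buffer)
```

With a population of 300,000 this yields roughly 384 exploitable questionnaires, consistent with the 380-390 range cited above; adding a 20-25% buffer brings the number administered to around 460-480.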

In conclusion, I believe that the sample size for the mentioned household survey, as part of the CASU evaluation, was somewhat lower than what probabilistic sampling theory would require. Of course, this statement has no bearing on the results obtained within the sample as such, but the survey findings cannot be strongly and robustly inferred and extrapolated to the entire population of the project's beneficiaries, because the sample's external validity is weak when the principles of probabilistic sampling are not respected.
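To put a rough number on this point, the sketch below inverts the calculation and estimates the margin of error actually achieved by a simple random sample of about 300 farmers out of 300,000, under the same assumptions as above (p = 0.5, 95% confidence); the sample figure of 300 is taken from the survey description, and the exact design of the actual survey may of course differ.

```python
import math
from statistics import NormalDist

def margin_of_error(n, population, confidence=0.95, p=0.5):
    """Margin of error achieved by a simple random sample of size n."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    fpc = math.sqrt((population - n) / (population - 1))  # finite-population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

e = margin_of_error(300, 300_000)
print(f"{e:.1%}")  # roughly 5.7%, above the usual 5% target
```

A margin of error near 5.7% rather than 5% is what I mean by the sample being somewhat lower than probabilistic sampling theory would require.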

Kind regards,

Mustapha