RE: A lack of learning in the monitoring and evaluation of agriculture projects
Dear Daniel and other Evalforward members,
Evaluation has primarily been developed and used mechanically, serving mainly a box-ticking purpose (donor accountability) rather than learning and improvement. We now know that indicators and the so-called 'log frame' become more or less redundant in the complex situations in which most agricultural projects operate.
Please allow me to share a recent experience. I am part of a team assessing the contribution of budget support, accompanied by a small technical assistance (TA) component over a three-year intervention, provided by a donor to the government of a South Asian country to implement its national agriculture development strategy. As an evaluator, I noted the following issues during the evaluation process:
a) The budget support is provided to the government treasury and is not earmarked for the agriculture sector, so there is a high possibility of fungibility. We do not know whether the sector actually received the funds or had the opportunity to undertake incremental work, which makes it difficult to evaluate the contribution.
b) The funding contract contained ambitious and irrelevant targets. The programme has six targets, with annual milestones that must be met to release the funds. These targets are not only ambitious for a three-year intervention but also lie outside the mandate of the Ministry of Agriculture: for example, reducing the national stunting rate and increasing the percentage of land owned by women. These are not direct interventions of the Ministry of Agriculture; many other actors bear the main responsibility for achieving them, and over a much longer period. There were also inadequate coordination and collaboration mechanisms among ministries and government agencies for obtaining information on progress. In addition, there is no M&E system to collect data from the sub-national level.
c) The governance structure has also changed from a unitary to a federal one. The three tiers of government function independently, without proper coordination and reporting mechanisms. Institutions and policies are still under development, and there is a serious capacity gap. As a result, it has been difficult for the ministry to collect data and compile reports.
In this context, the log frame remains in place without revision, and evaluators are asked to assess the contribution of the fund against those indicators/targets. Both the implementing agencies and the donor are still trying to attribute the impact of the fund, which is like 'squeezing water from a stone'. A push to make the M&E approach more contextual and useful still seems a long way off.
I agree: 'here we go again' and 'repeat', unfortunately.
Ram Chandra Khanal, PhD
Independent evaluator, Kathmandu, Nepal