RE: What can evaluations do in terms of capacity development?
Sometime in 2018, my project collaborated with the West Africa Rural Foundation (WARF), a regional NGO specializing in building capacity for rural development initiatives, to conduct an outcome assessment using the Outcome Harvesting approach. Outcome Harvesting was a fairly new concept at the time, at least for us in The Gambia.
Before embarking on the exercise, we first discussed the implementation modalities and the key partners to engage. Outcome Harvesting is a highly participatory approach in which the stakeholders include all parties to the project, beneficiaries included. The process draws on both quantitative and qualitative data to provide evidence of outcome achievement.
Given that the tool had not been applied in the country before and was still relatively new, we recognized that the target participants should first be trained on the concept. We therefore devoted the entire first working session to an introduction of Outcome Harvesting, its rationale and approach, until the participants were comfortable using it to identify the activities that contributed to specific outcomes, working backwards from the outcome level to outputs.
During the second round of the same assessment, this time targeting another set of outcomes, our technical partner (WARF) decided there was no need for the introductory working session on OH, given that this was the second series. What we did not pay close attention to, however, was that this round involved an entirely new set of participants with no prior exposure to OH. We went straight into the exercise, and by the time we reached the group presentations it was clear that this second group had not done as well as the first. The linkages between the outcomes and the initiatives that brought them about were weak, requiring additional evidence generation.
The purpose of sharing this experience is to show that, for participatory evaluation to be effective, the evaluators' capacity must be built first. It also suggests that there is no shortcut to capacity building, and any attempt to take one will undermine the quality of the results.
I would also add that it is not only the participants whose capacity should be built, but the client's as well. In our case, we spent considerable time sensitizing the Project Director and the entire senior staff of the project. If the findings of an outcome assessment are to be used, the client is in a much better position to appreciate and act on them when they have been partners in applying the tool.
Just my thoughts, thank you.
Paul Mendy
National Agricultural Land and Water Management Development Project
Gambia Evaluation Association