Daniel is a reluctant evaluator; his passion is monitoring, with a focus on listening to and learning from those who deliver the support and those who matter most: their clients or ultimate beneficiaries. He strongly believes it is to them, not donors, that organisations and programmes should be primarily accountable.
Daniel’s skills lie in advising and mentoring on:
• facilitating the development and review of theories of change and results frameworks (preferably one or the other, certainly not both);
• reviewing, establishing and developing thoughtful monitoring and learning processes and products in organisations and programmes;
• fostering cross-team and portfolio learning; and
• designing, overseeing and leading different types of evaluations (ex ante, process and impact) that are utilisation-focused.
Daniel holds an MSc in Agricultural Economics, with a focus on Agrarian Development Overseas, from the University of London, as well as a BA in Geography from the School of African and Asian Studies, University of Sussex.
He lives north of London with his Mosotho wife, Tsepe, and has two children, Thabo and Ella. He plays tennis and volunteers at the King’s College African Leadership Centre, University of London, and the Woodland Trust.
Daniel Ticehurst
Monitoring > Evaluation Specialist, freelance

Dear Musa,
Your point on donor-led evaluation and its consequences is largely correct, and it recalls Dahler-Larsen's "evaluation machines":
"Steering, control, accountability, and predictability come back on the throne. The purpose of evaluation is no longer to stimulate endless discussions in society, but to prevent them."
The thing is, donors pay for and design them. What does this say about evaluation capacity within donor agencies? And I'm not referring to academic expertise on methodology (the supply side), but rather to the politics of the demand side.
For example, DFID's (now FCDO's) evaluation function has never been independent: it has been hidden under the broader research function, with inevitable consequences. Tony Blair was proud of his lack of adaptability, of having no reverse gear and never changing course. No surprise, then, that an independent review rated DFID as red on learning and found that
“Staff report that they sometimes are asked to use evidence selectively in order to justify decisions.”
It is often the most rigid and bureaucratic organisations that congratulate themselves on being learning organisations. This happened not because DFID lacked excellent and competent staff, but because powerful political and institutional imperatives crowd out the time to think, reflect and be honest.
As an aside, do you know of any "evaluations" commissioned and paid for by the Liberian government that assess donor performance, including the FAO, in the agriculture sector?
Daniel Ticehurst
Monitoring > Evaluation Specialist, freelance

Dear Harriet,
Many thanks for prompting this discussion and, as Paul said, for the links to specific examples. Really helpful.
I liked the example of the work with Financial Sector Deepening Kenya (FSD Kenya) in Marsabit and how it involved FSD Kenya brokering partnerships with CARE and Equity Bank [link here]. (Given this all started in 2016, it would be interesting to find out how the groups in Marsabit are faring and whether they remain dependent on CARE's sub-contract with FSD Kenya. For Equity Bank, I wonder whether the savings products they sold to the groups have found "markets" beyond Marsabit.)
Moving on, I wanted to share my first experience of using visual tools, back in the early 1990s on an irrigation project in Bangladesh, the lessons from which I still heed. It responds to your first two questions.
I am doing this for two reasons. First, I agree with Silva Ferretti that the use of visual tools is not just about communicating the "result" of an evaluation; it is also an integral part of the process, as Kombate says regarding data collection and analysis. Second, it speaks to Harvey's reference to the use of GIS and Landsat TM imagery.
We "measured" the area of land irrigated in specific communities through 'pictures' / Landsat images of the country over a three year period. We found out how irrigated areas varied significantly between communities in the same year and over time for the same community. We wanted to find out why. Rather than staying in the office, we took hand drawn maps for each community down down from the landsat images and took them with us. Through focus group discussions we presented these maps to each of the communities. The discussions focussed on us listening to the groups discuss why and how the demand for irrigation water varied so much. The 'results' from these discussions informed not only lessons for the community in managing irrigation facilities, but also for local upazilla govt support and the implications for national policy. For me, it was a lesson as to how if you want to find out why and how people respond to national level interventions, just go ask them and learn from them how they make decisions and why. Far better this, than staying in the office and further manipulating data.
I hope the above is not too terse and crude a contribution, and thanks again.
Best wishes,
Daniel