Daniel is a reluctant evaluator; his passion is monitoring, with a focus on listening to and learning from those who deliver the support and those who matter most - their clients or ultimate beneficiaries. He strongly believes it is to them, not donors, that organisations and programmes should be primarily accountable.
Daniel’s skills are in advising and mentoring in:
• facilitating development and reviews of theories of change and results frameworks, preferably one or the other, certainly not both;
• reviewing, establishing and developing thoughtful monitoring and learning processes and products in organisations and programmes;
• fostering cross-team and portfolio learning; and
• designing, overseeing and leading different types of evaluations - ex ante, process and impact - that are utilisation-focussed.
Daniel holds an MSc in Agricultural Economics, with a focus on Agrarian Development Overseas, from London University as well as a BA in Geography from the School of African and Asian Studies, University of Sussex.
He lives north of London with his Mosotho wife, Tsepe, and has two children - Thabo and Ella. He plays tennis and volunteers at the King's College African Leadership Centre, University of London, and the Woodland Trust.
Daniel Ticehurst
Monitoring > Evaluation Specialist, freelance

Dear Rahel,
Thanks for posting this blog about how far "we" have come on impact evaluation. Let me be terse with my answer: not much, if at all. And for the following three reasons:
2. CGD's self-serving basic thesis:
James Morton, in his 2009 paper "Why We Will Never Learn", provides a wonderfully lettered critique of the above: the Public Good concept is a favourite resort of academics making the case for public funding of their research. It has the politically useful characteristic of avoiding blame. No one is at fault for the 'evaluation gap' if evaluation is, by its very nature, something that will be underfunded. Comfortable as this is, there are immediate problems. For example, it is difficult to argue that accountability is a public good. Why does the funding agency concerned not have a direct, private-good interest in accountability?
Having effectively sidelined Monitoring and Processes, WWWEL goes on to focus, almost entirely, on measuring outcomes and impact. This left the "monitoring gap" conveniently alone. While avoiding any discussion of methodologies (randomised control trials, quasi-experimental double-difference, etc.), many of the discussions WWWEL encouraged were of the abstruse, even semantic, nature of the technical debates which dominate discussion about impact measurement.
3. Pawson and Tilley's exposé, in their masterful 1997 publication "Realistic Evaluation", of experimentalists and of the intrinsic limits of RCTs - limits defined by their narrow use and the deficiency of their external validity. They challenge the orthodox view of experimentation: the construction of equivalent experimental and control groups, the application of interventions to the experimental group only, and comparison of the changes that have taken place in the two groups as a method of finding out what effect the intervention has had. Their position throws into doubt experimental methods as a way of finding out which programmes do, and which do not, produce intended and unintended consequences. They maintain it is not a sound way of deriving sensible lessons for policy and practice.
In sum then, CGD's proposition of RCTs is, to cite Paul Krugman, like a cockroach policy: it was flushed away in the 1970s but returned forty years later with its significant limits intact; and CGD missed the most significant gap. From the above, one could get the impression that development aid has lost the capacity to learn: it suppresses, rather than takes heed of, lessons.
I hope the above is seen as a constructive contribution to the debate your blog provokes, and that my seeming pessimism simply qualifies my optimism: a book was launched yesterday on monitoring systems in Africa.
Best wishes and good luck,
Daniel