Daniel is a reluctant evaluator; his passion is monitoring, with a focus on listening to and learning from those who deliver the support and those who matter most – their clients or ultimate beneficiaries. He strongly believes it is to them, not donors, that organisations and programmes should be primarily accountable.
Daniel’s skills are in advising and mentoring in:
• facilitating the development and review of theories of change and results frameworks – preferably one or the other, certainly not both;
• reviewing, establishing and developing thoughtful monitoring and learning processes and products in organisations and programmes;
• fostering cross-team and portfolio learning; and
• designing, overseeing and leading different types of evaluations – ex ante, process and impact – that are utilisation-focused.
Daniel holds an MSc in Agricultural Economics, with a focus on Agrarian Development Overseas, from London University as well as a BA in Geography from the School of African and Asian Studies, University of Sussex.
He lives north of London with his Mosotho wife, Tsepe, and has two children – Thabo and Ella. He plays tennis and volunteers at the King’s College African Leadership Centre, University of London, and at the Woodland Trust.
Daniel Ticehurst
Monitoring > Evaluation Specialist, freelance
To you all, my thanks for sparing the time to share your experiences and insights. I will be posting some conclusions and tips, based on your comments, when the discussion closes next week.
Meanwhile, I wanted to make some initial responses drawn from your comments.
1. The trick to making monitoring useful is not to leave it to people who may not be natural judges of performance, whether they are employees of donor agencies or their agents. Such people are fluent in developing frameworks and theories of change, use overly complicated language and are well versed in an array of methodologies insisted on by the donor. Understandably, this puts off many team members and managers: it seems boring and onerous. So much so that, for some, it is not clear that monitoring is even a profession. Perhaps monitoring is but a contrived learning process unique to development aid?
2. The fashion of adding more letters to the acronym M&E – L for Learning, A for Accountability, R for Results – appears to be more for affect than effect. I, like some of you, question why some consider this either revealing or helpful. It typifies the fatuity in which some of us toil.
3. It also distracts from the most important point many of you make: to listen to, and so learn from, those who matter most – the ultimate clients or beneficiaries. They are also the experts. Too often their voices and objectives are crowded out by those of donors, typically set out in log or results frameworks. Accountability to donors, rather than to beneficiaries, appears to be more commonplace than one would expect or hope, and is burdensome for other stakeholders.
4. As some of you mentioned, the inevitable result is a mass of numbers and comparisons that provide little insight into performance. Some even strain belief given typical implementation periods. Rather, they are often used to justify the investment to donors, and may even paint a distorted picture of reality. Beating last year's numbers is not the point.
5. Managers need to take ownership of monitoring – to find measures, qualitative as well as quantitative, that look past the current budget and previous results, and to ask questions. Questions whose answers help determine how the programme or project can be better attuned and responsive, so that it better "lands" with, and is acceptable to, its clients or beneficiaries in the future.
Many thanks again and please, if there are any further contributions or responses to the above...
With best wishes and good weekends,
Daniel