Daniel Ticehurst

Monitoring > Evaluation Specialist
freelance
United Kingdom

Daniel is a reluctant evaluator; his passion is monitoring, with a focus on listening to and learning from those who deliver the support and those who matter most – their clients or ultimate beneficiaries. He strongly believes it is to them, not donors, that organisations and programmes should be primarily accountable.

Daniel’s skills are in advising and mentoring in:
• facilitating the development and review of theories of change and results frameworks – preferably one or the other, certainly not both;
• reviewing, establishing and developing thoughtful monitoring and learning processes and products in organisations and programmes;
• fostering cross team and portfolio learning; and
• designing, overseeing and leading different types of evaluations – ex ante, process and impact – that are utilisation-focused.

Daniel holds an MSc in Agricultural Economics, with a focus on Agrarian Development Overseas, from the University of London, as well as a BA in Geography from the School of African and Asian Studies, University of Sussex.
He lives north of London with his Mosotho wife, Tsepe, and has two children, Thabo and Ella. He plays tennis and volunteers at the King’s College African Leadership Centre, University of London, and at the Woodland Trust.

My contributions

    • Dear Eriasafu,

      Many thanks for the post, and good to be in touch on the subject of monitoring, much neglected and given short shrift by the evaluation community.

      I like your observation on how the time spent complying with demands to collect data all the way to the top of the results framework or theory of change, often missing out the assumptions along the way, crowds out time for reflection and learning. I believe such reflection comes from revealing the unknown by listening to and learning from those in need – excluded and underserved communities – rather than measuring those in charge.

      So, how to resolve the issue you raise, namely that "MEL/MEAL systems are limited to compliance, outcomes and impact, and rarely include cross cutting issues such as gender and leave-no-one behind principles"?

      It strikes me as ironic that, although monitoring is all about learning, it itself shows a limited capacity to learn about its own past. The pursuits of measuring outcomes and impact are not so much limiting as they are misguided. Even if you had more time, outcome and impact indicators generate limited value for learning purposes. Such learning is easier said than done in comparison to measuring indicators laid out in a theory of change or logic model. Indicators do what they are supposed to do: they measure things that happened, or not, in the past. They don’t tell you what to do. Monitoring does, and it should not entertain using rigorous – as a statistician would define the term – methods geared to academic concerns and obsessive pursuits of measuring and attributing intervention effects.

      Monitoring has different requirements, as highlighted above; that is, if it is to help managers resolve their decision uncertainties. Your claim ignores the hegemony of mainly transient, academically inclined western evaluators, and of those in the monitoring and results measurement community, who are addicted to single narratives and rigid methodological dogmas. Monitoring needs to free itself from these mechanistic approaches, and managers need to step up, afford primacy to the voices and needs of indigenous communities, and take ownership to ensure monitoring generates insights for decision-making that benefit those who legitimise development and humanitarian aid, not just measure the predicted results defined by those who fund it.

      Of course, including gender and ensuring no one gets left behind are important. However, and without sounding glib, doing this means management not getting left behind by, for example:

      • Pointing out that exploring assumptions matters as much as, if not more than, measuring indicators, and that the ‘system’ needs to be driven by questions defined by those who are its primary users – and these do not include external evaluators;
      • Highlighting how, although numbers are important, they are arguably not as important as learning how, for example, the numbers of men and women or boys and girls came to be, and how and how well they interact with one another.

      Thanks again, and I hope the above helps,

      Daniel

    • Dear Ana,

      Many thanks for responding and for sharing John’s 10 questions and his email address.

      I wonder: has anyone else come across them or questions similar to them? And, if so, have you been asked them? If not, have you asked them of yourself in designing an evaluation?   

      It seems to me that responses to them could usefully inform the design of an evaluation and/or help teams prepare adequately. That is, rather than waiting for community members to ask them on ‘arrival’, so to say.

      Does not doing so run the risk of derailing the process and wasting community members’ time?

      What do you or others think?  

      Many thanks again, Ana, and I will connect with John to find out more.

      With best wishes,

      Daniel

    • Dear Pedronel,

      Hi and thank you for responding. I completely agree with how evaluations and evaluators are challenged in the way you describe. Failing to overcome these challenges risks excluding more diverse streams of knowledge, and local ways of making change can be especially hampered by a fixation on a pre-ordained finishing line rather than flowing with a generative process at the speed of seasons.

      What are the challenges you mention in relation to learning about and prioritising indigenous knowledge, and how do you think these can be overcome?

      Best wishes and thank you again,

      Daniel 

    • Dear all,

      My thanks to all of you who spared time to contribute to the discussion. I hope you found it interesting to read about the insights and experiences of others. The discussion will now be closed but, given the number of rich and varied responses, EvalForward, in collaboration with EvalIndigenous, has decided to set up a webinar on Monday 24th October at 14.00 (Rome time). On their behalf, I would greatly appreciate it if, capacity permitting, you could participate and invite others in your own networks along as well.

      John Ndjovu will make a presentation to provoke what we hope will be an exciting opportunity to share and learn more about this extremely important issue.

      With thanks in advance, and thanks again for contributing. We look forward to seeing you all there, so to say!

      Daniel

    • Thank you to all who have contributed to the discussion. Many of you point to the importance of culturally appropriate behaviours, and back this up with compelling reasons. Some provide telling examples of western culture and of how some of its institutions remain stuck despite being aware of the consequences of failing to change. However, too few reveal specific instances of how, either as commissioners or evaluators, they have sought to be culturally appropriate and/or how they have not, and with what consequences.

      Therefore, we would welcome any ‘personal’ experiences that respond more explicitly to the question: what lessons or experiences – successes, challenges, failures - have you had in trying to ensure evaluations adequately prioritise indigenous knowledge, values and practices?

      Many thanks. 

    • Dear Olivier, you are right: it's not universal, yet it is commonplace among many donors for evaluators and evaluations to be driven by the pursuit of being accountable solely to those who commission them, affording privilege to their administrative requirements and corporate objectives rather than to those in need. This is what Bob Picciotto refers to as mindless evaluation machines, and quite rightly so.

      Best wishes from a strange land – I live in the UK – and I hope this finds you very well.

    • An interesting analysis of the indiscriminate application of, and diminishing returns to, the practice of late through its "performative" use.

      The reference to how "....sometimes, agencies can reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it" reminds me of the analogy the famous classicist and poet A.E. Housman made in 1903:

      "...gentlemen who use manuscripts as drunkards use lamp-posts,—not to light them on their way but to dissimulate their instability.”

      or, in plain English and relating to the subject: people use evaluation as a drunk uses a lamppost – for support rather than illumination.

    • Dear Anna Maria,

      Hi and I think you're on the right lines – develop some questions with which to frame conversations with people who have been involved in implementation. I would also add those who were involved in developing the ToC (who may be different folk).

      My own experience in reviewing ToCs follows three broad themes, each with an overarching question:

      • Inclusiveness of the approach/method – who was involved, how well, and whose theory is it? For example, was it the donor's advisors consulting with beneficiaries/clients, or just the donor's staff doing their own research and to-ing and fro-ing around different technical areas, with the result then signed off by an internal QA unit, or...?
      • Robustness of the evidence – on what basis were the assumptions developed and the results (outputs, outcomes and impact) arrived at, i.e. the pathways to change and the changes themselves?; and
      • Coherence and plausibility of the product – is the diagram/visual accompanied by a clear narrative explaining HOW the Action Theory (i.e. activities and outputs) will stimulate WHAT change among WHOM and WHY (i.e. the Change Theory)?

      The look of the product will also vary and, in this regard, Silva makes a good point, though I wouldn't profess to have the software skills to produce the second diagram, nor the intellectual capacity to understand it! The action and change theories rarely follow a linear trajectory, but there is no right or wrong. A key difference is in how the product makes clear the consequences for the monitoring and learning process. If it's about building a bridge, then you simply engineer a set process to follow from beginning to end and monitor this accordingly. However, if it's to do with programme outputs striving to stimulate changes in the behaviours and relationships among people – or outcomes – then this has obvious implications for monitoring: the assumptions made about how and why people will respond to outputs matter as much as, if not more than, the outcome indicators.

      Depending on who funds the work you are doing, each donor has a slightly different take on, and guidelines for, ToCs (and LFs). I developed some for reviewing the content of log frames and have attached them; I hope they are of some help.

      On log frames... The methodology used for developing what people call a ToC is not so different from how some, like GTZ (now GIZ), develop logframes. See here. I think this is the best method I have seen, and thus strongly recommend it as a reference in assessing the quality of the process and, in many ways, the product. Its essence is well captured by Harriet Maria's neat video. Claims that ToCs, with their emphasis on assumptions and on better explaining the whys and hows of change, take better account of complexity than LFs ring somewhat hollow.

      As with many methods and tools, there is nothing I believe to be intrinsic to LFs that encouraged many donor agencies to either ignore or misuse the method and arrive at a product that is too simplistic and deemed not fit for purpose. Given that they did, it didn't surprise me that they moved to ToCs!

      I hope some of this helps and good luck. Please do get back to me if you want to talk anything through.

      Best wishes,

      Daniel