
Daniel Ticehurst

Monitoring > Evaluation Specialist
freelance
United Kingdom

Daniel is a reluctant evaluator; his passion is monitoring, with a focus on listening to and learning from those who deliver the support and those who matter most - their clients or ultimate beneficiaries. He strongly believes it is to them, not donors, that organisations and programmes should be primarily accountable.

Daniel’s skills are in advising and mentoring in:
• facilitating the development and review of theories of change and results frameworks, preferably one or the other, certainly not both;
• reviewing, establishing and developing thoughtful monitoring and learning processes and products in organisations and programmes;
• fostering cross-team and portfolio learning; and
• designing, overseeing and leading different types of evaluations - ex ante, process and impact - that are utilisation-focused.

Daniel holds an MSc in Agricultural Economics, with a focus on Agrarian Development Overseas, from the University of London, as well as a BA in Geography from the School of African and Asian Studies, University of Sussex.
He lives north of London with his Mosotho wife, Tsepe, and has two children - Thabo and Ella. He plays tennis and volunteers at the King's College African Leadership Centre, University of London, and the Woodland Trust.

My contributions

    • To you all, my thanks for sparing time to share your experiences and insights. I will be posting, based on your comments, some conclusions and tips when the discussion closes next week. 

      Meanwhile, I wanted to make some initial responses drawn from your comments.

      1. The trick to making monitoring useful is not to leave it to people who may not be natural judges of performance, whether they are employees of donor agencies or their agents. People who are fluent in developing frameworks and theories of change often use overly complicated language and are well versed in an array of methodologies insisted on by the donor. Understandably, this puts off many team members and managers. It seems boring and onerous. So much so that, for some, it is not clear that it is even a profession. Perhaps monitoring is but a contrived learning process unique to development aid?

      2. The fashion of adding more letters to the acronym, M&E, such as L - Learning, A – Accountability, R – Results appears to be more for affect, not effect. I, like some of you, query why some consider this either revealing or helpful. It defines the fatuity in which some of us toil.

      3. It also distracts from the most important feature many of you point out: to listen to, and so learn from, those who matter most - the ultimate clients or beneficiaries. They are also the experts. Too often their voices and objectives are crowded out by those of donors, typically set out in log or results frameworks. Accountability to donors, not to beneficiaries, appears to be more commonplace than would be expected or hoped for, and is burdensome for other stakeholders.

      4. As some of you mentioned, the inevitable result is a mass of numbers and comparisons that provide little insight into performance. Some even require a suspension of disbelief given typical implementation periods. Rather, they are often used to justify the investment to donors, and may even paint a distorted picture of reality. Beating last year's numbers is not the point.

      5. Managers need to take ownership of monitoring - to find measures, qualitative as well as quantitative, that look past the current budget and previous results and ask questions. Questions that reveal answers to help determine how the programme or project can be better attuned and responsive, so that it better "lands" with, or is acceptable to, clients or beneficiaries in the future.

      Many thanks again and please, if there are any further contributions or responses to the above...

      With best wishes and good weekends,

      Daniel   

  • Monitoring and evaluation (M&E), monitoring, evaluation, accountability and learning (MEAL), monitoring, evaluation and learning (MEL), monitoring, evaluation, reporting and learning (MERL), monitoring and results management (MRM) or whatever you choose to call it (or them?), should help us learn from experience. Sadly, this is not always the case.

    There is an apparent irony in the fact that systems supposedly designed to help us learn from experience have been so reluctant to learn from their own experience. In my view, this is in large part due to the isolation of M&E within programmes and projects, to working in silos and collecting...

  • What type of evaluator are you?

    Discussion
  • How to define and identify lessons learned?

    Discussion
    • Dear Emilia,

      First, we cannot always assume that those who claim to be learning organisations are necessarily so. I have learned that very often the most conceited and intolerant are the ones who congratulate themselves on their capacity to learn and tolerance of other views.

      My crude answer is that putting lessons to work is about strategies associated with incentives to do so - the organisation should not only be accountable for the quality of evaluand objectives and their achievement but also for their adjustment as operating circumstances change; that is, accountability extends to accountability to learn.

      My understanding of current practice, in relation to evaluations, in ensuring lessons learnt are heeded and put into practice is typically that:

      a) the lessons learnt inform or are aligned with the recommendations - their consequences - lest they be missed altogether; 

      b) the recommendations are reflected in the "management" response; and

      c) management actually implements them.

      That's the theory and it defines much practice, yet a lot of this depends on who holds management to account in following through - to what extent are they accountable to learn? 

      Thanks again and best of luck moving forward with this.

       

      Daniel 

    • Dear Emilia,

      Hi and many thanks for such a useful post, and great to see how it has provoked so many varied and interesting responses from other community members.

      While I do not have any resources cum textbook answers in mind, my experience has taught me three things:

      1. Crudely put - I apologise - there are two types of lessons, each with its own questions that a well-phrased lesson needs to answer: what went well, for whom and how; and what did not go quite so well, for whom and why? An adequate balance is not always struck between the two, perhaps due to the power dynamics between those that fund, those that do, and those intended to benefit from development aid. Implied from this:

      2. To be clear and search for: who has learnt what from whom, why is this important and what is the consequence? Of course, providing discretion and opportunity to learn from those that matter most - the intended clients - is important, yet it is also the responsibility of senior managers, who often know little about the practical consequences of their decisions on the ground, so to say, to do the same for those who deliver the support. Their silence often stifles learning among them, and so, too, the programme's or organisation's capacity to adapt. (And it's an obvious point, yet worth mentioning: evaluation also needs to generate lessons on the performance of those that fund. This is politically a tricky and messy ask as they commission evaluations and fund what is being evaluated. The main point holds, however: they seldom make themselves available to be held to account by those that matter most; rather to their respective treasury or finance ministries.) Ho hum!

      3. It is through doing this - listening to those on the ground, with an emphasis on the assumptions rather than the indicators - that the most revealing lessons are generated. In other words, exploring the unknowns. Not doing so hampers success; it also encourages failure.

      I've shot my bolt, yet hope some of the above is helpful.  

      Best wishes and thanks again,

      Daniel  

    • Dear Eriasafu,

      Many thanks for the post, and good to be in touch on the subject of monitoring, much neglected and given short shrift by the evaluation community.

      I like your observation on how time spent complying with demands to collect data all the way to the top of the results framework or theory of change, often missing out the assumptions along the way, crowds out time for reflection and learning. I believe such reflection comes in revealing the unknown through listening to and learning from those in need - excluded and underserved communities - not measuring those in charge.

      So, how to resolve the issue you raise as to how "MEL/MEAL systems are limited to compliance, outcomes and impact, and rarely include cross cutting issues such as gender and leave-no-one behind principles."

      It strikes me as ironic how, as monitoring is all about learning, it itself shows a limited capacity to learn about its past. The pursuits of measuring outcomes and impact are not so much limiting as they are misguided. Even if you had more time, outcome and impact indicators generate limited value for learning purposes. Genuine learning is easier said than done in comparison to measuring indicators laid out in some needy theory of change or logic model. Indicators do what they are supposed to do: they measure things that happened, or not, in the past. They don't tell you what to do. Monitoring neither does nor should entertain using rigorous - as a statistician would define the term - methods geared to academic concerns and obsessive pursuits of measuring and attributing intervention effects.

      Monitoring has different requirements, as highlighted above; that is, if it is to help managers resolve their decision uncertainties. Your claim ignores the hegemony of mainly transient, academically inclined western evaluators, and those in the monitoring and results measurement community, addicted to single narratives and rigid methodological dogmas. Monitoring needs to free itself from these mechanistic approaches; and managers need to step up, afford primacy to the voices and needs of indigenous communities, and take ownership to ensure monitoring generates insights for decision-making purposes that benefit those who legitimise development and humanitarian aid, not just measure the predicted results defined by those who fund it.

      Of course, including gender and ensuring no-one gets left behind is important. However, and without sounding glib, doing this means management not getting left behind by, for example:

      • Pointing out that exploring assumptions matters as much as, if not more than, measuring indicators, and that the ‘system’ needs to be driven by questions defined by those who are its primary users, and they do not include external evaluators;
      • Highlighting how, although numbers are important, they are arguably not as important as learning how, for example, the numbers of men and women or boys and girls came to be, and how and how well they interact with one another.

       

      Thanks again, and I hope the above helps,

      Daniel

    • Dear Ana,

      Many thanks for responding, for sharing John’s 10 questions and his email address.   

      I wonder: has anyone else come across them or questions similar to them? And, if so, have you been asked them? If not, have you asked them of yourself in designing an evaluation?   

      It seems to me, responses to them could usefully inform the design of evaluation and/or help teams adequately prepare. That is, rather than waiting for community members to ask them on ‘arrival’, so to say.

       

      Does not doing so run the risk of potentially de-railing the process and wasting community members’ time?  

      What do you or others think?  

      Many thanks again Ana and will connect with John to find out more.

      With best wishes,

      Daniel 

       

       

    • Dear Pedronel,

      Hi and thank you for responding. I completely agree with how evaluations and evaluators are challenged in the way you describe. Failing to overcome these risks excluding more diverse streams of knowledge, and local ways of making change can be especially hampered by a fixation on a pre-ordained finishing line rather than flowing with a generative process at the speed of seasons.

      What are the challenges you mention in relation to learning about and prioritising indigenous knowledge, and how do you think these can be overcome?

      Best wishes and thank you again,

      Daniel 

    • Dear all,

      My thanks to all of you who spared time to contribute to the discussion. I hope you found it interesting to read about the insights and experiences of others. The discussion will now be closed but, given the number of rich and varied responses, EvalForward, in collaboration with EvalIndigenous, has decided to set up a webinar event on Monday 24th October at 14.00 (Rome time). On their behalf, I would greatly appreciate it if, capacity allowing, you could participate and invite others from your own networks along as well.

      John Ndjovu will make a presentation to provoke what we hope will be an exciting opportunity to share and learn more about this extremely important issue.

      With thanks in advance, and thanks again for contributing. We look forward to seeing you all there, so to say!

      Daniel

    • Thank you to all who have contributed to the discussion. Many of you point to the importance of culturally appropriate behaviours and give compelling reasons for them. Some provide telling examples of western culture and how some of its institutions remain stuck despite being aware of the consequences of failing to change. However, too few reveal specific instances of how, either as commissioners or evaluators, they have sought to be culturally appropriate and/or how they have not, and with what consequence.

      Therefore, we would welcome any ‘personal’ experiences that respond more explicitly to the question: what lessons or experiences – successes, challenges, failures - have you had in trying to ensure evaluations adequately prioritise indigenous knowledge, values and practices?

      Many thanks. 

    • Dear Olivier, you are right: it's not universal, yet it is commonplace among many donors for evaluators and evaluations to be driven by the pursuit of being solely accountable to those who commission them, affording privilege to their administrative requirements and corporate objectives, not to those in need. What Bob Picciotto refers to as mindless evaluation machines, and quite rightly so.

       

      Best wishes from a strange land - I live in the UK - and I hope this finds you very well.

    • Interesting analysis of the indiscriminate application of, and diminishing returns to, the practice of late through its "performative" use.

      Reference to how "....sometimes, agencies can reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it" reminds me of the analogy the famous classicist and poet A.E. Housman made in 1903:

      "...gentlemen who use manuscripts as drunkards use lamp-posts,—not to light them on their way but to dissimulate their instability.”

      or, in plain English and relating to the subject: people use evaluation as a drunk uses a lamppost, for support rather than illumination.

    • Dear Anna Maria,

      Hi and I think you're on the right lines - develop some questions with which to frame conversations with people who have been involved in implementation. I would also add those who were involved in developing the ToC (who may be different folk).

      My own experience in reviewing ToC follows three broad themes, each with an overarching question:

      • Inclusiveness of the approach/method - who was involved, how well, and whose theory is it? For example, was it the donor advisors consulting with beneficiaries/clients, or just the donor staff doing their own research, toing and froing around different technical areas and then having it signed off by an internal QA unit, or...?
      • Robustness of the evidence - on what basis were the assumptions developed and the results (outputs, outcomes and impact) arrived at, i.e. the pathways to change and the changes themselves? and
      • Coherence and plausibility of the product - is the diagram/visual accompanied by a clear narrative explaining HOW the Action Theory (i.e. activities and outputs) will stimulate WHAT change among WHOM and WHY (i.e. the Change Theory)?

      The look of the product will also vary and, in this regard, Silva makes a good point, though I wouldn't profess to have the software skills to produce the second diagram, nor the intellectual capacity to understand it! The action and change theories rarely follow a linear trajectory, but there is no right or wrong. A key difference is in how the product makes clear the consequences for the monitoring and learning process. If it's building a bridge, then you simply engineer a set process from beginning to end and monitor this accordingly. However, if it's to do with programme outputs striving to stimulate changes in the behaviours and relationships among people - or outcomes - then this has obvious implications for monitoring: the assumptions made about how and why they will respond to outputs matter as much as, if not more than, the outcome indicators.

      Depending on who funds the work you are doing, each donor has a slightly different take on/guidelines for ToC (and LFs). I developed some for reviewing the content of log frames and have attached them. Hope they are of some help.

      On log frames... The methodology used for developing what people call a ToC is not so different from how some, like GTZ (now GIZ), develop logframes. See here. I think this is the best method I have seen, and thus strongly recommend it as a reference in assessing the quality of the process and, in many ways, the product. Its essence is well captured by Harriet Maria's neat video. Claims that ToC, with their emphasis on assumptions, take better account of complexity than LFs and better explain the whys and hows of change ring somewhat hollow.

      As with many methods and tools, there is nothing I believe to be intrinsic to LFs that encouraged many donor agencies to either ignore or misuse the method and arrive at a product that is too simplistic and deemed not fit for purpose. Given they did, it didn't surprise me that they moved to ToC...!

      I hope some of this helps and good luck. Please do get back to me if you want to talk anything through.

      Best wishes,

      Daniel