Daniel Ticehurst

Monitoring > Evaluation Specialist
freelance
United Kingdom

Daniel is a reluctant evaluator; his passion is monitoring, with a focus on listening to and learning from those who deliver the support and those who matter most: their clients or ultimate beneficiaries. He strongly believes it is to them, not donors, that organisations and programmes should be primarily accountable.

Daniel’s skills are in advising and mentoring in:
• facilitating the development and review of theories of change and results frameworks (preferably one or the other, certainly not both);
• reviewing, establishing and developing thoughtful monitoring and learning processes and products in organisations and programmes;
• fostering cross team and portfolio learning; and
• designing, overseeing and leading different types of evaluations – ex ante, process and impact – that are utilisation-focused.

Daniel holds an MSc in Agricultural Economics, with a focus on Agrarian Development Overseas, from London University as well as a BA in Geography from the School of African and Asian Studies, University of Sussex.
He lives north of London with his Mosotho wife, Tsepe, and has two children – Thabo and Ella. He plays tennis and volunteers at the King’s College African Leadership Centre, University of London, and the Woodland Trust.

My contributions

    • Dear Ravinder,

      Many thanks for your post on agroecology and its call for explaining and measuring its value. Really interesting, hence this reply. Coincidentally, I worked at NRI from 1997-2002, though I never made it to Associate Professor 😏

      I thought your first question, ‘hidden’ in your introduction, was great, so I've tried to answer it. I then provide, I hope, some useful references by way of answering your third question on available evidence.

      1.    But do we really understand the value of agroecology in terms of its potential contribution to poverty alleviation, human health, and the environment?

      I think there remains widespread misunderstanding about the environmental impact of food production. Modern-day agriculture is not a battle between “good” organic farmers and “bad” industrial ones. Just because a farm is organic does not mean it has sidestepped the environmental and social drawbacks of large-scale farming; organic farms, for instance, can still employ a damaging monoculture approach. The real contest is between sterile monocultures of a limited number of foods and a more diverse approach to farming, one which marries a particular place’s unique ecology with local farmers’ knowledge of how to make their landscapes useful to humans: agroecology. It is arguably the only way to feed the Earth’s rapidly growing population without destroying the planet. Many farmers already practice what is referred to as “climate smart agriculture”. The thing is, we often do not know about them. Worse, we don’t seek to find them and learn. More often than you would hope or expect, the starting point is that research institutions can offer them ways to practice ‘it’. Mr Zepheniah Phiri, an indigenous innovator, is a wonderful example of an agroecological farmer (see later). The opportunity for support is less about extending climate smart agricultural practices to him and his farm, and more about extending his approach to others (and preferably not through farmer field schools!)

      Moving on. The productivity of nearly one-half of all soil worldwide is decreasing. Another 15 percent can no longer be used for farming because its biology has been so depleted. Biodiversity is fading, too. Look at Africa: fallow areas have virtually disappeared. On average the rate of fallow is 1.2 percent, with the exceptions of Tanzania (7.8%) and, to a lesser extent, Uganda (5%). This is the result of African farmers more than doubling the annual increase in cropped land, from 1.7 Mha in 2004-2007 to just under 4 Mha in 2016-2019. Production gains have come through an increase in the area under cultivation rather than through gains in productivity. This is in stark contrast to the rest of the world, where production increases have been realised exclusively through increases in physical yields. Studies have shown that if progress on crop yields in Africa does not improve, the continent will lose large amounts of its natural habitat to farmland. In many countries across Sub-Saharan Africa, researchers estimate that cropland could almost triple by 2050. This will come at the cost of wildlife: in these same projections, 10% to 20% of animal habitats will be lost.

      For some smallholders, adopting an agroecological approach to farming is an option, and one that also makes their farm more robust, insulating them from debilitating pests and weather patterns. Such an approach also has the potential to undo some of the environmental degradation caused by conventional farming by restoring nutrients to the soil. All this said, however:

      1.    African smallholder farmers, unlike their European counterparts, are in effect taxed to subsidise urban consumer prices, and lack the voice and agency to reform such government interference;
      2.    There is little or no evidence that such practice will boost yields to the extent needed;
      3.    It assumes farming households, where labour, not land, is often the binding constraint, will be in a position to allocate more time to farm this way when household members are malnourished; and, relatedly,
      4.    Food security for some households is not necessarily best pursued through own production.

      Agroecological techniques replace the "vicious cycles" bringing down our planetary support systems with "virtuous circles" that mimic nature's own systems. For instance, agroecology can restore soil fertility and sequester carbon naturally rather than spewing it dangerously into the atmosphere or as acid into the ocean. Its nutrient cycling approach — whereby nitrogen passes again and again through food systems, roots, and soils — can turn waste into raw materials rather than pollutants.

      As already mentioned, current performance metrics for agroecology often fail to take the type of multifunctionality set out above into account. Rather, they focus disproportionately on productivity and profitability. This limits the assessment of the multiple economic, environmental, and social values created by agroecological farming systems.

      3. Do we already have some demonstrative empirical evidence proving or disproving the value of agroecology?

      Some examples that, if you have not already seen them, I hope will be helpful.

      A systematic overview of the effects of agroecological practices on socio-economic indicators using a sustainable livelihoods framework 
      Agroecological practices also bring ancillary benefits to poor rural regions. This study found that, since this kind of farming is labour-intensive, it can create valuable employment opportunities in communities starved for jobs. In addition, the emphasis that agroecology places on biodiversity dramatically improves nutrition in many developing countries, especially in areas formerly reliant on cereal-based systems that produced large quantities of rice, wheat and maize, which lack vital micro-nutrients. https://www.researchgate.net/publication/283721240_Social_and_economic_performance_of_Agroecology and here: https://www.iatp.org/sites/default/files/2019-06/2019_06_11_Agroecology_links_IATP.pdf 

      A farmer case study 
      An inspirational mentor of mine – master water harvester Mr Zepheniah Phiri from Zvishavane District, Zimbabwe – said that farming systems need to “rhyme with nature” if they are to be sustainable. Mr Phiri’s farm integrated scientific understanding with his knowledge of how to make his local landscapes useful to humans. He celebrated the value of diverse and complex methods of land stewardship. His approach re-integrates livestock, crops, pollinators, trees, and water in ways that work resiliently with the landscape.

      Unlike other farming systems that rely only on annuals that grow rapidly during the brief rain periods, his system focuses on perennials, or at least multi-year species like bananas, reeds, bamboo, sugar cane and yams. With deep and extensive roots, they can access water and nutrients at a deeper level. The roots also have a stabilizing effect, tying up the soil and preventing surface erosion by wind and water. As the roots slow down water runoff, they can help manage streams and avoid dry or flash flood situations.

      The wide diversity of crops, livestock and other products provides him with a steady and resilient income through the vicissitudes of economic and ecological crisis, cycle and change. He has become very resilient to droughts, for he is putting far more water into the soil than he takes out. Phiri practices a wide diversity of crop rotations tailored to meet the different soil-water conditions and to help manage weeds, pests and diseases. 
      https://afsafrica.org/wp-content/uploads/2019/04/water_harvesting_zimbabwe.pdf 

      The Foresight Global Food and Farming Futures project 
      This reviewed 40 agroecological projects in 20 African countries. Between 2000 and 2010, these initiatives doubled crop yields, resulting in nearly 5.8 million extra tons of food. But agroecology doesn't just increase the output of farms. It values farmers' relationships with and knowledge of their lands. https://assets.publishing.service.gov.uk/media/5a7e00c6ed915d74e33ef6a8/14-533-future-african-agriculture.pdf 

      Scaling-up agroecological approaches: what, why and how? 
      A useful discussion paper produced by Oxfam in 2014 that provides an extensive body of evidence demonstrating how efficient scaling-up of agroecological approaches can contribute to ensuring sustainable and resilient agricultural and food systems today and in the future.

      https://www.fao.org/fileadmin/templates/agphome/scpi/Agroecology/Agroecology_Scaling-up_agroecology_what_why_and_how_-OxfamSol-FINAL.pdf 

      My thanks again.

      Daniel 
       

    • Dear Emily,

      Many thanks for the interesting and thought-provoking blog. 

      In reading it, I also skimmed the documents to which you provided links. The report on the Digitalisation of African Agriculture by the Technical Centre for Agricultural and Rural Cooperation (CTA, ACP-EU) was particularly revealing. First, my thoughts on this, then those on the blog itself.

      1.    The CTA Report on the Digitalisation of African Agriculture

      The hope is that D4Ag could be a game changer in boosting productivity, profitability, and resilience to climate change. This assertion is riddled with assumptions.

      Two points: 

      First, potential jobs for 75% of unemployed African youth, maybe; but what about how this transforms African agriculture and the lives and livelihoods of farmers? I question how significant the absence of digital solutions really is as a reason why smallholders are disconnected from input and product markets. The absence of a solution rarely explains the underlying problem. I also worry about what Varoufakis calls technofeudalism – the tyranny of big tech – and the effects of D4Ag going to scale: who the main beneficiaries are, and who pays the rent. It is a bit counter-cultural and pours cold water on the tech parade, but Yanis Varoufakis makes some intriguing points.
      https://www.theguardian.com/world/2023/sep/24/yanis-varoufakis-technofeudalism-capitalism-ukraine-interview

      Second, if the EU is serious about supporting the transformation of African agriculture, it would:

      • Do a lot more than co-finance the African Continental Free Trade Area’s (AfCFTA) Secretariat in Dakar – one of Agenda 2063’s flagships – and look beyond mimicking China’s Belt and Road support through the €150 billion investment allocated for the Global Gateway. A good start would be to cancel its main trading mechanism with the continent – the EPAs – and to pressure African governments to stop taxing their farmers: a policy that contradicts what Africa needs as defined by the AfCFTA, and one that largely explains the constraints to achieving yield gains.
      • Reform its CAP, which facilitates the dumping of food on domestic African markets; inhibits Africa’s aspirations to grow its agricultural economies through extortionate non-tariff barriers to African exports; and, equally important, taxes European consumers and wreaks havoc with European ecosystems through biodiversity losses and greenhouse gas emissions.

      The adverse effects of the above are arguably the main reasons for Africa’s food import bill, and their continuation explains its projected increase.

      In addition to the above comments on youth, technofeudalism, and the policy and regulatory constraints, I found the YouTube clip – Key Figures from the Report – to be cleverly presented but thin. For example, the projected 22% decline in yields on the continent is not, I submit, the main and most important consequence of climate change, nor something the growing presence of D4Ag can resolve so as to stimulate increases in yields, let alone in farmer incomes. Biodiversity loss is the most significant consequence. Why? Fallow areas have virtually disappeared in Africa. On average the rate of fallow is 1.2 percent, with the exceptions of Tanzania (7.8%) and, to a lesser extent, Uganda (5%). This is the result of African farmers more than doubling the annual increase in cropped land, from 1.7 Mha in 2004-2007 to just under 4 Mha in 2016-2019. For the most part, production gains have come through an increase in the area under cultivation rather than through gains in productivity. This is in stark contrast to the rest of the world, where production increases have been realised exclusively through increases in physical yields. This expansion of agricultural land has taken over natural ecosystems and has been the biggest driver of the destruction of Africa’s biodiversity. Defining success has to mean more than claiming the incremental increases in farmer yields and associated gains in smallholder incomes typically reported by many NGOs and donors to justify the project investment. I don’t believe the issues facing Africa’s farmers can be resolved by projects anymore.


      2.    The blog itself. 
      A great problem statement: “But MEL is often imposed by donors to track the impact of their funding, and service providers often associate it with tedious reporting and struggle to see its value.”

      But isn’t it too easy to blame donors, and to assume they know what information they need, when they need it, and what decision uncertainties they face? Consulting companies, the agents of donors, rarely negotiate information requirements. They see the donor as the ultimate client and lack a balanced accountability arrangement with those in need, those they are paid to support. Projects often resemble a traded commodity trapped in a client/agent relationship.

      And the default measure of crop yields? The obsession, almost an indicator fetish, with using crop yields as a valid measure of success for agricultural projects was called out back in the early 1990s. See a blog I wrote on the pitfalls of treating this as a valid and useful pursuit. Its use is a cockroach of an M&E policy measure: you think it has been flushed away, yet it keeps coming back!!! (Sorry)

      It would be really interesting to learn more about the MEL approach you designed for the GSMA Innovation Fund for the Digitisation of Agricultural Value Chains – who did you read/talk to on developing the approach? 

      I like how you saw and pursued the need to re-brand M&E and adapt its tools to effectively collaborate with private sector partners. As part of a study on impact investing back in 2020, I stumbled across what I thought was a great example developed by Leapfrog, an impact investor in the financial services sector. Its approach to capturing customer experience and making services more service user/client-centric reminded me of the pioneering work of Robert Chambers and Lawrence Salmen in the development aid sector back in the 1980s: feedback loops and treating farmers as the subjects of conversations on issues that matter to them, as opposed to the objects of a survey on matters that concern the donor. Leapfrog’s approach is documented here if you are interested. https://leapfroginvest.com/press-release/creating-impact-with-leapfrogs-cx-launchpad-program/

      I completely agree that when designing MEL frameworks, it is useful to reflect on the value proposition of MEL: one that balances the information requirements among a “hierarchy of users” and is not divorced from, or seen in isolation from, other people and processes – financial control, learning, decision-making, and delivery. More often than one would hope or expect, the process starts on the wrong foot by developing a theory of change and/or a results framework and “slides downhill” from there. But I was left wondering why GSMA did this with the private sector. Why isn’t this done for all M&E frameworks?

      Your approach involved running three quantitative surveys per service to gain smallholder feedback on services, which also helped you report on output and early outcome KPIs such as farmer satisfaction with services and behaviour change in farming practices.

      Why three – to capture seasonality? – and why quantitative? Won’t this many encourage fatigue to set in, just as appeared to be the case with farmers being swamped with SMS? And, concerning your last screenshot – High SMS reading rate and understanding of advice, but behaviour change challenges remain and the frequency of SMS is too high – how do the numbers inform answers from the enumerators in response to the two questions on the left-hand side? Did your survey design test the assumptions made explicit in the Impact Roadmaps or Project Blueprints as much as it measured movements in the relative values of pre-defined indicators (of adoption, for example)?

      Apologies for the ramble, yet I hope some of the above observations are helpful, and many thanks again.

      Best wishes,

      Daniel 
       

    • Dear Musa,

      Your point on donor-led evaluation and its consequences is largely correct - Dahler-Larsen's evaluation machines.

        "Steering, control, accountability, and predictability come back on the throne. The purpose of evaluation             is no longer to stimulate endless discussions in society, but to prevent them."

      The thing is, donors pay for and design them. What does this say about evaluation capacity within donor agencies? And I'm not referring to academic expertise on methodology (the supply side), but rather to the politics of the demand side.

      For example, DFID's (now FCDO's) evaluation function has never been independent - it has been hidden under the broader research function - with inevitable consequences. Tony Blair was proud of his lack of adaptability in not having a reverse gear or changing course. No surprise that an independent review rated DFID as red on learning and found that

      “Staff report that they sometimes are asked to use evidence selectively in order to justify decisions.” 

      It is often the most rigid and bureaucratic organisations that congratulate themselves on being learning organisations. This happened not because DFID did not have many excellent and competent staff, but because powerful political and institutional imperatives crowd out time to think, reflect and be honest.

      As an aside, have you seen, or do you know of, any "evaluations" commissioned and paid for by the Liberian government that assess donor performance, including FAO's, in the agriculture sector?

    • Dear Harriet, 

      Many thanks for prompting this discussion and, as Paul said, for the links to specific examples. Really helpful. 

      I liked the example of the work with Financial Sector Deepening Kenya (FSD Kenya) in Marsabit and how it involved FSD Kenya brokering partnerships with CARE and Equity Bank [link here]. (It would be interesting to find out, given this all started in 2016, how the groups in Marsabit are faring and whether they remain dependent on CARE's sub-contract with FSD Kenya. For Equity Bank, I wonder whether the savings products they sold to the groups have found "markets" beyond Marsabit.)

      Moving on, I wanted to share my first experience of using visual tools, back in the early 1990s in Bangladesh on an irrigation project, lessons from which I still take heed. They respond to your first two questions.

      I am doing this for two reasons. First, I agree with Silva Ferretti that the use of visual tools is not just about communicating the "result" of an evaluation, but is also an integral part of the process - as Kombate says regarding data collection and analysis. Second, the reference Harvey made to the use of GIS and Landsat TM imagery.

      We "measured" the area of land irrigated in specific communities through 'pictures' - Landsat images of the country over a three-year period. We found that irrigated areas varied significantly between communities in the same year and over time for the same community. We wanted to find out why. Rather than staying in the office, we drew maps by hand for each community from the Landsat images and took them with us. Through focus group discussions we presented these maps to each of the communities. The discussions focussed on us listening to the groups discuss why and how the demand for irrigation water varied so much. The 'results' from these discussions informed not only lessons for the communities in managing irrigation facilities, but also lessons for local upazila government support and the implications for national policy. For me, it was a lesson that if you want to find out why and how people respond to national-level interventions, just go and ask them, and learn from them how they make decisions and why. Far better that than staying in the office and further manipulating data.

      I hope the above is not too terse and crude a contribution, and thanks again.

      Best wishes,

      Daniel 

       

    • Dear Seda, what a great contribution. Thanks. Proving the quality of science is important, yet insufficient. It, and the explanations around it, fall short for an organisation claiming its research programme is for development. The guidelines feel faint on this. As you say, and as I alluded to in my response, you want them to be a stronger, more compelling read.

    • Dear Svetlana,

      Hi and thanks for the opportunity to comment on the guidelines. I enjoyed reading them, yet only had time to respond to the first two questions.

      My responses come with a caveat - I do not have a research background, yet I observed, during a time I worked with agricultural scientists, that the then-current preoccupation with assessing impact among the ultimate client group, as gauged by movements in the relative values of household assets, tended to mask the relative lack of information about, and interest in, the capacity and capabilities of local R&D / extension systems before, during, and after investment periods. Their critical role in the process often got reduced to being treated as assumptions or risks to "good" scientific products or services.

      This made it difficult to link any sustainable impact among beneficiaries with information on institutional capacity at the time that research products were being developed. It may also explain why believing in (hopelessly inflated) rate-of-return studies required a suspension of disbelief, thus compromising the prospects that efforts to assess the impact of research would make much difference among decision-makers.

      Moving on - my responses to two of your questions follow, and I hope you find some of them interesting, useful even.

      1.    Do you think the Guidelines respond to the challenges of evaluating quality of science and research in process and performance evaluations?

      Responding to this question assumes/depends on knowing the challenges to which the guidelines refer. In this regard, Section 1.1 is a slightly misleading read given the title. Why?

      The narrative neither spells out how the context has changed nor, therefore, how and why these changes pose challenges to evaluating the Quality of Science. Rather, it describes CGIAR’s ambition for transformative change across system transformation – a tautology? – resilient agri-food systems, genetic innovation, and five – unspecified – SDGs. It concludes by explaining that, while CGIAR funders focus on development outcomes, the evaluation of CGIAR interventions must respond to both the QoR4D – research oriented to deliver development outcomes – and OECD/DAC – development orientation – frameworks.

      The reasons given for the insufficiency of the six OECD/DAC criteria in evaluating CGIAR’s core business – the unpredictable and risky nature of research and the long time it takes to witness outcomes – do not appear peculiar to CGIAR’s core business relative to other publicly funded development aid. Yes, it may take longer given the positioning of the CG system but, as we are all learning, operating environments are as inherently unpredictable as the results. Context matters. Results defy prediction; they emerge. Scientific research, what it offers, and with what developmental effect, is arguably not as different as the guidelines suggest. When it comes to evaluating scientific research, the peculiarity is who CGIAR employs and the need to ensure a high standard of science in what they do – its legitimacy and credibility. The thing is, it is not clear how these two elements, drawn from the QoR4D frame of reference, cover, so to say, the peculiarities of CGIAR’s core business and so fill the gap left by the six OECD/DAC criteria. Or am I missing something?

      The differences between Process and Performance Evaluations, as defined at the beginning of Section 2.2, are not discernible. Indeed they appear remarkably similar – so much so that I asked myself: why have two when one would do? Process Evaluations read as summative self-assessments across CGIAR, and outcomes are in the scope of Performance Evaluations. Performance Evaluations read as more formative and repeat lines of inquiry similar to those of Process Evaluations – assessing organisational performance and operating models as well as processes: organisational functioning, instruments, mechanisms and management practices, together with assessments of experience with CGIAR frameworks, policies, etc. There is no mention of assumptions – why, given the “unpredictable and risky nature of research”? Assumptions, by proxy, define the unknown and, for research managers and (timely) evaluations, they should be afforded an importance no less than the results themselves. See below.

      The explanation of the differences between the Relevance and Effectiveness criteria as defined by OECD/DAC and by QoR4D in Table 2 is circumscribed. While the difference for Relevance explicitly answers the question of why CGIAR, that for Effectiveness is far too vague (to forecast and evaluate). What is so limiting that the reasons why CGIAR delivers knowledge, products, and services – to address a problem and contribute to innovative solutions – cannot be framed as objectives and/or results? Especially when the guidelines claim Performance Evaluations will be assessing these.

      2. Are four dimensions clear and useful to break down during evaluative inquiry (Research Design, Inputs, Processes, and Outputs)? (see section 3.1)

      This section provides a clear and useful explanation of the four interlinked dimensions – Research Design, Inputs, Processes, and Outputs (Figure 3) – that are used to provide a balanced evaluation of the overall Quality of Science.

      A few observations:

      “Thinking about Comparative Advantage during the project design process can potentially lead to mutually beneficial partnerships, increasing CGIAR’s effectiveness through specialization and redirecting scarce resources toward the System’s relative strength”. https://iaes.cgiar.org/sites/default/files/pdf/ISDC-Technical-Note-Iden…

      1)    With this in mind, and as mentioned earlier in section 2.3, it would be useful to explain how the research design includes proving, not asserting, that CGIAR holds a comparative advantage by going through the four-step process described in the above technical note – steps that generate evidence with which to claim that CGIAR does or does not have a comparative advantage, so as to arrive at a go/no-go investment decision.

      2)    Table 3 is great in mapping the QoS’s four dimensions against the six OECD/DAC criteria, and I especially liked the note below it on GDI. I remain unclear, however, why the Coherence criterion stops at inputs and limits its use to internal coherence. External coherence matters as much, if not more, especially concerning how well and to what extent the outputs complement, and are harmonised and coordinated with, those of others and add value to others further along the process.

      3)    While acknowledging the centrality of high scientific credibility and legitimacy, it is of equal importance to also manage and coordinate processes to achieve and maintain the relevance of the outputs as judged by the client. 

      4)    I like the description of processes, especially the building and leveraging of partnerships.

      5)    The scope of enquiry for assessing the Quality of Science should also refer to the assumptions, specifically those that have to hold for the outputs to be taken up by the client organisation, be it a National Extension Service or someone else. Doing this should not be held in abeyance until an impact study or performance evaluation. I say this because, as mentioned earlier, the uncertainty and unpredictability associated with research is as much to do with the process leading up to delivering outputs as it is with managing the assumption that the process along the impact pathway will continue once the outputs have been “delivered”. This must not be discovered too late; checking it helps mitigate the risk of rejection. Scoring well on the Quality of Science criterion does not guarantee the product or service is accepted and used by the client, remembering that it is movement along the pathway, not the QoS, that motivates those who fund CGIAR.
       

    • Dear Richard,

      Thank you for providing the link to your reflections on M&E. A telling and thought-provoking read. I especially liked, yet was surprised at, how the issues you raise persist, notably:

      # 3 On the limited consequence of research plots (on farm?) regarding the spread of practice/technology on the farmer's other plots and/or among other farmers in the community.

       - And all in the face of farming systems research with its focus on systems thinking, and Chambers' work on farmer-first dating back to the 1980s. How can we remind people associated with today's Agriculture Market Systems Programmes of these and other lessons?

      # 4 On how donors assume that land, not labour, is the limiting factor, with the unlettered indicator of choice being physical or financial returns to land - yields - without bothering to find out which smallholder farmers cultivate what, and why.

       - Your reference, later into the document, to Kipling's poem "White Man's Burden" reminded me of William Easterly's book with the same (borrowed) title. His central message is about the imposition by the West of large, grand schemes thought up by "friends of Africa" - Tony Blair's Africa Commission, Sachs' Millennium Villages and Obama's Feed the Future programme. In agriculture, unlike health and education, farmers are not patients treated by doctors or pupils taught by teachers; they are the experts.

      Last week there was an interesting EvalForward webinar on Evaluation and Climate Resilience. One thing that interested me was how little the evaluations revealed about indigenous "Climate Smart" agriculture. The term seems limited to practices being introduced to farming communities, without necessarily learning how, for example, indigenous concepts of soil-moisture dynamics could explain contrasting seasonal and inter-annual fluctuations in agricultural productivity, nutrition, health, mortality and even marriage rates across a soil-type boundary.

      #11 On how M&E is more about covering up failure and its fit with taxpayer expectations. Peter Dahler-Larsen's (mindless) Evaluation Machines are a good example of what I think you refer to here. He and Estelle Raimondo presented a great exposé of current evaluation practice at last year's European Evaluation Conference. On the taxpayer issue, there was some interesting research a few years ago that highlighted how UK taxpayers don't want numbers, but rather stories of how and why aid works, or not. The thing is, DFID is not accountable to the UK taxpayer but to the Treasury (who want numbers). Numbers, as Dahler-Larsen and Raimondo say, are one of evaluation's blind spots.

       

      Apologies for the Monday afternoon rant, and thanks again for pitching in with your writing. 

    • To you all, my thanks for sparing time to share your experiences and insights. I will be posting, based on your comments, some conclusions and tips when the discussion closes next week. 


      Meanwhile, I wanted to make some initial responses drawn from your comments.


      1. The trick to making monitoring useful is not to leave it to people who may not be natural judges of performance, whether they are employees of donor agencies or their agents: people who are fluent in developing frameworks and theories of change, use overly complicated language and are well versed in an array of methodologies insisted on by the donor. Understandably, this puts off many team members and managers. It seems boring and onerous - so much so that, for some, it is not clear that it is even a profession. Perhaps monitoring is but a contrived learning process unique to development aid?


      2. The fashion of adding more letters to the acronym M&E - L for Learning, A for Accountability, R for Results - appears to be more for affect than effect. I, like some of you, query why some consider this either revealing or helpful. It defines the fatuity in which some of us toil.


      3. It also distracts from the most important point many of you make: to listen to, and so learn from, those that matter most - the ultimate clients or beneficiaries. They are also the experts. Too often their voices and objectives are crowded out by those of donors, typically set out in log or results frameworks. Accountability to donors, not to beneficiaries, appears to be more commonplace than one would expect or hope, and is burdensome for other stakeholders.


      4. As some of you mentioned, the inevitable result is a mass of numbers and comparisons that provide little insight into performance. Some even require a suspension of disbelief given typical implementation periods. Rather, they are often used to justify the investment to donors, and may even paint a distorted picture of reality. Beating last year's numbers is not the point.


      5. Managers need to take ownership of monitoring - to find measures, qualitative as well as quantitative, that look past the current budget and previous results and ask questions: questions whose answers help determine how the programme or project can be better attuned and responsive, so as to better "land" with, or be acceptable to, clients and beneficiaries in the future.

      Many thanks again and please, if there are any further contributions or responses to the above...

      With best wishes and good weekends,


      Daniel 
       

    • Dear All,

      Many thanks for all your varied and useful responses. Informed by these, I have put together some concluding remarks. I hope you find them useful.

      The trick to making monitoring useful is to avoid leaving it in the hands of people who may be fluent in theorising, use overly complicated language and are well versed in an array of methodologies, yet may not be natural judges of performance. Understandably, this puts off many team members and managers.

      As some of you mentioned, M&E activities often throw up a mass of numbers and comparisons that provide little insight into performance. Rather, they are used for justifying the investment; and may even paint a distorted picture of the reality. Managers need to take ownership of monitoring - to find measures, qualitative as well as quantitative, that look past the current budget and previous results and ask questions, answers to which determine how the programme or project can best attract and retain clients or beneficiaries in the future.

      Five takeaways from the discussion in the form of tips:

      1. Avoiding elephant traps in design

      • Change takes time. Be realistic when defining the outcome (what changes in the behaviours and relationships among the client groups will emerge) and the impact (what long-term consequences such changes will stimulate, geographically outside the programme area and/or in the lives and livelihoods of clients).
      • For market system programmes: i) farming systems are systems too, and need an adequate diagnosis; ii) during the pilot phase, don't make premature reference to system-level change among the hierarchy of results; treat impact at that stage as being solely about farmer-level change; and iii) the crowding-in phase is, by definition, impact in a geographical or spatial sense, and rarely is it possible to observe, let alone 'measure', this within pre-ordained project time frames; see here for a 'watered down' version of how M4P (making markets work for the poor) learns from itself

       https://assets.publishing.service.gov.uk/media/5f4647bb8fa8f517da50f54a/Agriculture_Learning_Review_for_publication_Aug_2020.pdf

      • Ensure the outcome and its indicators reflect the needs and aspirations of those in need, not those of the donor - for example, do not assume all farmers aspire to increase returns to land (i.e. yield/productivity gains). Often the limiting factor is labour, not land.

       

      2. Distinguishing competencies for M from those for E

      • Clearly explain how monitoring is driven by helping managers resolve decision-making uncertainties, often found among the assumptions, through answering questions. In doing so, clearly distinguish these from the questions that evaluators - who often come from a research background - are trained to answer, typically for the donor.
      • Use the analogy of accounting (monitoring) and audit (evaluation) to help make the distinction - they are done for different reasons, at different times, and by and for different people. You can be a "best in class" evaluator by developing methods, delivering keynotes at conferences, getting published, teaching, attending an "M&E" course. Do these skills and experiences make you "best in class" at monitoring? No, not necessarily and rarely indeed. Yet it is surprising how much sway and influence evaluation and evaluators have on monitoring practice - developmental evaluation?

       

      3. Negotiating information needs with donors

      • Unambiguously define what information is needed to manage implementation by balancing the need to be accountable to the client as much as, if not more than, to the funder, and do it before developing a theory of change and/or a results framework.
      • Focus these information needs on the perceptions of client farmers, and their reception and acceptance or rejection of the project - being accountable to them will aid learning more than being accountable to funders and learning about their administrative issues; and
      • Do not limit management's and clients' information needs to indicators in a logframe and blindly develop a "measurement" or "M&E" plan. Taking this route leads to a qualitative degeneration of monitoring. Assumptions, or the unknown, often matter more for monitoring than indicators when working in unpredictable operating environments. And: "Beating last year's numbers is not the point; a performance measurement system needs to tell you whether the decisions you're making now are going to help you and those you support in the coming months".[1]


      4. Integrating the monitoring process into those of other people and processes

      • Build into the job descriptions of those who deliver the support the task of asking questions they can use to develop relationships with, and better learn from, clients - see 3a) above;
      • Use Activity Based Costing as a way to encourage financial specialists to work with those responsible for delivering the outputs - this helps cost the activities and so links financial and non-financial monitoring (it will also help you answer value-for-money questions, if required).
      • Good management decision-making is about making choices, and monitoring information needs to inform these. A decision to stop doing something, or to do something differently, should be analysed as closely as a decision to do something completely new.


      5. Being inquisitive while keeping it simple

      • Ignore the pronouncements of rigid methodological dogmas or standards. As some of you mentioned, there is a lot of really useful material out there, old and new. Take the risk of thinking for yourself….
      • Keep it simple, and avoid making it more complicated by keeping up with fads and jargon that isolate monitoring through impenetrable language.

       

      “If you can't explain it simply, you don't understand it well enough.” 

  • Monitoring and evaluation (M&E), monitoring, evaluation, accountability and learning (MEAL), monitoring, evaluation and learning (MEL), monitoring, evaluation, reporting and learning (MERL), monitoring and results management (MRM), or whatever you choose to call it (or them?), should help us learn from experience. Sadly, this is not always the case.

    There is an apparent irony in the fact that systems supposedly designed to help us learn from experience have been so reluctant to learn from their own experience. In my view, this is in large part due to the isolation of M&E within programmes and projects, to working in silos and collecting

  • What type of evaluator are you?

    Discussion
  • How to define and identify lessons learned?

    Discussion
    • Dear Emilia,

      First, we cannot always assume that those who claim to be learning organisations are necessarily so. I have learned that, very often, the most conceited and intolerant are the ones who congratulate themselves on their capacity to learn and their tolerance of other views.

      My crude answer is that putting lessons to work is about strategies associated with incentives to do so: the organisation should be accountable not only for the quality of evaluand objectives and their achievement, but also for their adjustment as operating circumstances change; that is, accountability extends to accountability to learn.

      My understanding of current practice, in relation to evaluations, for ensuring lessons learnt are heeded and put into practice is typically that:

      a) the lessons learnt inform or are aligned with the recommendations - their consequences - lest they be missed altogether; 

      b) the recommendations are reflected in the "management" response; and

      c) management actually implements them.

      That's the theory and it defines much practice, yet a lot of this depends on who holds management to account in following through - to what extent are they accountable to learn? 

      Thanks again and best of luck moving forward with this.

       

      Daniel 

    • Dear Emilia,

      Hi and many thanks for such a useful post, and great to see how it has provoked so many varied and interesting responses from other community members.

      While I do not have any resources or textbook answers in mind, my experience has taught me three things:

      1. Crudely put - I apologise - there are two types of lessons, each with its own question that a well-phrased lesson needs to answer: what went well, for whom and how; and what did not go quite so well, for whom and why? An adequate balance is not always struck between the two, perhaps due to the power dynamics between those that fund, those that do, and those intended to benefit from development aid.

      2. Implied from this: be clear about, and search for, who has learnt what from whom, why this is important and what the consequence is. Of course, providing discretion and opportunity to learn from those that matter most - the intended clients - is important, yet it is also the responsibility of senior managers, who often know little about the practical consequences of their decisions on the ground, so to say, to do the same for those who deliver the support. Their silence often stifles learning among them, and so, too, the programme's or organisation's capacity to adapt. (And it's an obvious point, yet worth mentioning: evaluation also needs to generate lessons on the performance of those that fund. This is politically a tricky and messy ask, as they commission evaluations and fund what is being evaluated. The main point holds, however: they seldom make themselves available to be held to account by those that matter most; rather, they answer to their respective treasury or finance ministries.) Ho hum!

      3. It is doing this - listening to those on the ground, with an emphasis on the assumptions rather than the indicators - that generates the most revealing lessons. In other words, exploring the unknowns. Not doing so hampers success; it also encourages failure.

      I've shot my bolt, yet hope some of the above is helpful.  

      Best wishes and thanks again,

      Daniel  

    • Dear Eriasafu,

      Many thanks for the post, and good to be in touch on the subject of monitoring, much neglected and given short shrift by the evaluation community.

      I like your observation on how the time spent complying with demands to collect data all the way to the top of the results framework or theory of change, often missing out the assumptions along the way, crowds out time for reflection and learning. I believe such reflection comes from revealing the unknown by listening to and learning from those in need - excluded and underserved communities - not from measuring on behalf of those in charge.

      So, how to resolve the issue you raise as to how "MEL/MEAL systems are limited to compliance, outcomes and impact, and rarely include cross cutting issues such as gender and leave-no-one behind principles."

      It strikes me as ironic how, given monitoring is all about learning, it itself shows a limited capacity to learn from its past. The pursuits of measuring outcomes and impact are not so much limiting as they are misguided. Even if you had more time, outcome and impact indicators generate limited value for learning purposes. Admittedly, learning is easier said than done in comparison to measuring the indicators laid out in some needy theory of change or logic model. Indicators do what they are supposed to do: they measure things that happened, or not, in the past. They don’t tell you what to do. Monitoring does, and it should not entertain using rigorous - as a statistician would define the term - methods geared to academic concerns and the obsessive pursuit of measuring and attributing intervention effects.

      Monitoring has different requirements, as highlighted above; that is, if it is to help managers resolve their decision uncertainties. Your claim ignores the hegemony of mainly transient, academically inclined western evaluators, and of those in the monitoring and results measurement community, addicted to single narratives and rigid methodological dogmas. Monitoring needs to free itself from these mechanistic approaches; and managers need to step up, afford primacy to the voices and needs of indigenous communities, and take ownership to ensure monitoring generates insights for decision-making that benefit those who legitimise development and humanitarian aid, not just measure the predicted results defined by those who fund it.

      Of course, including gender and ensuring no-one gets left behind is important. However, and without sounding glib, doing this means management not getting left behind by, for example:

      • Pointing out that exploring assumptions matters as much as, if not more than, measuring indicators, and that the ‘system’ needs to be driven by questions defined by those who are its primary users - and they do not include external evaluators;
      • Highlighting how, although numbers are important, they are arguably not as important as learning how, for example, the numbers of men and women or boys and girls came to be, and how, and how well, they interact together.

       

      Thanks again, and I hope the above helps,

      Daniel

    • Dear Ana,

      Many thanks for responding, for sharing John’s 10 questions and his email address. 
       

      I wonder: has anyone else come across them or questions similar to them? And, if so, have you been asked them? If not, have you asked them of yourself in designing an evaluation? 
       

      It seems to me that responses to them could usefully inform the design of an evaluation and/or help teams adequately prepare, rather than waiting for community members to ask them on ‘arrival’, so to say.

       

      Does not doing so run the risk of potentially de-railing the process and wasting community members’ time?
       

      What do you or others think?  

      Many thanks again Ana and will connect with John to find out more.

      With best wishes,

      Daniel 


       

       

    • Dear Pedronel,

      Hi and thank you for responding. I completely agree that evaluations and evaluators are challenged in the way you describe. Failing to overcome these challenges risks excluding more diverse streams of knowledge, and local ways of making change can be especially hampered by a fixation on a pre-ordained finishing line rather than flowing with a generative process at the speed of seasons.

      What are the challenges you mention in relation to learning about and prioritising indigenous knowledge, and how do you think these can be overcome?

      Best wishes and thank you again,

      Daniel 

    • Dear all,

      My thanks to all of you who spared time to contribute to the discussion. I hope you found it interesting to read about the insights and experiences of others. The discussion will now be closed but, given the number of rich and varied responses, EvalForward, in collaboration with EvalIndigenous, has decided to set up a webinar on Monday 24th October at 14.00 (Rome time). On their behalf, I would greatly appreciate it if, capacity permitting, you could participate and invite others in your own networks along as well.

      John Ndjovu will make a presentation to provoke what we hope will be an exciting opportunity to share and learn more about this extremely important issue.

      With thanks in advance, and thanks again for contributing. We look forward to seeing you all there, so to say!

      Daniel

    • Thank you to all who have contributed to the discussion. Many of you point to the importance of culturally appropriate behaviours and give compelling reasons for them. Some provide telling examples from western culture and of how some of its institutions remain stuck despite being aware of the consequences of needing to change. However, too few reveal specific instances of how, either as commissioners or evaluators, they have sought to be culturally appropriate, or have not, and with what consequence.

      Therefore, we would welcome any ‘personal’ experiences that respond more explicitly to the question: what lessons or experiences – successes, challenges, failures - have you had in trying to ensure evaluations adequately prioritise indigenous knowledge, values and practices?

      Many thanks. 

    • Dear Olivier, you are right: it's not universal, yet it is commonplace among many donors for evaluators and evaluations to be driven by the pursuit of being solely accountable to those who commission them, affording privilege to their administrative requirements and corporate objectives, not those of people in need. This is what Bob Picciotto refers to as mindless evaluation machines, and quite rightly so.

       

      Best wishes from a strange land - I live in the UK - and I hope this finds you very well.

    • Interesting analysis of the indiscriminate application of, and diminishing returns to, the practice of late through its "performative" use.

      Reference to how "....sometimes, agencies can reduce reputational risk and draw legitimacy from having an evaluation system rather than from using it" reminds me of the analogy the famous classicist and poet AE Housman made in 1903:

      "...gentlemen who use manuscripts as drunkards use lamp-posts,—not to light them on their way but to dissimulate their instability.”

      or, in plain English relating to the subject: people use evaluation as a drunk uses a lamppost - for support rather than illumination.

    • Dear Anna Maria,

      Hi and I think you're on the right lines - develop some questions with which to frame conversations with people who have been involved in implementation. I would also add those who were involved in developing the ToC (who may be different folk).

      My own experience in reviewing ToC follows three broad themes each with an overarching question:

      • Inclusiveness of the approach/method - who was involved, how well, and whose theory is it? For example, was it donor advisors consulting with beneficiaries/clients, or just donor staff doing their own research, toing and froing around different technical areas, with the product then signed off by an internal QA unit, or...?
      • Robustness of the evidence - on what basis were the assumptions developed and the results - outputs, outcome and impact - arrived at, i.e. the pathways to change and the changes themselves?; and
      • Coherence and plausibility of the product - is the diagram/visual accompanied by a clear narrative explaining HOW the Action Theory (i.e. activities and outputs) will stimulate WHAT change among WHOM and WHY (i.e. the Change Theory)?

      The look of the product will also vary and, in this regard, Silva makes a good point, though I wouldn't profess to have the software skills to produce the second diagram, nor the intellectual capacity to understand it!!! The action and change theories rarely follow a linear trajectory, but there is no right or wrong. A key difference is in how the product makes clear the consequences for the monitoring and learning process. If it's building a bridge, then you simply engineer a set process from beginning to end and monitor this accordingly. However, if it's to do with programme outputs striving to stimulate changes in the behaviours and relationships among people - or outcomes - then this has obvious implications for monitoring: the assumptions made about how and why people will respond to outputs matter as much as, if not more than, the outcome indicators.

      Depending on who funds the work you are doing, each donor has a slightly different take on/guidelines for ToC (and LFs). I developed some for reviewing the content of logframes and have attached them. I hope they are of some help.

      On logframes... The methodology used for developing what people call a ToC is not so different from how some, like GTZ (now GIZ), develop logframes. See here. I think this is the best method I have seen, and thus strongly recommend it as a reference for assessing the quality of the process and, in many ways, the product. Its essence is well captured by Harriet Maria's neat video. Claims that ToCs take better account of complexity than LFs, with their emphasis on assumptions and better explanation of the whys and hows of change, ring somewhat hollow.

      As with many methods and tools, there is nothing I believe to be intrinsic to LFs that encouraged many donor agencies to either ignore or misuse the method and arrive at a product that is too simplistic and deemed not fit for purpose. Given that they did, it didn't surprise me that they moved to ToC...!

      I hope some of this helps and good luck. Please do get back to me if you want to talk anything through.

      Best wishes,

      Daniel