Isha Miranda

Visiting Lecturer and Independent Evaluator
Independent Consultant
Sri Lanka

I introduce myself as an independent evaluator who has worked for various organizations for nearly 20 years, presently appointed to the Board of Directors and as CEO of the Agromart Outreach Foundation (established 1989). I am well experienced in fully fledged NGO and programme management and evaluation, with strong management and monitoring skills, and I also work as a visiting lecturer, trainer and facilitator in the INGO, government, private sector, development and humanitarian fields. I have been trained and have worked as a team leader for many years, often selected for the most difficult situations; I embrace complexity, maintain stability and control, and instinctively resolve problems rapidly, sometimes before others fully understand a problem's significance. As an effective trainer, I bring a great deal of experience to the table and am willing to share knowledge. My notable areas of expertise are gender and women's empowerment, SDGs and CSOs, public-private partnerships and governance enhancement.

Member of Associations and Professional Bodies:

    Sri Lanka: Current

  • Member, SDGs People's Platform Volunteer People's Review - Sri Lanka
  • Visiting lecturer, trainer and facilitator: Sri Lanka Institute of Administration and Development (Government Civil Service)
  • Visiting lecturer, University of Kotalawala Defence Academy of Sri Lanka - postgraduate studies
  • Member, Government National Evaluation Policy Development Technical Committee - DPMM of the Ministry of Finance of Sri Lanka
  • Member, National SDGs VNR Review Committee - Ministry of Sustainable Development of Sri Lanka
  • Member of the Board - Sri Lanka Evaluation Association (SLEvA)
  • National Committee Member - Sri Lanka Micro Finance Forum
  • Member of the Board - Agromart Outreach Foundation
  • Member - Gender and Women Empowerment Sri Lankan Major Group - SDGs 5 and 10
  • Adviser on gender and women's empowerment - NGO Consortium of CHA


  • Member, World Forum on SDGs 16, 5 and 10
  • Former Board Secretary - Asia Pacific Evaluation Association (APEA)
  • Member - American Evaluation Association (AEA)
  • Member - Community of Evaluators (CoE - South Asia)
  • Member - Technical Advisory Evaluation Committee, UNFPA - New York

EvalPartners: Member of the EvalPartners working sub-group EvalSDGs

My contributions

    • There are many positive outcomes of utilization of visual tools

      The most crucial elements are: 

      1. Visualisation of data, in terms of qualitative data gathered through the data collection procedures. 
      2. Evidence-based visuals to present findings. In terms of concrete activity and output, as well as consequence and impact. For example, the impact of a pandemic or specific disease control in agricultural and health sectors, or peace building efforts.  
      3. Diagrams, plagiarism detection, feedback and peer assessment, surveys (TV, Social Media, and Key Informant interviews recording), and classroom polls in terms of education.
      4. Visual reports of results, both positive and negative. For example, most JICA (Japan International Cooperation Agency) reports are combined with pictorials. 
      5. They also provide users with flexibility and diversity by letting them select from a wide range of graphic elements, formats and styles and adapt them to their tastes and goals.


      Challenges: online surveys need data accountability and identification of both the surveyor and the respondent.

    • Dear Mallika

      I am so proud that you brought this topic up openly. We as evaluators have always upheld neutrality and impartiality. But many organizations such as the UN, WB, ADB, etc. (as Abubakar says, "Most often evaluators are selected due to some connections and, when selected, they hope to be selected again in the future") get into a comfort zone with certain sets of evaluators or evaluation companies over and over. 

      I have reviewed many evaluation reports and found a deterioration of professionalism in evaluation reporting, because they are: a) very biased, b) lacking synergies between findings, recommendations and conclusions, c) large reports of nearly 100 pages without productive analysis, with poor data collection and a lack of professionalism in data collection and questionnaires. These are a few of my observations.

      I think the time has come to raise a voice on this and to push for evaluation professionalism, given the priorities above, in order to save this profession. 

    • The issues are as follows: 

      Lack of horizontal assessment: the most common problem in TORs is a lack of horizontal assessment. There are various gaps in the evaluation process that could be filled but, instead, TORs often request a vertical rather than horizontal focus on the issue.

      We, as evaluators, are obliged to execute the TORs duly and this causes:

      - Lack of oversight: we should be able to use to our advantage the important external elements that may have influenced the intervention.

      - Gap in observation tools, such as behavioral tools for stakeholders

      - Gap in focus group discussions: i.e., lack of time and preparation

      - Gaps in external factors to the programme.

      There is also often inadequate understanding of the community, sites or intervention by the evaluation team: most assessors come to the issue with preconceived notions based on their previous experiences. This is one of the most serious errors we make. I strongly think that comparable programmes may face different obstacles, methodologies, beneficiary profiles and behaviours, and that time variables should be considered. We need to think about new difficulties. Do not use the same team over and over again.

      Here are some of the social and environmental aspects that should be included in evaluation:

      Livelihoods need to be assessed based on the subject and local context, e.g. crop production: the evaluation needs to look into collaboration and cohesion with government or any other organization; capacity (e.g. government logistics and services to the community: health facilities, education, agriculture centres for advocacy, product collection, beneficiaries' capacity, collaborative capacity development); and new knowledge on climate change, hazard mitigation and government subsidies (advice, fertilizers, seeds, technology, etc.).

      Contribution of stakeholders: assessing community knowledge and actions related to their livelihoods; gaps in consistency, technical knowledge, logistics (localized and new crop development technologies), product knowledge, market information, price variables and market middlemen's contributions. In addition, there are local political interferences and impacts.

      Stakeholders (beneficiaries' behaviours) - consistency in production fields: health aspects (wellbeing risks, health deterioration due to epidemics and infectious illnesses, family and external abuses), education levels (formal, informal and subjective), family nutritional level (adults and children).

      Financial management - poverty reduction: traps, wins and losses. Women have greater access to microfinance within and between families; credit should be targeted at low-income households, particularly women.


    • I agree with John and Silva's earlier comments. 

      Evaluators' responsibility is to give recommendations, not solutions. But recommendations will help shape solutions. 

      What is missing in evaluation practice is:  

      a. Most recommendations are unrealistic and not achievable. 

      b. It is time for evaluators to cultivate a vision for the future. 

      c. Findings and recommendations should be programme-wide (holistic).

      d. Evaluators should design their own evaluation indicators and this should be included in TORs. 


    • Dear Mauro, 

      I trust this mail finds you well. Certainly, as Dorothy says, evaluation has a component of presenting the findings and feedback before the final report.  

      For me, that is the best part of the evaluation exercise. In some cases, I have requested organizations to provide good representation during the presentation of the findings and feedback, including different levels such as sub-national, central and sometimes field officials. 

      The benefits of presenting evaluation findings are that you are able to engage in data verification, gather further qualitative information, and overcome the misunderstanding that evaluation is a fault-finding exercise.

      You can also have a pre-discussion with the stakeholders on the evaluation process. 


    • Dear all,

      I agree with Silva, evaluation reporting is very bureaucratic. Even if people don't read it, a big report still signals "work well done" to everyone. 

      But very few will read it all, maybe not at all; they may only read the Executive Summary page. 

      The requirements for evaluation reports should come with the TORs; sadly, however, TORs are often very uninformative and lack innovation or vision. Mostly, I should say, they are cut and paste. 

      Evaluation hasn't changed much on the above aspects. 

      Best regards 


    • Dear Jean, Gordon and others,

      Thanks for such a good topic for discussion. Sadly, you are correct in raising these issues.
      Many times I have seen evaluation reports that are bulky, with too many things to read, and with a big gap between findings and recommendations.

      Writing too many pages will bore readers, who end up reading things they already knew or skimming to get to the point.

      My recommendation is to keep it simple:

      1. make an executive summary of fewer than 4 pages (written on both sides), highlighting findings, conclusions and recommendations based on findings;

      2. make a summary of fewer than 10 pages, with more tables and diagrams and findings in bullet points;

      3. the full report should take no more than 50 pages.

      Best regards

    • With Covid-19, I see national consultants in my country more involved in evaluation. Here are some responses to the questions proposed:

      1. How is this shift in responsibility being managed? What institutions are involved – government, universities, NGOs, private consultancy companies or individuals? 

      In my context, I see three entities mostly involved in evaluation: private consultancies (mostly), universities and NGOs. Universities are very theoretical and look for details which they are unable to find on the ground level; they usually write long and challenging reports. 

      2. How far is this responsibility being taken? Is it still confined to data collection and analysis, or does it include greater responsibility in the management of the evaluation ?

      I see both. Some evaluation companies are in charge of the complete organization, coordination and collaboration of the evaluation process on behalf of the evaluation unit. Others do only data collection and analysis. 

      Both are designated as National Consultants or national evaluation consultants. 

      3. How is this work being financed?

      I think donor contributions still prevail, and most programme budgets embed the evaluation budget.

      I believe that due to Covid-19 there is not much cost impact, given the lower involvement of international consultancies. In my case, I am mostly hired on a daily rate or at a flat fee with per diem.

      4. What are the difficulties met? 

      On training: data collectors need to be trained for the specific task, even when they have been trained before.

      On reporting: it is usually the national consultant's task. It is very challenging, and it is where experience comes into play. If you have experience working on country programme evaluations, you get things right. Mostly you need to identify the process, select the documents and identify the stakeholders. Questions and methods in most cases need to be redesigned to adapt to the local context. 

      Challenges faced by evaluators due to Covid-19 also include:

      • more desk review and Zoom interviews based on pre-set questions. However, both parties should have a good understanding of the task and programme in detail. 
      • interview meetings: many (government sectors and grassroots) do not feel comfortable with Zoom interviews. 
      • reporting: unless you can get your template well organized, it will be a bit challenging. 
  • Racism in the field of evaluation

    • Dorothy : So my real question is .... are evaluations making a difference or not? If not, how does that happen to greater effect?

      Answer: If the evaluations focus on the "why?" factor and the "how?" factor regarding youth, instead of agriculture only, the following needs emphasis in the findings and recommendations:

      1. Addressing what? Fewer negatives, more positives.

         Gaps and lessons learned - focus on the opportunities within human development and technology.

         Opportunities - focus on the next generation as well as the transformation from traditional farming to entrepreneurship - horizontal analysis.

      2. The evaluation process itself can be used to attract youth to get involved in "evaluation", where, through participation, young people can visibly see the evidence produced, leading to behaviour change regarding agriculture as an occupation among youth. (Messengers)

      3. Recommendations: achievable and appealing

        • Clearly clarify the sustainability of agriculture, short term and long term  
        • From traditional industry to technical transformation in agriculture
        • Answers to the risks and assumptions from youth perspectives
        • Social acceptance by society of farming as a profession
        • Linkages with other professions which can enhance, impact and sustain the agriculture industry   
        • Recommendations on professionalism in capacity development
        • Taboos and traditionalism in land ownership - especially in South Asian culture - gender discrimination issues. 



    • There is no concept called "youth agriculture evaluation". I believe evaluation does not have barriers; the profession comes as a complete package, including technical experts.  

      Overall, youth in evaluation are very limited around the world. Most young people have not been exposed to the profession because many veterans still dominate it, either ex-members of UN organizations or retirees of international organizations. Organizations, too, prefer the comfort zone of hiring the same professionals.

      I.e., youth programme evaluations are conducted by senior, ageing, retired professionals who are unable to understand the minds of youth. 

    • Dear Dorothy

      On engaging younger people in agriculture, I would like to share this article “Why are our youth not interested in agriculture?” by David Felix



    • Dear All, 

      I am sharing the Sri Lankan experience as a contribution to this debate. I am currently participating in the development of the Policy framework of M&E and one of the guiding principles we are using is the SDG concept of "Leaving no one behind".

      What are the most common mistakes that you face in your country?

      1. Common mistakes are linked to the fact that politicians make election promises that are not achievable and deliver election-winning economic analyses.
      2. The biggest issue is that the country holds on to outdated policies and laws, which remain unattended. For instance, it is common for countries that have been through a colonial era to still use policies and laws drafted in that era in government economic and social governance, even though they are ineffective and irrelevant.
      3. Too many governing structures: national, provincial and local authorities govern in parallel but have different underlying political agendas, which can be seen in the mishandling of some policies, misinterpretation of national priorities, mismanagement of national interventions and resources, etc.
      4. Politicization of governance structures and bureaucracy: everybody wants to remain in power and in position, which paralyzes governance.
      5. Corruption at all levels, with hidden corruption even more dangerous than the visible kind.

      Here are my suggestions to improve the use of evaluation in policies and programmes:

      1. Key to any country's economic planning is setting up a separate, independent unit for monitoring and evaluation under an Act or policy of government constitution/legislation, to safeguard taxpayers' money and the accountability and transparency of government programmes and projects/interventions. This unit can be established under either the planning ministry or the Ministry of Finance but needs to maintain independence.
      2. The National Auditor General should focus on the performance audit function, examining the operations or the management systems and procedures of government entities to assess whether each entity is achieving economy, efficiency and effectiveness in the welfare and employment of available resources. This is a qualitative method.
      3. The Ministry of Policy Planning or another responsible entity should undertake in-depth analysis of all policies and trade agreements, government circulars and amendments with policy activities, to identify the gaps and lessons learned.
      4. Develop a national M&E policy and policy framework (the Sri Lankan government is in the process of developing its framework). The M&E policy can be mandatory. Set up an expert and technical committee as a national M&E reviews and assessments committee. This will be the backbone of the planning and finance entities. The committee responds to the Ministry of Finance and the Ministry of Policy Planning and Economic Reform.
      5. Promote joint evaluation with other funding and donor agencies and public participation in order to strengthen ownership.
  • What can we do to improve food security data?

    • Dear Emile

      Answering your questions is highly complex in terms of collecting data and collecting quality data.

      See below my answers, which I hope will help somewhat, or perhaps we can come together to improve data collection.

      • How can we monitor and evaluate efficiently progress towards the SDG2 – End Hunger if we cannot count on reliable data and consequent statistics and indicators?

      All SDGs are complex, cannot be measured through direct data, and most data vary from community to community.

      • Do you think there are also weaknesses and challenges in data collection in your country?

      Yes, we have the same issues. Departments of census and statistics are not able to identify data collection methodologies for the SDGs. In practice, it has been a nightmare for them.

      Another issue is that we could gather a lot of quality data through CSOs, but those data are not shared at the national level or within organizations. Therefore, we are losing quality, useful data.

      So how do you address these issues?

      Then it comes to "Big Data": how accurate and viable is it when you know that we are missing some data?


    • Dear All,

      This is an interesting discussion, please see below my comments.
      Evaluation is not just about doing and writing a report. It also includes competencies in effectively designing, managing, implementing and using M&E. It includes strengthening a culture of valuing evidence, valuing questioning, and valuing evaluative thinking.  In terms of "Evaluative Thinking" it is not just looking at the programme, looking at the data analysis and giving the conclusions and recommendations, but also forward thinking beyond the programme. Looking for an unexpected theory of change rather than the expected one, it is visionary thinking, horizontal broad approaches. These are not capacities that can be learned by in training classes or in workshops only, but also need to develop a curriculum in the field. Joint and participatory evaluations are one of the methods to gain these experiences.


    • Dear Luisa and Lavinia,

      Thanks for sharing your work and thinking on Evaluation of Capacity Development and the framework you are working on.

      Unfortunately, many of these frameworks are useless if we do not develop the capacities of evaluators in the first place. Most frameworks are uninformative and lack room for subjective and suitable adjustments.

      For instance, you ask about participatory evaluations. Most TORs are participatory in general (I think it is always cut and paste) and indicate methods that are generic and do not live up to evaluation expectations. In agriculture, very often evaluators are either agriculture specialists or researchers; they are not evaluators and are not able to capture the specific needs of farmers, which differ from one farmer to another. Often they use blanket, commonly known questions.

      Participatory approaches are certainly a way to carry out a meaningful evaluation and to develop the capacities of evaluands and beneficiaries, but to get there we have to make sure participation is effective and not tokenistic. 

      Here some recommendations:

      1. We need to develop capacities of evaluators on the use of participatory approaches so that they are able to understand farmers’ points of view.
      2. Develop a tool / guideline for participants on participatory sessions.
      3. Develop soft skills as well as on-the-job evaluation skills for young and emerging evaluators.
      4. Develop capacities of trainers and facilitators on how to address marginalized people and indigenous communities in line with the focus on equity and gender.
      5. Guidelines: competencies for selecting evaluation organizations and external (individual) evaluators. 

      Isha Miranda

      Independent consultant and trainer, Sri Lanka

    • Dearest Natalia,

      I am not surprised either and I have similar experiences with many organizations.

      This is the one of the reasons why I decided to lobby for “freedom of speech for evaluators”. Let me explain what I mean.

      First, we cannot be independent while bound by ToR instructions; this is not how evaluation should be conducted. Evaluation is a lesson-learning, gap-finding mission: to eliminate obstacles, to become a visionary leader who sees things early and logically, and to respond, share with the rest of the stakeholders, and guide them to take things forward.

      I have done some fact-findings on this subject:

      1. In many organizations, the M&E entity or unit lacks evaluation knowledge at the field or ground level. They are learning the art from consultants during the evaluation period.
      2. Most evaluators are bound to work as per the "Handbook" given by the organization and are unable to attend to any changes if required. Very uninformative.
      3. Most handbooks are more similar to a curriculum for evaluation studies than to "guidelines for evaluation".
      4. Very few "Handbooks" are evaluation oriented; instead they focus on research approaches. Even the Kellogg Foundation evaluation handbook does not differentiate between the researcher and the evaluator (see page 7).
      5. In some cases, the definition of evaluation is questionable in both documents, handbook and TORs; i.e., the TOR and the handbook are not compatible. The handbooks give guidance on major programme evaluations or end/post evaluations but lack guidance and examples on field and ground challenges.
      6. Very few give templates for TORs (Terms of Reference).
      7. Many ToRs are cut and paste.
      8. Most manuals include standards for all evaluations, including the questionnaires as well as the target groups, and instructions on methods of conducting the interviews and on the selection of the target groups identified by the programmes. Most of the time they are very biased.
      9. No questions are suggested to address the indirect outcomes or impact of the programme, only direct questions with expectable answers.

      Overall, I have sometimes asked organizations "why do you hire me? You can do it yourself", when they dictate everything about the methodology for conducting the evaluation.



      Independent evaluator

      Sri Lanka

    • Dear Patricia,

      see below my comments on your questions 1 and 2.

      1.      Have you ever been involved in the evaluation of social protection programs? What is the approach to assessing such programs?

      Yes, nationally and sub-nationally as well as internationally. At the national level in my country, the Government has created a program called "Samurdhi", in English "Prosperity". It is a social protection program for citizens living well below the poverty line. The vision is "To make a Poverty Free Empowered and Prosperous Sri Lanka by 2030”. The mission is “Contributing to economic development through the building up of a poverty free prosperous country by empowering disadvantaged people (economically, socially, politically, physically, psychologically, legally and environmentally) and minimizing regional disparity through delivering effective, efficient, speedy and productive solutions in a people-friendly manner through the satisfactory contribution of the network of Departmental, Community Based Organizations and Micro-Finance Institutions and professionals with the collaboration of the private, public, people and political sectors and local and global agencies“. 

      At the sub-national level, microfinance institutions, NGOs and other government sub-national entities have taken over most of the targets, from economic to social achievements. Therefore, most programs circle around this objective. However, gaps are showing in this program due to the various social and political factors embedded in the systems.     

      Initially, the program adopted the longstanding welfare approach, using both monetary and non-monetary approaches.

      Then it expanded to address the multidimensional aspects of poverty such as economy (consumption and assets), human development (education, health, safe sanitation, safe drinking water, electricity), socio-cultural dimension (dignity and network), political dimensions (power and voice) and protective aspects (conflict, natural disasters, risk of eviction).

      For more information:

      2.      What are the key elements that any expert would be looking for in social protection activities/programs?

      The evaluation questions should address both the economic and social aspects of the social protection programme, to assess its contribution to community development in rural areas: 

      • Is the programme sustainable, contributing to a stable community rather than creating dependency?

      • Are the programme elements linked with national priorities in terms of livelihood development? E.g., are there links with agriculture/non-agriculture sectors, government agri-based/non-agriculture trade subsidies/welfare programmes, and activities relating to alternatives and the development of non-traditional agri-based products?  

      • Is the programme considering land management, agricultural land distribution and harvesting techniques?

      • Are there activities supporting livelihoods, such as market development activities relating to the local area, markets expandable beyond the local area, usable technology and the introduction of new methods, and product development?

      • Financial inclusion: control over income, expenditure management, loan management.

      • Family management and prevention of addiction: alcoholism, drugs (locally and informally produced substances), local gambling, and abuse and harassment of women and children.   


    • Dear Abubakar Muhammad Moki,

      I would like to share the Sri Lankan experience. In Sri Lanka, the evaluation policy reached legislation after 16 years and was launched on 19 September 2018. The policy framework is now under development. I have been through this process all these years and am now a member of the National Policy Framework Technical Committee.

      First and foremost, it is important to create a culture of evaluation and key activities can fall under three broad areas:  

      1. Enabling environment,
      2. Institutional Capacity,
      3. Individual capacity, in the country at all levels, from national to sub-national government.

      If the country does not have a policy on evaluation, it is necessary to develop one in order to create a legally binding "must" culture. However, policy alone is not enough to create a culture, which needs the following activities:

      1. Enabling environment, creating a demand before the supply
      • Do not wait for the whole government; you can proceed organization by organization.
      • If the government has a central agency that already does monitoring, you could expand or transform its mandate to monitoring and evaluation.
      • Set up a national policy combined with sub-national ones.
      • For all internationally donor-funded interventions (INGOs, finance organizations, etc.), at the time of contracting the government should/could ask for a joint evaluation activity (i.e., the evaluation team will comprise government and funding organization members). It is very much a lesson-learning exercise that will also help create ownership.
      • Set up "must" activities and embed evaluation into them, with inclusiveness, in every government programme/intervention/project, nationally or sub-nationally.
      • Lobby the legislature, whether upper house/lower house/senate, on the evaluation policy through members who are champions of good governance. Even a few members can have a large impact.
      • Create more awareness by utilizing your VOPEs and other professional associations.

      2. Institutional capacity: policies and frameworks take a long time to materialize. Therefore, creating awareness among government institutions of what evaluation is and how it will benefit their work and knowledge can make a tremendous impact. Take this message forward and promote the upcoming policy and framework even though they are not yet ready.

      Target institutional capacity for:

      • Government and sub-national institutes
      • Academia - setting up workshops and, where possible, certificate or diploma programmes taught by practitioners in the field of evaluation
      • Institutes of project management, government administration training institutes, universities, etc.
      • Private sector institutes, under project management, etc.

      3. Individual capacity: this is important to create the culture.

      However, it is better that the above centres enrol all members of the working world, so that learning and implementing evaluation becomes a "must" part of their job.


      • Resources and practitioners in the country also need international support in order to upgrade their skills and methods; evaluation is not a uniform effort, it changes all the time in its methods, approaches, norms and standards, etc.
      • Make it important and vital, and consider how to create a culture of demand for evaluation.
      • Evaluation is very costly, so show the authorities that it is value for money.
      • Raise awareness among all audiences in language they understand, at legislative, individual and institutional levels.

      If you need any clarification or other assistance, please ask me; I am happy to help.

  • Challenges of evaluation

    • Dear Hynda,

      Very true. Most people think that evaluation is an assessment aimed at finding faults, rather than the other way around. 

      I think that the evaluation community does not consider enough the enabling environment for evaluation but focuses too much on conducting the evaluation based on TORs. 

      I take this a step further: before the assignment, I conduct basic awareness raising on evaluation for the contracting organization and its stakeholders, which makes things easier.

  • The issues facing global agriculture

    • Dear Bintou,

      Many thanks for the response. This is what I like about EvalForward, it gives us an open platform to talk about many things.

      I guess that I only somewhat agree with what you say, because my thinking is that the future evaluations should take a broader approach than what we normally do.

      I would like to quote my friend Zenda Ofir on the DAC criteria, as we both honestly believe that the DAC criteria need a facelift; we argue that the time has come to replace old thinking with new. 

      Zenda says "yes, we can have ‘top-down’, ‘bottom-up’ or ‘up-and-down’ interventions and strategies. But we need to be much more aware of the realities within which we should aim to make a difference".

      Also she says that "We are working ourselves into a technocratic, simplistic notion of development, humanitarian work and evaluation that makes us increasingly irrelevant for that which matters now.

      Yes, it is in part the result of the political economy in which we work. But we are not that powerless to change key aspects of our work. It is a matter of will and conviction".

      Best regards



    • Dear EVAL-ForwARD members,

      I was delighted to see a lively debate arising from my discussion question! 

      To my query about the need to address the burning issues I mentioned when evaluating agriculture-based interventions, the majority of your responses drew attention to the scope of evaluations, which are bound by theories of change. 

      I continue to think that the commissioners of evaluations must take these concerns seriously, whether in policy-level evaluations or in any agriculture-based programme evaluation in their future assessments. I strongly believe that as evaluators we should find a way to incorporate these burning issues into evaluation TORs, for the benefit of programme implementers and entities, so that they come to think more about productive and sustainable ways to lead their interventions in future.

      I thank you all for your insightful contributions.

      Isha Wedasinghe Miranda

      Sri Lanka