Challenges of evaluation

Good evening everyone,

I work at the Algerian Ministry of Finance and recently joined this Community of Practice.

I would be particularly interested in opening a debate on the different constraints / limits that you encounter during the different evaluations of programs and interventions on rural development, agriculture and food security.

How do you get around them? And which are the ones you cannot get around?

Thank you all.


Hynda Krachni

Ministère de Finances



  • Thanks, dear Naser, for bringing this issue again to the forefront.

    We should not stop 'hammering' home that evaluation cannot and should not be disconnected from monitoring, and we should do all we can to connect them from the start, at the moment a developmental action is formulated, be it a project, a programme, or a policy.

    It is a fact - and nobody can deny that - that most of the time developmental actions are:

    • lacking a clear theory of change, and hence a thorough and sound results framework; and,
    • not founded upon a robust M&E system which will systematize monitoring from the start and prepare the ground for evaluation.

    But why is this still happening after eighteen years of the MDG endeavour?

    Because of weak or insufficient M&E capacities within national systems in almost all developing countries, but also a 'striking' reluctance and lack of political will to adopt a national M&E framework for national development. Again, this fear of M&E as a control and audit system is in the air...

    Besides, whenever international organizations plead the need to build national capacities on this issue, the stress and focus are put on evaluation, and very little consideration is allotted to monitoring.

    And again, I would claim that monitoring and evaluation - and not monitoring or evaluation - are the two 'legs' on which a developmental action must stand if it is to achieve its expected results; choosing one or the other would simply mean that our development action - like a person standing on one leg - will certainly fall short of its expected results.

    That's what I wanted to say as a rejoinder to Naser's contribution...


  • Hello dear friends,

    I am really happy: the latest contributions from Mustapha, Nasser and Raoudha are quite relevant. Thank you for sharing them with us.

    Compliance with the principles of evaluation is crucial in the conduct of all evaluations to ensure objectivity and therefore provide answers to the questions asked and introduce the necessary corrective measures.

    The rich debate that emerged from my question highlights the great diversity of constraints encountered and their impact on the results of the evaluation.

    Some fall under the responsibility of the evaluators, others under that of those being evaluated, and others under the policies and systems in place.

    Once again, thank you all for your responsiveness and the relevance of your contributions.



  • Hello from Tunisia,
    Thank you very much, dear colleagues, for these rich and fruitful exchanges. I think one of the major constraints on evaluation is the evaluation culture itself, when it is not well anchored. Most of the time we are satisfied with a self-assessment that has no scientific basis and does not meet, or only partly meets, international evaluation standards. We need to be convinced that evaluation will allow us either to adjust course and return to the right path, or to continue on the same path if it is the right one; in both cases it will allow us to move forward.

    This leads to a second constraint that hinders evaluation, namely the personalization of the project or programme. Indeed, the person who leads the project (I am talking about public institutions) does not accept that his project be evaluated, because he believes the evaluation will affect his credibility. This is a real problem, and it goes back to the question of evaluation culture.

  • Dear Colleagues,

    Good morning from Palestine.

    The evaluation challenges in rural development, food security or agriculture are not much different from those in other sectors, or even in the evaluation of policies.

    I agree with what colleagues mentioned earlier. From my experience evaluating agricultural projects in Palestine, I found one major issue complicating evaluation: the design of projects. Even with international organizations, the results frameworks of programmes/projects are not well defined, indicators are not well chosen or formulated, and the whole theory of change is not clear. This is reflected in the evaluability of the programme. For example, baseline studies, when present, are not related to the indicators.

    Another major issue is that the clients of the evaluation (implementers) do not have a clear understanding of the evaluation process and of methodological approaches. The ToR are therefore not clear, expectations of the evaluation become unrealistic, and, as stated earlier, the programme design does not allow good monitoring.

    To sum up, a good evaluation implementer should start planning M&E in the first phase of the programme cycle, have enough resources, and do the right things at the right time, especially monitoring.

    Clients of evaluation should understand evaluation practice and know that no M&E can be done without working together with the evaluation team. And they should give enough time to the evaluation, not leave it to the last month of the project.

    The discussion on this issue never ends. I think evaluation networks (like EvalMENA) should reach a clear set of recommendations to enhance the evaluation culture and build a common understanding of M&E on both the supply and demand sides.

    Good luck

    Naser Qadous

    Palestinian Evaluation Association

  • Hello everyone,

    Many thanks to our dear Hynda for opening a very interesting debate on the challenges and constraints that hinder the emancipation of evaluation in some countries. All that has been said is quite valid; nevertheless, the lack of understanding of the evaluation function evoked by Hynda, which is very often perceived as a form of control and pushes many individuals into positions of resistance for different reasons, remains one of the challenges that must be addressed.

    From my modest experience leading various results-based management training workshops, in their monitoring and evaluation component, I always start by demystifying the monitoring and evaluation functions among participants by asking a simple question: do we do monitoring and evaluation in our daily life? I then engage in a frank and serene debate with the participants, leading them to evoke examples from everyday life where human beings practise monitoring and evaluation in a rather intuitive and unplanned way. The example of a car trip to a destination we have never visited, arriving at a specific date and time, along a precise route we have never taken, is the one that comes up quite often. We then begin to dissect our actions to finally discover that we quite often do monitoring and evaluation, sometimes without realizing it, and conclude that monitoring and evaluation ultimately works in our favour rather than to our disadvantage.

    However, there are other challenges to the evaluation function that I can personally put forward, by way of illustration and without being exhaustive, which sit in the immediate environment of the evaluation function, including:

    • Self-censorship practiced by some evaluators in some political systems in order to remain "politically correct", sometimes pushing things to the point where politicians and other officials hear what they like to hear;
    • The interference of some politicians and other officials and the pressure on the evaluators to change certain conclusions in the evaluation report, or even disguise the reality highlighted by the evaluation exercise;
    • The scarcity - or absence - of reliable official statistics and quality sectoral studies to triangulate the "findings" of an evaluation;
    • The distance some evaluators take from the objectivity and neutrality required by the evaluation function, so as to remain 'politically correct' while thinking about future contracts;
    • The proliferation of academics who have been producing socioeconomic studies for decades (baseline studies, diagnoses, etc.) and who claim to be evaluators without understanding the foundations and principles of the evaluation function, and without first acquiring the necessary knowledge of evaluation...

    This is what I wanted to share with colleagues as a contribution to this debate ...



  • Dear Isha, dear community,

    That's right. The environment is critical to the success of evaluations, particularly those related to agriculture and agricultural development. The large number of stakeholders somewhat limits this work of communication and awareness-raising on the importance of evaluation. It seems to me essential today to develop the communication skills and "credibility" of the evaluator. This credibility has not only a technical scope, but also a very important formative dimension. Our counterparts, once they understand the scope of the evaluation, will develop trust in us; rather than a relationship of mistrust, real collaboration between the parties will develop, which allows us to achieve very good results for our evaluation and to contribute to the development of different agricultural policies and systems.

    Thank you all !!!


  • Dear Hynda,

    Very true. Most think that evaluation is an exercise in finding faults, rather than the other way around.

    I think that the evaluation community does not give enough consideration to the enabling environment for evaluation, focusing instead too much on conducting the evaluation according to the ToRs.

    I take it a step further: before the assignment, I conduct basic awareness-raising on evaluation for the contracting organization and its stakeholders, which makes things easier.

  • Good evening dear community,

    I thank you very much for your responsiveness.

    The points of view are very relevant and come from real professionals, which makes the debate interesting.

    Your contributions complement each other and indicate that the obstacles to evaluation are numerous and can really impact its results.

    For my part, I would like to add in addition to the lack of evaluation culture, particularly in developing countries, the confusion between evaluation and inspection, auditing or investigation.

    For me, as an evaluator attached to a public institution, this is the main constraint I encounter.

    Let me explain: a participatory evaluation is based on interviews. However, when the person interviewed thinks you are carrying out an inspection, he closes up, because he is afraid of the sanction that may result from an inspection.

    In this case, awareness-raising work is carried out, but the result is not always satisfactory.

    The competence and detachment of the evaluator are also crucial for an objective evaluation, free of any bias and taking all facets into account.

    Thanks again to our dear community. It is a great honor to be among you.

    This space of exchange will allow us to learn more about the practice of evaluation. I wish you all continued success.

  • Dear Hynda,

    I am taking your question on challenges in evaluation from a broad/philosophical perspective.

    The question begs us to scrutinize the many reasons why we should evaluate interventions, plans, programmes, projects, strategies, policies, processes, and so forth. The reasons give us an indication of the hoped-for benefits expected from evaluation. We should remember that, at a minimum, there should be a set of principles shared by the evaluation team and the target groups/object of the evaluation if we desire a purposeful and impactful evaluation. As such, we can categorize the challenges as technical and non-technical. This response focuses only on these two defined categories.

    In the first instance, the technical challenges are many, and rightly so, given the multiple realities that exist in this world.

    Technical challenges can, at the very least, be addressed with less difficulty provided the appropriate authority figures, as well as the communities affected by the evaluation activities, are consulted and apprised of what's at stake. The saying 'it matters who you know and not what you know' is closer to the truth than we would dare to imagine. The other challenge that can arise is the degree to which 'surprises' are embraced and accommodated before, during, and after the evaluation exercise. Such technical challenges could be addressed through specific agricultural and professional training in evaluation approaches, methods, and processes, among other topics. These trainings would also incorporate elements of the Sustainable Development Goals (SDGs) and how the SDGs present big evaluation opportunities at the intersection of food security, agriculture and rural development.

    In the second instance, non-technical challenges, especially human-to-human interactions, are a feature to deal with. Such interactions partly dictate whether participants in the evaluation exercise will be willing to share information and knowledge to further the evaluation agenda. An analysis of how societies are governed and function in any part of the world sometimes leaves us wondering whether humans are ever going to get along anytime soon. These shortcomings in human interactions call for skills in creativity, people management, negotiation, and cognitive flexibility.

    I would like to end this note on a sanguine tone. It is the potential and ability of humans to get along that opens possibilities for evaluation processes. The exciting thing is that the greater the possibilities opened, the richer the human experiences and, consequently, the easier it becomes to realize the objectives of any evaluation exercise and to derive meaning from it. Evaluation should, after all, be a 'fun and joyous' exercise.

    Raymond Erick Zvavanyange

    Country Representative

    YPARD - Young Professionals for Agricultural Development

  • Hello,

    A small contribution in response to the big question raised by Hynda about the constraints on evaluations of rural development or food security projects. It is difficult to exhaust the subject in this context...

    • A major constraint is the lack, or poor quality, of baseline studies in project intervention areas. When these studies are not done according to the state of the art (a good diagnosis, a complete analysis of the starting situation with the participation of men and women), the subsequent evaluation is not easy. The baseline is an essential step when one wants to work towards qualitative change in a given situation, because it makes it possible to orient the interventions and to make a relevant choice of actions and of the actors involved...
    • In monitoring the progress of an agricultural development project in a region of Burkina some years ago, the project identified poultry farming, among other things, as an income-generating activity for women. This activity never flourished in the chosen localities, simply because, culturally, it is an activity reserved for men in order to preserve harmony in the families... A good diagnosis and analysis at the start would have made it possible to choose socially accepted activities, or to think about strategies to help bring about the changes necessary for the well-being of all.
    • Another difficulty is the formulation of quantitative and qualitative indicators: indicators, when they exist, do not always make it possible to measure progress or the changes induced by interventions at the community level. Gender issues are often forgotten, or added to project documents as an afterthought, which does not facilitate evaluations. We have already talked about this earlier with this group. Gender is a cross-cutting issue and should be taken care of from the beginning of the project formulation process. This is a very important issue in our African countries, where women contribute 70% to 80% of agricultural and vegetable production.
    • Another constraint is the time and budget allocated to the evaluation: in the formulation of projects, not enough resources are set aside for monitoring and evaluation. This is detrimental to the successful completion of this activity, which is very important for the proper implementation and achievement of the objectives of the interventions.

    My modest contribution on this subject.



  • Dear Hynda,

    You have raised a very important question, which affects the quality of evaluation work. Evaluations of development programmes in the broadly defined areas of rural development, agriculture and food security are inherently complex. The assessments of results in these areas are affected by a multiplicity of biophysical, economic, and social systems and factors. There are different types of constraints and challenges in evaluation work that depend mostly on the context of the programmes or policy work being evaluated.  For example, accurate and timely assessments of potential impact and development change may be affected by the remote location of project sites, social stratification of rural communities, time required to produce productivity gains, adoption capacities of local communities, and many other factors.

    Evaluators often encounter issues with the availability of baseline data, i.e. information on the prevailing conditions at the start of the projects or programmes addressing food security and agricultural development. This issue can be addressed by reconstructing baselines, for example using the 'recall' technique: asking key beneficiaries or stakeholders to recollect information about those conditions in the past.
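    The 'recall' technique above can be sketched in a few lines. In this minimal illustration, respondents are asked to recollect a pre-project value (here, an invented monthly household food expenditure), the recalled figures are aggregated into a proxy baseline, and the result is compared with endline measurements. All names and numbers are illustrative, not real survey data.

    ```python
    # Sketch of baseline reconstruction via the 'recall' technique.
    # Hypothetical data: each record is
    # (household_id, recalled_pre_project_value, measured_endline_value).
    from statistics import mean

    survey = [
        ("hh01", 120, 180),
        ("hh02", 90, 150),
        ("hh03", 200, 210),
        ("hh04", 110, 170),
    ]

    # Aggregate the recalled figures into a proxy for the missing baseline.
    recalled_baseline = mean(r[1] for r in survey)
    endline = mean(r[2] for r in survey)
    change = endline - recalled_baseline

    print(f"Reconstructed baseline: {recalled_baseline:.1f}")
    print(f"Endline average:        {endline:.1f}")
    print(f"Estimated change:       {change:+.1f}")
    ```

    In practice, recalled figures are vulnerable to memory bias, so evaluators would normally triangulate such a reconstructed baseline with secondary sources where available.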

    The security situation in a country may also have a huge impact on access to data and on the methods we choose for evaluation. The choice of evaluators may also be highly limited, as not all will have the necessary clearance to visit high-risk areas, or experience working in similar situations.

    Accessibility of project sites may also be restricted or banned. To address these constraints, local consultants with access to restricted zones may provide support in data collection, and alternative evaluation methods could also be considered. In a recent FAO evaluation of a large irrigation rehabilitation programme in Afghanistan, the evaluation team faced the constraint of being unable to access some of the programme sites. The team opted for an alternative method, using open-source data from Google to assess the potential impact of the programme on livelihoods in those specific sites. Google Earth imagery was used to measure the expansion of the irrigated area and the vegetative cover along different sections of the rehabilitated canals. The methodology also drew on preliminary information from enumerators in the field who had access to the restricted zones and who supported the collection of the necessary data (e.g. the GPS coordinates of the irrigated areas in the vicinity of the irrigation canals). This information was then analysed against historic Google Earth data on before- and after-project conditions and on changes in vegetative cover at different periods of the year.
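    The area-comparison step in that approach can be sketched as follows, assuming the field-collected boundary coordinates have already been projected to a planar system in metres (raw GPS latitude/longitude would first need projecting, e.g. to a UTM zone). The polygons and figures below are invented for illustration; they are not the Afghanistan programme data.

    ```python
    # Sketch: compare irrigated-area extent before and after a project,
    # given boundary vertices in projected (x, y) coordinates in metres.

    def polygon_area(points):
        """Area of a simple polygon via the shoelace formula, in square metres."""
        n = len(points)
        s = 0.0
        for i in range(n):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    # Hypothetical digitized boundaries of one irrigated plot.
    before = [(0, 0), (100, 0), (100, 50), (0, 50)]    # a 100 m x 50 m plot
    after = [(0, 0), (150, 0), (150, 80), (0, 80)]     # a 150 m x 80 m plot

    expansion = polygon_area(after) - polygon_area(before)
    print(f"Irrigated area before: {polygon_area(before):,.0f} m2")
    print(f"Irrigated area after:  {polygon_area(after):,.0f} m2")
    print(f"Expansion:             {expansion:,.0f} m2")
    ```

    A real analysis would repeat this over many digitized plots and dates, and would rely on GIS tooling for the projection and digitization steps; the shoelace computation is just the core arithmetic behind the area comparison.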

    These are just a few highlights of the constraints and challenges that evaluators may encounter in their work and an example of possible ways to address those. The range of such constraints is quite broad, and we encourage all members of this community to share their experiences in addressing different types of constraints and limitations.

    Kind regards,

    Serdar Bayryyev,

    Evaluation Officer

    Food and Agriculture Organization (FAO)