Neutrality-impartiality-independence. At which stage of the evaluation is each concept important?  

32 contributions

Dear members,

I believe that if the purpose of evaluation is to generate programme correction, learning and change, there needs to be independence throughout the process.

In addition, data collection requires neutrality and impartiality in order to ensure that all stakeholders are listened to, not just the beneficiaries of an intervention or programme. Not collecting data in a neutral, impartial and fair manner will result in the loss of information that may be essential for the evaluation commissioner. Data analysis and interpretation may be influenced by the personality and culture of the evaluator. However, data collected in a neutral and fair manner will be unbiased and will help make the necessary corrections and adjustments later on. This is what allows us to speak of evidence-based decision making.

In summary, the administrative and political independence of the evaluator is crucial throughout the entire process, but to ensure that everyone has been heard, neutrality and impartiality are more important during data collection and reporting than during the interpretation of results.

  • What is your opinion/experience on this?

Here are examples of guides that discuss these concepts:


Malika Bounfour

This discussion is now closed. Please contact info@evalforward.org for any further information.
  • I do very much agree with your views, because if the data collection process is undermined, then the wrong data would be collected, which would definitely affect its interpretation and, by extension, the reporting. The data collection, interpretation and reporting processes must follow a logical sequence, like the results chain - one must inform the next.

  • Good afternoon. Very interesting input from each of you, which enriches the evaluator's view. I agree that impartiality, neutrality and independence must be present throughout the process. But this does not mean that the evaluator's experience, culture and ideology are not factors that can influence the data and its analysis.

  • I appreciate Silva's reflection.

    For an evaluation to be successful, it does not need to be a scientific finding; just getting all stakeholders to participate around a theme is a real challenge, and it is impossible to guarantee that results and conclusions are unbiased. We are not in a laboratory; we are in an open system.

    Mohammed Lardi

     

  • I really enjoyed reading the note and seeing how carefully it was written, taking into consideration all views.

    It is useful to see where the discussion is at. But the subject, "closing remarks", is a bit off-putting. :-)

    As Malika says, it is more useful to keep the discussion open.

     

    There is an assumption whereby evaluations need to be impartial and neutral (and that the evaluator is a guardian of this),

    and a tendency to equate evaluations with research (even research cannot always be impartial!):

    The underlying understanding of evaluation is: a product generated by an expert who selects the perfect sample and gets to scientific conclusions.

    Is this really what an evaluation should look like and be?

    Shouldn't an evaluation rather be an opportunity to apply evaluative thinking about a programme?

    An opportunity where different people, with different worldviews get to understand better where a programme is at, what possibilities are ahead, what can be learned?

    I really feel strongly about this: we are presenting ALL evaluations as if they need to be "scientific products", originated by experts, capable of being impartial and wise.

    Some evaluations (or rather, some research) might well have this focus.

    But assuming that this should always be the goal for evaluation in general is very problematic.

    Participatory evaluations, for example, are not at all about creating one impartial view.

    They are about bringing the perspectives of diverse people together to make sense of a situation.

    They might not even get at shared / agreed findings, yet they can be incredibly powerful in injecting needed critical thinking about action.

    The evaluator is not always the scientific expert... s/he can be the facilitator.

    Certainly s/he then needs to think about inclusion, representation, and be very aware of the relationships, position, and power of stakeholders.

    But inclusion, representation are fundamentally different concepts from neutrality / impartiality / independence (which should also not be mixed in the same bag).

    It is about being aware (as much as possible) and honest about the dynamics at play, about the choices made...

    rather than pretending that we can achieve objectivity.

    Many of my evaluations, for example, are not neutral BY CHOICE.

    I strive to give more voice to the people who are usually less represented.

    I talk to more women, to more outcasts, to more people with special challenges.

    Yet I truly think that this open choice of being biased is much more useful than an attempt at neutrality and impartiality.

    With the limited time and resources of an evaluation, which voices are worth listening to, which conversations are worth having?

    Being aware and open about our choices is more powerful and honest than pretending we can be unbiased. :-) (And if the point is to have scientific evidence, then let's embark on research... which is something else.)

    Thanks again for sharing interesting points so far, and for facilitating the discussion.

    I hope that this interesting discussion can continue.

     

    Best

    Silva


  • Dear members

    Thank you all for your insights and contributions. The discussion brought together different experiences/views but most seem to agree on the core principles of the question.

    Before going any further, I will explain my perspective:

    Even in laboratory experiments where all the conditions are controlled, scientists allow themselves a margin of error, though they try to make it as small as possible. Therefore, I am not talking about being 100% sure of the results of interventions involving humans (complicated beings).

    My takeaway from the discussion is that we all strive to be “objective and inclusive” as much as we can. The latter phrase expresses our “confidence interval” and “degrees of freedom”.

    The discussion brought in a wide array of subjects pertinent to independence/impartiality/neutrality of evaluation. From discussing the concepts to suggesting work methodologies, contributors enriched the discussion.

    Different contributions brought up important factors that may influence the independence, neutrality and impartiality of evaluators. Mr. Jean de Dieu Bizimana and Mr. Abubakar Muhammad Moki raised the issue of the influence of the evaluation commissioner and the terms of reference on these concepts. Dr Emile HOUNGBO brings in the financial dependence of the evaluator, especially if the organization/team financing the evaluation is also responsible for the implementation of the intervention. Mr Richard Tinsley sees that even when funds are available, evaluators may lack neutrality in order to secure future assignments. Mr Tinsley gave the example of farmer organizations that do not play the role intended but are still pushed on smallholders. From Mr Lasha Khonelidze's perspective, “open-mindedness” of the evaluator is important in bringing in diverse points of view, but it is just as important to ensure that the evaluation is useful to end users (these are to be defined in the ToR).

    Mr. Sébastien Galéa suggests working on norms/standards at the level of programme management in the field. He also brings in the importance of peer-to-peer information and experience exchange around the world (e.g. EvalForward). He kindly shared a document whose title clearly indicates that the aim of evaluation is better results in the future, either through subsequent interventions or through adjustments to the evaluated intervention. The paper also explains independence/impartiality from the ADB's perspective and how this organization worked on these principles. In my view, the Weiss paper shared by Mrs Umi Hanik came in as a complement. Weiss' paper analyzes program development, implementation and evaluation. The main idea is that programs are decided within a political environment, and since evaluation is meant to guide decision-making, it also shares the pressure from the political participants in the programme. Thus, for a program participant, public acceptance is more important than program relevance. This, I think, is where evaluation independence/impartiality/neutrality come into play.

    Abubakar Muhammad Moki added that some companies recruit evaluators they know, which is confirmed by Mrs Isha Miranda, who added that this impacts the quality of evaluation, leading to a decrease in the quality of evaluation reports (an evidence-based argument) :) . Mr. Olivier Cossee added that recruited evaluators should be “reasonably neutral”. This, I believe, puts pressure on the evaluation commissioner to verify/check for “reasonably” neutral evaluators and introduces another variable: to what extent is the evaluator reasonably neutral? (Can we refer to behavior studies?) For Mrs Silva Ferretti, the evaluator's individual choices are influenced by her/his culture, and thus it is difficult to be “really” inclusive/neutral. The podcast shared by Mrs Una Carmel Murray gives an example of including all participants in order to dilute the subjectivity of the researcher. Also, Mr Abado Ekpo suggests taking time to integrate the logic of the different actors and understand them in order to conduct an objective evaluation. In addition, Mr Steven Lam and Mr Richard Tinsley discuss the importance of the methodology in bringing in all participants’ interests. Mr Lal Manavado summarized the reflection in terms of accountability to fund providers or politicians or social groups. My view is to be accountable to the project objectives: were they achieved or not? If not, why not? If achieved, for whom?

    Mr. Khalid El Harizi added that the availability of data/information at the start of the evaluation as well as the capability of evaluators to synthesize the data are important. It is to be noted however that, even when data are available, they may not be easily accessible to evaluators. This is confirmed by Mr Ram Chandra Khanal who brought up the issue that lack of time and limited access to information on stakeholders will impact data collection.

    This discussion clearly raised the issue of term definition. As previously stated, end users need to be defined. Also, Mrs. Svetlana Negroustoueva asked for examples to contextualize the term 'independence'. In addition, Mr. Thierno Diouf raised the importance of defining all the terms discussed from the perspective of all stakeholders and evaluators. These definitions should be clear in guides, standards and norms.

    Mr. Diagne Bassirou talks about a loss of quality and depth of analysis with “too much” objectivity, since the evaluator may not know about the socio-demographic conditions of the area. In my perspective, and as Mr El Harizi stated, there are data/information available (or that should be available) at the start, and the commissioner should make these available to the evaluation team. My experience is that there is always an inception meeting where these issues are discussed and cleared. The ability to analyze these data/information is a matter of the evaluator's competence, not of his/her independence or impartiality.

    In summary, it is possible to achieve a reasonable degree of impartiality/neutrality in evaluation, provided the terms of reference are clear, data are available, and the independence of the evaluator is ensured through sufficient funding and administrative independence. The evaluator needs to work on him- or herself in terms of beliefs, culture and biases. Methodological approaches could help counter possible bias.

    Program funders as well as program managers and evaluators are accountable for the changes brought about by interventions. Could we link this reflection to the social costs and benefits of development interventions?

    Finally, this is probably an “open-ended question” that could lead further. Therefore, let’s keep the discussion open.

    Some exchanged links:

    https://www.sfu.ca/~palys/Weiss-1973-WherePoliticsAndEvaluationMeet.pdf

    https://www.ecgnet.org/sites/default/files/evaluation-for-better-results.pdf

    https://disastersdecon.podbean.com/e/s6e2-researcher-positionality

    https://agsci.colostate.edu/smallholderagriculture/request-for-information-basic-business-parameters/

  • Evaluating requires distancing oneself from one's own culture, logic and value system. The context of the project must be integrated upstream of the evaluation in order to reduce bias, because it is difficult to evaluate outside our own cultural logic. We need to take time to integrate the logic of the different actors and understand them in order to conduct an objective evaluation.

  • Dear Umi,

    Many thanks for sharing this excellent paper by Carol Weiss [earlier contribution here]. Old but gold. I just finished it and I want to carve some of its sentences into the concrete of my office walls.

    Weiss brings to light an impressive series of assumptions baked into evaluation practice. Not all of them are always assumed true, but I think she is right that they tend to “go without saying”, i.e. to be silently and even unconsciously accepted most of the time. Here is a list of such assumptions, based on her piece:

    1. The selection of which programs or policies get to be evaluated and which do not is done fairly – i.e. there’s no hidden agenda in the evaluation plan and no programs are protected from evaluation.
    2. The program to be evaluated had reasonable, desirable and achievable objectives, so that it can be assessed based on these objectives.
    3. The explicit program objectives can be trusted as true; there’s no hidden agenda, they reflect the real objectives of the intervention.
    4. The evaluated program is a coherent set of activities, reasonably stable over time and independent from other similar programs or policies, so that it makes sense to focus on it in an evaluation – it is a valid unit of analysis.
    5. Program stakeholders, recipients and evaluators all agree about what is good and desirable; any difference in values can be reconciled, so that the discussion is generally limited to means to get there.
    6. Program outcomes are important to program staff and to decision makers, who can be expected to heed the evidence collected by the evaluation in order to improve outcomes.
    7. The questions in the TORs are the important ones and reflect the preoccupation of program recipients, not just of program implementers.
    8. The evaluation team as composed can achieve a fair degree of objectivity (neutrality-impartiality-independence…) in its analysis.
    9. In most programs, what is needed to improve outcomes is incremental change (renewed efforts, more resources) as opposed to scrapping the program altogether or changing radically its approach.

    The last assumption is based on the fact that most recommendations emanating from evaluations are about minor tweaks in program implementation. Weiss relates this to the type of messaging that can be accepted by evaluation commissioners.

    In practice they are all problematic, at least occasionally, and Weiss does a great job of showing how some of them are often not borne out by the facts. For instance, on assumption 1, she shows that new programs are typically subjected to more evaluative pressure than old, well-established ones.

    So thank you again for a very insightful paper. Evaluation is indeed political in nature and evaluators can only benefit from understanding this.

    Olivier

  • On Accountability

    If I were told that I am accountable for certain actions of mine, I would be in a very awkward position unless I knew ---

       • What I am accountable for and

       • To whom I am accountable.

    As far as I can see, I would not be able to make a sensible response to the query whether I have successfully accounted for my actions unless and until I have received reasonable answers to these two questions.

    Now, if my actions are guided by the norms of several groups, for instance, fund providers, political poltroonery etc., on the one hand, and one or more concrete needs of a social group on the other, my position will be extremely difficult with respect to the two questions above.

    Then, are my actions to be accountable with reference to ---

       • Norms of the fund provider,

       • A parcel of politicians or

       • One or more concrete needs of a social group my actions are intended to satisfy?

    So far in this discussion, most participants seem to believe that the answers to above questions are reconcilable. Indeed, in a cooperative world it would be so, but most people champion a competitive environment.

    The same difficulty becomes even more glaring, when one has to face fund providers, politicians and the most vociferous representatives  of a ‘target group.’

    Perhaps, it is time the evaluators paused for a moment to check their basic premises carefully, for when we face what may seem irreconcilable, an impartial examination of our premises would show us that one or more of them is untenable.

    The perceptive reader may already have noticed that ‘neutrality’, ‘impartiality’ and ‘objectivity’ are terms relative to the norms used by fund providers, politicians, target groups not to mention what is humourously called ‘media’. Under these circumstances, ‘independence’ becomes an extremely questionable notion.

    Cheers!

    Lal.

  • I agree [with Steven Lam below]. It is still important to try and strive for neutrality, independence and impartiality (taking these concepts as roughly synonymous), even if we know that in practice these "ideals" may not be achieved 100%. It is still important to try and control biases, still important to consult broadly, etc., even if we know that perfect neutrality is humanly impossible to attain. And the reason it is important is indeed linked to getting a more convincing, more credible and useful outcome. A biased evaluation often remains unused, and rightly so.
     

  • Accountability is much more than reporting on a work plan (which is, unfortunately, how it is often portrayed).

    Accountability means that we make explicit or implicit promises to other people and groups (in the case of development / humanitarian projects, to MANY other people with different perspectives and priorities). We are responsible to account for these promises. That means: to make the promises happen - when possible and useful... but also to change, improve, evolve our promises as needed, *always respecting the bond underpinning these promises*. What matters for accountability is the *relation*.

    Things, conditions can change. But people are accountable to each other when they keep each other informed of changes, and when they set up strong processes for negotiating the way forward for keeping the promise alive and relevant. And possibly, to improve it.

    If you have this view of accountability, learning is clearly part of it.

    Learning is what improves the promise, and what improves the trust needed to negotiate promises and conditions of accountability.

    Of course we always need to remember that this happens in messy situations, and we are often accountable, as mentioned, to diverse people with different interests. We might be accountable to many people. But which accountability really matters to us? The interests of the donors are not always, for example, the interests of the marginalized people we are supposed to serve... or the interests of future generations...

    When we stick to accountability as "sticking to results" we are missing the point.

    And often, rather than accountability, we have bureaucratic control.

    To get back to the question that started the debate, accountability itself is not a neutral word.

    Who we choose to be accountable to has deep consequences on how we act and look at change.

    It is really important to be aware of it, rather than thinking that a larger sample will solve the issue.

    And even the humanitarian discourse is becoming aware of this and reframing the understanding of neutrality...

     

  • Again, we have another interesting, critical, and challenging topic to comment on. As I have previously stated, the most important contribution of any project evaluation is the guidance it provides to future projects so they can better serve the beneficiaries. Unfortunately, too often evaluations become propaganda tools necessary to appease donors and assure future projects, while doing little if anything for the intended beneficiaries.

    Thus, while the commentary shows a good consensus on the importance of neutrality-impartiality-independence, it also shows that achieving it can be very challenging: rarely do projects provide funds for external reviews by independent evaluators, and even when funds are available, vested interest in potential future assignments will often bias the objectivity needed to be sufficiently critical of results to guide future projects.

    One way to minimize the bias might be to have some clear, well-defined targets that separate project success from failure. Has anyone ever seen a set of evaluation criteria that would do this? I have not! This set of criteria needs to be established at the beginning of a project. The criteria also need to be close to what the interested underwriting taxpayers are expecting and be reflected in the project reporting. They may also need to be expressed as percentages of potential instead of aggregate numbers: when you emphasize aggregate numbers you can generate some very impressive but meaningless values that only reflect the massive size of projects and investments, while still having a trivial total impact. This is basically what the USAID MEL evaluation reflects. Instead, an emphasis on percentage of potential will give a better evaluation of project effectiveness. Also, there is a need to remember that most projects are defined by the community they serve, and thus the evaluation needs to reflect community impact more than individual impact.
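    To make the percentage-of-potential arithmetic concrete, here is a minimal sketch in Python. All numbers are purely illustrative assumptions, chosen only to fall in the range of the participation and side-selling figures discussed in the example that follows; none come from an actual project.

        # Minimal sketch with hypothetical numbers (not from any real project).
        community_farmers = 10_000      # potential beneficiaries in the project area
        participation_rate = 0.12       # assume roughly 12% of farmers actually join
        produce_per_farmer_t = 2.0      # tonnes of marketable produce per farmer per year
        share_sold_via_org = 0.30       # members side-sell most produce when possible

        members = community_farmers * participation_rate
        tonnes_via_org = members * produce_per_farmer_t * share_sold_via_org
        community_potential_t = community_farmers * produce_per_farmer_t
        pct_of_potential = 100 * tonnes_via_org / community_potential_t

        print(f"Aggregate headline: {tonnes_via_org:,.0f} t marketed through the organization")
        print(f"Share of community potential: {pct_of_potential:.1f}%")

    Under these assumptions the aggregate headline reads 720 tonnes, which sounds substantial, while the same result is only 3.6% of what the community could market - exactly the kind of impressive-but-trivial outcome described above.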

    Allow me to illustrate with my favorite development concern: the nearly 40 years of over-reliance on producer organizations to funnel assistance to smallholder farmers. While I agree producer organizations may be socially desirable, they are also administratively cumbersome, which quickly converts to overhead costs that can more than consume the overall financial benefits, so relying on producer organizations will force smallholder farmers deeper into poverty. This is to say nothing of the great inconvenience of consignment marketing in a highly cash-oriented society. Thus, the farmers widely and wisely avoid them, so that they require continuous external facilitation and subsidies to exist, then collapse once external support ends. Yet the smallholder agriculture development effort is totally committed to imposing them on communities and to claiming they are the essential salvation for smallholders, as shown by the rhetoric accompanying any presentation, which would lead one to believe anyone was foolish not to participate.

    My approach: prior to my international agriculture class discussion on producer organizations, I always asked the students, many of whom were interested in international development with plans to join the Peace Corps, to list what they thought would be the minimum values for various business parameters associated with producer organizations, and compared their answers to the best estimates I could obtain by scrutinizing various reports with some simple computation. The results are on this webpage: https://agsci.colostate.edu/smallholderagriculture/request-for-information-basic-business-parameters/.

    The expectation is for producer organizations to enroll well over 50% of the potential beneficiaries, who then market 70% of their produce through the organization while side-selling less than 10%. The actual result is that only about 10-15% of the farmers participate, and even those side-sell the bulk of their produce when possible, so the total impact on the community is trivial, under 5%. Not what you would really want from a successful project. Please take a few minutes to review the webpage and some of the linked pages and provide comments back to this forum. Are the:

    • Criteria I suggest correct for evaluating a project providing a business service to smallholders?
    • Criteria ever included in an evaluation of a business service for smallholders that you know of? I have never seen them used.
    • Students’ expectations for minimum values realistic from an underwriting perspective?
    • Actual values accurate from your experience?
    • If these target results were promoted at the beginning, would they allow an independent or even in-house evaluation to be more forthright in criticizing the results without jeopardizing future opportunities?
    • Would it have guided future projects to seek more effective means of assisting smallholder farmers out of poverty?

    I feel the persistence in relying on producer organizations represents a lack of, or compromised, independent evaluations. The limited acceptance by smallholders should have been identified decades ago, and the development effort should have moved on to identify and implement more effective options. This suggests donors with no sincere interest in assisting the plight of smallholder producers, but fully committed to imposing on them a horrible business model that, fortunately, they are not gullible enough to fall for. The best you can say for the practice is that it shows an easily publicized good intention without accomplishing anything. My bottom line on producer organizations: a real multi-decade, multi-billion-dollar scandal, with some possible serious liabilities for the cover-up. Am I wrong? If there had been clear targets as to what constituted success vs. failure, could this have been avoided?

    Some additional webpages that go into more detail are:

    https://agsci.colostate.edu/smallholderagriculture/perpetuating-cooperatives-deceptivedishonest-spin-reporting/

    https://agsci.colostate.edu/smallholderagriculture/loss-of-competitive-advantage-areas-of-concern/

    https://agsci.colostate.edu/smallholderagriculture/vulnerability-for-class-action-litigation-a-whistleblowers-brief/

    Thank you.

  • Hello everyone

    Interestingly, I listened to a podcast last night about "Researcher Positionality"

    https://disastersdecon.podbean.com/e/s6e2-researcher-positionality/

    Go to about 4 minutes into the podcast for the discussion to really start (4:12)

    With regards

    Una

    (Evaluator & Lecturer)

  • Dear all

    I enjoyed the discussion; I actually shared the same curiosity five years ago and then did some literature study.
    From the literature I read, I found one paper relevant to our discussion today -
    old, but still relevant to evaluation situations nowadays.
     
    👉🏽 https://www.sfu.ca/~palys/Weiss-1973-WherePoliticsAndEvaluationMeet.pdf

    Enjoy the read

    Umi

    https://www.monevstudio.org 

  • Dear colleagues, 

    This discussion is surely very insightful.

    I would like to transpose the initial questions to the position of M&E (MEAL, MEL…) officers/consultants at field or project/programme level, and their room for manoeuvre to strongly reaffirm adherence to norms and standards when there is no strong backup from the independence function - making sure norms and standards make their way all the way down the line. It may not be a burning issue wherever an independent evaluation office exists and is functional. But if not, the field M&E officer turns into an interchangeable piece/pawn that may not feed the independent evaluation.

    My view is that whenever a « norm and standard » issue is detected, the claim should be taken up and driven by the independent evaluation function, as the consultant is not in a position to push for long when marginalized or once the contract is over. And if there is no independent evaluation department, then peer exchange groups such as this one, or national and international evaluation associations, could be the stage on which to bring all the concerns previously raised in this thread one step further, towards the systematization of independent bodies.

    I would like to suggest this reading, with inspiring insights from ADB, back from 2014: Evaluation for Better Results - "Accountability and Learning: Two Sides of the Same Coin" https://www.ecgnet.org/sites/default/files/evaluation-for-better-results.pdf

    This quote is from Moises Schwartz (former director of the Independent Evaluation Office of the IMF): "To be precise, when evaluation reports have pointed to instances in which the IMF has fallen short in its performance (the accountability element), the exercise turns into a quest to identify the reason for such behavior, and the findings and conclusions then contribute toward an enhanced organization (the learning element)."

    This may seem obvious by now, and earned? What are your experiences?

    The point I had missed is to what extent accountability is a pre-condition for any learning - within all the previously expressed limits of fairness/impartiality, but also with clear limits to complacency, given the seriousness of the issues we are facing - thinking specifically of the call for a faster and systemic adaptation to climate change.

    Warm regards, 

    Sébastien Galéa
     

  • Hi all,

    This discussion reminds me of debates within qualitative vs quantitative research. Qualitative research assumes that the position of the researcher – as the primary research instrument – impacts all aspects of the research. Quantitative research is perceived to be neutral/impartial, despite the fact that the researcher gets to pick the questions to ask, who to ask, where to look, and so on.

    Rather than striving for principles that do not really exist in evaluation, I think it is more fruitful to be aware of how the identities, experiences, and interests of evaluators and clients are intertwined in the evaluation. When designing the evaluation, ask: Whose interests does the evaluation serve? Who are we (not) asking? In what ways do we influence the evaluation process? Will the data be convincing? This awareness could lead to planning that results in stronger, more credible evaluations.

  • Dear Ms Bounfour

    In my experience, neutrality and impartiality are very important in the constitution of an evaluation team, in order to overcome biases in data collection and in the analysis of the results obtained, for better objectivity when scaling up or reproducing an intervention in another context. However, in some mixed-approach situations, where a substantial share of the data collected and analysed is qualitative, too much reliance on a neutral and impartial team can lead to superficial, rather than in-depth, analysis of cases or phenomena. Therefore, I think it is important to make good use of neutrality without excess, which could otherwise lead to biased results due to a lack of understanding of the intervention, its context, or even the socio-demographic characteristics of the beneficiary populations...

     

  • Dear All,

    My experience when working with independent evaluation offices of Rome-based agencies has been positive regarding independence. They have generally 'had my back' when there is conflict with project staff or country/regional teams, insisting on the team's independence and right to access information without interference, discussing ratings - and even concluding, at times, that we should be more critical. Usually in these cases there is at least some involvement of the evaluation office staff alongside the independent consultants (though not always during the field visit). Re interview notes - they are usually in my possession, so it isn't really a case of 'obtaining' them. In fact, I am not certain about the legality of using them in other assignments - obviously care would be needed, as the information was obtained under a different contract.

    Best wishes,

    Pamela

  • Dear colleagues,

    This discussion has already been insightful and reassuring about the standards of the evaluation profession.

    It would be great to unpack the issue of 'independence' and to contextualize it based on an example not previously highlighted. For those members who have consulted for independent evaluation offices, e.g. of Rome-based agencies, AfDB, IEG, GEF, or others, and those members who work in those evaluation offices: what does 'independence' mean between an evaluation office and a consultant/team hired to implement an evaluation?

    - How 'independent' is a consultant/consultant team from a commissioner (i.e. the independent evaluation office)?

    - Is there a point at which technical guidance and quality assurance by the commissioner (i.e. the independent evaluation office) threatens the independence of the evaluation consultant/team?

    - What about collected evidence (interview notes): is the commissioner entitled to obtain them, to be able to 'fall back' on them, once the evaluation consultant/team is no longer under contract?

    In pondering the issue, let us all be reminded that independent evaluation arrangements do not report to management, by design of the governance and assurance structures.

    I am looking forward to hearing from all of you, on both sides.

    Cheers,

    Svetlana Negroustoueva

    Lead, Evaluation Function

    CGIAR Advisory Services Shared Secretariat (CAS) Rome, Italy

     

  • Dear Isha,

    Sure, on one side you are right: this is the right time to change the practice of evaluation and bring change through innovation.

    True, some evaluators provide reports of 100 pages without productive analysis, and sometimes the recommendations don't match the issues raised.

    Let's all, as evaluators, pull together and improve our work by contributing to our global objectives.

    Kind regards,

  • Dear Malika

    I am so proud that you brought up this topic openly. We as evaluators have always strived to perform with neutrality and impartiality. But many organizations, such as the UN, WB, ADB, etc. (as Abubakar says, "Most often Evaluators are selected due to some connections and when selected they hope to be selected again in the future"), get into a comfort zone with certain sets of evaluators or evaluation companies, over and over.

    I have reviewed many evaluation reports and found a deterioration of professionalism in evaluation reporting: a) very biased reports; b) a lack of synergy between findings, recommendations and conclusions; c) large reports of nearly 100 pages without productive analysis, with unproductive data collection and unprofessional questionnaires. These are a few of my observations.

    I think the time has come to raise our voice on this and push for evaluation professionalism, given the priorities above, in order to save this profession.

  • Dear all,

    There seem to be two discussions or similar related issues being discussed; or maybe I'm confused because of the two threads.

    For me, it's first of all being on the same page when discussing concepts but above all, putting them into perspective. What do the concepts of neutrality, impartiality and independence mean for evaluators, evaluation managers and users, for beneficiaries? As both may not have the same culture, and they may also not use the same knowledge generation framework, all these concepts may have different meanings.

    Therefore, the "decolonization" of evaluation theories and frameworks will be crucial for the use of these concepts and their interpretation.

    Rgds

    Thierno Diouf

    Monitoring & Evaluation Specialist

    UNFPA

     

  • Dear all,

    It’s hard not to firmly agree with Olivier’s arguments on neutrality vs. usefulness. Higher inclusiveness and stakeholder participation in the evaluation process, combined with sufficient conditions stimulating the “open-mindedness” of evaluators towards diverse views, are key to success in achieving objectivity and, in turn, to a valuable learning outcome. It is important not to forget the end users of the evaluation results as "customers" who expect the evaluation product to be useful.

    Cheers,

    Lasha

  • I agree with Olivier and Silva as they speak a word of caution against a potential radicalisation of evidence-based decision-making approaches. That is not to say that Malika fell into that trap (she may be well justified in calling for more objective data to assess gender issues), but let me say that as we plan for an evaluation we should always take two things into consideration (among many others):

    1) the inherent and irreducible complexity of the evaluand, as already emphasized by previous interventions;

    2) the starting point: what level of data/information/knowledge do we have on the topic at the start of an evaluation? Critical to the credibility of the evaluation is the capacity of the evaluation team to absorb and synthesize this information in order to put its findings in perspective.

    Khalid El Harizi

    Independent evaluation Consultant

  • I agree as well. There is no such thing as a perfectly neutral human being or methodology, and if that's what we aim for, we set ourselves up to fail.

    The most we can do is try and be aware of our own biases, beliefs and presuppositions, be aware of the limitations and biases involved in this vs. that method, and try and manage these limitations and biases.

    So instead of "unbiased information will help make the necessary corrections", I would say that for an evaluation to be credible (and hence useful), all stakeholders need to trust that the evaluators and the process are ***reasonably*** neutral, i.e. not too strongly influenced by one particular stakeholder, that no stakeholder is marginalized or silenced during the process, and that the evaluators are not ideologically wedded to a particular view but rather keep "an open mind" towards diverse views and interpretations. This, to my mind, comes closer to an achievable standard for evaluation independence than perfect objectivity.

    Thanks,

    Olivier

  • Dear all,

    Thank you for the ideas you have put forward. The fact that evaluators are physically and emotionally distanced from the project (if you engage an outsider to evaluate) may not make them neutral, impartial and independent. What is perhaps more important is to identify objectively verifiable indicators, use data triangulation to check the authenticity of the information generated, and use a blend of qualitative and quantitative methods.

    Best,

    Bamlaku Alamirew Alemu (Ph.D), CIPM, PMP

    Associate Professor 

  • Greetings!

    After the lucid remarks of Silva Ferretti, I can only say, I can't agree more.

    Cheers!

    Lal.

  • Most often, evaluators are selected due to some connection, and once selected they hope to be selected again in the future. In such a scenario, the evaluator will want to look good so that he or she does not miss out on the next assignments. In so doing, he or she gets compromised, and that affects neutrality, impartiality and independence. True or false?

  • 1. The perception that recipients and governments have of the expected benefits of the evaluation results may compromise the evidence: if they perceive that the results may taint their image, they downplay the evidence; if the results can be used to rally support or resources, they may overblow them.

    2. The capacity of the data collectors may also affect the neutrality, impartiality and independence of the evidence, especially in qualitative data collection involving probing, where the data collector may be lacking.

  • Dear Malika,

    Sure, your ideas are good, but remember that the evaluator is bound by the terms of reference, core values and conditions for evaluators.

    But an evaluator is required to gather data with impartiality and to understand every stakeholder involved in the project/programme, even though that is not easy. Impartiality, equity and transparency should characterize the evaluator. Independence and fairness are required of the evaluator and should guide everyone.

    Kind Regards,

  • Is it really useful to pretend that we can be neutral and impartial?

    Or is it more useful to accept that we are all inherently biased (and that our approaches are)... and that it is then better to be open and aware about it, and about the limitations inherent in all our approaches?

    Thinking that we can have "perfect" information in contexts that are complex and messy is probably just wishful thinking... :-)

     

  • Dear Malika,

    I could not agree more with the points you have raised. Independence and sufficient coverage of data/information are key to a credible evaluation. But the other side of the coin is that evaluators are generally not given adequate time/days and full information (despite requests) to reach out to the stakeholders, beneficiaries, comparable groups and potentially negatively affected people (this is seriously lacking in most evaluations). This makes evaluation processes unaccountable to the people for whom the interventions are planned.

    Best regards,

    -----------------------------------------

    Ram Chandra Khanal

     

  • Dear All,


    The impartiality, neutrality and independence of the evaluator are all necessary qualities in evaluation. Ideally, therefore, they should all be observed in the evaluation process. Unfortunately, they do not require the same level of care and are not always easy to meet at all levels of the process. Impartiality is applicable at all levels of the evaluation process: scoping, planning of data and information collection, validation of collection tools, data and information collection, analysis of collected data and information, interpretation and feedback on the results, reporting, restitution of results and recommendations.

    Neutrality is necessary, but it should be intelligently observed during the interviews in order to avoid biased information as much as possible and to better understand the answers received. The data and information collection stage requires the evaluator to use his or her previous knowledge and expertise in the field to better understand the responses and collect information. It would therefore be a mistake to naively record all the responses provided without probing further if necessary.

    The most difficult aspect of the evaluation is the independence of the evaluator. This means administrative, political and, above all, financial independence. It is this aspect that seriously tests the consultant, especially if the person financing the evaluation mission was directly responsible for the implementation of the project and would therefore like to have a good result at all costs. Under these conditions, the tendency to put pressure on the consultant is considerable. Thus, depending on the degree of dependence and the will of the funder, in some unfortunate cases the evaluator may have to reduce his or her neutrality and impartiality in order to allow the mission to be carried out, if he or she does not want to abandon the mission altogether.
     

    Thank you.

     

    ==========
    Dr Ir. Emile N. HOUNGBO

    Associate Professor (CAMES), Agricultural Economist

    Director, Ecole d'Agrobusiness et de Politiques Agricoles, Université Nationale d'Agriculture, Benin

    Expert in the design and monitoring & evaluation of development projects

    Member, Community of Practice on Evaluation for Food Security, Agriculture and Rural Development (EVAL-ForwARD), FAO/CGIAR/WFP/IFAD

    05 BP 774 Cotonou (Republic of Benin)

    Tel. (229) 67763722 / 95246102
    E-mail: enomh2@yahoo.fr

    https://www.researchgate.net/profile/HOUNGBO_E

    https://www.leabenin-fsauac.net/en/profiles/emile-n-houngbo/

    « Le bonheur de ne pas tout avoir » (“The happiness of not having everything”).