Sébastien Galéa

Centre de ressources en évaluation EVAL.FR

My contributions

    • Dear colleagues, 

      This discussion is surely very insightful.

      I would like to reframe the initial questions around the position of M&E (MEAL, MEL…) officers/consultants at field or project/programme level and their room for manoeuvre. Without strong backing from an independent evaluation function, they have little leverage to reaffirm adherence to norms and standards and to make sure those norms and standards make their way all the way down the line. This may not be a burning issue wherever an independent evaluation office exists and functions. But where it does not, the field M&E officer turns into an interchangeable piece/pawn that may never feed the independent evaluation.

      My view is that whenever a « norms and standards » issue is detected, the claim should be covered and driven by the independent evaluation function, as the consultant is not in a position to keep pushing once marginalized or once the contract is over. And where there is no independent evaluation department, peer exchange groups such as this one, or national and international evaluation associations, could be the stage on which to bring together all the concerns previously raised in this thread and take them one step further, towards the systematization of independent bodies.

      I would like to suggest this reading, with inspiring insights from ADB, back from 2014: Evaluation for Better Results - "Accountability and Learning: Two Sides of the Same Coin"  https://www.ecgnet.org/sites/default/files/evaluation-for-better-result… 

      This quote from Moises Schwartz (former director of the Independent Evaluation Office of the IMF) captures the link well: "To be precise, when evaluation reports have pointed to instances in which the IMF has fallen short in its performance (the accountability element), the exercise turns into a quest to identify the reason for such behavior, and the findings and conclusions then contribute toward an enhanced organization (the learning element)."

      This may seem obvious and well established by now, but is it? What are your experiences?

      The point I had missed is to what extent accountability is a precondition for any learning, within all the previously expressed limits of fairness/impartiality, but also with clear limits to complacency given the seriousness of the issues we are facing, specifically the call for faster and systemic adaptation to climate change.

      Warm regards, 

      Sébastien Galéa

    • Dear all, 

      I agree this is a wonderful discussion and I am fully in line with Silva.

      It may sound like a platitude here, but I was always convinced that M&E systems should be owned and developed through the engagement of the programme team, stakeholders and beneficiaries. For the last 18 months I was fully engaged in supporting a programme at exactly that level (under programme direction), thinking it was the greatest opportunity of all time and that I would make the best of it.

      But the result is disappointing, to say the least. Beware whenever you hear « we don’t want to shoot ourselves in the foot, do we? ». Or whenever official communication is about self-promotion/self-congratulation (how fantastic we are, etc.) while beneficiaries have not yet witnessed anything happening in their direct surroundings or daily life. 

      I think one of the keys is where evaluation fits in the organizational chart (see below). How do M&E officers at project level interact with M&E officers at programme level, and so on? How do the M&E people in charge at programme level interact with any evaluation office at managing-director level, or with any existing « independent evaluation office » attached to the executive board? A MEAL system is both a support function and a function serving accountability: how do the two coordinate and complement one another?

      Also, do we have M&E professionals not only at project level but also at stakeholder level (governments, donors and, first and foremost, beneficiary representatives, etc.), and are they all connected, before we can say an M&E system is in place?

      A good practice I have seen is to have the steering committee (contractually) validate the M&E system at the end of the inception period.

      Another common thought is that evaluation is a state of mind rather than a set of complex technical instruments (Lal. mentions how to make ‘planners’ understand an evaluation, which is correct, though sometimes evaluation is not merely misunderstood but actively pushed back internally).

      Then there is the risk of a « double penalty » for final beneficiaries: engaged programme managers who, even if only intuitively, fully embrace evaluation and make the most of it, while at the same time a « reluctant » ecosystem exploits black holes in the organisation chart so that evaluation takes place too late, is not linked to strategic decision-making, and ultimately produces nothing but nice colour charts with « number of people trained » and the usual stuff. 

      Happy to participate, and I hope this conversation keeps going  ;-)