Olivier Cossée

Senior Evaluation Manager
FAO
Italy

I am an evaluator with over 20 years of evaluation practice, mainly as part of the central evaluation offices of two United Nations agencies: FAO and UNDP. Prior to that, I worked for 10 years for NGOs and for the UN Capital Development Fund (UNCDF) as a rural development expert and programme manager in Afghanistan, Ethiopia and Mauritania. An agronomist by training and a generalist by inclination, I have evaluated programmes in a wide array of domains and sectors beyond rural development and natural resource management, such as the “humanitarian-development nexus” and resilience building, and assistance to democratic governance, rule of law and elections. I have also evaluated development approaches and strategies, such as participatory approaches, the programme approach, and global development goals and agendas (MDGs, SDGs).

My contributions

  • How are we progressing in SDG evaluation?

    Discussion
    • To Chris' question, I am not certain that there will be a successor global development agenda after 2030. Multilateralism is not in a good place right now. The MDGs were approved after the end of the Cold War, but unfortunately the Cold War is making a comeback... So whether there will be enough goodwill between nations in 2030 to arrive at a successor agenda remains to be seen.

    • This is in response to the question of Jean Providence: Are there cases you might have seen in the professional conduct of evaluation where quantitative methods PRECEDED qualitative methods? 

      I would say that 1) you cannot measure what you don't conceive well, so a qualitative exploration is always necessary before any measurement attempt. If my wife visits a furniture shop and texts me: "Honey, I found this marvelous thing for the kitchen, it costs 500 and it's 2 m long and 1.5 m wide. Do you agree?" I wouldn't know what to answer, because in spite of all the numbers she gave me, I have no idea what she is talking about, qualitatively. Does she mean a table, a cupboard or a carpet? It makes no sense to quantify anything without first qualifying it.

      2) This being said, there is also room for qualitative approaches after a quantification effort. You are right about that: in some cases, a survey may yield results that appear odd, and one way to make sense of them is to "zoom in" on that particular issue through a few additional qualitative interviews.

      Hope this makes sense.

    • Hello Jean, and thank you for your question!

      I think all evaluation questions require a mixed approach. Data collection tools are just tools; they should be used opportunistically -- when they work -- but definitely not idolized. That would be like a plumber who loves wrenches but doesn't like screwdrivers, and who tries to do everything with a wrench, including driving screws... That would be absurd: a good plumber uses several tools, when and as necessary, and doesn't ask himself what type of plumbing requires only one tool...

      Likewise, a good evaluator needs to know how to use a toolbox, with several tools in it, not just a wrench. 

      I agree with Vincente that qualitative work must always PRECEDE a quantitative effort. Before measuring something, you need to know why and how to measure it, and for that you need a QUALITATIVE understanding of the object of measurement. One of the most common mistakes made by "randomistas" is precisely that they spend a lot of time and money on surveys that are too long and complex, because they don't know what's important to measure. So they try to measure everything with endless questionnaires, and regularly fail.

      [Translated from French]

  • In their thought-provoking, data-packed keynote address to the 14th EES conference in Copenhagen last month, Peter Dahler-Larsen and Estelle Raimondo asked participants to recognize that sometimes evaluation is more of a problem than a solution. Taking stock of the growth of evaluation as a practice and as a discipline, they argued for a better balance between the benefits and costs of evaluation systems.

    What happens when evaluators turn their gaze onto themselves? Sometimes this may lead to navel-gazing and self-congratulation, but this is not what Peter Dahler-Larsen[1] and Estelle Raimondo[2] had in store for participants of the

    • Thank you Seda for highlighting the TAPE tool. I had heard about it from the SDG 2 review we did in 2019/20, which you kindly referenced.

      The TAPE guidelines provide very good sample questionnaires in an annex, which could be adapted locally and used by evaluators (and others) to build their own tool or questionnaire. The questions included there not only help measure agroecology but also help define it, by making a number of key variables explicit.

      The TAPE guidelines thus help address Laurent's remark on the need to define what success looks like in the transition to agroecology. I think this is an important issue.

      There has been very little progress on the transition to more sustainable agriculture, and one of the reasons may be that we don't necessarily agree on what success looks like. While civil society and farmer organizations have produced interesting experiences with agroecology since the 1980s, it has so far failed to convince decision makers in ministries of agriculture -- except in a handful of countries such as Senegal, thanks to the relentless efforts of ENDA Pronat, its secretary Mariam Sow and many others.

      Agroecology is even perceived as ideological or militant by certain governments, due to its historical roots as an alternative to the Green Revolution. So defining the approach more objectively would help firmly anchor it in science, and TAPE can contribute there as well.

      Evidently, what success looks like will depend on the agro-ecological context. It would make no sense to apply exactly the same criteria all over the globe; it would also contradict a basic principle of agroecology, which is that it is supposed to be bottom-up.

      So it seems to me that the most precise way to define an agroecological product or system is to do so locally, based on minimum standards agreed with local food producers, traders and consumer organizations. This is, for instance, what Nicaragua has done with its Law for the Promotion of Agroecological and Organic Production (2011), followed by Mandatory Technical Standards approved and passed in 2013 to characterize, regulate and certify agroecological production units. Many countries have done the same, in an effort to promote agroecology through consumer education and food labelling.

    • Dear Umi,

      Many thanks for sharing this excellent paper by Carol Weiss [earlier contribution here]. Old but gold. I just finished it and I want to carve some of its sentences into the concrete of my office walls.

      Weiss brings to light an impressive series of assumptions baked into evaluation practice. Not all of them are always assumed true, but I think she is right that they tend to “go without saying”, i.e. to be silently and even unconsciously accepted most of the time. Here is a list of such assumptions, based on her piece:

      1. The selection of which programs or policies get to be evaluated and which do not is done fairly – i.e. there’s no hidden agenda in the evaluation plan and no programs are protected from evaluation.
      2. The program to be evaluated had reasonable, desirable and achievable objectives, so that it can be assessed based on these objectives.
      3. The explicit program objectives can be trusted as true; there’s no hidden agenda, they reflect the real objectives of the intervention.
      4. The evaluated program is a coherent set of activities, reasonably stable over time and independent from other similar programs or policies, so that it makes sense to focus on it in an evaluation – it is a valid unit of analysis.
      5. Program stakeholders, recipients and evaluators all agree about what is good and desirable; any difference in values can be reconciled, so that the discussion is generally limited to means to get there.
      6. Program outcomes are important to program staff and to decision makers, who can be expected to heed the evidence collected by the evaluation in order to improve outcomes.
      7. The questions in the TORs are the important ones and reflect the preoccupation of program recipients, not just of program implementers.
      8. The evaluation team as composed can achieve a fair degree of objectivity (neutrality-impartiality-independence…) in its analysis.
      9. In most programs, what is needed to improve outcomes is incremental change (renewed efforts, more resources) as opposed to scrapping the program altogether or radically changing its approach.

      The last assumption is based on the fact that most recommendations emanating from evaluations are about minor tweaks in program implementation. Weiss relates this to the type of messaging that can be accepted by evaluation commissioners.

      In practice, all of these assumptions are problematic, at least occasionally, and Weiss does a great job of showing how some of them are often not borne out by the facts. For instance, on assumption 1, she shows that new programs are typically subjected to more evaluative pressure than old, well-established ones.

      So thank you again for a very insightful paper. Evaluation is indeed political in nature and evaluators can only benefit from understanding this.

      Olivier

    • I agree [with Steven Lam below]. It is still important to try and strive for neutrality, independence and impartiality (taking these concepts as roughly synonymous) even if we know that in practice these "ideals" may not be achieved 100%. It is still important to try and control biases, still important to consult broadly, etc., even if we know that perfect neutrality is humanly impossible to attain. And the reason it is important is indeed linked to getting a more convincing, more credible and useful outcome. A biased evaluation often remains unused, and rightly so.
       

    • I agree as well. There is no such thing as a perfectly neutral human being or methodology, and if that is what we aim for, we set ourselves up to fail.

      The most we can do is try and be aware of our own biases, beliefs and presuppositions, be aware of the limitations and biases involved in this vs. that method, and try and manage these limitations and biases.

      So instead of "unbiased information will help make the necessary corrections", I would say that for an evaluation to be credible (and hence useful), all stakeholders need to trust that the evaluators and the process are ***reasonably*** neutral, i.e. not too strongly influenced by one particular stakeholder; that no stakeholder is marginalized or silenced during the process; and that the evaluators are not ideologically wedded to a particular view but rather keep "an open mind" towards diverse views and interpretations. This, to my mind, comes closer to an achievable standard for evaluation independence than perfect objectivity.

      Thanks,

      Olivier

  • SDG evaluations: How to eat an elephant

    Blog

    I thought I might share some of the lessons we learned from our Evaluation of FAO’s contribution to SDG 2 – Zero Hunger.[1]

    I was recently discussing the challenges of evaluating SDG support with Ian Goldman from CLEAR and Dirk Troskie from the Western Cape Government Department of Agriculture, South Africa. They seemed somewhat surprised that we had embarked on such an endeavour, as the causal links between the 2030 Agenda and action at country level are tenuous and hard to pinpoint.

    Nations assess their progress against SDG targets through Voluntary National Reviews presented to the High-level Political

    • Hi Silvia and all,

      I agree that evaluators are not the only ones trying to find solutions, and that programme managers and decision makers should not be let off the hook, but I do think that evaluators need to propose reasonable solutions to the problems they raise.

      Otherwise I don’t see their value added, nor what makes evaluation different from research. Also, an evaluation that shied away from proposing solutions would be, in my opinion, a rather facile and negative exercise: it’s not so hard to spot issues and problems, anyone can do that; the hard part is to propose something better, in a constructive manner. Forcing oneself to come up with reasonable alternatives is often an exercise in humility, in that it forces one to realize that “critique is easy, but art is difficult”.

      All the best,

      Olivier

    • Hi everyone!

      I too have been following the thread with much interest. I cannot agree more with Lal about the need for clarity, brevity and freedom from jargon. Until you are able to explain, in a few simple words, the project you have evaluated to someone who knows nothing about it (like your grandmother or your uncle), you haven’t evaluated it yet. You’ve just collected data, that’s all. You have yet to digest this data, to see what it means, to synthesize it into a crisp, usable diagnostic of problems and possible solutions. Being able to summarize an evaluation into a clear and convincing “elevator pitch” is key to utility. It is also important for the evaluator to be able to hammer this clear message again and again, consistently, across different audiences.

      Cheers,

      Olivier

  • The theme of this year's report is "Transforming food systems for affordable healthy diets". The SOFI 2020 report examines the cost of healthy diets around the world, by region and in different development contexts. Food quality is an important factor in food security.

    The report uses a number of indicators, and I would like to take this opportunity to talk about the political and cultural dimensions of development indicators, by briefly analyzing two indicators related to SDG target 2.1, which aims to eradicate hunger; these indicators (among others) are used in the SOFI report.

    The idea

    • Thanks to Christine for launching this interesting topic. It has been a worry of mine for some time. I work in the FAO evaluation office and lead country programme evaluations in African countries. From this work, it seems to me that the Agriculture Ministry is often seen as one of the most bureaucratic and least effective in government, the one least likely to make good use of its funding. I venture that the lack of funding for agriculture may be linked to this perception. Consequently, an organization such as mine (FAO) should crank up its assistance to Agriculture Ministries, not to use their programme delivery systems without question but rather to ***reform*** them and make them less bureaucratic and more efficient, and therefore more attractive to national and international donors.

      This is certainly the picture we got in Uganda, where half of all national treasury funding for agriculture goes to Operation Wealth Creation, implemented by the Prime Minister's Office and the Army, not by MAAIF (the agriculture ministry). When we asked why, the response we got was that MAAIF needed to get its act together rather than operate as an unstructured collection of departments competing for resources... The same picture emerges in Ethiopia, where the Government created the Agricultural Transformation Agency in 2010 precisely because it did not trust the Ministry of Agriculture to change, reform and deliver. ATA is getting massive funding from donors, far more than the Ministry of Agriculture, and the two institutions see one another as competitors.

      In other words, there may be more funding for agriculture than meets the eye. Some of it doesn't flow through the 'regular' channels, because these channels are seen as problematic by donors and governments. I think this diagnosis applies to FAO itself. There is currently a debate among donors about creating a global fund for agriculture on the model of the GFATM, and the rationale for it is that the current delivery channels cannot absorb more funding and make good use of it.

      I don’t know if this resonates with others' experience, and welcome both rejoinders and rebuttals!

      Olivier