Silva Ferretti

Freelance consultant
Italy

Silva Ferretti is a freelance consultant with extensive international experience in both development and humanitarian work. She has worked with diverse organizations, committees, networks and consortia (e.g. Agire, ActionAid, CDAC, DEC, ECB project, Handicap International, HAP, Plan International, Save the Children, SPHERE, Unicef, WorldVision, among many others).

Her work focuses mainly on the quality of programs and on improving their accountability and responsiveness to the needs, capacities and aspirations of affected populations.

Her work has included impact evaluations and documentation of programs; the development of toolkits, methodologies, standards, frameworks and guidelines; coaching, training and facilitation; and field research and assessments.

Within all her work, Silva emphasizes participatory approaches and learning. She has a solid academic background and has also collaborated with academic and research institutions in short workshops on a broad range of topics (including innovations in impact evaluation, Disaster Risk Management, participatory methodologies, protection, and communication with affected populations).

She emphasizes innovation in her work, such as the use of visuals and videos in gathering and presenting information.

My contributions

    • Hello Yosi, thanks for this very important question! 

      I am collecting some tips on including environmental issues in evaluation. This is one of them. Hopefully, I will share more.

      See, a "thinking environment" is a mindset. 

      The moment we take a more ecosystemic perspective, we will immediately realize the limitations of our approaches. 

      But we also discover that simple things - such as an extra question - can go a long way. :-)

      [Image: cartoon on evaluation tips by Silva Ferretti]
    • Thanks Harriet for sharing all these tools. Really useful!

      However...  a friendly warning. 
      Using visuals is not just about tools. Visuals are an attitude; they are languages with rules and challenges.
      Just as having access to "Word" (and other word processors) does not guarantee you can write effectively, using visual tools does not ensure good visual communication.

      Unfortunately, in our world, visuals are just an add-on. 
      Writing is the default; then, we can add a cute visual.
      And in many cases, such visuals are bad, possibly harmful.

      I remember pointing out to some colleagues that their visuals had challenges and that they could be misinterpreted.
      And they just shrugged their shoulders, not seeing the issue.
      "It is just a graph; why do you worry so much about petty details?"
      These colleagues would be anal about a wrong comma in their text, yet they shared visuals contradicting their messages without caring, without even seeing the point.

      So... by all means, try to become conversant with visuals.
      But take the time to learn the language, ask for feedback, and be humble.
      We do need more languages - beyond the written one - in evaluation.
      But there is a vicious cycle: because the written word is now predominant, experts and practitioners are predominantly "writers and readers" and might resist other languages.
      "Visuals are cute, but what matters is the written report." So it is predominantly writing-oriented people who get enrolled.
      This is a major issue: it blocks appropriation by people with different communication preferences and leads to the ineffective sharing of messages that would be better conveyed visually, theatrically, or in other languages.

      And if you think "nice, but if it is not written in words, it is not reliable, credible, acceptable..." - then you are part of the problem! :-)

      So... be inspired by the great tools and resources shared by Harriet (thanks!), and explore visuals. But do remember that they are not an add-on.

      They are a needed language to master but challenging to use well! :-)

       

    • Great point Ram… may I just suggest that mechanical evaluations serve mechanical compliance, and not accountability? (especially if we aspire to be accountable to the primary stakeholders… and to mindful donors)

    • Evaluations are not "written reports".

      Evaluations are processes to understand if, how, and to what extent the programme produces change (expected and unexpected).

      If you embrace this view, then communication is clearly at the core of it: to communicate purpose, to elicit ideas, and to formulate and share findings.

      Unfortunately, evaluators are most often conversant with written words and not with other forms of communication.

      This greatly limits engagement with stakeholders and the sharing of findings, as people might prefer other communication methods.

      In my experience, just about anything works better than reports: cartoons, graphs, infographics, theatre, music, multimedia, etc.

      (Yes, I have tried them all, and they were welcomed by all sorts of stakeholders, including donors.)

      Evaluators should not just think "report". They should think about the best combination of different ways of communicating.

      Illiterate people can perfectly well understand visuals - if the visuals are properly designed.

      Participatory toolboxes contain ideas for showing and discussing percentages through visual aids.

      They are definitely more likely to understand visuals than reports written in English...

      Of course, if we understand "visuals" only as Excel graphs, we miss a whole world of possibilities.

      And visuals cannot be improvised: just as there is a grammar for writing words, there is also a grammar and a style for producing visuals.

      Even looking at the specifics of data charts, there are whole books on data visualization, offering examples (and also highlighting potential sources of miscommunication). A simple visual can go a long way, but a good visual is not simple to produce.

      Definitely, let's go beyond the written word. But let's remember that this cannot be improvised.


    • Very important discussion. It is, however, constrained by a narrow understanding of evaluation: a conventional consultancy. Sticking to this format - i.e. accepting as a starting point that evaluation is mostly about putting some recommendations in a report - limits possibilities and innovation.

      We should reframe evaluation as a set of thinking processes and practices that allow programme stakeholders to gauge the merit, the achievements, and the learning from a programme. Consultants might have diverse roles within it (and might not even be necessary). The possibilities are endless. If evaluations are designed with users, use, and participation in mind, the entire approach to communication and involvement changes from the start.

      It is very unfortunate that we keep sticking with conventional, routine evaluations and never consider the opportunity cost of missing out on more interesting options. This message goes in the right direction, indicating the urgent need to shift from reporting to communication. But if we stick to conventional evaluation formats, we might make minor improvements but will always miss out on the potential of evaluations in the broader sense.

    • Thanks so much for this post. On top of what you say, to have significant data, each small farmer would need exact data about their production - considering crop variety, quality and current market prices. Getting this data, and the systems needed to collect it, is work in itself, requiring technical capacities, discipline, and tools. To do this properly, we would have to transform each small farmer or extension worker into a mini data collection and management officer - and even more would be needed (what about crop diseases, type of soil, the workforce in the family, and weather, just to mention a few?).

      The sad part of M&E now is how we impose the burden of (irrelevant) measures on beneficiaries, local actors, and small intermediaries - to a level we would not ask of ourselves. All this with hardly any practical impact on change. One day someone should denounce the opportunity cost and the distortion caused by asking for irrelevant metrics just because we need an indicator to put in the logframe.

      Also, we are confusing M&E with research. So we have M&E that is irrelevant for decision-making, and poor attempts at getting data and evidence, which would need other means, competencies and resources to be useful and credible.

       

  • Evaluators are interpreters. Conversations with real people, at the grassroots, happen in simple, everyday language. Evaluators translate them into the jargon used in development talks and reports (where people are “empowered”, “aware”, “mobilized”, “create platforms” or “demand their rights”), to make them more fit for analysis and sharing.

    When I started using videos in my work – capturing soundbites from these conversations – I discovered how development and humanitarian professionals, with little exposure to the grassroots (and too often used to the lingo), may be deceived by the simplicity of everyday language, and fail to see the point.

    For

    • If we accept that evaluation means "results, indicators" we might have killed the possibility of cultural appropriation from the start.

      "Evaluation" means different things for different people. Making it equate to  "documenting results and indicators" undermines many other alternatives.

      As in feminist evaluation (which is not only about "gender" but about rethinking approaches to make them intersectionally inclusive), we should question what an evaluation is for and what ways of seeing change it embraces. Beyond results there are processes, principles, worldviews. The moment you are discussing with local actors "what matters to you in looking at change?", you are already working to make the evaluation culturally appropriate. If it is just about "defining indicators", sorry, but this is a non-starter.

    • Hello...

      again, I am not really adding a practical lesson here, sorry...

      but I just found a reference to this recent paper by USAID, which might be of interest to the people following this thread.

      https://usaidlearninglab.org/resources/report-integrating-local-knowledge-development-practice

      USAID’s Agency Knowledge Management and Organizational Learning (KMOL) function in the Bureau for Policy, Planning and Learning, Office of Learning, Evaluation and Research (PPL/LER) facilitated conversations with development practitioners to learn how development organizations are integrating local knowledge into their programs. The report explores three aspects of this topic: Leveraging Best Practices, Addressing Challenges, and Achieving Best Outcomes.

    • I just accessed an interesting article / website, highlighting characteristics of a white supremacy culture.

      Evaluators do risk - willingly or unwillingly - embracing them.

      (and the sector really pushes us to do so).

      So... these are not lessons or experiences.

      But a useful checklist to break the issue down and harvest practices.


      The article is on:

      https://www.whitesupremacyculture.info/

      And I found it mentioned here:

      https://aidnography.blogspot.com/2022/09/development-ict4d-digital-communication-academia-link-review-455.html

       

    • My two cents.

      When a "theory of change" (as illustrated in the manual) looks like this - with arrows going in the same direction,  and with a linear outline (IF... THEN),  it is just a logframe in disguise.

      [Image: a linear, logframe-style theory of change]

      A proper "theory of change", fit for complex setups, will have arrows going in many diverse directions, interlinking ideas in ways that are hard to disentangle.

      It is messier, harder to navigate, but... hey! This is what reality most often looks like. (This is the "obesity map", in case you wonder.)

      [Image: the "obesity map" theory of change]

      There is a WORLD OF DIFFERENCE between the two.

      You cannot really approach them in the same way and with the same thinking.

      This should be the starting point of any analysis!

      Once you understand whether you are dealing with a linear or a complex theory of change, you then need to remember:

      • In some cases, linear thinking has a reason to be.
      • When addressing social change, it most often does not.

       

      I feel it is quite unfortunate that the "theory of change" idea - born to appreciate complexity - ended up being just a different way to picture logframe thinking.

      At least we should be able to distinguish between a logframe on steroids and something that appreciates complexity, and move on from there.

    • I really enjoyed reading the note and seeing how carefully it was written, taking into consideration all views.

      It is useful to see where the discussion is at. But the subject line, "closing remarks", is a bit off-putting. :-)

      As Malika says, it is more useful to keep the discussion open.

       

      There is an assumption that evaluations need to be impartial and neutral (and that the evaluator is a guardian of this),

      and a tendency to equate evaluations with research (even research cannot always be impartial!).

      The underlying understanding of evaluation is that of a product generated by an expert who selects the perfect sample and reaches scientific conclusions.

      Is this really what an evaluation should look like and be?

      Shouldn't an evaluation rather be an opportunity to apply evaluative thinking about a programme?

      An opportunity where different people, with different worldviews, get to better understand where a programme is at, what possibilities lie ahead, and what can be learned?

      I really feel strongly about this: we are presenting ALL evaluations as if they need to be "scientific products", produced by experts capable of being impartial and wise.

      Some evaluations (or better, some research) might well have this focus.

      But assuming that this should always be the goal for evaluation in general is very problematic.

      Participatory evaluations, for example, are not at all about creating one impartial view.

      They are about bringing the perspectives of diverse people together to make sense of a situation.

      They might not even arrive at shared / agreed findings, yet they can be incredibly powerful in injecting much-needed critical thinking about action.

      The evaluator is not always the scientific expert... s/he can be the facilitator.

      Certainly s/he then needs to think about inclusion and representation, and be very aware of the relationships, positions, and power of stakeholders.

      But inclusion and representation are fundamentally different concepts from neutrality / impartiality / independence (which should also not be mixed up in the same bag).

      It is about being aware (as much as possible) and honest about the dynamics at play and the choices made...

      rather than pretending that we can achieve objectivity.

      Many of my evaluations, for example, are not neutral BY CHOICE.

      I strive to give more voice to the people who are usually less represented.

      I talk to more women, to more outcasts, to more people with special challenges.

      Yet I truly think that this open choice to be biased is much more useful than an attempt at neutrality and impartiality.

      With the limited time and resources of an evaluation, which voices are worth listening to, which conversations are worth having?

      Being aware and open about our choices is more powerful and honest than pretending we can be unbiased. :-) (And if the point is to have scientific evidence, then let's embark on research... which is something else.)

      Thanks again for sharing interesting points so far, and for facilitating the discussion.

      I hope that this interesting discussion can continue.

       

      Best

      Silva


    • Accountability is much more than reporting on a work plan (which is, unfortunately, how it is often portrayed).

      Accountability means that we make explicit or implicit promises to other people and groups (in the case of development / humanitarian projects, to MANY other people with different perspectives and priorities). We are responsible for accounting for these promises. That means making the promises happen - when possible and useful... but also changing, improving, and evolving our promises as needed, *always respecting the bond underpinning these promises*. What matters for accountability is the *relation*.

      Things and conditions can change. But people are accountable to each other when they keep each other informed of changes, and when they set up strong processes for negotiating the way forward to keep the promise alive and relevant. And, possibly, to improve it.

      If you have this view of accountability, learning is clearly part of it.

      Learning is what improves the promise, and what improves the trust needed to negotiate promises and conditions of accountability.

      Of course, we always need to remember that this happens in messy situations, and that we are often accountable, as mentioned, to diverse people with different interests. We might be accountable to many people. But which accountability really matters to us? The interests of the donors are not always, for example, the interests of the marginalized people we are supposed to serve... or the interests of future generations...

      When we stick to accountability as "sticking to results" we are missing the point.

      And often, rather than accountability, we have bureaucratic control.

      To get back to the question that started the debate, accountability itself is not a neutral word.

      Who we choose to be accountable to has deep consequences for how we act and look at change.

      It is really important to be aware of this, rather than thinking that a larger sample will solve the issue.

      And even the humanitarian discourse is becoming aware of this and reframing the understanding of neutrality...

       

    • Is it really useful to pretend that we can be neutral and impartial?

      Or is it more useful to accept that we are all inherently biased (and that our approaches are too)... and that it is then better to be open and aware about it, and about the limitations inherent in all our approaches?

      Thinking that we can have "perfect" information in contexts that are complex and messy is probably just wishful thinking... :-)

       

    • Isha, you mention that "We, as evaluators, are obliged to execute the TORs duly"

      My take is that, as evaluators, we should also question the TORs and negotiate them!

      One of the main contributions we can offer is to propose alternative ways to look at change, beyond the "cut and paste" TORs that are offered to us.

      Some organizations and evaluation managers are actually quite open to this.

      Others are not... and, if that is the case, well... their problem.

      I certainly would not want to work on an evaluation that I feel is missing the point from the start. :-)

       

      See, as cyclo-activists say about car drivers... "you are not IN traffic, you ARE traffic".

      As consultants, we do have a duty to resist TORs which we know are constraining learning and quality of work.

       

      Another point... I was surprised by how the question was presented to us.

      The question says "Major agencies and the UN in particular are considering how to integrate environmental and social impacts in their evaluations"

      "Are considering"? Now... environmental concerns are (unfortunately) relatively new... but social ones, are they really?

      We have had all sorts of cross-cutting themes for ages (gender, disability and the like...).

      I am really scared by how the "triple nexus" (a glorified take on the relief / development continuum, discussed for the past two decades) and "social impacts" are presented as if they were a new thing, requiring us to start with a blank slate.

      It would be healthier to highlight that these concerns are not at all new; otherwise we just risk going around in circles.

      Best to all

      Silva

    • Hello

      I practice humility by asking myself a different question:

      If people who have been working on an issue for a long time, with a much better understanding of the context, did not find a good solution... how could I, an external evaluator, do so?

      As an evaluator I certainly cannot find solutions, but I can - with a facilitative rather than an expert approach:

      * help to find "missing pieces" of the puzzle, by bringing together, in the same place, the views and ideas of different actors.

      * help articulate and systematize reality better, so that people have a better map on which to find solutions

      * capture ideas and lessons that too often remain implicit and that - if shared - can help change the way of working

      * share some ideas about things I have seen working elsewhere (but, watch out, I would always do this in the evidence-gathering phase, as a way to get feedback on these "conversation starters"; people often quickly find a lot of things to be checked and improved)

      * create spaces, within the process, for people to be exposed to and react to evidence as it is shared

      * identify what seem to be the priority concerns to address - linking them to challenges, opportunities, possibilities surfaced.

      This is not research. And these are not solutions.

      There is a whole world of things between "problems" and "solutions"... it includes learnings, possibilities, systematized evidence.

      And I see people really interested and willing to engage with these... much more than when I used to preach simple solutions to them. :-)

       

      Also, an evaluation does not always highlight "problems". There are often so many solutions that are just left hidden.

      And evaluations also have a role in finding these and in helping to value the work done and the many challenges solved, which should never just be taken for granted.

    • Clarity... of course, absolutely! Elevator pitch... yes and no.

       

      An elevator pitch is very useful as an entry point.

      But there should then be a recognition that the purpose of a good evaluation is to unveil the complexity of reality (without being complicated).

      It can give new elements and ideas, but not the solution.

      The elevator pitch is the entry point: it highlights the main areas to be addressed, and it can certainly outline some pressure points.

      But I am not so sure that we can always offer a crisp idea of possible solutions.

      As they say, "for each problem there is always a simple solution. And it is wrong".

       

      Solutions are to be found - as Bob so well said - beyond the evaluation.

      (or within it, only if it is a participatory one, where key local actors truly engage in formulating findings and truly own the process)

       

      So the tools and messages we need are not just elevator pitches, but those that help to convey and navigate complexity in simpler, actionable ways.

       

      Being aware that it is not for the evaluator to hammer home messages, but for the project stakeholders to own them.

    • Great take-away...

      One point to stress.

      Going beyond the report does not mean "make a visual report".

      A visual report is nicer, but still a report.

      "Going beyond the report" means to consider the evaluation as a process that does not end just one product - being visual or not.

      Communication of findings and sharing of ideas need to happen throughout, in many forms.

      A good evaluation does not need to be a "report".

      I advocate for strategies and options for sharing ideas and findings with different audiences, throughout.

      Which might NOT include a report. Report writing is extremely time-consuming and takes up a large share of evaluation time.

      Is it the best investment? Is it needed? We are so used to thinking that an evaluation is a report that we do not question it.

      Also... besides real-time evaluations, there is "real-time information sharing".

      This is something too little explored. Yet it can create big changes in the way evaluation happens.

      It is about sharing preliminary ideas and evidence, so that people involved in the evaluation can contribute to shaping findings.

      Again: we are so used to thinking that we share only the "end products" that the possibilities of real-time information sharing are not really understood...

      Thanks again for the great summary; it really helps to step up the discussion and to generate new ideas.

      (and, you know what? It is a good example of "real-time information sharing" of new ideas! :-)

    • Oh, well done!

      Great to see that there is some recognition of the value of pictures and visuals.
      The materials you shared are really helpful and inspirational, thanks.

      Now... as someone who thinks visually and in pictures, I have consistently tried to sum up findings in a more visual way.
      Graphics, drawings, multimedia are seen as "nice" and cool. Everyone likes them and feels they are useful.

      But, guess what? I then have to produce a normal report, because this is what donors want.
      So, visuals are to be done as an aside. Of course, for free.

      Time for reporting is usually already insufficient in a consultancy, so if you want to prove that visuals or other media are better, you basically need to work for free.
      Because, at the end of the day, you will still have to write the proper report.

      The bottom line?

      As long as evaluations are mainly perceived as bureaucratic requirements and reports... we will miss out on fantastic possibilities to learn better.
      And also on involving people who might have fantastic learning, analytical, and communication skills, but who are not report writers.
      It is so unfortunate that we assume that "report writing" alone is the best way to capture and convey evidence and insights...

    • Is there any chance that we could stop thinking that an evaluation is... a report?

      So many possibilities would be unlocked.

    • What strikes me is that we all discuss ToCs as if they were "a thing"....

      Talking about a "logframe" is easy: there is standard format to it. 

      It might be slightly adapted, but it is quite clear what it is, what it looks like, and how it works.

      The same is not true for ToCs.  What a ToC is can be vastly different.

      I feel we might all use the same word, but have something vastly different in mind...

      Best

      Silva

    • It depends on what the Theory of Change is, and how it has been generated and shared.

      If it remains just a big logframe, hidden in some proposal... it does not add much value.

      If it is co-generated and owned... possibly EMERGING from the process of change, then it is an added value.

      As an evaluator, I see that staff on the ground welcome discussions at the theory of change level when they help to systematize experience.

      But they might be clueless about, and confused by, ToCs presented as proposal annexes.

      So, if the Theory of Change is just bureaucracy, it is actually a complication.

      If it is a process of systematizing experience, owned by those involved in making change, it is super useful.

      Unfortunately, the latter is very rare.