Silva Ferretti

Freelance consultant
Italy


Silva Ferretti is a freelance consultant with extensive international experience in both development and humanitarian work. She has been working with diverse organizations, committees, networks and consortia (e.g. Agire, ActionAid, CDAC, DEC, ECB project, Handicap International, HAP, Plan International, Save the Children, SPHERE, Unicef, WorldVision amongst many others).

Her work is mainly focused on looking at the quality of programs and on improving their accountability and responsiveness to the needs, capacities and aspirations of the affected populations.

Her work has included impact evaluations and documentation of programs; the development of toolkits, methodologies, standards, frameworks and guidelines; coaching, training and facilitation; and field research and assessments.

Within all her work Silva emphasizes participatory approaches and learning. She has a solid academic background, and has also collaborated with academic and research institutions in short workshops on a broad range of topics (including innovations in impact evaluation, Disaster Risk Management, participatory methodologies, protection, and communication with affected populations).

She emphasizes innovation in her work, such as the use of visuals and videos in gathering and presenting information.

My contributions

    • My two cents.

      When a "theory of change" (as illustrated in the manual) looks like this - with arrows going in the same direction,  and with a linear outline (IF... THEN),  it is just a logframe in disguise.

      [image: a linear, logframe-style theory of change]

      A proper "theory of change", fit for complex setups, will have arrows going in many diverse directions, interlinking ideas in ways that are hard to disentangle.

      It is messier and harder to navigate, but... hey! This is what reality most often looks like. (This is the "obesity map", in case you were wondering.)

      [image: a complex theory of change - the "obesity map"]

      There is a WORLD OF DIFFERENCE between the two.

      You cannot really approach them in the same way and with the same thinking.

      This should be the starting point of any analysis!

      Once you understand whether you are dealing with a linear or a complex theory of change, you need to remember:

      • In some cases, linear thinking has a reason to be.
      • When addressing social change, it most often does not.

       

      I feel that it is quite unfortunate that the "theory of change" idea - born to appreciate complexity - ended up being just a different way to picture logframe thinking.

      At least we should be able to distinguish between a logframe on steroids and a genuine appreciation of complexity, and move on from there.

    • I really enjoyed reading the note and seeing how carefully it was written, taking into consideration all views.

      It is useful to see where the discussion is at. But the subject line, "closing remarks", is a bit off-putting.  :-)

      As Malika says, it is more useful to keep the discussion open.

       

      There is an assumption whereby evaluations need to be impartial and neutral (and that the evaluator is a guardian of this),

      and a tendency to equate evaluations with research (even research cannot always be impartial!).

      The underlying understanding of evaluation is: a product generated by an expert who selects the perfect sample and arrives at scientific conclusions.

      Is this really what an evaluation should look like and be?

      Shouldn't an evaluation rather be an opportunity to apply evaluative thinking about a programme?

      An opportunity where different people, with different worldviews, get to understand better where a programme is at, what possibilities are ahead, and what can be learned?

      I really feel strongly about this: we are presenting ALL evaluations as if they need to be "scientific products", originated by experts, capable of being impartial and wise.

      Some evaluations (or rather, some research) might well have this focus.

      But assuming that this should always be the goal for evaluation in general is very problematic.

      Participatory evaluations, for example, are not at all about creating one impartial view.

      They are about bringing the perspectives of diverse people together to make sense of a situation.

      They might not even arrive at shared / agreed findings, yet they can be incredibly powerful in injecting much-needed critical thinking about action.

      The evaluator is not always the scientific expert... s/he can be the facilitator.

      Certainly s/he then needs to think about inclusion, representation, and be very aware of the relationships, position, and power of stakeholders.

      But inclusion and representation are fundamentally different concepts from neutrality / impartiality / independence (which should themselves not be mixed in the same bag).

      It is about being aware (as much as possible) and honest about the dynamics at play and about the choices made...

      rather than pretending that we can achieve objectivity.

      Many of my evaluations, for example, are not neutral BY CHOICE.

      I strive to give more voice to the people who are usually less represented.

      I talk to more women, to more outcasts, to more people with special challenges.

      Yet I truly think that this open choice to be biased is much more useful than an attempt at neutrality and impartiality.

      With the limited time and resources of an evaluation, which voices are worth listening to, which conversations are worth having?

      Being aware and open about our choices is more powerful and honest than pretending we can be unbiased. :-) (And if the point is to have scientific evidence, then let's embark on research... which is something else.)

      Thanks again for sharing interesting points so far, and for facilitating the discussion.

      I hope that this interesting discussion can continue.

       

      Best

      Silva


    • Accountability is much more than reporting on a work plan (which is, unfortunately, how it is often portrayed).

      Accountability means that we make explicit or implicit promises to other people and groups (in the case of development / humanitarian projects, to MANY other people with different perspectives and priorities). We are responsible for accounting for these promises. That means: making the promises happen - when possible and useful... but also changing, improving, evolving our promises as needed, *always respecting the bond underpinning these promises*. What matters for accountability is the *relation*.

      Things, conditions can change. But people are accountable to each other when they keep each other informed of changes, and when they set up strong processes for negotiating the way forward for keeping the promise alive and relevant. And possibly, to improve it.

      If you have this view of accountability, learning is clearly part of it.

      Learning is what improves the promise, and what improves the trust needed to negotiate promises and conditions of accountability.

      Of course we always need to remember that this happens in messy situations, and we are often accountable, as mentioned, to diverse people with different interests. We might be accountable to many people. But which accountability really matters to us? The interests of the donors are not always, for example, the interests of the marginalized people we are supposed to serve... or the interests of future generations...

      When we reduce accountability to "sticking to results", we are missing the point.

      And often, rather than accountability, we have bureaucratic control.

      To get back to the question that started the debate, accountability itself is not a neutral word.

      Who we choose to be accountable to has deep consequences on how we act and look at change.

      It is really important to be aware of it, rather than thinking that a larger sample will solve the issue.

      And even the humanitarian discourse is becoming aware of this and reframing the understanding of neutrality...

       

    • Is it really useful to pretend that we can be neutral and impartial?

      Or is it more useful to accept that we are all inherently biased (and that our approaches are too)... and that it is then better to be open and aware about it, and about the limitations inherent in all our approaches?

      Thinking that we can have "perfect" information, in contexts that are complex and messy, is probably just wishful thinking... :-)

       

    • Isha, you mention that "We, as evaluators, are obliged to execute the TORs duly"

      My take is that, as evaluators, we should also question the TORs and negotiate them!

      One of the main contributions we can offer is to propose alternative ways to look at change, beyond the "cut and paste" TORs that are offered to us.

      Some organizations and evaluation managers are actually quite open to this.

      Others are not... and, if that is the case, well... their problem.

      I certainly would not want to work on an evaluation that I feel is missing the point from the start. :-)

       

      See, as cyclo-activists say about car drivers... "you are not IN traffic, you ARE traffic".

      As consultants, we do have a duty to resist TORs which we know are constraining learning and quality of work.

       

      Another point... I was surprised by how the question was presented to us.

      The question says "Major agencies and the UN in particular are considering how to integrate environmental and social impacts in their evaluations"

      "Are considering"? Now... environmental concerns are (unfortunately) relatively new... but social ones, are they really?

      We have had all sorts of cross-cutting themes for ages (gender, disability and the like...).

      I am really scared by how the "triple nexus" (a glorified take on the relief / development continuum, discussed for the past two decades) and "social impacts" are presented as if they were new things, requiring a start from a blank slate.

      It would be healthier to highlight that these concerns are not at all new; otherwise we just risk going around in circles.

      Best to all

      Silva

    • Hello

      I practice humility by asking myself a different question:

      If people who have been working on an issue for a long time, with a much better understanding of the context, did not find a good solution... how could I, an external evaluator, do so?

      As an evaluator I certainly cannot find solutions, but I can - with a facilitative rather than an expert approach:

      * help to find "missing pieces" of the puzzle, by bringing together, in the same place, the views and ideas of different actors.

      * help to articulate and systematize reality better, so that people can have a better map on which to find solutions

      * capture ideas and lessons that too often are implicit and that - if shared - can help change the way of working

      * share some ideas about things that I have seen working elsewhere (but, watch out, I would always do this in the evidence-gathering phase, as a way to get feedback on these "conversation starters"... and people often quickly find a lot of things to be checked and improved)

      * create spaces, within the process, for people to be exposed and react to evidence as it is shared

      * identify what seem to be the priority concerns to address - linking them to the challenges, opportunities and possibilities surfaced.

      This is not research. And these are not solutions.

      There is a whole world of things between "problems" and "solutions"... it includes learnings, possibilities, systematized evidence.

      And I see people really interested and willing to engage with these... much more than when I used to preach simple solutions to them. :-)

       

      Also, an evaluation does not always highlight "problems". There are often so many solutions that are just left hidden.

      And evaluations also have a role in finding these, and in helping to value the work done and the many challenges solved, which should never just be taken for granted.

    • Clarity... of course, absolutely! Elevator pitch... yes and no.

       

      An elevator pitch is very useful as an entry point.

      But there should then be a recognition that the purpose of a good evaluation is to unveil the complexity of reality (without being complicated).

      It can give new elements and ideas, but not the solution.

      The elevator pitch is the entry point, it highlights main areas to be addressed, and it can certainly outline some pressure points.

      But I am not so sure that we can always offer a crisp idea of possible solutions.

      As they say, "for each problem there is always a simple solution. And it is wrong".

       

      Solutions are to be found - as Bob so well said - beyond the evaluation.

      (or within it, only if it is a participatory one, where key local actors truly engage in formulating findings and truly own the process)

       

      So the tools and messages we need are not just elevator pitches, but those that help to convey and navigate complexity in simpler, actionable ways.

       

      We should be aware that it is not for the evaluator to hammer home messages, but for the project stakeholders to own them.

    • Great take-away...

      One point to stress.

      Going beyond the report does not mean "make a visual report".

      A visual report is nicer, but still a report.

      "Going beyond the report" means to consider the evaluation as a process that does not end just one product - being visual or not.

      Communication of findings and sharing of ideas need to happen throughout, in many forms.

      A good evaluation does not need to be a "report".

      I advocate for strategies, options for sharing ideas and findings with different audiences, throughout.

      Which might NOT include a report. Report writing is extremely time-consuming, and takes up a big share of evaluation time.

      Is it the best investment? Is it needed? We are so used to thinking that an evaluation is a report that we do not question it.

      Also... besides real-time evaluations there is "real-time information sharing".

      This is something too little explored. Yet it can create big changes in the way evaluation happens.

      It is about sharing preliminary ideas and evidence, so that people involved in the evaluation can contribute to shaping the findings.

      Again: we are so used to thinking that we only share the "end products" that the possibilities of real-time information sharing are not really understood...

      Thanks again for the great summary; it really helps to step up the discussion and to generate new ideas.

      (And, you know what? It is a good example of "real-time information sharing" of new ideas! :-) )

    • Oh, well done!

      Great to see that there is some recognition of the value of pictures and visuals.
      The materials you shared are really helpful and inspirational, thanks.

      Now... as someone who thinks visually and in pictures, I have consistently tried to sum up findings in a more visual way.
      Graphics, drawings and multimedia are seen as "nice" and cool. Everyone likes them and feels they are useful.

      But, guess what? I then have to produce a normal report, because this is what donors want.
      So, visuals are to be done as an aside. Of course, for free.

      Time for reporting is usually already insufficient in a consultancy, so if you want to prove that visuals or other media are better, you basically need to work for free.
      Because, at the end of the day, you will still have to write the proper report.

      The bottom line?

      As long as evaluations are mainly perceived as bureaucratic requirements and reports... we will miss out on fantastic possibilities to learn better.
      And also to involve people who might have fantastic learning, analytical and communication skills, but who are not report writers.
      It is so unfortunate that we assume that "report writing" alone is the best way to capture and convey evidence and insights...

    • Is there any chance that we could stop thinking that an evaluation is... a report?

      So many possibilities would be unlocked.

    • What strikes me is that we all discuss ToCs as if they were "a thing"....

      Talking about a "logframe" is easy: there is standard format to it. 

      It might be slightly adapted, but it is quite clear what it is, how it looks like, how it works.

      The same is not true for ToCs. What a ToC is can vary vastly.

      I feel we might all use the same word, but have something vastly different in mind...

      Best

      Silva

    • It depends on what the Theory of Change is, and how it has been generated and shared.

      If it remains just a big logframe, hidden in some proposal... it does not add much value.

      If it is co-generated and owned... possibly EMERGING from the process of change, then it is an added value.

      As an evaluator, I see that staff on the ground welcome discussions at the theory of change level when they help to systematize experience.

      But they might be clueless and confused by TOCs as proposal annexes.

      So, if the Theory of Change is just bureaucracy, it is actually a complication.

      If it is a process of systematizing experience, owned by those involved in making change, it is super useful.

      Unfortunately, the latter is very rare.