In evaluation, there is no substitute for good listening skills

We spoke to WFP evaluation team leader Nick Maunder about leadership challenges, engaging affected populations in evaluations, and the personal rewards of working in the humanitarian-development sector.

What is the biggest challenge you have faced as a team leader and how did you manage it?

Being a team leader brings its own specific challenges.

Firstly, there is the internal challenge of team management. Over the last decade, evaluations seem to have become far more complex and involve larger teams of evaluators. The role of team leader has evolved: it is now much more about drawing on the specialist skills of team members than simply getting on with the evaluation yourself. At the same time, there is a risk of delivering a "patchwork" of individual mini-reports, which needs to be mitigated. A clear, well-understood and shared evaluation methodology, complemented by regular opportunities to share and discuss emerging findings, is essential to keep team members on the same page.

Secondly, as team leader you are the external face of the evaluation. One of your key roles is getting WFP staff on board with the evaluation process from the start. This is essential both for the successful implementation of the evaluation and for the uptake of its recommendations. Key to this relationship is establishing the credibility and professionalism of the evaluation team from the outset. At the same time, it really helps if WFP staff understand that the evaluation process is something useful to them, rather than an unwanted, externally imposed, accountability-driven exercise.

What do you consider to be the most practical skills an evaluator should possess in carrying out their work?

In recent years there has been a big push towards increasing the rigour of the evaluation process, and this has really challenged evaluators to develop stronger analytical skills, with a growing focus on quantitative methods. This has been very useful and can lead to novel insights that would not otherwise have emerged. However, a really important part of our work is often documenting the evolution and history of a programme. An essential part of the learning is understanding the decision-making process, and here there is no substitute for qualitative data gathered through key informant interviews. So, if I had to pick one key evaluation skill, it would be this: there is really no substitute for good listening skills.

In terms of “closing the learning loop” generally, what do you consider to be the best way to involve affected populations and how do you feed results/findings back to them?

This is undoubtedly one of the weakest parts of our evaluation practice. Firstly, in terms of gathering data from affected populations, I think we need to be pragmatic about what is possible within the scope of an evaluation. We would certainly want to hear from affected populations in any evaluation, but the reality is that the opportunities to do so may be severely constrained by time and budget. We are trying to improve this. For example, in WFP’s recent Nigeria L3 evaluation we worked with a team of local consultants to conduct a significant number of focus group discussions. However, I think it’s also important to recognise that evaluations typically rely on secondary data sources, and WFP has already invested a lot in feedback mechanisms as part of its regular monitoring processes, which we are able to draw on.

Even more problematic is the question of feeding evaluation outcomes back to affected populations, and to be honest I don’t think I’ve ever seen this done in a meaningful way. I am also not convinced that evaluations are framed in a way that is particularly meaningful to affected populations, so they may have limited interest in the findings. If we really want their participation, then I think we need to trial innovative, participatory evaluations that involve affected populations from the start in framing the evaluation questions.

In your opinion, how could evidence generated by evaluations be better used by WFP?

I think you need to start by giving credit to the frameworks that WFP has set up to ensure accountability for the use of evaluation findings. However, the large-scale decentralisation of the evaluation process is going to throw up new challenges around accountability for acting on evaluation recommendations.

Personally, I find that it’s absolutely essential to involve end users in framing the evaluation recommendations and, typically, I like to workshop the emerging recommendations with the responsible WFP staff. I think this helps to build ownership and also helps to frame far more meaningful and actionable recommendations. Simply expecting people to react to a page of written recommendations is unlikely to be highly impactful.

In my experience, country offices tend to be more responsive to evaluation recommendations, but I think there is scope for further improvement in the uptake of recommendations at the corporate level. Country-based evaluations repeatedly identify similar issues which require action at a corporate level, but struggle to gain traction. While work has been done on drawing together conclusions across evaluations, I think more could still be done to capitalise on the extensive WFP evaluation literature which now exists.

Having been involved in numerous evaluations across the globe, what aspects of your work do you find most rewarding? What drives you on?

At the risk of sounding trite, I do believe that providing humanitarian assistance is as essential as ever, or even more so. I think we have a shared moral obligation to help those in need, rooted in respect for the principle of humanity and as an expression of the world we want to live in.

To me, the evaluation process is about learning from how we have tried to help those around us in the past. What we got right and could be replicated, and what we could have done better and should change in the future. What really motivates me is when I’m able to identify tangible ways in which we can improve the lives of the most vulnerable and needy – and see those ideas put into action. I guess what it all comes down to is the handful of recommendations at the end of the report. If some of those are accepted on the basis of their merits, then you can feel that you’ve contributed.


This interview was originally published in the WFP Office of Evaluation newsletter.