As an evaluator, I often face situations in which results labelled as outputs in project documents really represent changes in the lives of beneficiaries rather than deliverables under project control. Evaluation handbooks typically use the latter definition of “output” and offer standard evaluation questions based on it; these questions then end up in the evaluation TORs.
I raised this issue with the Community: members shared similar challenges and provided examples of how they addressed various interpretations of terminology used to describe different levels of results.
Three important – and interlinked – themes emerged from the exchange:
1. use and quality of evaluation handbooks;
2. capacities of the project staff in charge of planning, monitoring and evaluation;
3. communication and terminology used to describe project results.
Evaluation handbooks are created to gather in one place all the necessary information on what those involved in an evaluation should do, and how, throughout the evaluation process. The handbooks are written by evaluation professionals who spend years building their evaluation capacity through study and practice. Yet we often hand these handbooks to people with very little evaluation capacity and still expect them to use them intelligently. So project people do what works best for them: they copy and paste from template TORs and are then reluctant to discuss any changes to the TORs with the evaluators they hire.
Instead of a handbook, it would be better to give project staff who have to commission an evaluation the opportunity to spend some time with an evaluator. Ideally, the project team should work for several days with an M&E professional at the planning stage, to ensure that the project collects meaningful monitoring data and is “evaluable” when the time for the evaluation comes.
The story shared by one participant is telling. He was “locked up for 3 days with the entire project team” to support the development of the monitoring system for their project. Initially, team members were “unable to clearly differentiate between the deliverable (the tarmac road) and the effects this deliverable could engender on the living and income conditions of its beneficiaries”. But “slowly, my intervention and assistance made it possible for the project staff to start differentiating between a deliverable and its effect”, he shared.
Working with project staff is an excellent opportunity to strengthen their capacities and their awareness of evaluation.
The importance of good communication also emerged. As evaluators, we need to invest in building our communication skills: communication with stakeholders during the evaluation process is crucial for evaluation utility and, ultimately, for our professional success – both individually and as a profession. The easiest step we can take is to avoid professional terminology as much as possible when talking to “outsiders”. Terminology facilitates discussion among people of the same trade but excludes non-professionals. Sure, it takes less effort to say “an output” than “a result that stems directly from the project activities and is under the full control of the project”, but the longer description makes more sense to non-evaluators, especially because in everyday language the word “output” does not have a very distinct meaning. The longer description also comes in handy when the outputs in the LogFrame of the project you are evaluating look more like changes in the lives of beneficiaries, and you may still have to call them outputs – because the project people have been calling them that for the last three or more years.
Building a shared understanding of the evaluation process and its aims is key to a fruitful evaluation, and dealing with terminology is a milestone in this direction.
This topic was raised by Natalia Kosheleva. Binod Chapagain, Hiswaty Hafid, Emile Houngbo, Hynda Krachni, Lal Manavado, Mustapha Malki, Paul Mendy, Isha Miranda, Bintou Nimaga, Reagan Ronald Ojok and Dowsen Sango contributed to the discussion.