RE: Is this really an output? Addressing terminology differences between evaluators and project managers | Eval Forward

Dear All,

First, let me thank EvalForward for providing a platform for this discussion, and all colleagues who contributed.

I see three important and interlinked themes emerging from our conversation:

  1. capacity of the project staff in charge of planning, monitoring and evaluation;
  2. quality (rigidity) of evaluation handbooks;
  3. terminology used to describe project results.

I think that the story shared by Lal Manavado, about a very expensive bridge built in a distant area to link a moderately inhabited island with the mainland, has a lot in common with the examples of evaluation handbook use presented by other colleagues. In the case of the bridge, the intent was that islanders working on the mainland would no longer have to take the ferry and would instead use the bridge to commute to work. But people used the bridge to move their goods and chattels and settle on the mainland closer to their places of work, while keeping their old homes as summer houses. The intent behind evaluation handbooks is to give project staff who manage evaluations all the information, in one place, about what they should be doing in the course of the evaluation process and how. The handbooks are written by evaluation professionals who have spent years building their evaluation capacity through study and practice. But then we give these handbooks to people who often have very little evaluation capacity and still expect them to use them intelligently. So project staff do what works best for them: they copy and paste from example ToRs and then refuse to discuss any possible changes to the ToRs with the evaluators they hire.

Instead of a handbook, it would be better to give the project staff who have to commission an evaluation an opportunity to spend several days with an evaluator. Ideally, the project team should also be able to work for several days with an M&E professional at the planning stage, to ensure that the project collects meaningful monitoring data and is “evaluable” when the time for evaluation comes.

This lesson emerges from the story shared by Mustapha Malki, which is quite telling. He was “locked up for 3 days with the entire project team” to support the development of a monitoring system for their project. Initially, team members were “unable to clearly differentiate between the deliverable (the tarmac road) and the effects this deliverable could engender on its beneficiaries' living and income conditions”. But “slowly, my intervention and assistance made it possible for the project staff to start differentiating between a deliverable and its effect”, Mustapha shares.

I’m also grateful to Mustapha Malki for his remarks about the importance of communication. Partnering with communication specialists is a great idea, but it is not always feasible. Yet our ability to communicate with stakeholders in the course of the evaluation process is crucial for the evaluation’s utility and, eventually, for our professional success as evaluators – both individually and as a profession.

I strongly believe that evaluators need to invest in building their communication skills. The easiest thing we can do is to avoid professional terminology as much as possible when talking to “outsiders”. Terminology facilitates discussion among people of the same trade but excludes non-professionals. Sure, it takes less effort to say “an output” than “a result that stems directly from the project activities and is under the full control of the project”, but the longer description makes more sense to non-evaluators, especially because in everyday language the word “output” does not have a very distinct meaning. In addition, the longer description comes in handy when the outputs in the LogFrame of the project you are evaluating look more like changes in beneficiaries’ lives and you still have to call them outputs – because the project staff have been calling them that for the last three or more years.