Natalia Kosheleva

Evaluation Consultant
Process Consulting Company
Russian Federation

More about me

I have been doing evaluation since 1996 and have conducted evaluations across the CIS and Eastern Europe region.

My contributions

    • Dear Mustapha, thanks for raising this important topic.

      In my opinion, monitoring and evaluation are complementary, and both are necessary for assessing and correcting the performance of development interventions. They may seem mutually exclusive because in most cases monitoring is fully embedded in intervention management, with monitoring specialists being part of the intervention team, while evaluation is often positioned as external and independent; the evaluation policies adopted by many major players in the development field include serious safeguards to ensure the independence of the evaluation team.

      To my knowledge, in many less developed countries there is a growing number of M&E departments in national executive agencies, which may be interpreted as a sign that monitoring and evaluation are seen as complementary. Still, at present these M&E departments reportedly focus more on monitoring than evaluation, and the evaluation they do is often limited to comparing the extent of achievement of targets for a set of pre-selected indicators.

      I would agree that monitoring is not receiving much attention within the evaluation community, but it is positioned as an integral part of Results-Based Management (RBM) and is part of discussions within the RBM community.

      I also think that both monitoring and evaluation could benefit if we talked more about the complementarity of the two practices. For example, in my experience theories of change, an instrument that emerged from evaluation practice, are most useful when they are developed during the planning phase of an intervention and serve as the basis for the development of its monitoring system. And evaluations could be more useful in terms of generating lessons from intervention practice if evaluation ToRs and evaluation questions were informed by the questions that intervention teams have when looking at their monitoring data.

      When it comes to SDG implementation, given the complexity of the issues that countries and development partners have to tackle to achieve the SDGs, and hence the need for innovative approaches and constant adaptation of interventions, I think we should be talking about further integration of monitoring and evaluation, so that an intervention team can commission an evaluation when its monitoring data indicate that the intervention may be getting off track, and use the results of this evaluation to decide whether any adaptation is necessary.

      Natalia Kosheleva

      Evaluation consultant

    • I would like to share my experience with applying a “change maps” participatory technique within the framework of the evaluation of an economic empowerment project that worked with female farmers in Kyrgyzstan, Central Asia. The project provided female farmers with training on growing and preserving vegetables and supported them in establishing self-help groups and village associations to pool resources, e.g. for the procurement of quality seeds and cattle. In some villages the project also introduced instruments of the Gender Action Learning System (GALS). The evaluation was conducted at the end of the first phase of the project and was to inform the preparation of the second phase.

      “Change maps” is a participatory technique where small groups of project participants are offered blank maps (e.g. flipchart sheets) divided into several sections – one for each area where the project was or could be expected to create change – and asked to fill them in based on their actual project experiences. In my case the potential change areas were identified in consultation with the project team. For the second phase the team wanted to align the project with the Women’s Empowerment in Agriculture Index (WEAI), so we agreed to focus the discussion about the changes induced by the project on the WEAI domains. As a result, our change maps included the following sectors:

      • Do you see any changes in how decisions about agricultural production are made?

      • Do you see any changes in access to and decision-making power over productive resources?

      • Do you see any changes in control over the use of income?

      • Do you see any changes in leadership in the community?

      • Do you see any changes in time use?

      • Do you see any other changes?

      During the meetings in the villages we had up to 45 women involved in the project. Breaking them into small groups was easy – each woman was a member of a small self-help group, and each self-help group developed a separate map. We then gave each woman three beans and asked her to mark the priority changes among those identified in her group. Each group then shared with the other groups its perspective on the key changes that emerged from the project. And in the end we asked the women to assess the “merit” of the project for them on a 10-point scale.

      The lessons that we learned from applying this approach include:

      • The “change map” technique allowed us to turn data collection into a semi-structured discussion among female farmers, supported by the maps, about what changed in their lives as a result of the project and about its worth and merit. This helped me to distance the evaluation from the “control” visits the women were used to and enabled a more open conversation about their project experiences.

      • The WEAI domains did not exactly match the way female farmers perceived their daily experiences, but the participants addressed this challenge by reinterpreting the change sectors of the map. In the future, however, I would use change sectors based on what the project was actually doing rather than on external theoretical constructs.

      • The filled change maps and the discussions around them provided the evaluation team with rich material for analysis. For example, based on the content of the maps I was able to identify more nuanced types of changes induced by the project and how common these changes were. One interesting finding was that engaging women in productive agricultural practices left them with no free time. This was seen as a positive change by the female farmers and their families but came as a negative surprise to the project team.


      Natalia Kosheleva

      Evaluation consultant

  • Is this really an output?


    I raised this issue with the Community: members shared similar challenges and provided examples of how they addressed the various interpretations of the terminology used to describe different levels of results.

    Three important – and interlinked – themes emerged from the exchange:

    1. use and quality of evaluation handbooks;

    2. capacities of the project staff in charge of planning, monitoring and evaluation;

    3. communication and terminology used to describe project results.

    • Dear All,

      First, let me thank EvalForward for providing a platform for this discussion, and all colleagues who contributed.

      I see three important – and interlinked – themes emerging from our conversation:

      1. capacity of the project staff in charge of planning, monitoring and evaluation;
      2. quality (rigidity) of evaluation handbooks;
      3. terminology used to describe project results.

      I think that the story of a very expensive bridge in a distant area, intended to link a moderately inhabited island with the mainland, shared by Lal Manavado, has a lot in common with the examples of the use of evaluation handbooks presented by other colleagues. In the case of the bridge, the intent was that islanders working on the mainland would no longer have to use the ferry and would use the bridge to commute to work. But instead people used the bridge to move their goods and chattels and settle down on the mainland closer to their places of work, while keeping their old homes as summer houses. The intent behind evaluation handbooks is to give the project people who manage evaluations all the information, in one place, about what they should be doing in the course of the evaluation process, and how. The handbooks are written by evaluation professionals who have spent years building their evaluation capacity through study and practice. But then we give these handbooks to people who often have very little evaluation capacity and still expect them to be able to use them intelligently. So project people do what works best for them – they copy and paste from example ToRs and then refuse to discuss any possible changes in the ToRs with the hired evaluators.

      Instead of a handbook, it would be better to give the project people who have to commission an evaluation an opportunity to spend several days with an evaluator. And ideally the project team should be able to work for several days with an M&E professional at the planning stage, to ensure that the project collects meaningful monitoring data and is “evaluable” when the time comes for evaluation.

      This lesson emerges from the story shared by Mustapha Malki, which is quite telling. He was “locked up for 3 days with the entire project team” to support the development of a monitoring system for their project. Initially, team members were “unable to clearly differentiate between the deliverable (the tarmac road) and the effects this deliverable could engender on its beneficiaries' living and income conditions”. But “slowly, my intervention and assistance made it possible for the project staff to start differentiating between a deliverable and its effect”, shares Mustapha.

      I am also grateful to Mustapha Malki for his remarks about the importance of communication. Partnering with communication specialists is a great idea, but it is not always feasible. Our ability to communicate with stakeholders in the course of the evaluation process is crucial for the utility of evaluations and ultimately for our professional success as evaluators – both individually and as a profession.

      I strongly believe that evaluators need to invest in building their communication skills. And the easiest thing we can do is to avoid the use of professional terminology as much as possible when talking to “outsiders”. Terminology facilitates discussion among people of the same trade but excludes non-professionals. Sure, it takes less effort to say “an output” than “a result that stems directly from the project activities and is under the full control of the project”, but the longer description makes more sense to non-evaluators, especially because in common language the word “output” does not have a very distinct meaning. In addition, the longer description comes in handy when the outputs in the LogFrame of the project you are evaluating look more like changes in beneficiaries’ lives and you still have to call them outputs – because the project people have been calling them that for the last three or more years.