Making more use of local institutions in evaluation


Dear members,

One result of the Covid pandemic has been to make travel more difficult or often impossible. Consequently, more of the evaluation load is likely to have shifted from visiting international teams to local specialists. This change in the balance of responsibility for evaluation between foreign experts and local staff might prove to be one positive outcome of an otherwise tragic situation.

It would be very interesting to hear any success stories or at least experience as a result of these changes, including the following aspects:

  1. How is this shift in responsibility being managed? What institutions are involved – government, universities, NGOs, private consultancy companies or individuals? 
  2. How far is this responsibility being taken? Is it still confined to data collection and analysis, or does it include greater responsibility in the management of the evaluation? 
  3. How is this work being financed? Are costs being borne solely by governments or are donors contributing with funds previously allocated for donor staff or consultants?
  4. What are the difficulties met? (i) lack of trained staff; (ii) lack of funding; (iii) problems of peer pressure leading to optimistic reporting; or (iv) other problems?

It would be great to hear the feedback of both evaluation commissioners or managers and of the national evaluators involved.

Any information on the role of universities or agricultural colleges in this situation would be of particular interest.

Many thanks!

John Weatherhogg

This discussion is now closed. Please contact info@evalforward.org for any further information.

Dear Colleagues,

          Very many thanks for taking the time to respond and for sending your comments and experience.

          Needless to say, I had hoped to hear of some successful experience involving a university, but perhaps that was unrealistic optimism.

          The comments from Lewis N. Kisuku in the Democratic Republic of the Congo gave a good idea of the situation in that country. The problems described would be common to many countries, both in Africa and elsewhere. His comments on contributions from university staff were in line with those of Isha Miranda from Sri Lanka. Why should the work of universities or their members so frequently be characterised as theoretical, and their reports as long and often late? Perhaps it is a lack of commercial sense, a lack of pressure and the fatal desire for perfection.

          Very good to hear a positive and happy result from The Gambia sent by Paul L. Mendy.  This seems to show what can be done with close collaboration between local consultants and staff of the financing agency.

          The basic problem and starting point is how to acquire good data. This should be a local responsibility, not undertaken through a few hurried visits by an international specialist who has just flown in.

          If the data is locally collected how can it be assured to be impartial, unbiased and objective? There are likely to be pressures to both under and over report results. Also, there will be a temptation for enumerators to dream up results for project participants not on their farms but in some road-side coffee shop. Pressures to “enhance” or modify the results and the subsequent evaluation will continue up to and beyond the delivery of the evaluation report to the financing agency. It is clearly difficult for a private commercial company or individual to resist all these pressures – and much easier for an institution. 

          Let us hope that the increased pressure for local involvement in evaluation brought about by Covid will generate more interest from universities.

          Such an interest would be good for evaluation as well as very good for the universities and their students.

          Many thanks again for taking part in the discussion.

John Weatherhogg  

In Yemen, the past six months have seen high demand for national evaluators as a result of Covid-19. 

A considerable number of Third Party Monitoring companies have approached national evaluators to engage them for their activities in Yemen.

This trend is a significant change that will enhance the professionalism of M&E in Yemen, since it will contribute to building the capacity of national experts.

Hi John,

Thanks for raising this topic.

Indeed, I agree with you that one of the positives of the Covid-19 pandemic is that it allows for increased participation of local consultants in evaluation given travel restrictions on international consultants. The use of local consultants is without a doubt less financially costly. The increased opportunity for local consultants contributes to strengthening local capacity in evaluation. 

In my case, in The Gambia, the project completion mission for the IFAD-funded National Agricultural Land and Water Management Development Project (locally called Nema Chosso) was carried out by a combination of local and international consultants. The local consultants carried out the onsite fieldwork and direct consultations with project beneficiaries, staff and stakeholders, whilst the international consultants (with the exception of a couple based in Dakar, Senegal) carried out their assignments remotely, mainly through desk review and virtual exchanges with the project team and local consultants. The arrangement provided consultancy opportunities for at least three local experts who would not otherwise have had the opportunity. The quality of the outputs was considered very good, enriched by the mix of local and international expertise. I must hasten to add that the opportunity contributed greatly to enriching the capacity and skills of the local consultants through a friendly feedback mechanism put in place by the Team Leader. This was evident in the numerous comments and observations made on some of the reports prepared by the local consultants.

The IFAD Country Director and team directly managed the assignment through the Consultants' Team Leader, who, fortunately, was based in Dakar like the IFAD Team for The Gambia. The Country Director reviewed agreed milestones of the assignment and recommended to the project when payments were due. Payments were made locally through the project's Special Account, by bank cheque for the local consultants and by transfer for the international consultants.

Thanks,

Paul

 

At the onset of this century, North-South and South-South partnerships were encouraged. Whereas this happened in some places, it did not work in others. The challenges of the COVID pandemic have not been felt where partnerships were built and local actors' capacities strengthened. On the first question, there has been no shift in such areas as a result of the pandemic, because institutions there have already been working towards strengthening inclusion and partnerships. Our local, regional and international evaluation associations should encourage collaborations (South-South and North-South) that aim to form sustainable partnerships, in which the capacity of local evaluators and their participation in both the design and implementation of evaluation studies are enhanced.

 

Relevant remark by Léa. 
Likewise, it should be noted that a functioning monitoring and evaluation system in developing countries is still largely a myth. 

[Comment originally submitted in French]

In my humble opinion, Covid-19 has reduced the mobility of international evaluators. One would expect national specialists to take on this task, but this is not happening in the field, and I wonder who is now assigned it. It is very surprising that in some institutions the same staff who implement a project are considered its evaluators, assisted by a team of verifiers. This is scandalous.

Returning to Lewis's point on the DRC: I myself thought that the Ministry of Planning had a monitoring and evaluation unit. It is unacceptable that the total amount of external funding received by the country is not known, making it impossible to claim to monitor its use. 

Accountability would follow from rigorous monitoring of project governance.

[Note that this contribution was originally submitted in French]

 

Dear Isha,

Greeting from Indonesia.

I agree with Lewis and Abubakar, 

1-2. Most evaluations in development projects are carried out by an individual or a team, the latter usually representing an evaluation consultancy organisation. During the Covid-19 pandemic, evaluations have become more flexible about outsourcing some activities to local individuals or teams because of limited mobility: for example, conducting field data collection and data entry after intensive online training on how to carry out the fieldwork. The main responsibilities, such as managing the evaluation, analysis and reporting, remain in the hands of the hired consultants or consultancy organisations. 

3. The funding for evaluation remains the same, depending on who needs the evaluation and for what purpose. Within a project, the evaluation funds are usually very small (less than 5% of the total project budget), so the evaluation process usually also draws on data from the evaluated project's internal monitoring system and relies on less rigorous methodologies rather than expensive activities.

4. The challenges of outsourcing some of the evaluation activities were: (1) how to ensure the quality of data collection and entry; (2) how to balance client expectations against onsite reality; and (3) during the pandemic, how to reach target respondents in isolated regions with limited communication infrastructure.

 

Cheers, 

Hiswaty 

 

1. How is this shift in responsibility being managed? What institutions are involved – government, universities, NGOs, private consultancy companies or individuals? 

  • First of all, in my country there is no body or institution with the mandate or responsibility to carry out monitoring and evaluation of the actions of the government, of NGOs or of other international institutions, as there is in South Africa (Monitoring and Evaluation, Office of the Public Service Commission of South Africa) or in Colombia (SINERGIA). In the Democratic Republic of Congo, the Ministry of Planning is responsible for planning and programming the country's economic and social development policy. It has 9 directorates, including one for "studies and planning", and more than 7 committees and 4 cells, one of which is in charge of projects and public procurement and another of control and monitoring; but their mandates and roles have never been clarified and are still not fulfilled.
  • None of our universities or training institutions offers a degree course in Monitoring and Evaluation, except for our Higher Institute for Rural Development, which has a branch at the Licence/Master 2 level with an option in "Regional Planning". This is where we offer some courses related to monitoring and evaluation. The conclusion is that there is a crying lack of skills in monitoring and evaluation.
  • Most independent evaluations are organised by individuals: they may present themselves as a single individual, as a duo or as the head of a study centre or association. Their profiles: (i) often former employees of INGOs (now independent consultants, or still in service but freelancing in their spare time); (ii) sometimes university professors or assistants acting in an autonomous capacity, without the direct involvement of their respective universities. Major projects of the World Bank, the ADB and some UN agencies, like the INGOs, very often call on the first profile. Currently, even World Bank projects use this first profile, because the second profile is too theoretical, its reports are very long (though well written) and they are often submitted late because their authors are too busy with other priorities.
  • Evaluations within and/or during the course of a project are often the work of one or more staff employed by the project or NGO in question (which raises the question of the real independence of these staff, who are under a hierarchical authority that is very often expatriate). 

2. How far is this responsibility being taken? Is it still confined to data collection and analysis, or does it include greater responsibility in the management of the evaluation ?

  • For the first profile of monitoring and evaluation professionals, their responsibility is very often limited to data collection and/or supporting international consultants. There is no real transfer of skills, nor decision-making authority upstream or downstream of the evaluation. By way of illustration: once the report has been written by the consultant and validated by the administrative hierarchy of the NGO, even if the approaches or opinions of the consultant and of the project's own monitoring and evaluation staff diverge, it is very rare for the local staff to be proved right.
  • For the second profile, they very often enjoy an initial credit of confidence and are therefore often called upon for baseline or final evaluations. They are given free rein in the design and planning of survey tools, and very often lead the drafting of final reports. Sometimes the data are collected under the authority of others, while the interpretation and analysis of the data and the writing of the report are left to them.

3. How is this work financed? Are costs borne solely by governments or do donors contribute with funds previously allocated for donor staff or consultants?

  • 95% of the costs of evaluations are borne by donors. With accountability now in vogue, the existence of a monitoring system or accountability mechanism has become one of the conditions or criteria for project selection. The real challenge remains the budget allocation: normally 3%-7% of the total project budget should be available for monitoring and evaluation, but few projects respect this proportion, especially regarding staff capacity building in monitoring and evaluation.

4. What are the difficulties met? 

(i) Lack of trained personnel. The Covid-19 pandemic has accentuated this gap: with the restriction of movement, monitoring and evaluation professionals have to fall back on either local resources or technology, and both are greatly lacking in my country.

(ii) Lack of funding. As the organisational culture of monitoring and evaluation is still in its infancy in many organisations, staff in monitoring and evaluation departments often have to shake things up to get the attention of budget decision-makers. As a direct consequence, visits are reduced to the strict minimum, and project-supported training is almost non-existent.

(iii) Problems of peer pressure leading to optimistic reports. Because this culture and training are lacking, many colleagues perceive monitoring and evaluation staff as policemen and/or pessimists who only see the glass as half empty. Unfortunately, if the hierarchy is not open to criticism, this attitude can exacerbate interpersonal tension and even put the moral and/or physical integrity of some officers at risk (I myself was a victim in Tanganyika).

(iv) Other problems: insecurity or instability in work areas. By way of illustration: the death of the Italian ambassador in eastern DR Congo during a field monitoring mission.

 

[Note that this contribution was originally submitted in French]

     

    With Covid-19 I see national consultants in my country becoming more involved in evaluation. Here are some responses to the questions proposed:

    1. How is this shift in responsibility being managed? What institutions are involved – government, universities, NGOs, private consultancy companies or individuals? 

    In my context, I see three entities mostly involved in evaluation: private consultancies (mostly), universities and NGOs. Universities are very theoretical and look for details they are unable to find at ground level; they usually write long and challenging reports. 

    2. How far is this responsibility being taken? Is it still confined to data collection and analysis, or does it include greater responsibility in the management of the evaluation?

    I see both. Some evaluation companies are in charge of the complete organization, coordination and collaboration of the evaluation process on behalf of the evaluation unit. Others do only data collection and analysis. 

    Both are designated as National Consultants or national evaluation consultants. 

    3. How is this work being financed?

    I think donor contributions still prevail, and most programme budgets embed the evaluation budget.

    I believe that, apart from the lower cost of involving international consultancies, Covid-19 has not had much effect on costs. In my case, I am mostly hired on a daily rate or for a flat fee plus per diem.

    4. What are the difficulties met? 

    On training: data collectors need to be trained for each specific task, even if they have been trained before.

    On reporting: this is usually the national consultant's task. It is very challenging, and this is where experience comes into play. If you have experience in country programme evaluation, you get things right: mostly you need to identify the process, select the documents and identify the stakeholders. Questions and methods in most cases need to be redesigned to adapt to the local context. 

    Challenges faced by evaluators due to Covid-19 also include:

    • more desk review and Zoom interviews based on pre-set questions; however, both parties need a good understanding of the task and of the programme in detail; 
    • interview meetings: many respondents (government sectors and grassroots) do not feel comfortable with Zoom interviews; 
    • reporting: unless you can get your template well organized, it will be a bit challenging. 

    Good morning

    I have not yet observed any shift from visiting international teams to local specialists. International teams most often have local partners, so I suspect the shift may not occur in all countries.

    Thanks

    Abubakar Muhammad Moki