Is this really an output? Addressing terminology differences between evaluators and project managers

@FAOEvaluation

Hi! I'm Natalia Kosheleva, an independent evaluation consultant. I regularly conduct evaluations commissioned by country offices of UN agencies. These evaluations are decentralized, meaning they are managed by M&E offices or project/program staff members who do not have much experience with evaluation and who rely on the agency's Evaluation Handbook to prepare the evaluation ToRs.

Evaluation Handbooks use the common definition of outputs as the deliverables/immediate results of project/program activities; hence the production of outputs is under the direct control of project staff. Outcomes are defined as results further down the chain of change, achieved through the use of outputs by other stakeholders, including target beneficiaries and government.

Based on these definitions, the evaluation questions recommended by the Handbooks, and consequently put in the ToRs, are usually formulated along these lines:

  • Have the outputs been delivered in a timely manner?

  • To what extent were the project/program outcomes achieved?

But once I turn to the description of the project/program being evaluated, I often find that the outcomes are not framed in a way that allows them to be easily attributed to the project.

From my experience, people who manage decentralized evaluations are not very open to questioning project outputs or outcomes. In one of my recent evaluations, I resolved the situation by making an agreement with the members of the Evaluation Management Group and going through the full chain of events and changes created by the project.

  • Have you experienced similar challenges due to differences in the use of terminology between the people who planned the project and the evaluators?
  • And if you did, how did you handle them?

Thanks in advance for sharing your experience!

Natalia Kosheleva

evaluation consultant

Russia

 

This discussion is now closed. Please contact info@evalforward.org for any further information.

Dear All,

First, let me thank EvalForward for providing a platform for this discussion, and all colleagues who contributed.

I see three important (and interlinked) themes emerging from our conversation:

  1. capacity of the project staff in charge of planning, monitoring and evaluation;
  2. quality (rigidity) of evaluation handbooks;
  3. terminology used to describe project results.

I think that the story of a very expensive bridge in a distant area, intended to link a moderately inhabited island with the mainland, shared by Lal Manavado, has a lot in common with the examples of the use of evaluation handbooks presented by other colleagues. In the case of the bridge, the intent was that islanders working on the mainland would no longer have to use the ferry and would cross the bridge to go to work. Instead, people used the bridge to move their goods and chattels and settle down on the mainland closer to their places of work, while keeping their old homes as summer houses.

The intent behind evaluation handbooks is to give the project people who are managing evaluations all the information, in one place, about what they should be doing in the course of the evaluation process and how. The handbooks are written by evaluation professionals who have spent years building their evaluation capacity through study and practice. But then we hand these handbooks to people who often have very little evaluation capacity and still expect them to use them intelligently. So project people do what works best for them: they copy and paste from sample ToRs and then refuse to discuss any possible changes to the ToRs with the hired evaluators.

Instead of a handbook, it would be better to give the project people who have to commission an evaluation an opportunity to spend several days with an evaluator. Ideally, the project team should also be able to work for several days with an M&E professional at the planning stage, to ensure that the project collects meaningful monitoring data and is "evaluable" when the time comes for evaluation.

This lesson emerges from the story shared by Mustapha Malki, which is quite telling. He was "locked up for 3 days with the entire project team" to support the development of the monitoring system for their project. Initially, team members were "unable to clearly differentiate between the deliverable (the tarmac road) and the effects this deliverable could engender on its beneficiaries' living and income conditions". But "slowly, my intervention and assistance made it possible for the project staff to start differentiating between a deliverable and its effect", shares Mustapha.

I am also grateful to Mustapha Malki for his remarks about the importance of communication. Partnering with communication specialists is a great idea, but it is not always feasible. Yet our ability to communicate with stakeholders in the course of the evaluation process is crucial for the utility of an evaluation and, eventually, for our professional success as evaluators, both individually and as a profession.

I strongly believe that evaluators need to invest in building their communication skills. The easiest thing we can do is avoid professional terminology as much as possible when talking to "outsiders". Terminology facilitates discussion among people of the same trade but excludes non-professionals. Sure, it takes less effort to say "an output" than "a result that stems directly from the project activities and is under the full control of the project", but the longer description makes more sense to non-evaluators, especially because in everyday language the word "output" does not have a very distinct meaning. In addition, the longer description comes in handy when the outputs in the LogFrame of the project you are evaluating look more like changes in beneficiaries' lives, and you still have to call them outputs, because the project people have been calling them that for the last three or more years.

Greetings!

While I fully appreciate the evaluation problems caused by the mismatch between the achievement of 'deliverables' and their actual human benefits, I nevertheless cannot help thinking this is a problem we have created for ourselves. It's just another instance of the difficulties every reductive approach entails.

Consider for a moment what would have happened with that 'road' if the planners asked themselves a few simple questions like:

  1. What's the likely daily volume of wheeled traffic on it?

  2. How many living in the vicinity of that road will be using it? And for what purpose? Etc, etc...

In a very affluent industrialized country in the North, a similar thing happened. It involved a very expensive bridge in a distant area intended to link a moderately inhabited island with the mainland. The intention was to enable the people living on the island to travel to work on the mainland without having to take the regular ferry. The outcome was interesting, to say the least.

The islanders used the bridge to move their goods and chattels and settle down on the mainland closer to their places of work, while keeping their old homes as summer houses! It was hoped that the bridge would be financed, at least in part, by the daily toll drivers would have had to pay, but this revenue turned out to be less than insignificant.

So, the lesson is obvious, but then, what is obvious seems to be the most difficult to understand.

If, before planning begins, one achieves a clear understanding of what would really help the potential beneficiaries of a project and balances it against their actual ability to derive those benefits from it, one arrives at a realistic set of goals. Then it would be easy to design a project where the gap between the abstract 'deliverables' and the real benefits is minimal, thus making the evaluator's task easier and more pertinent.

At the risk of being accused of unseemly levity, a fairly unusual example here would be a project to supply mountain mules to farmers in the High Andes cultivating, say, quinoa in their fields. This seems to be the most effective way to help them transport their surplus food to the nearest market. The lack of good roads, the high expense of road construction and maintenance, the length and cost of training people and, most of all, the time all these take make the traditional beast of burden not so comical a choice.

Best wishes!

 

Hi all

A contribution to the exchange between Emile and Bintou on the necessary distinction between outputs and outcomes.

Outputs are all the goods and services produced through the project's activities using the project's resources and inputs (in Emile's case, the distributed insecticide-treated nets). Outcomes are the changes (of course, they should be "positive" changes, otherwise we should close the project) that appear in the living and income conditions of the project's target beneficiaries (in Emile's case, the reduction in malaria incidence).

The difference between the two is that outputs, like activities and inputs, are part of the project's "controlled" environment (you can decide what and how much to buy and distribute), while outcomes are the changes the project intends to bring about, which occur IF AND ONLY IF THE PROJECT'S TARGET BENEFICIARIES USE WHAT THE PROJECT DISTRIBUTED. This is why outcomes are part of the project's "influenced" environment.

This is what makes outcomes more difficult to achieve than outputs: the project management unit has little control over the changes among beneficiaries. Achieving outcomes then depends on how relevant the implemented activities were in generating outputs that can really solve the problem situation identified at the onset. To borrow concepts from marketing: if we assume that outcomes represent the changes requested by the beneficiaries (the "demand") and that outputs are the means to bring about these changes (the "supply"), then the "supply" must meet the "demand" for the changes to occur.

A contribution to Dowsen's reaction.

Yes, this is what I do myself before drafting a Results Framework (or Logical Framework) at the start of project design; this results framework then serves to set the M&E plan and to guide any later evaluation. I start with an assessment of the problem situation (i.e. the Problem Tree tool) using the "cause-effect" causality law. Then, by turning each problem identified in the Problem Tree into a positive statement, I develop the Objective Tree, followed by a fine-tuning of the Objective Tree using the "means-end" causality law. From the Objective Tree I can identify the best alternative "result chain" and move very easily to the results (or logical) matrix, and so on.

A contribution to Reagan Ronald's reaction on the quality of handbooks.

I am not sure that any of us has attributed the poor quality of evaluation handbooks to evaluators or to international consultants in evaluation. Personally, I made it clear that the content of a handbook can sometimes be of good quality yet be presented and disseminated through a very poor communication and dissemination process. Based on what I know, the content of many handbooks was prepared by high-quality consultants in evaluation. However, drawing on my modest competency in knowledge and information systems and communication, a good handbook in general, and in evaluation in particular, must rely, as a communicative tool, on four necessary criteria: (1) good, appropriate, relevant and purposeful content; (2) an adequate means of dissemination; (3) good knowledge of the targeted population; and (4) an environment conducive to the use of the information. For many handbooks, we focused more on (1) and a bit less on (2) and (3), and this is not enough to give birth to good-quality handbooks on any subject, not only on evaluation guidelines. Moreover, the consultant in charge of the content can be quite good in terms of content (i.e. substantive knowledge) but may not be very qualified in terms of communication. This is why I always recommend building a team of an evaluator plus a communication specialist to produce a good-quality handbook on evaluation.

Hope that I added a bit to this discussion.

Mustapha

[English translation below]

Bonjour Mme NIMAGA,

Je vous remercie pour la question, et voici ma réponse.

La mise en oeuvre des programmes de développement passe par des étapes successives jusqu'à la réalisation des changements souhaités/visés. Par exemple, si nous décidons de réduire la "mortalité liée au paludisme" dans une localité, le changement souhaité est la réduction du taux de mortalité (outcome, résultat). Pour parvenir à cette fin, nous avons pensé que les populations devraient utiliser beaucoup plus les moustiquaires imprégnées. Il faut donc une augmentation du taux d'utilisation de moustiquaires imprégnées (output, produits ou réalisations). Pour que les populations commencent à utiliser beaucoup plus les moustiquaires imprégnées, nous en avons distribuées. La distribution des moustiquaires imprégnées est l'activité (activity). Pour pouvoir réaliser cette activité, nous avons dû nous approvisionner en moustiquaires, carburants, ... (ce sont les intrants, inputs).

Voilà un exemple explicatif qui pourrait vous aider. Je reste disponible pour répondre à d'autres questions. 

Chère Mendy,

Je suis d'accord avec votre texte, sauf un point. Nous ne pouvons pas dire que "le suivi et l'évaluation se préoccupent moins des activités et des résultats". Cette affirmation est vraie pour l'évaluation, mais pas pour le suivi. En effet, le suivi porte principalement sur les activités et les produits.

***

Hello Mrs. NIMAGA,

Thank you for the question, and here is my answer.

The implementation of development programs goes through successive stages until the desired / intended changes are achieved. For example, if we decide to reduce "malaria-related mortality" in an area, the desired change is the reduction of the mortality rate (outcome, result). To achieve this end, we thought that people should use insecticide-treated mosquito nets much more. It is therefore necessary to increase the rate of use of impregnated mosquito nets (output, products or achievements). For people to start using the impregnated mosquito nets much more, we distributed them. The distribution of impregnated mosquito nets is the activity. To be able to carry out this activity, we had to stock up mosquito nets, fuel, etc. (these are the inputs).

Here is an explanatory example that could help you. I remain available to answer other questions.

Thank you.

Dear Mendy,

I agree with your text, except on one point. We cannot say that "Monitoring and Evaluation is less concerned with activities and outputs." This assertion is true for evaluation, but not for monitoring: monitoring is mainly concerned with activities and outputs.

Dear Natalia,

I would like to thank you for raising this issue. No doubt it’s of very high importance.

The Devil’s Advocate:

I have not seen anyone responding as a commissioner, so allow me to attempt to step into their shoes in this scenario. I begin by making this simple assumption: most evaluation handbooks are developed by competent consultants who are proud to call themselves international and who have very voluminous CVs.

Secondly, let us face reality. Everyone is criticizing the institutional evaluation handbooks as, most of the time, "poorly developed", with a lot of gaps. Who develops those handbooks? Is it not us, the consultants? Let us own our mistakes as evaluation consultants: sometimes we end up setting traps for our future colleagues by producing work of low quality.

Similar challenges due to differences in terminologies

This is a very common challenge. In my opinion, a definition is not cast in stone, and I subscribe to the school of thought that is flexible enough to modify it here and there to accommodate complexities in other sectors. My approach has always been to bring the matter to the attention of the evaluation management team. Based on what is emerging, I would suggest possible working definitions so that the depth and breadth of the evaluation are appropriate to accommodate whatever is emerging. I think that in your case you stood a good chance of providing findings that might influence a review of the handbook, if it has become very limiting in its definitions and approach.

I hope my one cent contribution gets a soft landing in the ears and hearts of fellow evaluators.

Thanks

Hello all,

Greetings from Banjul, The Gambia in West Africa.

I have not had much free time to interact with the group recently, but I am happy to weigh in on this topic of discussion, which is the difference between achievements and activities, as inquired about by Bintou Nimaga.

My take is that the definition of achievement depends very much on what type of performance indicators one is monitoring and measuring at the time. There are process indicators, which are concerned with the level and quality of implementation of activities. In a similar manner, achievements at the output level are defined and measured against output indicators, and likewise for achievements at the outcome and impact levels.

What needs to be very clear at this point, however, is that projects and programs are measured at the higher levels of results, that is, outcomes and impact; monitoring and evaluation is less concerned with activities and outputs. It is often easy to count how many boreholes a project has drilled and installed, but what matters more is how many beneficiaries are drawing water from the boreholes and how much difference that is making: the time saved in water collection and the free time created for women and girls to redirect their energies into something productive; how this access to water is contributing to reductions in the incidence of diarrhea and water-borne diseases in the community; and how much additional income communities are generating by using the increased access to water for their backyard vegetable production.

From the foregoing one can distinguish achievements from activities and this can be related to the results chain of the project or program.

Hoping this does not add to the confusion.

Dear Mustapha and Natalia,

From my experience, it is sometimes difficult to tell the difference between an output and an outcome; this may depend on the nature of the project. An easy way to identify them is that the output is usually the *immediate result* of the intervention(s), such as how many beneficiaries have received training or access to a support system, while the outcome is usually the *result of behavior changes* that can be measured or observed, such as improved business productivity, increased profits and reduced business costs.

Yes, as Natalia mentioned, the output is controllable by the project staff and may be short-term. Meanwhile, the outcome is usually medium-term, because it reflects the cumulative results of changed practices, so it takes more time to see the result of the changes.

Hope it helps,

Hiswaty

Bintou Nimaga

independent consultant, Mali

Good evening, Mr. HOUNGBO! I find this theme very interesting. In fact, I now ask: what are the links or differences between achievements and activities? And between products and activities? Greetings to you.

Dear Colleagues,

I would just like to confirm that outputs and outcomes are quite different. In M&E planning and in evaluation, we define outputs as the realizations necessary before we can observe a change, i.e. an outcome. It is then a question of the level at which results are appreciated: inputs contribute to the realization of activities, activities contribute to the realization of outputs, outputs contribute to the realization of outcomes, and outcomes contribute to the realization of impacts.

Best regards.  

Emile N. HOUNGBO, PhD

Agricultural Economist, Senior Lecturer Director, School of Agribusiness and Agricultural Policy National University of Agriculture, Benin 

Dowsen Sango

SNV, Zimbabwe

Dear Mustapha,

I found your tarmac example quite instructive. For me, the difference between outcomes and outputs is not an academic one but a practical issue, so I often use Problem Tree Analysis to arrive at them. You can already guess that I lean towards the Logical Framework Approach. The difference between outcomes, outputs or any other result type is a 'logical' link. I therefore start from "What is the problem the action is trying to solve?" and from there work backwards and forwards along the causal relations. The problem tree that comes out can then be flipped into a 'positives' tree, whose levels translate into outputs, outcomes and impact. So this is a practical process.

Dowsen Sango

Senior Monitoring and Evaluation Advisor.

SNV Netherlands Development Organization

Harare, Zimbabwe

 

Dear Mustapha,

Thank you very much for a brilliantly reasoned analysis of the issues Natalia presented. My interest in the field is not as a practitioner, but rather as someone who is aware of the importance of continuous monitoring and evaluation as a necessary condition for the success of any project. Your clear distinction between 'deliverables' and their actual usefulness is crucial and, as you point out, often overlooked.

Cheers!

Lal

Dear Natalia et al.,

Thank you for putting on the table an important challenge for both the evaluator and the manager of a development project. I want to apologize for not being able to answer earlier; the situation in my country occupied my mind and took all my time over the last 3 weeks. The question of clearly distinguishing an output from an outcome is of the utmost importance for the development project manager, the evaluator and the project monitoring and evaluation staff. And I doubt that the problem is really a terminology problem, at least theoretically speaking. According to my modest experience, the problem has its origin in several factors, which I will try to explain below:

  1. The weak link between project formulation, implementation, and monitoring and evaluation, for which the results framework (or logical framework) of a project is the basis. In this perspective, coherent and relevant indicators for the different types of results are formulated during the formulation of the project, even before the implementation of the first activity is launched. This weak link sometimes explains the difficulties in developing the ToRs of an evaluation, and therefore the difficulties an evaluator may encounter in assessing the achievements and effects of a project, as mentioned by Natalia.
  2. The flagrant lack of skills and resources in project monitoring and evaluation, for various well-known and/or less well-known reasons. In some cases, managers prefer to conduct one or two ad hoc evaluations over the life of a project rather than support a monitoring service for the entire project duration, thinking that they will achieve the same purpose.
  3. The "too rigid" monitoring and evaluation procedures adopted by some organizations and confined to handbooks that are very often poorly developed, as evoked by Isha in this discussion. One of the reasons, in my humble opinion, is very often the focus on the specific content and the lesser importance attributed to the communicative dimension in the preparation of these handbooks. We may sometimes have mobilized a great resource person for the specific content, but if he/she does not have the necessary communicative competence, we will get very high-quality specific content yet a handbook that is practically useless.
  4. The apprehension, very often based on mistaken beliefs, of some project managers about the project monitoring function, which means that very little importance is given to the monitoring and evaluation staff. This somewhat explains the poor quality of the ToRs that Natalia talks about in her contribution.
  5. The "voluntary" and "voluntarist" efforts of some enlightened practitioners who have sought at any cost, over the past two decades, to put a barrier between monitoring and evaluation. Yet any development project necessarily needs the two "feet" of its monitoring and evaluation system in order to achieve the objectives assigned to it: monitoring can never explain certain project happenings exactly as evaluation can, and evaluation cannot fill the gaps of the monitoring function.

Having said this, good training of project staff on monitoring and evaluation, based on a good logframe and results chain, can sometimes be the key to this problem. To support this, I would like to share an experience I had in Sudan in 2003, on a project co-funded by IFAD and the Islamic Development Bank (IsDB) in North Kordofan State.

I was contracted by IFAD to support the consolidation of the monitoring and evaluation system of this 7-year project while it was in its 4th year (a first anomaly). The project was to deliver several outputs, including a 60-kilometre tarmac road between the State capital, El-Obeid, and the State's second city, Bara, entirely financed by the IsDB.

Locked up for 3 days with the entire project team, I could see clearly, from the effect indicators proposed to me, that the project management team, including the person principally responsible for monitoring and evaluation, was unable to clearly differentiate between the deliverable (the tarmac road) and the effects this deliverable could engender on its beneficiaries' living and income conditions. Slowly, my intervention and assistance made it possible for the project staff to start differentiating between a deliverable and its effect as a development intervention, an effect that can be perceived only at the level of the social segments benefiting from the deliverable, not in the deliverable per se. I fully understand that the transformation of a stony road into a tarmac road is a change, but without including the human dimension in our vision, it is difficult to pinpoint the development achieved. As proof: where can we perceive the development brought by a new deliverable, completed and then closed for 3 years, for example, if human beings do not take advantage of it to change their living and income conditions (isn't it so, Hynda?)? Thus, from the second day onwards, the project team members started to differentiate things, suggesting better outcome indicators, completely different from the output indicators, which served 3 years later for a good evaluation of the effects of the "tarmac road" deliverable.

Thus, this little story highlights the necessary link that needs to be established between monitoring and evaluation from the start of a project, by mobilizing all the resources necessary for the monitoring and evaluation system, including the necessary skills, so that evaluation can be done without much difficulty.

But even more importantly, although I am in favour of the evaluator's "freedom of expression" (Isha), this necessary link between monitoring and evaluation will certainly lead to better ToRs for evaluation, guaranteeing the evaluator this freedom within the framework defined by the project team. Without this link, too much freedom of expression for the evaluator may put a project at risk of receiving an evaluation report that is meaningless.

Sorry for having been a little long, but the importance of the question asked by Natalia forced me to go into certain details. I hope I have contributed a little to this discussion.

Mustapha 

Dearest Natalia,

I am not surprised either and I have similar experiences with many organizations.

This is one of the reasons why I decided to lobby for "freedom of speech for evaluators". Let me explain what I mean.

First, we cannot be independent while bound by the instructions of the ToRs; this is not how evaluation should be conducted. Evaluation is a lesson-learning, gap-finding mission: it helps eliminate obstacles, prepares one to see things early and logically like a visionary leader, and enables one to respond, share findings with the rest of the stakeholders and guide them in taking things forward.

I have done some fact-findings on this subject:

  1. In many organizations, the M&E entity or unit lacks evaluation knowledge at the field or ground level. They are learning the art from consultants during the evaluation period.
  2. Most evaluators are bound to work as per the "Handbook" given by the organization and are unable to make any changes if required. Very uninformative.
  3. Most handbooks are more similar to curriculum for evaluation studies than “guidelines for evaluation”.
  4. Very few "Handbooks" are truly evaluation oriented; instead they focus on research approaches. Even the Kellogg Foundation evaluation handbook does not differentiate between the researcher and the evaluator: https://cyc.brandeis.edu/pdfs/reports/EvaluationHandbook.pdf (see page 7).
  5. In some cases the definition of the Evaluation is questionable in both documents, Handbook and TORs. i.e. TOR and handbook are not compatible. The Handbooks give guidance on major programmes evaluations or end/post evaluations but lack guidance and examples in the field and ground challenges.
  6. Very few provide templates for ToRs (Terms of Reference).
  7. Many ToRs are simply cut and pasted.
  8. Most manuals prescribe standards for all evaluations, including the questionnaires, the target groups, instructions on how to conduct the interviews, and the selection of the target groups identified by the programmes. Most of the time these are very biased.
  9. No questions are suggested to address the indirect outcomes or impact of the programme, only direct questions with predictable, easily answerable answers.

Overall, I have sometimes asked the organization, "Why do you hire me? You could do it yourself", when they prescribe everything about the methodology for conducting the evaluation.

Best

Isha

Independent evaluator

Sri Lanka

Hello All,

There is a short blog on planning and M&E syntax, linked to a story from everyday life experiences, which could be a useful reference. Here is the link:

https://www.linkedin.com/pulse/planning-monitoring-evaluation-syntax-binod-chapagain-phd

Best regards.

[English translation below]

Bonjour à toute la communauté,

Merci à Natalia pour sa question tout à fait pertinente.

Les résultats sont souvent présentés par le concepteur du programme, du projet ou de la politique à évaluer. Cette présentation peut ne pas être pertinente car justement le concepteur et l’exécutant redoutent l'évaluation des résultats et consciemment ou inconsciemment, le recours à des termes ambigus est souvent utilisé.

Dans l'exercice de mes fonctions, j'ai déjà rencontré ce type de situation et pour le besoin d'une évaluation objective, j'ai dû élaborer des canevas avec la terminologie adéquate et j'ai demandé à ce qu'on me présente les résultats selon les dits canevas.

Bien entendu, le traitement des résultats chiffrés, présentés selon les nouveaux canevas, a permis de dégoupiller la chose et cela s'est avéré très efficace, car nous avons ainsi pu mettre à nu les chiffres initialement annoncés.

Je peux donner un exemple très simple: un projet a été déclaré lancé alors qu'en cherchant ce qui se cachait derrière l’appellation "lancé", il s'est avéré que le projet était à la phase de son inscription comme opération planifiée.

Un autre exemple: un projet était considéré comme "réalisé", alors que la réception provisoire n'était pas prononcée. Or cette dernière peut donner lieu à des réserves, dont la prise en charge prendra des délais plus ou moins longs et concrètement le projet serait en cours de réalisation et non pas réalisé.

J'espère avoir pu vous aider un petit peu.

Trés bonne journée à tous.

Hynda Krachni

Ministère de Finances, Algérie

***

Hello to all the community,

Thanks to Natalia for her very pertinent question.

The results are often presented by the developer of the programme, project or policy to be evaluated. This presentation may not be appropriate precisely because the developer and the project manager are apprehensive about the evaluation of the results and, consciously or unconsciously, often resort to ambiguous terms.

In the performance of my duties, I have already encountered this type of situation and, for the sake of an objective evaluation, I had to develop templates with the appropriate terminology and asked for the results to be presented according to these templates.

Of course, processing the quantified results presented according to the new templates made it possible to unlock the issue, and this proved very effective because we were able to expose the figures initially announced.

I can give a very simple example: a project was declared "launched", but when we looked into what was hidden behind the term "launched", it turned out that the project was only at the stage of being registered as a planned operation.

Another example: a project was considered "realized", although provisional acceptance had not been granted. Yet provisional acceptance can give rise to reservations, which may take more or less time to address; concretely, the project would then be in progress, not realized.

I hope this helps a little bit.

Have a very good day everyone.