RFE Réseau francophone de l'évaluation
France

The Réseau Francophone de l'Évaluation (RFE) is an umbrella network created on 26 February 2013 with the support of the Organisation Internationale de la Francophonie (OIF). Our network brings together the national evaluation associations (ANÉ) of 24 countries of the French-speaking world.

Our mission is to develop and pool knowledge and practices in the evaluation of public action across the French-speaking world, in order to help sustain a multicultural and multipolar world in which the Francophonie holds its place alongside other linguistic communities. To achieve this, we have set ourselves the following objectives:

Develop the French-speaking supply of evaluation expertise
Build a theoretical and technical body of knowledge in French
Develop active cooperation between the national evaluation associations
Promote the use of evaluation findings in public decision-making

My contributions

  • Evaluators are interpreters. What about ChatGPT?

    Discussion
    • Hello Silva,

      Thank you very much for launching this discussion on evaluation and ChatGPT. I am eager to read the reactions, as this topic will probably come up at the 5th edition of the Forum international francophone de l'évaluation, which the RFE (Réseau francophone de l'évaluation) is organising with SOLEP (Société luxembourgeoise de l'évaluation et de la prospective) on 4, 5 and 6 July in Luxembourg.

      If French is not a problem for you, I invite you to submit a proposal for a presentation by 30 March. This invitation to send us proposals also extends to all members of the EvalForward community. Submissions are made online: www.fife2023.rfevaluation.org.

      The theme of the event is "Evaluation and the digital revolution".

      Jean-Marie Loncle

      Permanent Secretary of the RFE

    • In order to develop effective, inclusive and gender-sensitive MEAL systems, it would probably be appropriate to examine what resources are actually available and can be deployed:

      1. First, the question of human resources. What human resources are available locally? If we are looking for people who practise evaluation and who have expertise in gender issues, this expertise of course exists in our countries, but where is it at present?

      These experts are in the United Nations system, in some international NGOs and in some ministries. Consequently, if we are looking for people who have expertise in both evaluation and gender and who are available to take on evaluation missions, we quickly run into trouble.

      This is reflected in the quality of the evaluation work carried out on gender issues, which shows many weaknesses, because the people who carry out these evaluations do not have a good grasp of gender analysis tools and lack the expertise to analyse the complexity these issues generally involve. As a result, we see reports that address men's and women's participation but not other issues such as social norms and practices that are harmful to girls and women, unequal power relations and participation in decision-making spheres, the sexual division of labour and the workload of men and women, gender-based violence, sexual harassment, sexual exploitation and abuse, and so on.

      Sometimes gender mainstreaming is done around a single axis or part of the deliverable, which sacrifices its transversality: gender then appears only in a dedicated section, whereas it should run through all the analysis sections, whatever the part of the work. Moreover, the way evaluations are commissioned is itself partly responsible. Gender is commissioned as an add-on to an evaluation, whereas it is the evaluation process itself that should take gender into account: the evaluator should analyse gender relevance, gender effectiveness and so on, rather than write a separate gender section.

      Have the staff responsible for monitoring and evaluation in the ministries been trained in gender? I leave it to each of you to provide the answer that applies to your own field.

      2. The "time" resource must also be taken into account. There is always a lack of time to carry out the required activities.

      Let's take some common tools as examples:

      • The activity profile of stakeholders in a community, or the profile of access to and control over resources, are really basic gender analysis tools: building them in a participatory way with the participants on a site takes about 3 hours, so this requires time.
      • The daily agenda profile of men and women doing the same activity in the same context, a tool that highlights the workload of men and women across production, reproduction, and political and social-community activities, also takes a lot of time to build (a minimal sketch of how its results might be recorded follows this list).
      • Monitoring and evaluation frameworks often do not integrate gender, and the evaluation team must then first construct what might have been the framework for monitoring gender change, before 'looking' for the effects and impacts that may have occurred! All this takes time.
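      A minimal sketch, in Python, of how the results of such a daily agenda profile could be recorded and summarised once it has been built with the community. The entries, field names and categories are entirely hypothetical and do not represent a standard format:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ActivityEntry:
    sex: str        # "F" or "M"
    activity: str   # e.g. "fetching water"
    category: str   # "production", "reproduction" or "community"
    hours: float    # hours spent on a typical day

# Hypothetical entries, as they might be written down after a
# participatory session with a community.
calendar = [
    ActivityEntry("F", "fetching water", "reproduction", 2.0),
    ActivityEntry("F", "market gardening", "production", 4.0),
    ActivityEntry("F", "cooking and childcare", "reproduction", 5.0),
    ActivityEntry("M", "field work", "production", 6.0),
    ActivityEntry("M", "village council meeting", "community", 2.0),
]

# Summarise the workload by sex and category: this is what the tool is
# meant to make visible (the sexual division of labour and the workload).
totals = defaultdict(float)
for entry in calendar:
    totals[(entry.sex, entry.category)] += entry.hours

for (sex, category), hours in sorted(totals.items()):
    print(f"{sex} - {category}: {hours:.1f} h/day")
```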


      3. There is also the issue of financial resources. The financial resources earmarked for gender activities remain insufficient. What financial resources are stakeholders prepared to mobilise? Up to now, whatever we may say, gender has tended to be perceived and practised as an appendix, as something added to what already exists. You write a project and then look at what you can add on gender issues.

      As a result, the budgets allocated to gender issues are usually very low, which does not allow the activities that are really needed to be carried out. Exercises such as gender budgeting are not undertaken while projects are being formulated!
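      To give an idea of what such an exercise involves, here is a minimal sketch, with entirely hypothetical budget lines and figures, of the kind of tagging that gender budgeting implies at the formulation stage: marking which lines contribute to gender equality and checking what share of the total they represent.

```python
# Hypothetical budget lines: (label, amount, contributes to gender equality?)
budget_lines = [
    ("Seed distribution", 400_000, False),
    ("Training of women's cooperatives", 60_000, True),
    ("Irrigation works", 500_000, False),
    ("Gender analysis and sex-disaggregated baseline", 25_000, True),
    ("Project management", 150_000, False),
]

total = sum(amount for _, amount, _ in budget_lines)
tagged = sum(amount for _, amount, is_gender in budget_lines if is_gender)

print(f"Total budget:        {total:>9,}")
print(f"Gender-tagged lines: {tagged:>9,} ({tagged / total:.1%})")
```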

      The mobilisation of these three resources forms a set of challenges that mean that gender issues are not really taken into account in projects, programmes and policies, nor in evaluations. How much time do we have to make such systems work?

      This observation on the consideration of gender issues also applies to other vulnerable populations: people with disabilities, people undergoing forced migration, indigenous populations, children, etc.

      Thaddée Yossa

      [translated from French]

    • Hello everyone,

      Monitoring and evaluation are both important (in the sense that they are two different approaches that do not replace each other) and monitoring is important for evaluation (better evaluations are made with a good monitoring system). So they are two different but complementary approaches.

      Turning to the question of the means given to the monitoring system so that it yields quality data:

      - It is very important to value the data producers and to give them feedback on the use of the data in the evaluation and decision making. This is to give meaning to data collection and to make decision-makers aware of its importance.

      - One option for reducing costs is to rely as much as possible on users (farmers, fishermen, etc.) to collect data (instead of using only "professional" surveyors).

      The last, very important, point is that the major challenge is the overall coherence of the system. Motivated and reliable data collectors are needed at the local level, but the data must also be at least partly comparable and able to be aggregated at the national level; otherwise we end up with a mass of local data from which nothing can be drawn at the supra-local level. This work of articulating scales, which consists of framing the monitoring system without "locking" local data collectors into filling in indicators they do not understand and that are not useful to them, is very important and is the key skill a national monitoring officer must have.
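      A minimal sketch, with a hypothetical indicator and figures, of the aggregation issue described above: local records can only be summed into a national figure if they share the same definition and can be converted to a common unit.

```python
# Hypothetical local records for one shared indicator: (site, value, unit).
# Aggregation is only possible because the definition is shared and the
# units can be converted to a common one.
local_records = [
    ("Site A", 120.0, "ha"),
    ("Site B", 1.5, "km2"),   # reported in a different unit
    ("Site C", 80.0, "ha"),
]

TO_HECTARES = {"ha": 1.0, "km2": 100.0}

national_total = sum(value * TO_HECTARES[unit] for _, value, unit in local_records)
print(f"Area under improved seed, national total: {national_total:.0f} ha")
```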

      There is often a multiplication and overlapping of data collection and processing systems for management, monitoring and evaluation, whereas a single system of shared relevance would be beneficial in many ways (this means thinking, or rethinking, the institutional architecture for M&E).