Thank you for raising such an important question. I find it interesting in two respects:
First, because it raises the question of how we can capture the immediate (or medium-term) effects of the COVID-19 situation on our realities. Many evaluators are grappling with this question. Some colleagues in the UN system have worked to sketch general directions in this respect. For instance, the recent publication from the ILO Evaluation Office (https://www.ilo.org/wcmsp5/groups/public/---ed_mas/---eval/documents/publication/wcms_757541.pdf) may offer inspiration, as its annex lists typical evaluation questions that match the need to collect information specific to COVID-19.
I also find your question interesting because it asks how to conduct rapid evaluations, something we had many reasons to aim for even before the pandemic, and on which there is therefore past experience to build. And if our colleague Jennifer is right in underlining that evaluation does not easily lend itself to fast reaction, I think there are ways to expedite processes and cater for the need for timeliness. I can share the following learning points on what worked when I aimed to conduct evaluations rapidly. First, focus: it makes a difference when someone’s time is dedicated entirely to the task, while multitasking takes away the precious focus you need to get where you want to go fast. Second, aim for a good-enough plan: we often go round in circles preparing our evaluations and invest a lot of time in back-and-forth exchanges; for a straighter line, it can help to start with a rough scoping and then test and refine your focus and approach as you go along. Third, compensate for any cut corners by engaging a few select stakeholders with strategic knowledge as a sounding board along the way.
Of course, the COVID-19 situation complicates these rules of thumb, in particular when engagement needs to be virtual; my last piece of advice is therefore to get savvy with modern technologies for engaging by virtual means. As you report, this situation might last, so it may be worth investing in these new competences.
Thank you for your post, which brings up many important topics indeed!
To take up only a few, I would start by asserting loudly the view that monitoring and evaluation are by no means mutually exclusive and are unquestionably complementary.
It may be that evaluation has developed well as a practice, and more so than its sister function, monitoring. Still, a study we conducted (on which we recently shared preliminary results here: https://www.evalforward.org/blog/evaluation-agriculture) showed that in many developing countries, evaluations are done mostly when supported by dedicated external funding: an indication that the bigger sister is not yet that sustainably established…
Your post does raise a big question that concerns me too: why has the monitoring function not yet attracted the same donor interest? Why are monitoring systems not a number-one requirement of all donors, considering how essential monitoring is as a tool to learn from past actions and improve future ones in a timely manner? As our study also revealed, before promoting evaluation, countries need to establish results-based management, which starts, even before monitoring, with planning for results.
It is a fact that in many institutions, from national to international levels, monitoring is still heavily underrated and underinvested in. Maybe one way forward would be to start by identifying who has a stake in ensuring the ‘M’ fulfils its function of identifying what works and what does not, why, and under what circumstances. In this respect, we evaluators could take a role in supporting the emergence of this function within our respective spheres of influence, putting aside our sacred independence cap for a while… Would other evaluators agree?
All the best to all,
Thank you for these very rich and sensible comments, which demonstrate a solid experience and thinking on these issues.
Dear Tim, the idea of a system that relies on enabled collaborations with research institutes is indeed interesting. You point to the need to unify data systems and invest in technology, alongside the necessary capacity development: does your experience help identify who could support such investments? Is this a matter of mobilizing political will or of finding financing partners?
As regards the need to better connect evaluators with the Ministry of Agriculture and to find change agents or leaders, as also pointed out by our colleagues from AVANTI, we agree these are important levers to promote effective evaluation. The question remains: what incentivizes these positive dynamics? If anyone wants to share elements of success in this respect, we welcome your thoughts!
Thank you, Aurelie
Dear Mustapha,

Thank you for a contribution that raises an interesting issue for the development of this Community of Practice. I share your hope that EVAL-ForwARD will serve practitioners and promote evaluations that are useful for refining development interventions. On the other hand, I would be more nuanced about the place, in our exchanges, of more theoretical contributions, which I do not believe should be restricted to an academic community: on the contrary, our exchange platform plays an important role in building bridges between academics and practitioners. Of course, we do not all have the same time to digest the more abstract inputs, but the opportunity is there.

As for your substantive question on developmental evaluation, you raise an interesting point that applies to so many other concepts: that of differences of interpretation. How many times, when reading an evaluation journal, have I told myself that the author did not share my understanding of a definition or an approach...
If I may share in turn what I believe characterizes Developmental Evaluation compared with more ‘traditional’ evaluation or M&E, my interpretation is that Developmental Evaluation brings particular value where the subject to be evaluated is still too uncertain (e.g. because it is complex or innovative) to allow an evaluation based on already formulated indicators or models. The added value of DE would thus be to accompany the intervention while it develops and to test its effectiveness against indicators that the evaluator can develop as the intervention takes shape, so as to provide real-time feedback despite the constraints linked to uncertainty.

So it seems to me that there is a real place for this approach, which I perceive as more exploratory, perhaps less mechanical, than approaches based on theories of change known ex ante; in particular because the interventions we evaluate are often placed in contexts involving many factors, and often seek to propose innovative solutions.

I hope that this interpretation will enrich the set of contributions on this subject and that the whole, although somewhat theoretical in nature, can feed the reflections and practices of the members of this network.

Best regards,
Aurelie