RE: How to evaluate science, technology and innovation in a development context?

Thank you for keeping the forum open longer than planned. I have been reading all the comments with great interest, not daring to contribute, both because I am new to the Eval Forward community and because I am not an experienced evaluator, especially of science; I am more of a general project-level M&E/MEL practitioner.

I'm posting only now, at the last minute, in reply to question 3 on MEL practices, and specifically on the measurement of impact (which has come up a lot in other posts; thanks to Claudio Proietti for introducing ImpresS). In that kind of measurement, qualitative indicators are often based on interviews or reports, and making sense of the narrative is not easy.

I'm not sure whether IT tools count as "practices" here, but I think they can help a lot with this type of measurement. Of course, the quality of the evaluation depends on how the narrative questions are designed and on the type of analysis foreseen (classifications, keywords, structure of the story, metadata). Once the design is done, however, it is very handy to use tools that let you (sometimes automatically) classify responses against selected concepts, identify patterns, compute word and concept frequencies, and cluster related concepts, using text mining and machine learning techniques, in some cases even starting directly from video and audio files.
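To make the idea concrete, here is a minimal sketch in Python of the kind of analysis these tools automate, word frequency and concept clustering over short narrative responses. This is only an illustration, not how any of the tools below work internally: it assumes scikit-learn is available, and the example responses are invented.

```python
# Minimal sketch: word/concept frequency and clustering of short
# narrative responses, the kind of step QDA tools automate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented example responses, standing in for interview excerpts.
responses = [
    "The training changed how our cooperative markets its produce.",
    "Farmers adopted the new seed variety after the field demonstrations.",
    "Access to market price information improved household income.",
    "The demonstration plots convinced neighbours to try the seeds.",
]

# Turn each narrative into a TF-IDF vector (word frequency, weighted
# by how distinctive the word is across the whole set of responses).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Group similar narratives. With real data the number of clusters
# should come from the analysis design, not be hard-coded like this.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Print the most characteristic terms for each cluster of narratives.
terms = vectorizer.get_feature_names_out()
for cluster in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[cluster].argsort()[::-1][:4]
    print(f"cluster {cluster}:", ", ".join(terms[i] for i in top))
```

In practice the dedicated tools wrap pipelines like this in an interface, so the evaluator works with concepts and codes rather than vectors, but the underlying logic is similar.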

A few tools for narrative analysis I'm looking into are ATLAS.ti, MAXQDA and NVivo. Other tools I'm checking, which offer less powerful narrative analysis but also include design and data collection functionality, are Cynefin SenseMaker and Sprockler. An interesting tool with more basic functionality but a strong conceptual backbone is NarraFirma, which helps with the design of the narrative inquiry and supports a participatory analysis process.

(Off topic: I would actually be interested in exchanging views on these tools with other members of the community who have used them.)