by Kate Morris, Making Sense of Media (MSOM) programme, Ofcom, UK.
When we started work on our evaluation toolkit for organisations delivering media literacy interventions, we were committed to basing our work on the realities of our potential audience. We believed the toolkit would have most value for organisations with little or no experience of evaluation, and that knowledge pushed us to keep things as simple as we could. We stripped back jargon, explained terminology and created a fictional media literacy programme to illustrate key points about the evaluation process, which we knew could feel complex and overwhelming. We were keen to land the message that evaluation should not be seen as a pass-or-fail judgement on a project, but as a way of proving and improving interventions. We also wanted to encourage organisations to use evaluation methods to demonstrate societal impact – using outcomes and indicators – rather than only reporting engagement numbers. There were other aims too: that our guidance and free online workshops would build capacity in organisations with little time or funds for evaluation work, and that interventions would be designed with evaluation embedded from the start.
What came next was a period of action research in which we trialled this approach with 13 organisations commissioned to deliver pilot projects across the UK. Each organisation was required to produce an evaluation report and was offered our toolkit, along with the support of an independent evaluation expert, throughout the year the contract ran. At the heart of this approach was the expectation that the organisations would design their pilot programmes around evaluation frameworks and come to appreciate the utility of evaluation in programme building. It was also a valuable opportunity to trial the toolkit and to understand its strengths and where we could improve it.
Lessons learned – the organisations
Halfway through the year, the 13 organisations came together to share their learning, and some themes emerged.
The value of embedding evaluation from the start: One organisation redesigned their programme after responses to their pre-intervention survey showed it was too easy for participants. They reflected that a robust evaluation approach on previous programmes they had delivered could have helped them avoid this.
The value of learning from your mistakes: One organisation shared that they had forgotten to put identifiers on survey responses, so they could not match up the pre- and post-intervention surveys. They reflected that it was a useful lesson for next time.
The value of the evaluation process overall: Some found drawing up an evaluation framework while designing their programme challenging, but this shifted when they started work on their evaluation reports. This demonstrated to them that the benefits of evaluation might only become apparent once they had gone through the process for the first time.
Lessons learned – Ofcom
We also took the time to reflect on our approach.
Evaluation requires investment: This trial had evaluation baked in and included specialist support. Despite this, some organisations still struggled to find time to prioritise evaluation alongside project delivery.
Evaluation is challenging: Organisations were not necessarily comfortable with the evaluation process and its specialist language – even with the support of an independent evaluation expert and our toolkit. Challenges included writing outcomes, writing survey questions, mapping outcomes to those questions and finding suitable indicators.
Measuring impact is tough for smaller organisations: Gathering longitudinal data from some of the cohorts in this programme was challenging for a range of reasons – from staffing shortages to the difficulty of getting back in touch with audiences.
Key takeaways
In one sense, this action research confirmed what we learned from the scoping work we did in preparation for the toolkit – that limited funding and a lack of expertise meant organisations were not able to evaluate as fully as they might like. But it also raised another theme we think is worth investigating, linked to how smaller media literacy organisations might demonstrate the impact of their projects. From what we observed over the year, it was not realistic to expect a small charity to produce a robust evaluation akin to an academic study, given its levels of expertise, budget, time constraints and the size of the data sets available. The research suggests that appropriate evaluation methods are contingent on the size of the organisation and its available budget – in other words, that it is all about proportionality.
Next steps
For now, we are working on our evaluation toolkit to add more examples and explainer text to help people better understand how to devise outcomes and design pre- and post-intervention survey questions. We will also be publishing a report summarising lessons learned from our pilot projects – because we are committed to following our own advice: that course-correction is a key ingredient of successful delivery.
Kate Morris, Senior Associate, Evaluation, Ofcom, UK