Digital Manager
6th February 2019
10:25am
When working to create change in communities, everyone involved is faced with the thorny and fundamental question: “how will we know if our work makes a difference?”
Over the next five years, UnLtd will support 600 social entrepreneurs to create positive social change in 31 ‘Resilient Communities’ across the UK, from Glasgow North to Par Bay in Cornwall. In some of these places (Big Local areas), we are also partnering with Local Trust to back people who have ideas to benefit their communities and to establish a local ecosystem of support.
This place-based entrepreneurial support work builds on eight years of grassroots work through UnLtd’s ‘Star People’ programme. That work showed that when a diverse range of local people co-design solutions to the problems they face in their communities, they produce elegant solutions shaped by their lived experience and expertise.
Grappling with how to attribute changes in outcomes to actions is not a new challenge for social purpose organisations. It is a particular issue in place-based work, where social issues are complex and many different individuals and organisations are involved.
The ‘spill-over’ effects of place-based regeneration approaches mean that local lives can change indirectly, especially when people who are usually the recipients of support are asked to ‘do’ the helping. David Boyle in his essays on Local Trust areas describes this as an ‘obliquity’ and documents some personal stories that can add up to changes in a locality.
There are crucial questions of how to aggregate impact, how to account for the contribution of other actors and factors, and how best to evaluate legacy. Add to that the need to avoid respondent fatigue in communities (constantly being asked to fill in surveys) – a ‘data burden’ recently highlighted by Social Investment Business in their report on measuring and managing impact.
To inform the design of our Resilient Communities evaluation, we wanted to identify good practice. We commissioned independent researchers, Claire Bastin and Jen Dyer, to conduct a literature review so we could understand best practice for evaluating place-based work, including the best metrics and methodologies.
Three key findings emerged:
1. There is limited information about place-based evaluation approaches, metrics, and good practice.
Although rich in examples of place-based research, both the academic and practitioner literature is ‘light on detail around metrics and methods used for evaluating PB approaches’. The literature offers limited methodological reflection (most approaches are qualitative), so it is difficult to understand what a robust evaluation looks like, which evaluation methodologies are effective and which metrics are meaningful.
The fact that organisations use a variety of metrics means that there is little standardisation, making it especially difficult to aggregate the impact of place-based work or compare and contrast different approaches.
2. Focus on outputs rather than impact.
Where metrics are used (for example, numbers of participants, amount of project spending, or reporting against project goals), they tend to serve ‘accountability’ rather than ‘learning’. The focus is on outputs rather than on understanding impact and the effectiveness of the approach.
Output metrics don’t actually tell us what has changed or what difference a specific place-based intervention or project has made.
3. Limited evidence about effectiveness of PB approaches.
Evaluators don’t pay sufficient attention to disseminating and communicating generalisable findings or learning after the work is complete (including what doesn’t work). Bastin & Dyer found that the literature shows place-based approaches are seen as ‘complex and difficult to evaluate’. Some organisations are creative in how they communicate the impact of their work (a few examples, such as the NESTA Neighbourhood Challenge Project, are highlighted in the slides). However, there’s less critical reflection about what works, for whom and in what context.
These findings informed our Resilient Communities evaluation design, and we created a Learning Framework which dovetails with our Theory of Change. Both focus on what UnLtd’s support is, what it enables social entrepreneurs to do, and what that means for a place. While remaining flexible in how we collect our data, we have set key overarching research questions to guide researchers and provide boundaries.
Our research principles aim to balance the need for validity and generalisable knowledge with the ability to map key challenges and opportunities along the way and feed these back usefully, both internally and externally.
In response to the findings above, we want to be more open and transparent about how we’re approaching evaluation in our Resilient Communities work. We’ve attached our Evaluation Framework and will continue to publish learning blogs and learning papers throughout the programme, both to share our findings about the work and to reflect on how our place-based evaluation approach turns out in practice.
To help build an evidence base for what good practice in place-based evaluation looks like, we’d love others who are interested to join the conversation, share their experience and be a critical friend.