When public sector digital, technology or innovation projects happen, we all want to know one important thing: what's the impact?
Technology projects are usually really complicated, require millions of pounds of investment, and take a huge amount of time and organisational resource. Yet measuring their true impact remains one of our biggest challenges today. Without robust evidence tracking both outcomes and social value, we’re missing crucial insights into what has happened, whether it has worked, and what we might learn to inform future projects.
Today, we're really proud to be launching the latest guidebook in our series on evaluating digital projects in the public sector. Our first guide, which resonated widely across the public sector, focused on the foundations of digital evaluation: why it matters, how to get started, and the importance of a clear Theory of Change.
We hope the first guide can help teams start their evaluation journey, and that this second one can empower them to take the next step: putting evaluation into practice.
One model that many people have heard of but are unlikely to have used before is the Randomised Controlled Trial (RCT). RCTs involve teams randomly rolling out an intervention, policy or service to some users but not others, which allows them to better understand and isolate its impact. Currently these kinds of trials are common in other policy areas - like the healthcare sector or homelessness policy - but are rarely used for digital and innovation projects. The result is that digital projects often lack the evidence base we see in other sectors.
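For readers who want a feel for the mechanics, here is a minimal, illustrative sketch of the RCT logic described above. It is not drawn from the guidebook, and all of the numbers and variable names in it are made up for illustration: users are randomly assigned to receive a new service or not, and the difference in average outcomes between the two groups estimates the service's impact.

```python
# Illustrative sketch only: a simulated RCT with made-up numbers.
import random
import statistics

random.seed(42)

n_users = 2000
users = list(range(n_users))

# Random assignment: roughly half of users get the new service ("treatment").
treatment = {u: random.random() < 0.5 for u in users}

# Simulated outcome (e.g. a task completion score). In a real trial this
# would be measured; here we assume the service adds ~0.05 on average.
def outcome(u):
    base = random.gauss(0.6, 0.1)
    return base + (0.05 if treatment[u] else 0.0)

results = {u: outcome(u) for u in users}

treated = [results[u] for u in users if treatment[u]]
control = [results[u] for u in users if not treatment[u]]

# Because assignment was random, the two groups are comparable on average,
# so the difference in means isolates the service's effect.
effect = statistics.mean(treated) - statistics.mean(control)
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"Estimated impact: {effect:.3f} (± {1.96 * se:.3f} at ~95% confidence)")
```

The point of the sketch is simply that randomisation, not any clever statistics, is what makes the comparison fair: with random assignment, a plain difference in group averages is an unbiased estimate of the intervention's impact.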
We believe this needs to change. Digital and innovation projects should benefit from the same standards of evidence as any other major policy intervention. That's why we've created this guide - accessible to digital and innovation teams as well as economists - to show how technical evaluation methods can work in practice. It takes a strategic view of evaluation approaches, from RCTs to statistical analysis tools, and it also helps teams strengthen their qualitative research, which is often just as important as quantitative methods for understanding a project's impact.
To make this practical, we've packed the guide with lots of examples of how to put these evaluation methods into practice. Whether you’re working on AI consultation tools, place-based innovation funds, digital skills programmes or cross-government data platforms, you’ll find relevant case studies showing how these methods work in context. Our aim is to show that for almost any project, digital and innovation teams can use a range of different evaluation methods to properly understand their project's impacts.
Join us for the launch event on 6 November 2024 at our offices in London, where we'll hear talks from digital and evaluation leaders from the Cabinet Office, MHCLG and leading UK academic and research institutions.
We hope that this guide - and this event - can help to bring together a community of practitioners interested in digital impact, and that we can all work together to build an evidence base that can shape and guide our digital and innovation projects going forward.
For any further information about the report, or our work on digital evaluation more broadly, please reach out to our Director, Johnny Hugill (johnny@public.io).
You can read the full 'Evaluating Digital Projects' and 'Evaluation Methods' Guidebooks below: