Regulators are increasingly under pressure from the Government to drive innovation, investment and growth. This is particularly salient when it comes to the UK Government’s ‘pro-innovation’ approach to regulating AI, where a key focus is ensuring that any mandatory measures do not dampen innovation and competition. However, this presents a challenge for regulators. In the face of market disruption and changing business models driven by rapid advancements in AI, how can UK regulators balance innovation and growth with compliance and consumer protection?
Based on published responses to DSIT’s white paper, UK regulators understand this need to adapt, and many are considering new approaches, tools and resources to deliver effectively. Yet efforts are often duplicated between regulators and across sectors, despite a huge opportunity for greater cross-regulatory communication and collaboration - particularly with many facing similar challenges around AI.
In an effort to support cross-regulatory collaboration and innovation, PUBLIC recently convened a closed roundtable in partnership with key regulatory leaders and experts from Ofcom, the Health and Safety Executive (HSE), the Equality and Human Rights Commission (EHRC) and others.
The discussion focused on an overarching question:
In response to market disruption from technologies like AI, how can regulators encourage innovation to drive competition, sustainable growth and compliance, while also leveraging innovative tools and processes themselves to deliver effectively at pace?
Throughout the discussion, participants expressed a shared appetite for more Government-led initiatives supporting regulators to collaborate with one another. With regulators often operating as siloed entities without sufficient coordination from Government, the regulatory landscape is dotted with cases of duplicated effort and inefficient resource allocation. Across the group, a core pain point was the fragmented approach to navigating market and regulatory disruption from AI.
In the context of AI regulation, regulators face key capability gaps: access to market intelligence and data, timely horizon-scanning insights, and the technical expertise and data science skills needed for AI regulatory preparedness and joined-up implementation. There is a clear need for targeted pooling and sharing of capabilities to ensure preparedness for AI regulation. Where resources are limited, establishing a deployable pool of technical resources could especially benefit regulators seeking to experiment with new ideas.
In addition, participants raised a shared problem of ‘not knowing what we don’t know.’ The lack of central coordination often leaves regulators in the dark about what cooperation, information sharing and joint capabilities they might be missing out on. Standing up more Government-led collaboration initiatives would go a long way in closing this awareness gap and providing transparent, accessible routes for regulators to work together more efficiently.
In terms of what a collaborative regulatory approach could look like in practice, the Digital Regulation Cooperation Forum (DRCF) offers one useful model. That said, more formal, centrally funded collaboration models are needed that support a larger set of regulators across multiple sectors.
In practical terms, sponsoring departments across Government (e.g. DSIT, DBT) can play a critical role in helping regulators join the dots, particularly in the face of rapidly evolving markets around AI. Central to this approach are cross-regulatory cooperation, enhanced tooling, industry dialogue, and strategic recruitment. By streamlining regulatory efforts, optimising resources, and bridging existing capability gaps, this collective effort empowers regulators of all sizes to anticipate AI impacts and ensure the adaptability of regulatory frameworks.
While most participants welcomed the Government’s AI principles as flexible, uncontroversial and helpful guiding ideals, they raised questions about how these might translate into practical application and enforcement. Participants expressed a core need for clear, concrete standards and risk-based frameworks to implement AI approaches effectively. The recent announcement of a new £10 million package by DSIT to boost regulators’ AI capabilities sparked discussion of research priorities and practical tools for effective regulation. While this financial support is welcome and important, participants acknowledged that the current funds may not be sufficient to address all regulatory needs adequately, or with the flexibility required to uphold product safety. Participants unanimously agreed on the need for central coordination of AI standards, and of approaches to market categorisation and prioritisation, in order to anticipate and address emerging risks effectively.
Regulators face challenges in characterising the risks and harms of AI, given uncertain regulatory remits arising from increasingly blurred lines between virtual and physical products and services. In the absence of common standards and assurance mechanisms for AI, participants encouraged a holistic approach to understanding emerging challenges through networks and institutions.
While AI presents important challenges for regulators, it also offers new opportunities to develop innovative regulatory ways of working and to adapt existing statutory instruments and levers. During the discussion, participants explored an array of tools available both to promote compliance and to accelerate market innovation responsibly. Participants shared their innovation approaches and existing tools, considering how these might be applied within the changing regulatory environment.
Participants discussed the importance of upskilling and innovation learning in order to deliver on the UK Government’s expectations around AI regulation. Despite varying risk appetites, regulators expressed a willingness to embrace calculated risks and expand their window of tolerance, recognising the potential rewards in driving innovation, growth and competition. This signalled the importance of innovation learning and upskilling initiatives to reduce scepticism and enable the cultural transformation needed to harness the potential of AI innovation.
Notably, central innovation programmes and funding mechanisms - such as the Regulators’ Pioneers Fund - have proven to be catalysts for regulatory innovation and collaboration. Examples from the HSE demonstrated how central funding - when effectively utilised - can validate concepts and enhance the adoption of innovative tools in day-to-day operations. Additionally, participants highlighted the value of regulatory sandboxes, dedicated research labs - such as the HSE Science and Research Centre - and technology trials in promoting smarter regulation and market innovation.
There was strong agreement around the importance of conducting formal evaluations of these programmes - tailored to varying risk appetites and approaches to experimentation between regulatory teams - to both ensure they are effective and foster a strong culture of innovation. PUBLIC’s recently published guidebook on evaluating digital projects - as distinct from non-digital or traditional evaluations - provides practical advice for teams looking to perform such evaluations effectively.
Building on insights from this roundtable, PUBLIC has identified a set of strategic recommendations for both central government and regulators to foster innovation, compliance and growth in evolving regulated markets:
To find out more about how PUBLIC supports regulators to navigate evolving digital markets, get in touch with our Director of Strategy & Transformation, Daniel Fitter, at daniel.fitter@public.io.