AI governance is not about constraining a static technology — it’s about building institutional capacity to manage a transition already underway.
Jamie Green
Founder, AI Policy Exchange
The dominant framing of AI policy as a question of “regulation” is actively unhelpful. It implies that the primary task is to constrain a defined technology within legal boundaries – an approach that worked tolerably for previous technologies but fails fundamentally for AI.[1] The framing invites legislators to reach for familiar tools – product safety rules, licensing regimes, sector-specific compliance requirements – and in doing so obscures what is genuinely novel about the challenge. AI is not a product category. It is a general-purpose capability that is simultaneously transforming financial services, healthcare, education, defence, public administration, and creative industries.[2] No single regulatory instrument can meaningfully govern something that pervasive, and the attempt to build one produces either legislation so abstract it is unenforceable or legislation so specific it is obsolete before the ink dries.
This brief argues that the United Kingdom needs to reframe the conversation from regulation to governance. The distinction is not merely semantic. Regulation is a subset of governance: it concerns binding rules and their enforcement. Governance is the broader system of institutions, norms, processes, and capabilities through which a society steers complex transitions.[3] The UK already governs many domains that resist simple regulation – monetary policy, public health emergencies, cyber security – through adaptive institutional arrangements rather than static rulebooks. AI demands the same approach.
We introduce two original frameworks. First, the “Governance Maturity Model” – a four-level schema that maps where different countries sit on the spectrum from reactive regulation to adaptive governance, and identifies the institutional investments needed to progress. Second, the concept of “governance as infrastructure” – the argument that governance capacity should be built and maintained like digital infrastructure, as a shared platform capability rather than something assembled ad hoc in response to each new AI application.
We also identify what we call the “regulatory lag paradox”: the observation that by the time a regulation is drafted, consulted upon, legislated, and enforced, the technology it targets has already evolved beyond the regulation’s assumptions.[4] This is not a temporary problem that will resolve once AI “matures.” It is a structural feature of a technology whose capabilities shift on a timescale of months while legislative processes operate on a timescale of years. The paradox is sharpest in jurisdictions that have invested most heavily in comprehensive AI legislation – the EU being the most prominent example.[5]
Our central recommendation is the creation of an AI Governance Capacity Unit within the Cabinet Office, charged not with writing new rules but with building the institutional competence that UK regulators and government departments need to govern AI within their existing mandates. This unit would coordinate technical secondments, develop shared evaluation infrastructure, establish common standards for AI impact assessment, and identify governance gaps before they become crises. The goal is not to replace DSIT’s policy function or the AI Safety Institute’s technical work, but to solve the coordination and capacity problems that currently prevent the UK’s sectoral regulators from exercising effective oversight.[6]
The instinct to “regulate AI” is politically irresistible but analytically confused. It assumes AI is a product category like pharmaceuticals or financial instruments – something that can be tested, approved, and monitored within a defined framework. This assumption reflects a mental model of technology governance forged in the twentieth century, when the objects of regulation were comparatively stable. A new drug takes a decade to develop and then remains substantially the same product for years. A financial instrument can be formally described and its risk properties modelled. These characteristics make traditional regulation viable: you can specify what is being regulated, define acceptable behaviour, and enforce compliance through inspection. AI shares none of these properties.[7]
Consider the definitional problem alone. Any regulation requires a clear definition of its object. But what counts as “AI”? The EU AI Act defines it as a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment.”[8] This is broad enough to capture almost any software with a feedback loop and narrow enough to miss important uses of large language models that operate without traditional autonomy. The UK’s approach has wisely avoided a statutory definition, but this creates its own problems: without a definition, sectoral regulators must each decide what falls within their remit, and the inevitable inconsistencies create arbitrage opportunities and compliance confusion for firms operating across sectors.[9]
The deeper issue is what we term the “regulatory lag paradox.” Traditional regulation assumes a relatively stable relationship between the rule and its object. Environmental emissions standards, building codes, food safety requirements – these all regulate phenomena that change slowly relative to the legislative cycle. AI capabilities, by contrast, shift on a timescale of months. When the EU began drafting the AI Act in April 2021, GPT-3 had been released for less than a year, ChatGPT did not exist, and the idea that AI systems would be capable of writing legislation, generating photorealistic imagery, or conducting scientific research was confined to speculative futurism.[10] By the time the European Parliament approved the Act in March 2024, the technological landscape had transformed beyond recognition. The Act’s risk categories, compliance requirements, and enforcement mechanisms were designed for a world that no longer existed.
This is not a criticism of the EU’s drafters, who were competent and well-intentioned. It is a structural observation about the mismatch between legislative timescales and technological change. The paradox is self-reinforcing: the more comprehensive the regulation, the longer it takes to draft and pass, and therefore the greater the gap between the technology it was designed for and the technology it actually governs. Narrow, targeted rules can be updated more quickly, but they inevitably leave large areas of AI deployment ungoverned. There is no sweet spot within the regulatory paradigm that resolves this tension.[11]
The political economy of AI regulation compounds these problems. Regulation creates compliance costs, and compliance costs create lobbying incentives. The result is that AI regulation tends to be shaped by the firms large enough to absorb compliance overhead – which are, not coincidentally, the firms that already dominate the AI market.[12] The EU AI Act’s requirements for foundation model providers, for instance, are substantially more manageable for a company with a dedicated compliance team of hundreds than for a European startup attempting to compete. Regulation intended to protect citizens can inadvertently consolidate market power, an outcome that is neither pro-innovation nor pro-safety.[13]
The regulatory model that served the twentieth century rests on several assumptions, each of which AI violates. The first is that the object of regulation is identifiable and bounded. A car is a car; a bank is a bank. But an AI system might be a medical diagnostic tool on Monday, a legal research assistant on Tuesday, and a creative writing partner on Wednesday – the same underlying model, accessed through different interfaces, governed by different sectoral regulators, with entirely different risk profiles depending on context of use.[14] This is not an edge case; it is the defining characteristic of general-purpose AI. Few previous technologies have presented regulators with an object that shifts its risk category depending on how it is deployed.
The second violated assumption is that the supply chain is legible. Traditional regulation traces a clear path from manufacturer to end user. Pharmaceutical regulation can identify who synthesised a compound, who tested it, who approved it, and who prescribed it. AI supply chains are far more diffuse. A frontier model is trained by one company on data from thousands of sources, fine-tuned by another company for a specific domain, deployed by a third through an API, integrated by a fourth into a consumer-facing product, and customised by the end user through prompting. Where in this chain does regulatory responsibility sit? The EU AI Act attempts to distribute obligations across the chain, assigning duties to “providers” and “deployers,”[15] but the practical enforceability of these distinctions is already being tested by the reality of how AI systems are built, modified, and used.
The third assumption is that risk can be assessed prior to deployment and remains relatively stable thereafter. Pharmaceuticals undergo multi-year clinical trials before reaching patients. Financial products are stress-tested against defined scenarios. AI systems, particularly those based on large language models, exhibit emergent behaviours that are not predictable from their training data or architecture.[16] A model that passes every safety benchmark at the time of deployment may develop unexpected capabilities or failure modes as it is exposed to real-world inputs. The concept of pre-market approval, which underpins most product safety regulation, assumes that the product that is assessed is the product that is used. For adaptive AI systems, this assumption does not hold.
The fourth assumption is that the regulated entity and the regulator operate within a shared jurisdiction. AI models are trained in one country, hosted in another, accessed globally through APIs, and fine-tuned locally. A UK regulator attempting to enforce compliance against a model hosted in the United States, trained on data from dozens of jurisdictions, and accessed by UK users through a wrapper built by an Indian startup, faces jurisdictional challenges that make conventional enforcement mechanisms largely theoretical. This is not a new problem – internet regulation faces similar difficulties – but the speed of AI deployment and the opacity of AI supply chains make it considerably more acute.
Finally, regulation assumes that the regulator possesses – or can acquire – sufficient expertise to evaluate compliance. For most regulated industries, this is achievable: the FCA employs people who understand financial instruments; the MHRA employs people who understand pharmacology. But the technical frontier of AI is advancing so rapidly that even well-resourced regulators struggle to maintain relevant expertise.[17] The AI Safety Institute has made significant investments in technical evaluation capacity, but its mandate is focused on frontier models and catastrophic risk.[18] The sectoral regulators who must govern the vast majority of AI deployments – Ofcom dealing with AI-generated content, the FCA with AI in financial services, the CMA with AI and competition, the ICO with AI and data protection – face a persistent capability gap that cannot be closed by hiring alone, because the relevant expertise is scarce and the private sector pays multiples of public sector salaries.
Governance is broader than regulation. Where regulation asks “what rules should we write?”, governance asks “what institutions, processes, and capabilities do we need to manage this transition well?” This reframing is not a retreat from accountability or oversight. It is a recognition that effective oversight of AI requires instruments that regulation alone cannot provide: real-time technical monitoring, adaptive standards that evolve with the technology, cross-sector coordination, and institutional learning at a pace that matches technological change.[3] Regulation remains an important tool within the governance toolkit, but it is one tool among many, and treating it as the entirety of the response is like trying to manage a pandemic with legislation alone.
We propose what we call the “Governance Maturity Model” – a framework for assessing where jurisdictions sit on the spectrum from reactive to adaptive AI governance. The model has four levels. Level 1 is “reactive regulation”: the jurisdiction has no AI-specific governance and responds to AI-related harms only after they occur, using existing legal frameworks that may or may not be adequate. Many developing countries and some US states sit here. Level 2 is “prescriptive regulation”: the jurisdiction has enacted or is enacting comprehensive AI-specific legislation that defines risk categories, mandates compliance procedures, and establishes enforcement mechanisms. The EU, following the AI Act, is the paradigmatic example.[5] The strength of this level is legal certainty; the weakness is rigidity and the regulatory lag paradox described above.
Level 3 is “coordinated oversight”: the jurisdiction distributes AI governance across existing sectoral regulators, with a central coordinating function that ensures consistency and fills gaps. The UK’s current framework aspires to this level but has not yet achieved it, because the coordinating function (currently split between DSIT, AISI, and the Cabinet Office) lacks the authority and resources to be effective.[19] Level 4 is “adaptive governance”: the jurisdiction has built the institutional infrastructure for continuous learning and adaptation – real-time monitoring capabilities, regulatory sandboxes that feed into policy, structured feedback loops between industry and government, and the capacity to update governance frameworks without primary legislation. Singapore, as we discuss in Section 5, comes closest to this level, though no jurisdiction has fully achieved it.[20]
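The schema can be summarised compactly in code. The sketch below is purely illustrative – the level names are the model’s own, but the types, fields, and example placements are our shorthand for the discussion above – showing how comparative assessments of jurisdictions might be recorded.

```python
from dataclasses import dataclass
from enum import IntEnum


class MaturityLevel(IntEnum):
    """The four levels of the Governance Maturity Model."""
    REACTIVE_REGULATION = 1      # respond to harms only after they occur
    PRESCRIPTIVE_REGULATION = 2  # comprehensive AI-specific legislation
    COORDINATED_OVERSIGHT = 3    # sectoral regulators plus central coordination
    ADAPTIVE_GOVERNANCE = 4      # continuous learning and adaptation


@dataclass
class Placement:
    jurisdiction: str
    level: MaturityLevel
    note: str


# Illustrative placements, summarising the discussion above.
placements = [
    Placement("EU", MaturityLevel.PRESCRIPTIVE_REGULATION,
              "AI Act: legal certainty, but exposed to the regulatory lag paradox"),
    Placement("UK", MaturityLevel.COORDINATED_OVERSIGHT,
              "aspired to rather than achieved; coordination under-resourced"),
    Placement("Singapore", MaturityLevel.ADAPTIVE_GOVERNANCE,
              "closest to Level 4, though no jurisdiction has fully achieved it"),
]
```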
The Governance Maturity Model is not a simple hierarchy in which higher levels are always better. Level 2 prescriptive regulation may be appropriate for specific high-risk applications where legal certainty is paramount – AI in criminal sentencing, for example, or autonomous weapons systems. The argument is not that regulation is never the right tool, but that a jurisdiction’s overall approach to AI governance should aspire to Level 3 or 4, deploying regulation selectively within a broader adaptive framework rather than treating it as the default instrument.
The second conceptual contribution of this brief is the idea of “governance as infrastructure.” The analogy is deliberate. The UK does not build a new broadband network for each digital service; it invests in shared digital infrastructure that multiple applications can use. Similarly, governance capacity should not be built from scratch for each AI application or sector. The ability to evaluate an AI model’s reliability, to audit its training data provenance, to assess its impact on a particular population – these are capabilities that recur across every domain in which AI is deployed. Building them as shared infrastructure, accessible to every regulator and government department, is vastly more efficient than expecting each body to develop its own capability independently.
This infrastructure metaphor also illuminates a critical timing question. You do not wait until traffic congestion is unbearable to start building roads. You invest ahead of demand, accepting that the infrastructure will be underutilised initially. The same logic applies to governance capacity. The UK needs to invest now in the institutions, skills, and processes that will be needed to govern AI over the coming decade, rather than waiting for each governance failure to reveal the next capability gap. The cost of building this infrastructure proactively is a fraction of the cost of responding reactively to governance failures – as the Post Office Horizon scandal illustrates in a closely adjacent domain.[21]
The UK’s “pro-innovation” approach to AI – delegating oversight to existing sectoral regulators through five cross-cutting principles (safety, transparency, fairness, accountability, and contestability) – has the right instinct but lacks the infrastructure to succeed.[9] The approach was set out in the March 2023 white paper and reinforced by subsequent policy statements,[22] but the gap between the framework’s ambition and the regulators’ capacity to deliver it has widened rather than narrowed. The fundamental problem is that the framework distributes responsibility without distributing capability.
Ofcom, the FCA, the CMA, and the ICO each face AI governance challenges that are central to their mandates. Ofcom must address AI-generated disinformation and deepfakes under its Online Safety Act responsibilities. The FCA must govern the use of AI in credit decisions, algorithmic trading, and customer service.[23] The CMA must assess whether foundation model providers are engaging in anti-competitive practices.[24] The ICO must enforce data protection principles against AI systems that process personal data in ways that are technically opaque.[25] Each of these challenges requires deep technical understanding of how AI systems work – not at the level of academic computer science, but at the practical level of knowing what questions to ask, what evidence to demand, and what claims to be sceptical of.
Most regulators do not currently have this capacity, and they cannot build it independently. The talent market for people who combine technical AI expertise with regulatory experience is vanishingly small. The private sector absorbs the vast majority of technical AI talent at salaries that public sector bodies cannot match. Even AISI, which has been relatively successful at attracting technical staff, has benefited from the novelty of its mission and its partial insulation from standard civil service pay scales – advantages that are not available to regulators operating under established HR frameworks.[26] A strategy that relies on each regulator independently recruiting and retaining AI expertise is a strategy that will fail.
The alternative is to build shared governance infrastructure. We propose four components. First, a technical secondment programme that places AI engineers from industry and AISI within key regulatory bodies for 12-month rotations. These secondees would not write policy; they would build the internal technical literacy of the host organisation, training permanent staff and helping to develop AI-literate regulatory processes. The model has precedents: the Government Digital Service pioneered a similar approach to digital skills in the 2010s, and the FCA’s TechSprint programme has demonstrated the value of embedding technical practitioners within regulatory teams.[23]
Second, shared AI evaluation infrastructure. Currently, any regulator that wants to evaluate an AI system must build or procure its own testing capability. This is inefficient and leads to inconsistent standards. A central evaluation facility – building on AISI’s existing work but with a broader mandate covering deployed systems, not just frontier models – would allow regulators to submit AI systems for independent technical assessment against common benchmarks.[18] This is analogous to how the National Physical Laboratory provides shared measurement infrastructure, or how GCHQ’s National Cyber Security Centre provides shared cyber security assessment capability.
Third, common standards for AI impact assessment. Each regulator is currently developing its own approach to assessing the impact of AI systems within its domain. The ICO has its AI and data protection risk toolkit;[25] the FCA is developing guidance on AI model risk management; the CMA has published its own AI principles.[24] While sector-specific adaptation is appropriate, the underlying methodology for assessing AI systems – evaluating training data quality, testing for bias, assessing robustness, examining transparency – should be standardised.[27] DSIT should convene regulators to develop a common AI impact assessment framework, with sector-specific modules that build on a shared foundation (sketched schematically below).

Fourth, a horizon-scanning and gap-identification function within the Cabinet Office that continuously monitors the AI governance landscape for emerging risks that fall between regulatory mandates – the gaps that no existing regulator is responsible for, which are precisely where the most dangerous governance failures tend to occur.
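Returning to the third component, the sketch below shows one way a layered impact assessment framework could be structured: a shared core that every regulator completes, extended by sector-specific modules. Every type and field name here is hypothetical; the actual framework would be defined by the regulators DSIT convenes.

```python
from dataclasses import dataclass, field


@dataclass
class CoreAssessment:
    """Shared foundation: dimensions every regulator evaluates."""
    training_data_quality: str   # provenance, representativeness, licensing
    bias_testing: str            # methodology used and summary of findings
    robustness: str              # behaviour under unusual or adversarial inputs
    transparency: str            # documentation and explainability provided


@dataclass
class SectorModule:
    """Sector-specific extension layered on the shared foundation."""
    regulator: str                           # e.g. "ICO", "FCA", "Ofcom"
    criteria: dict[str, str] = field(default_factory=dict)


@dataclass
class AIImpactAssessment:
    system_name: str
    core: CoreAssessment                     # common across all sectors
    modules: list[SectorModule] = field(default_factory=list)
```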
The EU AI Act is the world’s most ambitious attempt at comprehensive AI regulation, and its implementation difficulties are instructive for any jurisdiction considering a similar approach.[5] The Act classifies AI systems into risk tiers – unacceptable, high, limited, and minimal – and imposes corresponding obligations ranging from outright bans to transparency requirements.[8] In theory, this provides legal certainty and a clear compliance framework. In practice, the implementation is revealing problems that were foreseeable from the outset. Companies are struggling to classify their AI systems within the Act’s risk categories, particularly where a single model is used across multiple applications with different risk profiles. The Act’s provisions on foundation models, inserted late in the legislative process in response to the emergence of ChatGPT, sit uneasily with the rest of the framework and have generated significant uncertainty about compliance requirements.[13]
More fundamentally, the EU is discovering that it lacks the enforcement capacity to make the Act effective. The newly created AI Office has a modest staff count and budget relative to the scale of its mandate.[28] National competent authorities in member states are at varying stages of readiness, with many yet to establish the technical infrastructure needed to audit AI systems. The Act’s requirement for “conformity assessments” of high-risk AI systems presumes the existence of qualified auditors and agreed standards – neither of which currently exist at scale. The European Commission has acknowledged that full enforcement will take years, during which the technology will have continued to evolve. The EU’s experience demonstrates Level 2 of the Governance Maturity Model in its purest form: a comprehensive legal framework that is rigorous on paper but faces structural challenges in implementation.
The United States has taken a markedly different path. The Biden administration’s October 2023 Executive Order on AI was the most significant federal AI policy intervention, establishing reporting requirements for frontier model developers, directing agencies to address AI risks within their domains, and investing in AI safety research.[29] The approach had the virtue of speed – an executive order can be issued in weeks, whereas the EU AI Act took three years to legislate – but also fundamental limitations. Executive orders are reversible by subsequent administrations, as demonstrated when the Trump administration rescinded the order shortly after taking office in January 2025.[30] This illustrates a different failure mode: governance that is adaptive but impermanent, subject to political cycles rather than building durable institutional capacity. The US currently lacks comprehensive federal AI legislation, and the patchwork of state-level initiatives – from Colorado’s AI consumer protection act to California’s various proposals – creates precisely the kind of fragmented, inconsistent governance landscape that comprehensive approaches are intended to avoid.
Singapore offers the most instructive model for the UK. Rather than pursuing comprehensive legislation, Singapore has built an adaptive governance ecosystem through a combination of voluntary frameworks, regulatory sandboxes, and institutional capacity-building.[20] The Infocomm Media Development Authority’s Model AI Governance Framework, first published in 2019 and regularly updated, provides practical guidance that organisations can adopt incrementally.[31] The AI Verify Foundation’s testing framework offers a concrete tool for assessing AI systems against governance principles. Crucially, Singapore has invested in the institutional infrastructure that makes these voluntary approaches effective: a well-resourced national AI office, deep government-industry dialogue mechanisms, and a public sector with genuine technical literacy at senior levels.
Singapore’s approach represents something close to Level 4 on the Governance Maturity Model – adaptive governance that can evolve with the technology without requiring new legislation for each development. Its limitations are also instructive: the model relies heavily on government-industry trust, works best at the scale of a city-state with a relatively small number of major AI deployers, and depends on a calibre of public administration that not every country can replicate. The UK cannot simply copy Singapore’s approach, but it can learn from the underlying principles: invest in institutional capacity first, use voluntary frameworks to build norms and expectations, deploy regulation selectively for specific high-risk applications, and maintain the flexibility to adapt as the technology evolves.
Several other jurisdictions offer relevant lessons. Canada’s Artificial Intelligence and Data Act, part of the broader Bill C-27, has faced sustained criticism for vague definitions and uncertain enforcement mechanisms – a cautionary tale about legislating without first building the institutional capacity to implement.[32] Japan has pursued a “social principles” approach, emphasising human-centric AI development through non-binding guidelines and industry self-governance, which has maintained flexibility but raised questions about accountability.[33] The common thread is that jurisdictions which invested in institutional capacity before or alongside legislative action have fared better than those which legislated first and attempted to build capacity afterwards.
Our recommendations are deliberately institutional rather than legislative. The UK does not need more AI laws – it needs the capacity to govern AI well within existing legal frameworks, adapting as the technology evolves. The following six recommendations are ordered by priority and feasibility, with the first three achievable within the current Parliament and the latter three requiring longer-term investment.
First, establish the AI Governance Capacity Unit within the Cabinet Office. This unit should be small (30–50 staff), technically capable, and explicitly tasked with building cross-departmental AI governance competence rather than centralising AI oversight. Its core functions would include: coordinating the technical secondment programme described below; managing shared evaluation infrastructure; convening regulators to develop common standards; and conducting the horizon-scanning function that identifies governance gaps. The unit should report to a minister with genuine cross-departmental authority – the Chancellor of the Duchy of Lancaster or equivalent – to avoid being captured by any single department’s priorities. Its relationship with DSIT’s AI policy team should be complementary: DSIT leads on AI industrial strategy and international engagement; the Cabinet Office unit leads on governance capacity and cross-departmental coordination.
Second, create the AI Technical Secondment Programme. This programme should place 50–100 AI engineers and researchers within key regulatory bodies annually, funded centrally and administered by the Governance Capacity Unit. Priority placements should be Ofcom (AI-generated content and deepfakes), the FCA (AI in financial services), the CMA (AI and market competition), the ICO (AI and data protection), and the NHS (AI in clinical decision-making). Secondees should be drawn from AISI, the Alan Turing Institute, and – critically – industry, with appropriate conflict-of-interest protections.[34] The programme should be designed to build permanent institutional capacity, not to create a permanent dependency on secondees: each placement should include a knowledge transfer plan that trains permanent staff.
Third, develop shared AI evaluation infrastructure. Building on AISI’s existing work on frontier model evaluation, create a broader facility that regulators can access to assess AI systems against sector-specific benchmarks.[18] This facility should have the capacity to evaluate not just frontier models but the deployed AI systems that regulators actually encounter – fine-tuned models, compound AI systems, AI-integrated products – against criteria that include reliability, bias, robustness, and transparency. The facility should operate on a service model, with regulators able to commission assessments as needed, and its methodologies should be open and reproducible to build industry confidence and enable self-assessment.
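The “service model” can be made concrete with a sketch of a commissioning interface. Everything below is hypothetical – the names, fields, and function signature are illustrative only – but it captures the intended workflow: a regulator submits a request, the facility applies open, reproducible methods, and returns a report.

```python
from dataclasses import dataclass
from enum import Enum


class Criterion(Enum):
    RELIABILITY = "reliability"
    BIAS = "bias"
    ROBUSTNESS = "robustness"
    TRANSPARENCY = "transparency"


@dataclass
class AssessmentRequest:
    """What a regulator submits when commissioning an evaluation."""
    regulator: str              # e.g. "FCA"
    system_under_test: str      # a deployed system, not only a frontier model
    criteria: list[Criterion]   # which of the common criteria to evaluate
    sector_benchmark: str       # sector-specific benchmark suite, if any


@dataclass
class AssessmentReport:
    request: AssessmentRequest
    scores: dict[Criterion, float]  # results from open, reproducible methods
    methodology_ref: str            # published methodology, enabling self-assessment


def commission(request: AssessmentRequest) -> AssessmentReport:
    """Entry point of the facility's service model; a placeholder in this sketch."""
    raise NotImplementedError("illustrative interface only")
```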
Fourth, mandate annual AI governance readiness assessments for all government departments and regulators with significant AI exposure. These assessments should evaluate: the organisation’s technical understanding of AI systems within its remit; the adequacy of its processes for identifying and responding to AI-related risks; its capacity to engage meaningfully with industry on AI governance questions; and its ability to coordinate with other regulators on cross-cutting issues. The assessments should be conducted by the Governance Capacity Unit and published, creating both accountability and a benchmark for measuring progress. This is not an exercise in bureaucratic compliance – it is a diagnostic tool for identifying where governance capacity is weakest and directing investment accordingly.
Fifth, convene the AI Governance Standards Board. This body, drawing membership from regulators, industry, academia, and civil society, should develop the common AI impact assessment framework described in Section 4. The Board should operate on a model similar to the Financial Reporting Council – independent of government but with statutory recognition – and should be empowered to develop standards that regulators can adopt within their own frameworks.[27] The goal is convergence without rigidity: a shared foundation that allows sector-specific adaptation while preventing the fragmentation that currently characterises UK AI governance. The Board should also serve as the UK’s interface with international AI standards processes, ensuring that domestic governance standards are compatible with emerging global norms.[35]
Sixth, and most ambitiously, reorient the UK’s international AI strategy around governance capacity rather than safety summitry. The Bletchley Park and Seoul summits established the UK as a convener on AI safety, but the follow-through has been uneven, and the space is increasingly crowded with competing initiatives.[36] The UK’s distinctive contribution should be in practical governance – sharing the institutional models, evaluation methodologies, and capacity-building approaches developed domestically with partner countries, particularly in the Commonwealth and Global South where AI governance capacity is most urgently needed. This is both a public good and a strategic investment: countries that adopt UK-compatible governance approaches become natural partners for trade, data-sharing, and technology cooperation. DSIT and the FCDO should jointly develop an AI Governance Partnership Programme that makes UK governance expertise available to partner governments, funded through existing ODA commitments.
These recommendations share a common logic: they treat governance capacity as infrastructure, to be built proactively and maintained continuously, rather than as a reactive response to individual AI applications or crises. The UK has a genuine opportunity to pioneer an approach to AI governance that is more effective than the EU’s prescriptive regulation, more durable than the US’s executive action, and more scalable than Singapore’s city-state model. But that opportunity will be lost if the debate remains trapped in the binary of “regulate versus not regulate.” The question is not whether to govern AI – it is how to build the institutions capable of governing it well.
Key recommendation
The UK should establish an AI Governance Capacity Unit within the Cabinet Office, tasked with building cross-departmental competence in AI oversight rather than creating new regulatory bodies for individual AI applications.