On 29 March 2023, the UK Government published its long-anticipated white paper on artificial intelligence (AI) (the “AI White Paper”), setting out its proposed regulatory framework for governing the development and deployment of AI technologies.

Staying true to the strategic vision of the UK Science and Technology Framework to "make the UK a science and technology superpower by 2030", the AI White Paper sets out proposals for implementing a "proportionate, future-proof and pro-innovation framework for regulating AI". The proposed framework favours an “agile” approach – with a strong focus on amassing evidence, learning from experience and adapting in line with the speed at which AI will inevitably evolve.

By adopting this pro-innovation approach to AI regulation, the UK Government intends to "help the UK harness the opportunities and benefits that AI technologies present" to "drive growth and prosperity by boosting innovation and investment and building public trust in AI" – with the overall aim to "strengthen the UK's position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies".

The Five Guiding Principles

The AI White Paper centres around the following five ‘Guiding Principles’, which are designed to “guide and inform the responsible development and use of AI in all sectors of the economy”:

  • Safety, security and robustness – AI systems should function in a secure, safe and robust way, with risks carefully identified, assessed and managed at all times.
  • Appropriate transparency and “explainability” – organisations developing and deploying AI should communicate appropriate information about the AI system and be willing to explain its decision-making process at a level of detail commensurate with the risks posed.
  • Fairness – AI systems should comply with the UK’s existing laws and regulations (such as the Equality Act 2010 and the UK GDPR) and should not undermine people’s legal rights, discriminate against individuals or create unfair market outcomes.
  • Accountability and governance – steps should be taken to ensure effective oversight of the use of AI systems, with accountability for unjust or unlawful outcomes.
  • Contestability and redress – users should be able to contest harmful outcomes and decisions made solely by AI.

The Role of UK Regulators 

Rather than handing responsibility for governing AI to a single regulator, the UK Government intends to empower the UK’s existing regulators to develop tailored, context-specific approaches that suit how AI is used in each sector.

Regulators bestowed with such responsibilities will be expected to:

  • apply the principles to AI use cases falling within their remit;
  • issue relevant guidance explaining and illustrating what compliance looks like; and
  • support businesses through cross-regulator collaboration.

Addressing Cross-Sector Risks

In recognition of the gaps that this patchwork approach to regulation may create, the government intends to set up a “Central Risk Function” to catch any risks that might otherwise fall between the cracks of each regulator’s specific remit – including the risks posed by Large Language Models or “LLMs” (such as GPT-3.5 and GPT-4, which underpin ChatGPT).

Given the broad spectrum of LLM applications across the AI supply chain, LLMs are flagged as a regulatory priority by the UK Government, but are unlikely to fall within the remit of any single regulator. While individual regulators may publish industry-specific guidance in due course, in the meantime it will fall to the Central Risk Function to strike the delicate balance between risk and innovation where LLMs are concerned.

The Central Risk Function will be expected to identify, assess, prioritise and monitor cross-cutting AI risks that may require regulatory intervention, and will take a lead from sectors where operational risk management is highly developed.

Sandboxes, Toolboxes (and other box-related metaphors)

The UK Government hopes to gain insight into regulatory needs, gaps and priorities by establishing AI sandboxes and testbeds, building on the success of previous sandbox pilots by the Information Commissioner’s Office and the Financial Conduct Authority.

As yet, details are few and far between as to who will be invited to join the sandboxes, when the sandboxes will be fully operational, and what they will actually involve. The goodwill is certainly there, though – the Government intends to roll out an initial pilot to discover whether innovators would be best supported by simulated test environments or by an advice-and-support model, under which experts provide targeted advice and support to help participants better understand the regulatory environment.

Equally, the UK Government is in the process of building out its compliance toolbox, having already published a roadmap to effective “AI assurance” (essentially, checking and verifying AI systems and the processes used to develop them) and established the “AI Standards Hub” – a new UK initiative dedicated to the evolving international field of standardisation for AI technologies.

The White Paper nods to the emerging market for AI assurance services as a key growth opportunity for the UK’s AI industry and its international reputation. A “Portfolio of AI Assurance Techniques”, intended to inform innovators about the role of assurance techniques in compliant AI development, is expected to be published in Spring 2023.

Comment

The UK Government's lighter-touch, pro-innovation approach to AI regulation differs notably from the European Union's draft AI Act, which puts forward a more detailed and stringent regulatory framework for AI systems. A summary of the EU’s proposals in the draft AI Act can be found here.

Through the AI White Paper, the UK Government seeks to strike a balance between protecting public interests and fostering innovation, focusing on agile, adaptable regulation that evolves alongside technological advancements. This approach is designed to ensure that businesses in the UK can continue to develop and deploy AI technologies without being held back by overly restrictive regulations.

In contrast, the EU’s draft AI Act takes a more granular approach, with detailed requirements and categorisations of AI technologies based on their risk levels. While this approach seeks to address specific concerns and risks associated with AI, it has been criticised for potentially hampering innovation and creating a more rigid regulatory environment.

In this battle of regulatory ideologies, it remains to be seen which approach best balances the competing needs for protection and innovation. All eyes will therefore be on the UK regulators and the guidance they introduce in the coming months and years.

The UK Government is consulting on the AI White Paper. The deadline for responses is 21 June 2023.