Back in February, the UK government published its much-anticipated consultation response to the March 2023 "AI White Paper" – doubling down on its proposed "hands-off" approach to regulating AI in the interest of creating an innovation-friendly regulatory environment. The position remains that the UK's existing regulators should apply their existing powers to regulate AI matters falling within their respective sectors.
The consultation response directed various regulators to publish a "strategic approach" to AI by 30 April 2024 – and this week Ofcom filed its homework early, publishing its strategic approach to AI for 2024/25. The report highlights the work Ofcom has undertaken so far to tackle AI risks in the communications sector and lays out the regulator's planned activities in relation to AI over the coming 12 months.
In terms of work to date, Ofcom "has a wide programme of work in place to identify current and emerging AI risks and opportunities". The report sets out examples of the steps Ofcom has taken to address key areas of "cross-cutting" AI risk, particularly relating to synthetic media, personalisation and security. These include:
- Publishing draft Illegal Harms Codes of Practice under its online safety regime, which include "proposed measures in relation to accountability and governance to identify and manage risks, including risks posed by the sharing of illegal synthetic content", as well as "proposed measures recommending that certain services collect safety metrics when testing recommender systems (which may include AI-driven systems), to improve online safety".
- Launching a deep-dive examination into the merits of synthetic content detection methods.
- Commissioning and publishing numerous pieces of research relating to AI (including, for example, on the adoption, attitudes, risks, and opportunities related to Generative AI).
- Carrying out horizon scanning to identify potential risks and benefits that AI could have for UK citizens and consumers.
- Partnering with academic institutions and domestic and international organisations to share knowledge and collaborate on AI-related issues.
Looking ahead, Ofcom's Plan of Work 2024/25 identifies various projects that will involve work to consider AI's impacts "even if this is not explicitly referenced", namely in areas such as online safety, broadcasting and telecoms. These projects generally centre on further research, investigation and monitoring of AI risk, but also involve issuing various Codes of Practice "to help regulated services tackle risks, to protect users from illegal and harmful content".
An overarching theme of the report is the alignment between the government's five guiding AI principles (say it with me: (1) safety, security and robustness; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress) and the principles of Ofcom's own regulatory regime. These are all elements that Ofcom says are central to "the outcomes we want to see across the sectors we regulate and for the people who use and rely on communications services" – the report even goes so far as to put each of these words in bold wherever they appear.
Ofcom's report will feed into the UK government's wider analysis of where there may be overlaps (or indeed gaps) in regulatory coverage, and will help to inform policymakers' understanding of whether regulatory powers should be expanded and/or new legislation may be needed. The House of Lords Communications and Digital Committee (through its report on large language models (LLMs) and generative AI) has already voiced its support for introducing a standardised set of powers for the main regulators dealing with AI, given how greatly enforcement powers differ across UK regulatory regimes.
As the UK government continues its ongoing consultation and information-gathering process, it will also be interesting to see how these sector-specific reports from UK regulators shape the government's plans for a "central function" to help tie together the various domestic AI regulatory approaches. For now, the plan is for the "central function" – likely to be fulfilled by the expanded AI team within the Department for Science, Innovation and Technology (DSIT) – to catalyse the development of skills within regulators, support coherence and information sharing between regulators in addressing AI, and work with them to "analyse and review potential gaps in existing regulatory powers and remits".
One thing is for certain – there’s plenty more to come this year as we watch how the UK’s regulatory landscape for AI evolves in line with the strategic reports to be issued by regulators across various sectors … provided, of course, that the dog doesn’t eat any of their homework.