Late last year, we wrote about the Artificial Intelligence (Regulation) Bill - a private member's bill which had its first reading in the House of Lords. While that Bill didn't make it to the House of Commons, another private member's bill - the Public Authority Algorithmic and Automated Decision-Making Systems Bill (admittedly, not the snappiest of titles - we'll call it the Bill, for short) - has now been introduced. It outlines several key provisions to ensure that such systems are used responsibly, transparently, and fairly.
As with the Artificial Intelligence (Regulation) Bill before it, the Bill is only eight pages long and therefore doesn't contain a huge amount of detail. It's also important to remember that only a minority of private members' bills actually become law. Nevertheless, we've summarised some of the key points below:
Purpose of the Bill
The primary goal of the Bill is to ensure that “algorithmic and automated decision-making systems are deployed in a manner that accounts for and mitigates risks to individuals, public authorities, groups, and society”. The Bill seeks to promote “efficient, fair, accurate, consistent, and interpretable decisions”, while providing for an independent dispute resolution service.
Scope of Application
The Bill applies, from six months after it is passed, to any algorithmic or automated decision-making system developed or procured by a public authority - namely, “any system, tool or statistical model used to inform, recommend or make an administrative decision about a service user or a group of service users” - including systems still in development (but not those in a test environment).
It excludes systems used for national security purposes and those that merely automate manual calculations.
Algorithmic Impact Assessments
Public authorities are required to complete an Algorithmic Impact Assessment (AIA) before deploying any algorithmic or automated decision-making system (subject to certain exceptions). The AIA must be updated when the system's functionality or scope changes and published within 30 days of completion. The Secretary of State will prescribe the form of the AIA framework, which includes assessing risks, minimising negative outcomes and maximising positive outcomes, and ensuring compliance with equality and human rights laws.
Algorithmic Transparency Records
Before using or procuring an algorithmic system, public authorities must complete an Algorithmic Transparency Record (subject to certain exceptions). This record must be published within 30 days of completion and updated with any changes in the system's functionality or scope. The record should include a detailed description of the system, its rationale, technical specifications, usage in decision-making, and information on human oversight.
Requirements for Public Sector Organisations
Public authorities must:
- Notify the public when decisions are made using algorithmic systems.
- Provide meaningful explanations to affected individuals about how decisions were made.
- Monitor outcomes to safeguard against unintended consequences.
- Validate data accuracy and relevance.
- Conduct regular audits and evaluations of these systems.
Training of Public Sector Employees
Employees involved in using these systems must be trained in their design, function, and risks. They should have the authority and competence to challenge the system's output.
Logging Requirements
All systems must have logging capabilities to record events during operation. Logs must be held for at least five years (subject to exceptions) and should record whether final decisions followed the system's recommendations.
Prohibition on Procuring Systems Incapable of Scrutiny
Public authorities are prohibited from using systems where there are “practical barriers” including “contractual or technical measures and intellectual property interests, limiting their effective assessment or monitoring of the algorithmic or automated decision-making system in relation to individual outputs or aggregate performance”. Vendors must disclose evaluation results and submit systems for evaluation by the AI Safety Institute upon request.
Independent Dispute Resolution Service
The Secretary of State must ensure that there is an independent dispute resolution service available for challenging decisions made by these systems or obtaining redress.
This Bill represents a significant step towards ensuring that algorithmic and automated decision-making systems are used responsibly within the public sector, with a strong emphasis on transparency, accountability, and fairness. However, it is really just a “starter for ten” and will need further review and refinement as it continues through the legislative process. We will have to see how far it gets but, with this being the second private member's bill on the subject of AI, and with recent indications from the Labour party of its willingness to legislate on AI, it seems we might need to get ready for UK regulation in this area sooner than expected.
At the beginning of September, the UK also signed the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This is the first international legally binding treaty aimed at ensuring that the use of AI systems by public authorities (or by private actors acting on their behalf) is consistent with the values mentioned above. The Bill proposed in the House of Lords could therefore be seen as the UK's early attempt to implement the obligations enshrined in the treaty.