Earlier this year we published an article about bias and algorithms against the background of the school exam results. The UK government had already tasked the Centre for Data Ethics and Innovation (CDEI) with examining the use of algorithms in significant decision-making about individuals. The final report, delayed by the pandemic, has now been published, and its emphasis on audit and transparency is significant.

The CDEI's review focused on four sectors - policing, local government, financial services and recruitment. These sectors were chosen because they all involve decisions about individuals, and because there is evidence in each of both growing uptake of algorithms and historic bias. The report considers three main themes: data, governance, and tools and techniques.

The key recommendations are:

  • Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have an impact on significant decisions affecting individuals.
  • Organisations should be actively using data to identify and mitigate bias. They should make sure that they understand the capabilities and limitations of algorithmic tools, and carefully consider how they will ensure fair treatment of individuals.
  • Government should issue guidance that clarifies the application of the Equality Act to algorithmic decision-making. This should include guidance on the collection of data to measure bias, as well as the lawfulness of bias mitigation techniques (some of which risk introducing positive discrimination, which is illegal under the Equality Act).

The CDEI says there is significant scope to address the risks posed by bias in algorithmic decision-making within the law as it stands, but if that approach does not succeed, there is a clear possibility that future legislation will be required.

The report acknowledges the review's limitations: facial recognition, and the impact of bias in how platforms target content, were excluded from its scope. Experience from the review suggests that many of the steps needed to address the risk of bias overlap with those for tackling other ethical challenges, for example structures for good governance, appropriate data sharing, and the explainability of models.

Looking forward, the CDEI plans to bring together a diverse range of organisations with an interest in this area and to identify what would be needed to foster and develop a strong AI accountability ecosystem in the UK. It says this is an opportunity not only to manage the ethical risks of AI in the UK, but also to support innovation in an area where UK companies have the potential to offer audit services worldwide.