The death of George Floyd and the rise in profile of the Black Lives Matter movement have been a leitmotif of the summer, leading to much reflection about how structural racism can be tackled. In that context, the European Commission promised to come up with an Action Plan to combat structural racism.

In addition, the UK exam results debacle, the change in Home Office policy on visa applications, and court action over police use of facial recognition technology have each contributed to awareness of, and debate around, the use of technology and discrimination.

Against this background, European Digital Rights (EDRi), which “works to defend rights and freedoms in the digital age”, has issued a briefing note making recommendations to the EU on advancing racial justice and tackling related discrimination in the field of tech.

EDRi says AI has huge potential to exacerbate discrimination, at a scale and with a degree of opacity beyond non-automated or “human” processes. Although AI is often portrayed as neutral and objective, it in fact embeds and amplifies the underlying structural bias of our society.

The briefing note sets out the main ways in which digital tech and policy affect ethnic minorities, including areas such as law enforcement, (over)policing and surveillance, online privacy, and the profiling of migrants and racialised groups. The report also notes that AI can perpetuate inequalities in employment and that ‘profiling’ creates discrimination in the area of social welfare.

While the GDPR already includes provisions on ‘profiling’, EDRi makes the following recommendations:

  • For the European Commission to:
    • ensure coordination, collaboration and meaningful consultation with racialised communities, anti-racism and digital rights organisations to develop the Action Plan
    • implement a review procedure to ensure that any new legislation or policy introduced in the field of technology or digital rights does not adversely affect racialised groups
    • prevent abuses of racialised communities by legally restricting impermissible uses of artificial intelligence
    • ensure adequate legal protection for racialised groups against data-driven profiling
  • For the EU and member states to:
    • implement a ban on biometric mass surveillance in publicly accessible spaces and prevent further proposals that could lead to mass surveillance
    • review, evaluate and ensure fundamental rights compliance of EU databases in the fields of police cooperation and migration
    • review, evaluate and ensure that any EU involvement, funding and support for AI and biometric processing in migration control at the border is consistent with fundamental rights
    • ensure choice, accountability and fundamental rights in the Digital Services Act

On 16 September, in her annual State of the Union speech, European Commission President Ursula von der Leyen announced the appointment of an anti-racism coordinator, saying “we will tackle unconscious bias that exists in people, institutions and even in algorithms”. EDRi’s report aims to inform the EU’s thinking and may also be taken into account by UK organisations working on these issues, such as the Ada Lovelace Institute and the Centre for Data Ethics and Innovation.

AI can bring many benefits, but the EDRi report is a useful reminder that ‘controls’ and ‘awareness’ are needed when deploying it.