Algorithms and AI have been in the news a lot in recent weeks – students in Scotland complained that algorithms were used to allocate or adjust teacher-predicted grades when exams could not be taken because of the Covid-19 pandemic. Concerns were expressed that the statistical model used to adjust teacher predictions in line with school performance had an undue adverse effect on poorer students. Following the outcry, the Scottish government agreed to revert to the teacher predictions.

Ofqual agreed to publish the algorithm used for the equivalent English exams, and changes to the appeals process have been made with the aim of ensuring students are not disadvantaged. However, it appears that the algorithm and the use of a bell curve have led to a disproportionate number of students from poorer backgrounds receiving downgrades.

Andrew Pakes of the Prospect trade union tweeted on 12 August:

"Good to see this being talked about. Similar issues also growing at work - hiring, promotions, performance. This is age of HR tech. We need to be on it."

The Home Office agreed to change an algorithm it was using to help process visa applications after court action was brought alleging that it was racist. Ofcom has issued a discussion document on the pros and cons of personalised pricing, highlighting consumers' concerns that such practices may be neither transparent nor fair. And Facebook is reportedly re-evaluating its algorithms to ensure that they are not racist.

In April 2020, the Institute for the Future of Work issued a report on the use of algorithms by employers in recruitment. A key finding was that while organisations were at least partially aware of the privacy issues, few, if any, carried out equality impact assessments before deploying their recruitment systems. If there is an inbuilt bias in an algorithm, it can lead to a feedback loop in which that bias becomes more and more entrenched. Amazon reportedly trialled (and then rejected) a recruiting tool after finding that it was not returning appropriate candidates. It had also found a more sinister problem: the tool was not returning female candidates, creating a self-fulfilling prophecy that women do not do tech work.

AI and automated decision-making have been around for a while. For example, if you apply for a loan, a credit check is likely to be carried out by a machine. If the application is refused, people want to know why, but financial institutions have been reluctant to divulge details or allow a right of appeal. To help address this, the ICO and the Alan Turing Institute issued guidance on explaining AI decisions, especially those made without human input. The ICO has also issued guidance on automated decision-making and profiling and, at the end of July, published guidance on data protection and AI. The government also refers to bias in data in its guidance on procuring AI systems and has produced a data ethics framework.

The growing volume of guidance and discussion shows that this issue is now appearing on regulators' radar. If you do deploy AI in your business, it is therefore important to carry out the necessary privacy and equality impact assessments and to ensure that decision-making is transparent and can be readily explained to those affected. Before deploying automated hiring tools, companies should consult their workforce and any affiliated union to discuss potential impacts on equality and proposals for equality impact assessments.

In 2018, the European Agency for Fundamental Rights issued a paper acknowledging that making algorithms fair and non-discriminatory is a daunting exercise. However, it suggested:

  • checking the quality of the data used to build algorithms, to avoid faulty algorithm ‘training’;
  • promoting transparency – being open about the data and code used to build the algorithm, as well as the logic underlying it, and providing meaningful explanations of how it is being used. Among other things, this will help individuals looking to challenge data-based decisions to pursue their claims;
  • carrying out impact assessments that focus on the implications for fundamental rights, including whether algorithms may discriminate on protected grounds, and examining how proxy information can produce biased results;
  • involving experts in oversight: to be effective, reviews need to involve statisticians, lawyers, social scientists, computer scientists, mathematicians and experts in the subject at issue.

In April 2019, the High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence. A key principle is diversity, non-discrimination and fairness: the guidelines say that unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. To foster diversity, AI systems should be accessible to all, regardless of any disability, and should involve relevant stakeholders throughout their entire life cycle.