AI is the talk of the town at the moment, and regulatory updates in this space are moving at a dizzying pace.

European lawmakers continue to drive forward their plans for regulation at EU level through the draft EU Artificial Intelligence Act (“EU AI Act”); UK regulators have started to respond to the UK government’s own plans for domestic regulation of AI, set out in the ‘White Paper on Artificial Intelligence’ published back in March (“AI White Paper”); whilst US regulators and federal agencies are actively exploring AI policy across the pond.

To keep you up to speed, we’ve pulled together a few key headlines from the past few weeks on where things are heading when it comes to AI regulation.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Draft EU AI Act amendments expand ‘prohibited practices’ and bite on ‘Very Large Online Platforms’

Members of the European Parliament have introduced further amendments to the draft text of the EU AI Act ahead of talks with the Council of the EU (representing the Member States) and the European Commission (known as the ‘trilogues’) that will decide the final shape of the legislation.

One of the latest key amendments concerns the list of ‘prohibited practices’ for AI – those use cases which are banned outright because they are deemed incompatible with the fundamental values and rights of the EU. New rules in the adopted negotiating position expand the list of prohibited practices to include:

  • the use of ‘real-time’ and ‘post’ remote biometric identification systems in publicly accessible spaces;
  • the use of biometric categorisation systems using sensitive characteristics;
  • the use of predictive policing systems (based on profiling, location, or past criminal behaviour);
  • the use of emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  • the use of untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

The recitals to the EU’s new negotiating position note that social media platforms designated as ‘very large online platforms’ (“VLOPs”) – defined under the EU Digital Services Act (“DSA”) as platforms with at least 45 million average monthly active users in the EU – can be used in a way that “strongly influences safety online, the shaping of public opinion and discourse, election and democratic processes and societal concerns”. Given these concerns, AI systems used by such platforms will now be regulated by the draft EU AI Act.

AI systems used by social media platforms designated as VLOPs in their “recommender systems” (i.e. the automated systems a platform uses to suggest or prioritise specific information for users) are now classified as “High-Risk AI Systems” under the draft EU AI Act. Providers of such AI systems will be required to comply with the relevant obligations for High-Risk AI Systems – including those relating to risk management; the quality of data sets used to train the AI system; performance testing; record-keeping; cybersecurity; and effective human oversight of the AI system – as well as additional obligations under the DSA (e.g. audit and risk assessment requirements).

Just to underscore the evolving nature of this regulation, the process of adopting the EU AI Act still has some way to go. Spain, which currently holds the presidency of the EU Council of Ministers and is keen to reach a deal on the EU AI Act before the end of 2023, has recently circulated a document outlining the key points for the upcoming trilogue negotiations. We will keep you updated on any significant developments arising from these discussions (which are scheduled for July, September and October this year).

UK regulators respond to the AI White Paper

You might recall that the UK government adopted something of a “wait and see” approach in the first iteration of its “pro-innovation” AI White Paper – proposing to empower the UK’s existing regulators to govern how AI is used in each specific sector. As such, we have now started to see UK regulators respond to the gauntlet thrown down by the UK government:

In the wake of the rather laissez-faire AI White Paper, attention-grabbing developments in AI have led Rishi Sunak to indicate that the UK may change tack and take a more aggressive approach to domestic regulation. This remains to be seen, but the UK government recently announced plans to host the first global summit on AI safety this autumn. The summit “will consider the risks of AI, including frontier systems, and discuss how they can be mitigated through internationally coordinated action. It will also provide a platform for countries to develop a shared approach to mitigate these risks”. The summit has also been backed by the US, with President Biden committing to “attend at a high level”.

OpenAI faces class action data privacy lawsuit in the US

OpenAI, the company behind the popular chatbot ChatGPT, is facing a class-action lawsuit in the US for allegedly scraping private user information from the internet without consent. The lawsuit (filed on behalf of 16 named plaintiffs) alleges that OpenAI trained ChatGPT using data collected from millions of social media comments, blog posts, Wikipedia articles and other sources without the consent of the respective users. This data is said to have included personal information such as names, email addresses, phone numbers and IP addresses.

The lawsuit claims that OpenAI’s actions violated a number of US laws, including the Computer Fraud and Abuse Act (CFAA), the Electronic Communications Privacy Act (ECPA) and several state consumer protection laws. OpenAI has not yet filed a response to the lawsuit, although the company has previously asserted that it complies with all applicable privacy laws.

This is certainly a case to keep an eye on. Lawsuits like this (as well as the claim brought by Getty Images against Stability AI) raise important questions about the privacy and copyright implications of large language models, and it may be only a matter of time before we see similar claims brought against major AI platforms here in the UK.

FTC raises competition concerns around generative AI

The US Federal Trade Commission (FTC) has also said that generative AI may raise a variety of competition law concerns. In particular, control over one or more of the key building blocks that generative AI relies on – such as data, talent and computational resources – could affect competition in generative AI markets, as could developments in open-source AI.

Incumbents that control key inputs or adjacent markets, including the cloud computing market, may be able to use unfair methods of competition to entrench their current power, or to use that power to gain control over a new generative AI market. Firms in generative AI markets could also take advantage of “network effects” to maintain a dominant position or concentrate market power. A related phenomenon is “platform effects”, where companies become dependent on a particular platform for their generative AI needs. As with network effects, firms could leverage platform effects to consolidate their market power, especially if they take specific steps to lock in customers in an exclusionary or otherwise unlawful way.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

In what is an exciting and rapidly developing area of law, we at Lewis Silkin are staying on top of everything you need to know about AI regulation.

If you have any questions about AI and how it might impact your business, please feel free to reach out to a member of the team at Lewis Silkin at any time.