Lewis Silkin tech lawyers were out in force last week at the Society for Computers and Law’s annual AI conference. The event – which sold out for the first time this year – saw a number of experts and practitioners speak on a variety of topics spanning AI, the law, and business, including AI regulation, business deployment of AI, and real-life use cases in legal service delivery. The headliner was undoubtedly the thought-provoking (and very entertaining) keynote speech from Lord Chris Holmes, who authored the Artificial Intelligence (Regulation) Bill that fell when Parliament was dissolved ahead of the election earlier this year. Don’t despair, though: Lord Holmes indicated that he wants to bring this Bill back to life during this Parliament. 

Here's a quick round-up of our key takeaways from the event:

  • The message was clear: it’s time to legislate in the UK. If we fail to do so, the UK’s regulatory approach risks lapsing into alignment with the EU AI Act, which, despite being pioneering and progressive, is not perfect. The UK has an opportunity to forge its own path successfully, thanks to factors such as time zone, geography, higher education and the good fortune of English common law. 
     
  • Lord Chris Holmes made the case for principles-based, outcomes-focused, input-understood legislation. What does this mean in practical terms? Well, have a look at the terms of the (now lapsed) Bill, which proposed the establishment of a nimble, horizontally focused regulator (the “AI Authority”) that would assess the regulatory landscape and address gaps in regulatory coverage. The Bill (accessible here) is a very digestible nine clauses long and makes for interesting reading. According to Lord Holmes:

    • The most important clause is clause 6, which requires the AI Authority to implement a programme of “meaningful, long-term public engagement”. Without public education and buy-in, individuals are unlikely to be able to take full advantage of the benefits of AI technology. 
       
    • A standalone bill is preferable to plugging legislative gaps in existing/future legislation, e.g., the Product Safety and Metrology Bill, which (unlike any specific AI legislation) made it into the King’s Speech this year. Clarity and consistency in legislative approach are paramount.
       
    • Principles-based and “right-size” regulation will deter tech developers from “jurisdiction shopping” away from the UK. 
       
    • Regulatory interoperability is critical to ensure that organisations operating under different legal frameworks can work together and to remove cross-jurisdictional barriers to trade. The Fintech Bridge with Singapore was cited as an example of successful regulatory interoperability. 
       
    • In a market where salaries at tech companies dwarf any salary a government body can offer, AI talent should see a stint at a regulator as a valuable and important part of their long-term career. 
       
    • Transparency is key. There must be transparency over the input materials used to train LLMs, so that rightsholders can properly consent to such use and be adequately remunerated. AI labelling should also be enforced. 
       
  • We’re seeing a sea change in public demand for regulation, fuelled in part by increased public consciousness, including of the potential harm technology poses to children. This isn’t specific to AI but is a focus point for digital regulation generally. 
     
  • That being said, the government’s focus on “AI safety” (which is confined to a “specific set of future Armageddon issues”) risks neglecting other important issues, e.g., misinformation. You may have seen it too – a recent report from a group of security experts at the World Economic Forum highlighted that misinformation and disinformation pose the biggest global threat over the next couple of years, ahead of other threats such as war, inflation, or extreme weather. 
     
  • “We ignore Chinese regulation at our peril”. There is a lack of discourse on China’s hard regulatory approach, but it is important – although its legislation only captures companies providing GenAI services to the public in China, there will be an impact up and down supply chains, and many companies selling products across the globe will be affected by China’s approach to AI legislation. 
     
  • There is also a growing patchwork of hard regulation coming out of the US. There is, however, currently a dearth of legislation in the Global South. 
     
  • In contrast, the EU AI Act isn’t “hard” regulation. At its nub, it imposes a set of essential rules that high-risk AI systems must follow, including conformity with technical standards that won’t be made available until 2026/2027. 
     
  • Don’t forget about the Digital Services Act, which is just as important as the EU AI Act and plugs a number of regulatory gaps. 
     
  • The technical documentation requirements under the EU AI Act are not conducive to working practices at start-ups, which often rely on overstretched teams developing tech iteratively, with heavy use of trial and error. How can start-ups navigate this? Break the documentation requirements into bitesize chunks and build in compliance by design. 
     
  • AI requires engagement at the highest level of businesses. CEOs currently aren’t as AI-minded as they should be, given that employees are tinkering with AI throughout organisations. To avoid “accidents”, businesses should embed the following: (i) AI literacy, education, and an understanding of the issues across the workforce; (ii) AI policies governing access to and use of data; and (iii) a closed AI interface within the organisation, which can encourage use in the right way. 
     
  • ESG is becoming a key part of the AI conversation. The environmental impacts in terms of energy and water usage are shocking – we heard that asking ChatGPT to write a 100-word email consumes more than half a litre of water. 
     
  • Finally, technology lawyers advising on AI procurement, take note: this week the SCL AI Group Committee published guidance and sample clauses to assist with the contractual implications of the EU AI Act (accessible here). Lewis Silkin’s Roch Glowacki contributed to their development, so please feel free to get in touch with any queries on the clauses (and on AI contracting more generally).