The term “artificial intelligence” (AI) is often used without a clear consensus on its meaning. Similarly, “AI governance” is not a singular concept but rather a set of guardrails that collectively guide an organisation's design, development, procurement, and deployment of AI to ensure compliance with legal and ethical requirements. 

A fundamental building block in designing a jurisdiction-agnostic approach to AI compliance is the development of an AI governance policy. The policy will be unique to each organisation, depending on the scope of its AI use and the maturity of its governance processes. However, there are seven key questions that all organisations should consider during the drafting process: 

  1. What should be the scope of the policy? 
    While the proliferation of generative AI introduces new risks, organisations should not overlook existing risks related to “traditional” AI and should consider whether the policy should also cover those other forms of AI. The scope of an organisation's policy must be specific to it and proportionate to how it intends to use AI and what the policy aims to achieve.
     
  2. Who is the intended audience for the policy? 
    The intended audience will impact the approach to drafting. For example, if aimed at employees, the policy may be akin to an acceptable use policy. Conversely, if the audience comprises the organisation's service providers, it might be designed as a schedule that can be appended to commercial agreements.
     
  3. Is the policy clear on what AI tools are permitted? 
    Where a policy is designed for employees, it is important to clarify how they can use AI systems. This may consist of a table setting out each system, its permitted use cases, and the purpose of those use cases (an illustrative example follows this list). Additionally, a list of “dos and don’ts” can be an effective way to help employees understand how they can, and should, use the AI tools available to them and to clearly define any restrictions on such use.
     
  4. How does the policy cover AI incidents? 
    Similar to data protection policies, an AI governance policy should include a clear escalation process for incidents involving AI systems. Different situations will require escalation to different teams, ranging from customer services to information security or public relations. The process should be as clear as possible; flowcharts or other visual aids may help.
     
  5. Is it clear who is responsible for what?  
    Accountability is a fundamental principle of trustworthy and responsible AI. It is therefore key that the policy clarifies who is responsible for each element of the overarching approach to governance. This may include which parts of the business sit on the organisation's AI board or committee, how new use cases can be proposed, and how technical teams will evaluate proposals against the organisation's baseline requirements for AI.
     
  6. How does the policy foster AI literacy across the organisation? 
    The AI governance policy is a key vehicle for documenting and communicating an organisation's training requirements and where employees can access additional information. This is particularly important ahead of 2 February 2025, when Article 4 of the EU AI Act will require providers and deployers of AI systems to ensure that their staff (and others dealing with the operation and use of AI systems on their behalf) have a sufficient level of AI literacy.
     
  7. How does the policy deal with jurisdiction-specific issues? 
    While a jurisdiction-agnostic approach to AI governance may be better suited to managing the emerging patchwork of regulation, it may also be necessary to include jurisdiction-specific appendices. For example, if an organisation will use AI within the EU, it might be useful to define the different categories of risk under the EU AI Act (unacceptable, high, limited, and minimal risk) so that users can easily determine whether their use, or proposed use, of AI requires additional compliance considerations.
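
By way of illustration (the systems, use cases, and restrictions below are hypothetical), the table described in question 3 might include entries along the following lines:

    System: Enterprise generative AI assistant
    Permitted use cases: Drafting and summarising internal documents
    Purpose: Employee productivity
    Restrictions: Do not input personal data or client-confidential information; outputs must be reviewed by a human before use

    System: Code-completion tool
    Permitted use cases: Suggesting code for internal tooling
    Purpose: Developer efficiency
    Restrictions: Suggestions must be reviewed before deployment; do not paste proprietary code into unapproved tools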

These are the key questions to consider when developing an AI governance policy. However, there is no one-size-fits-all policy, and governance is an iterative process. While organisations should consider all of the above, they must also avoid being caught up in the pursuit of perfection. An existing governance policy from another part of the business may provide the foundation for an AI governance process that can subsequently evolve. Nevertheless, the policy will need to remain adaptable, both to align with how AI is being used in the organisation and so that it can be refined as the organisation evaluates the effectiveness of its AI governance.

AI governance policies are, and will continue to be, a key tool in demonstrating accountability and proportionate risk mitigation in connection with an organisation's use of AI. Beyond legal requirements, they may also assist in pre-emptively accounting for reputational and ethical factors, thereby building stakeholder trust in the organisation’s use of AI.  

For more detailed discussion on AI governance policies, please refer to our article for the Journal of Robotics, Artificial Intelligence & Law where we discuss the purpose of these policies and provide guidance on how to prepare them effectively.