Since ChatGPT burst onto the scene a year or so ago, generative artificial intelligence has been transforming the way people work, creating both benefits and risks.
As the UK hosts the global AI Safety Summit, there has been a flurry of announcements about AI from all directions. The IPA and ISBA are no different, having announced twelve guiding principles for agencies and advertisers on the use of generative AI in advertising.
They say that the principles are broad-brush and designed to ensure that the industry embraces AI in an ethical way that protects both consumers and those working in the creative sector. They cover issues around transparency, intellectual property rights, human oversight and more.
The principles are not exhaustive and apply only to the use of generative AI in the creative process, rather than to other areas of the industry. There are many other instances where AI may be used and where abuses should be guarded against. These might include the large-scale creation of poor-quality clickbait on Made for Advertising sites, or AI algorithms deciding to whom online ads are served. There may also be other legal issues to consider, such as the onward use or accessibility of sensitive consumer data fed into an AI system.
The twelve principles are:
- AI should be used responsibly and ethically.
- AI should not be used in a manner that is likely to undermine public trust in advertising (for example, using undisclosed deepfakes, or fake, scam or otherwise fraudulent advertising).
- Advertisers and agencies should ensure that their use of AI is transparent where it features prominently in an ad and is unlikely to be obvious to consumers.
- Advertisers and agencies should consider the potential environmental impact when using generative AI.
- AI should not be used in a manner likely to discriminate or show bias against individuals or particular groups in society.
- AI should not be used in a manner that is likely to undermine the rights of individuals (including with respect to use of their personal data).
- Advertisers and agencies should consider the potential impact of the use of AI on intellectual property rights holders and the sustainability of publishers and other content creators.
- Advertisers and agencies should consider the potential impact of AI on employment and talent. AI should be additive and an enabler – helping rather than replacing people.
- Advertisers and agencies should perform appropriate due diligence on the AI tools they work with and only use AI when confident it is safe and secure to do so.
- Advertisers and agencies should ensure appropriate human oversight and accountability in their use of AI (for example, fact and permission checking so that AI generated output is not used without adequate clearance and accuracy assurances).
- Advertisers and agencies should be transparent with each other about their use of AI. Neither should include AI-generated content in materials provided to the other without the other’s agreement.
- Advertisers and agencies should commit to continual monitoring and evaluation of their use of AI, including any potential negative impacts over and above those set out in the Principles.
The IPA and ISBA will consider publishing additional best practice guidance on the use of AI in other areas in due course. The principles follow an ASA statement in August 2023.
If you are interested in learning more about the use of generative AI in advertising, feel free to sign up to our own event on this hot topic, being held online (with ISBA) on 15 November. Sign up here!
"The advent of generative AI globally represents both an opportunity and a challenge to the creative industries." Sir Patrick Vallance, the Government’s Chief Scientific Adviser