This week, Ofcom published a discussion paper exploring the implications of generative AI for media literacy, focusing on its applications, opportunities, and risks. The paper explores four key areas of media literacy: news and personalisation, personalisation and adaptation, content creation and education, and data protection concerns. In this article, we look at some of Ofcom's findings and its suggested solutions.

News and personalisation

Radicalisation and personalisation

Generative AI's ability to create hyper-personalised content tailored to individual users' preferences can lead to information bubbles which limit users' exposure to diverse perspectives and reinforce existing viewpoints, potentially leading to radicalisation. Ofcom believes that users should be made aware when they are being served personalised content, so that they understand they may not be seeing the full spectrum of available information and may be missing other perspectives or ideas. While putting users on notice in this way might be helpful, Ofcom ultimately acknowledges that ensuring a “plurality of content” is necessary to engage users with a variety of viewpoints and prevent the formation of echo chambers.

Trust in news

As AI-generated content becomes more common, there is a risk that users may find it difficult to differentiate between AI-generated content and human-produced journalism. This could lead to scepticism about the reliability of information, diminishing confidence in traditionally trusted sources. 

This challenge also presents opportunities for innovation. At Lewis Silkin, we have been working closely with clients developing cutting-edge technologies to address these concerns, including blockchain-based tools for authenticating real news content. These solutions not only help instil trust in news by verifying its provenance and authenticity but also create clean, reliable datasets that can be used to train AI models responsibly. Ensuring the integrity of both news and training data is a vital step towards restoring public confidence and supporting the development of ethical AI systems.
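Ofcom's paper does not endorse any particular technical design, but a minimal sketch helps show what provenance verification of this kind typically involves: recording a cryptographic fingerprint of an article at publication time, then checking later copies against it. The Python below is illustrative only; the in-memory dictionary stands in for a blockchain ledger, and the names ledger, register and verify are our own.

```python
import hashlib

# Stand-in for a tamper-evident ledger: maps content fingerprints to
# publication metadata. A production system would use an append-only,
# independently verifiable store (e.g. a blockchain) rather than a dict.
ledger: dict[str, dict] = {}

def fingerprint(article_text: str) -> str:
    """Return a SHA-256 fingerprint of the article's canonical text."""
    return hashlib.sha256(article_text.encode("utf-8")).hexdigest()

def register(article_text: str, publisher: str) -> None:
    """Record an article's fingerprint at publication time."""
    ledger[fingerprint(article_text)] = {"publisher": publisher}

def verify(article_text: str) -> dict | None:
    """Look up the text's fingerprint. Any edit changes the hash, so a
    match is evidence the content is unaltered since registration."""
    return ledger.get(fingerprint(article_text))

original = "Ofcom publishes a discussion paper on generative AI."
register(original, publisher="Example News")
assert verify(original) is not None            # authentic copy
assert verify(original + " [edited]") is None  # altered copy fails
```

Real deployments add digital signatures and an append-only, tamper-evident ledger so that the provenance record itself cannot be quietly rewritten.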

Mis- and disinformation

As the prevalence of mis- and disinformation online increases, so too does the need to verify the origin and authenticity of news content, “helping to distinguish between genuine reporting and potentially manipulated or fabricated information”. Ofcom suggests that technologies which can identify the provenance of information for users may be the way forward.

Ofcom isn't the only regulator interested in this area. This week, the Science, Innovation and Technology Committee launched an inquiry to “investigate the relationship between algorithms used by social media and search engines, generative AI, and the spread of harmful and false content online”. According to the Committee, the inquiry “will specifically consider the role of false claims, spread via profit-driven social media algorithms, in the summer riots. It will also investigate the effectiveness of current and proposed regulation for these technologies, including the Online Safety Act, and what further measures might be needed”.

Personalisation and adaptation

Effective advertising

The ability to personalise content can be used to create effective advertising campaigns quickly. The hope for brands, of course, is that more personalised advertisements will increase consumer engagement and drive sales. However, Ofcom observes that users (particularly children and other vulnerable users) should be informed when advertising is targeted at them with the intention of increasing the likelihood that they will purchase a product.

Personalised scams

Generative AI can also be used for nefarious purposes - particularly scams. Indeed, Ofcom observes that scams can be “highly effective when a user feels like they are being specifically spoken to, or when scammers prey on their interests, needs, or vulnerabilities”. For example, generative AI can be used for voice cloning, making it difficult for users to tell whether a voice is real or an AI-generated clone. Users will need to be supported to spot and avoid such scams.

Access to technology

It's not all doom and gloom - generative AI can help to create adaptive user interfaces which could “allow users with different needs and abilities to access technology that they have previously been unable to engage with”. This will allow technology to become more inclusive and accessible.

Content creation and education

Access and inclusion

Generative AI has the potential to democratise creativity by providing tools that enable users to create digital content and express their ideas in new ways. However, Ofcom notes that “for this technology to be truly transformative in supporting human self-expression and creation, it has to be accessible to all”, or risk “deepening the digital divide”. Therefore, ensuring affordable access to these technologies is vital.

Content rights

An issue on which our experts often advise is rights protection and ownership in relation to the training and use of AI tools. Clarifying ownership rights is complex, but despite the challenges platforms must clearly inform users of their rights in order to shape a positive user experience. Without understanding their rights, users might find their content used in AI training or reproduced without proper credit or compensation.

Device divide 

Device inequality exacerbates the digital divide, leading to poorer outcomes for those without access to necessary technology, particularly in education. Ofcom notes that ensuring device accessibility for all students, regardless of socio-economic status, is crucial to prevent educational inequalities.

Generative AI bias

Content created using generative AI can be biased or inaccurate if not carefully managed. Recognising this is essential if AI is to be used in education without conveying biases and inaccuracies to students. Accordingly, teachers should be given ongoing professional development to understand AI’s capabilities and limitations, and how it can be used most effectively in the classroom.

Data protection concerns

Training AI

Where AI tools are trained on datasets which include personal data, that data may have been obtained without the data subject's knowledge or permission. According to Ofcom, individuals should be “fully informed that their data is being used to train generative AI models” and generative AI developers should be “clear about the lawful basis on which personal data is being used”. As such, developers should be mindful of the risks of training AI models using personal data and make “significant efforts” to remove any personal data from training sets.
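Ofcom's paper does not prescribe how that removal should be achieved. As a purely illustrative sketch (not a production approach), a first pass over a training corpus often uses pattern-based redaction of obvious identifiers; the patterns and names below (EMAIL, UK_PHONE, redact) are hypothetical examples of our own.

```python
import re

# Illustrative patterns for two common identifier types only. These are
# deliberately naive: names such as "Jane" slip straight through, which
# is why pattern matching is usually paired with named-entity
# recognition and human review in real data-scrubbing pipelines.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
UK_PHONE = re.compile(r"\b0(?:\s?\d){9,10}\b")  # UK numbers with a leading zero

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    return UK_PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

sample = "Contact Jane on 020 7946 0000 or jane.doe@example.com."
print(redact(sample))
# Contact Jane on [PHONE] or [EMAIL].
```

As the example output shows, the name “Jane” survives a pattern-based pass, which illustrates why Ofcom's call for “significant efforts” implies more than simple filtering.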

In addition, developers should inform users where tools are trained using user interactions and make clear “how users’ content could be used, having clear data protection measures, privacy notices and have user friendly option controls around data sharing”.