The Italian Data Protection Authority (“Garante”) has ordered an immediate temporary limitation on ChatGPT’s processing of Italian user data while it investigates the AI tool over data privacy concerns. This follows a recent data breach affecting users’ chat histories and payment information.
ChatGPT (owned by OpenAI, a US company) has quickly become a household name and is known for providing impressive, content-rich responses to everyday questions, achieving outstanding results in exams and putting together compelling marketing pitches.
However, in order to operate, the powerful tool processes massive quantities of personal data, and the Garante has cited the following concerns (some of which were anticipated in our recent blog: AI 101 What are the key privacy risks and rewards for this new tech?):
- Transparency: there is a lack of information provided to users and interested parties whose data are collected and processed by OpenAI. Transparency is a key principle of EU data protection law: users must be told for what purposes their personal data are collected and used;
- Legal basis: in order to collect and process personal data, organisations also need a lawful basis. The Garante’s concern is that there is no legal basis which justifies the collection and use of such large amounts of personal data to ‘train’ the algorithms which underpin ChatGPT;
- Inaccuracy of data: the information provided by ChatGPT (for example, in its responses to a question or an essay answer) does not always correspond to the real data, creating a risk that personal data are processed inaccurately;
- Processing of children’s data: the OpenAI terms state that the service is aimed at people over the age of 13, but there is no age verification of users, which means that the service is potentially available to users under that age. It is worth bearing in mind that the Children’s Code in the UK applies to online services which are ‘likely to be accessed’ by individuals under the age of 18. Whilst that Code is not applicable in Italy, it is something OpenAI should be complying with when operating in the UK. For its part, Google requires users of its generative AI, Bard, to be over the age of 18.
This decision follows an open letter published on Tuesday 28th March 2023 calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The letter was signed by individuals such as Elon Musk and Apple co-founder Steve Wozniak, as well as other interested parties including AI experts.
With this pressure from both the regulator and the industry, we may start to see a slowdown in the rapid pace at which AI tools are being implemented.
OpenAI has 20 days to communicate to the Garante what measures it has taken to implement the Garante’s requirements. If it fails to do so, it could face a GDPR-level fine (up to 20 million euros or up to 4% of annual global turnover).