Hot on the heels of our recent post about emotional artificial intelligence, the Hungarian DPA has fined Budapest Bank Zrt. approximately €700,000 for carrying out automated decision making and profiling activities using emotional AI without:
- identifying a valid legal basis;
- carrying out a balancing exercise of its interests against the rights of data subjects; or
- putting in place adequate safeguards.
In September 2021, the Hungarian DPA launched an investigation into the bank's use of AI software to analyse audio recordings of customer service conversations. The investigation covered a period of over three years, from May 2018 to September 2021. During this time, the software was used to identify elements of conversations including periods of silence, people talking over each other, and the use of key words. Speed, volume and tone were also analysed in an attempt to gauge customer satisfaction. Automated decisions were then made to spot dissatisfied customers, rank the calls and produce a priority list of customers to be contacted. After a bank employee reviewed the calls that fell into this category, call-backs were made to those customers in an attempt to resolve any issues. The AI's results and the call recordings were stored and could be replayed for around six weeks.
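To make the mechanics concrete, the sketch below shows what this kind of call-scoring pipeline might look like in Python. It is purely illustrative: the decision does not disclose how the bank's software actually worked, so every class name, feature, weight and threshold here is our assumption, not a detail of the real system.

```python
from dataclasses import dataclass

# Hypothetical per-call features of the kind described in the decision:
# silence, overtalk, key words, speed, volume and tone. All names and
# ranges are illustrative assumptions.
@dataclass
class CallFeatures:
    call_id: str            # unique number linkable back to a customer
    silence_ratio: float    # fraction of the call that was silent (0-1)
    overtalk_ratio: float   # fraction where parties talked over each other (0-1)
    keyword_hits: int       # count of complaint-related key words detected
    speech_rate: float      # normalised speaking speed (0-1)
    volume: float           # normalised loudness (0-1)
    tone_negativity: float  # model-estimated negative tone (0-1)

def dissatisfaction_score(f: CallFeatures) -> float:
    """Combine the features into a single score; the weights are made up."""
    return (
        0.15 * f.silence_ratio
        + 0.20 * f.overtalk_ratio
        + 0.25 * min(f.keyword_hits / 5, 1.0)
        + 0.10 * f.speech_rate
        + 0.10 * f.volume
        + 0.20 * f.tone_negativity
    )

def priority_list(calls: list[CallFeatures],
                  threshold: float = 0.5) -> list[CallFeatures]:
    """Rank calls by score and flag the likely-dissatisfied ones for review.

    Note the human-in-the-loop point at issue in the case: a bank employee
    still reviews each flagged call before any call-back is made, yet on
    the DPA's reasoning this automated ranking is itself decision making,
    because it shapes the outcome the human acts on.
    """
    scored = sorted(calls, key=dissatisfaction_score, reverse=True)
    return [c for c in scored if dissatisfaction_score(c) >= threshold]
```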
Why was the bank fined?
The Hungarian DPA was dissatisfied with the bank's approach and disagreed with its reasoning and assessment for using the AI.
Although the bank stated that the systems used did not store any identifiable personal data, the Hungarian DPA held that this was not the case: each customer service call was allocated a unique number that could be linked back to an individual customer, and the recordings remained accessible and could be replayed.
The bank's data protection impact assessment (DPIA) was the next issue. The DPIA stated that no automated decision making was to take place; however, the DPA found that, because the processing created an outcome that influenced the human decision makers at the end of the process, there was automated decision making, and that it had a significant effect on data subjects (perhaps a questionable conclusion in itself). It is important to note that the DPA held that the decision itself did not have to be made by the software in order for it to be classed as automated; it merely had to be contributory. The DPA also found that profiling took place, as the software used personal data, such as emotional state, to analyse or predict the satisfaction levels of customers. This is slightly concerning, as it sits uneasily with the black letter of the law and is not really in line with the general understanding of the applicability of Article 22 GDPR when there is a human decision maker in the chain (emphasis added: "The data subject shall have the right not to be subject to a decision based *solely* on automated processing…").
By the bank's own admission, customers were not informed that AI would be used to analyse their calls. The reason given was that it would make the introductions to customer calls too lengthy! This lack of transparency resulted in multiple further breaches of Articles 5(1), 5(2), 12(1) and 13 of the GDPR. In addition, customers were not informed of their rights to object to automated decision making and profiling. In another twist, because this processing was carried out for the purpose of customer satisfaction (and thereby retention), it was also found to constitute a marketing purpose, so customers' Article 21(2) GDPR rights to object to marketing were breached as well. Again, this is interesting, as it demonstrates another broad (although probably correct) interpretation of the law and is something brands will need to be mindful of going forward (at least in Hungary).
Finally, the Hungarian DPA found that an adequate exercise balancing the bank's interests against the rights of customers had not been carried out. The DPA made the point that the inadequacy of the emotional AI technology meant it was unsuitable for achieving the bank's objectives in a way that was proportionate to individuals' rights (for further discussion on this point, see our previous article). The proportionality and suitability of the technology were also called into question in relation to the bank's own employees, whose voices were analysed for performance measurement purposes without a system of protections in place for them. All of these issues eroded the bank's legitimate interest argument.
Conclusion
The Hungarian DPA fined the bank approximately €700,000 and ordered it to stop the data processing unless it could prove that it had appropriately scoped the data to be processed, put in place a valid DPIA and identified a valid legal basis. The DPA also held that the processing of employee data should be limited to what was necessary for the bank's intended purpose, and that employees should be duly informed.
We expect this to be the first of many actions taken by regulators in the field of AI – so watch this space!