Italy fines OpenAI $15M over data protection, privacy breaches


Italy's data privacy authority has fined OpenAI approximately $15.7 million (15 million euros) and ordered the company to run a six-month public education campaign, following an investigation into data collection by its flagship chatbot, ChatGPT.

In a statement released on Dec. 20, the Italian Data Protection Authority (Garante) said its investigation found that OpenAI had failed to notify it of a data breach that occurred in March 2023.

The watchdog said OpenAI processed users' personal data to train ChatGPT without first establishing an adequate legal basis, in violation of the principle of transparency and the related obligation to inform users.

The IDPA's investigation also found that OpenAI's age verification measures were not robust enough to prevent minors from accessing its services.

The lack of age verification mechanisms risked exposing children under the age of 13 to responses unsuitable for their level of development and self-awareness, the IDPA added.

To address these shortcomings, the IDPA ordered OpenAI to carry out a six-month public education campaign across radio, television, newspapers, and the internet, aimed at improving public understanding of how ChatGPT works.

The campaign is to cover how data is collected from both users and non-users to train generative AI, as well as the rights data subjects can exercise, including the rights to object to, rectify, and erase their data, the IDPA said.

By the end of the campaign, users should be able to understand how to opt out of having their data used to train generative AI and how to exercise their rights under the European Union's General Data Protection Regulation (GDPR).

Companies that violate the GDPR can be fined up to 20 million euros or 4% of their global annual turnover, whichever is higher.

The IDPA said OpenAI's cooperative stance during the investigation was taken into account in reducing the penalty.

During the probe, OpenAI moved its European headquarters to Ireland, which means the Irish Data Protection Commission (DPC) is now the lead authority for any further investigations, the IDPA said.

The IDPA's investigation began in March 2023, and the authority said its findings took into account the European Data Protection Board (EDPB) opinion issued on Dec. 18, 2024, concerning the use of personal data in developing and deploying AI systems.

In March 2023, Italy became the first Western country to temporarily block ChatGPT over privacy concerns, with the IDPA opening an inquiry into suspected breaches of data privacy rules.

The initial ban drew criticism in Italy, but a few weeks later regulators said it could be lifted if OpenAI met certain transparency requirements. ChatGPT was subsequently restored in Italy on April 29.

OpenAI did not immediately respond to a request for comment.


2024-12-23 08:39