OpenAI, a leading artificial intelligence (AI) developer, is facing a privacy complaint in Austria filed by Noyb, a data rights advocacy group. The complaint alleges that OpenAI’s generative AI chatbot, ChatGPT, provided false information about a public figure when asked for details about them.
Noyb filed the complaint on April 29, claiming that OpenAI had failed to correct inaccurate personal data produced by ChatGPT. The organization argues that this failure to act may violate privacy regulations in the European Union.
According to the complaint, an unnamed public figure asked OpenAI’s chatbot for information about himself and repeatedly received inaccurate answers.
OpenAI reportedly rejected the public figure’s request to correct or erase the data, saying it was unable to do so. The company also refused to disclose information about its training data or where it was sourced.
Maartje de Graaf, a data protection lawyer at Noyb, said in a statement:
“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”
In its complaint, Noyb asks the Austrian data protection authority to investigate how OpenAI handles personal data and what measures it takes to ensure the accuracy of the personal information processed by its large language models.
According to de Graaf, it is clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals.
Noyb, also known as the European Center for Digital Rights, is based in Vienna, Austria. It pursues strategic litigation and media campaigns to support enforcement of the European General Data Protection Regulation (GDPR).
Chatbots have previously drawn criticism from activists and researchers in Europe.
In late 2023, two European non-profit organizations published a report finding that Microsoft’s Bing AI chatbot, since rebranded as Copilot, had spread misleading or inaccurate information about elections in Germany and Switzerland.
The report found that the chatbot gave answers about candidates, poll results, scandals, and voting that did not align with the facts, and that on some occasions the sources it cited were misquoted or taken out of context.
Outside the EU, Google’s Gemini AI chatbot drew criticism for generating inappropriate and inaccurate imagery with its image generator. Google acknowledged the issue, issued an apology, and said it would update the model to prevent similar occurrences in the future.