In recent days, many ChatGPT users around the world have received a security warning from OpenAI about the risk of data leakage.
The initial reports understandably caused concern, but OpenAI quickly clarified the nature of the incident, affirming that the vast majority of users were not affected and that important data remained safe.
According to OpenAI's explanation on its official website, the problem did not originate in the company's own systems but in Mixpanel, a data analytics partner OpenAI used to track activity on the API dashboard.
This is also why people who only use ChatGPT through the app or website are not affected.
Data such as chat history, passwords, API keys, payment information, and message content were not within the scope of the leak.
The group of users potentially affected is limited to accounts that use the API through platform.openai.com.
According to OpenAI, some limited analytics data tied to these accounts may have been exposed through a data export from Mixpanel.
This information includes the name registered on the API account, the linked email address, an approximate location derived from browser data, the operating system and browser in use, referring websites, and internal user or organization IDs.
None of these are highly sensitive data, but OpenAI still issued the warning to ensure transparency.
Immediately after the incident was discovered, OpenAI said it had removed Mixpanel from its entire production environment and launched an in-depth investigation to determine the scope of the impact.
At the same time, the company is directly contacting organizations and the administrators of the affected API accounts to help them determine which members were affected.
Some reports suggest that employees of large corporations such as Apple may be among the API users whose information was leaked.
However, OpenAI stressed that no customer data or sensitive information of any organization was affected in the incident.
OpenAI's decision to send a warning to all ChatGPT users, even though most were not affected, stems from the goal of avoiding misunderstandings and preventing the spread of inaccurate information.
The company wants to ensure that everyone has an accurate view of the actual level of risk.
For regular ChatGPT users, meaning those who only use the app or website to chat, this notification does not indicate any risk to their personal data.
On the contrary, it is seen as a proactive step to strengthen users' confidence in how OpenAI handles security-related issues.
For API developers who have received the notification, OpenAI recommends reviewing the detailed instructions in the email and continuing to monitor updates from the company as the investigation progresses.