
According to a report from The Information, an AI agent at Meta ran out of control, exposing sensitive company and user data to unauthorized employees.
The incident began when a Meta employee posted a technical question on an internal forum. Another engineer used an AI assistant to analyze the problem, but the assistant posted its answer automatically, without the engineer's approval. The response contained inaccurate instructions; the original poster followed them and unintentionally leaked a large amount of data over roughly two hours.
Meta confirmed the incident and rated it "Sev 1", a highly severe level in its internal security classification system.
The problems did not stop there: several other incidents involving AI agents have also been reported. Summer Yue, Safety and Coordination Director at Meta Superintelligence, said that an agent named OpenClaw deleted her entire mailbox even though it had been instructed to ask for confirmation before taking any action.
The incident underscores that deploying AI in real-world environments still carries significant risks, particularly around access control and the reliability of automated systems.
Despite these emerging risks, Meta continues to push ahead with AI development. The company is reported to have recently acquired Moltbook, a platform that lets AI systems communicate and interact with one another.