
A post on the social network X by Summer Yue, a security researcher at Meta AI, is spreading widely for describing an unintended incident with a personal AI assistant. According to the post, she asked the OpenClaw tool to help clean up her inbox by suggesting which emails to delete and which to keep.
However, the system ran out of control: it began deleting emails one after another without stopping, even though she had sent a stop command from her mobile device. Yue said she had to run to her computer and intervene manually to halt the process.
OpenClaw is an open-source AI agent designed to run directly on personal devices as a digital assistant. The tool is attracting attention in the technology world, especially in Silicon Valley, where many similar variants are being developed.
The incident is believed to have originated from the system processing a large volume of data in her real mailbox. When the amount of information exceeds the model's processing capacity, the AI may automatically truncate its context, causing it to drop important instructions from the user.
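The failure mode described above can be illustrated with a small sketch. This is hypothetical code, not OpenClaw's actual implementation: it shows how a naive "keep only the most recent messages" truncation policy can silently drop a stop command once enough new data floods the context window.

```python
# Hypothetical sketch of naive context truncation (not OpenClaw's code).
# The context budget and token counting are simplified assumptions.

MAX_TOKENS = 50  # assumed context budget, in whitespace-separated tokens

def truncate_context(messages, max_tokens=MAX_TOKENS):
    """Keep only the most recent messages that fit the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = len(msg["text"].split())
        if used + cost > max_tokens:
            break  # older messages, including user commands, are dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [{"role": "user", "text": "STOP deleting emails"}]
# A flood of email summaries arrives after the stop command...
history += [{"role": "tool", "text": "email summary " + "x " * 10}
            for _ in range(20)]

window = truncate_context(history)
# The stop command no longer appears in the window the model sees:
print(any("STOP" in m["text"] for m in window))  # False
```

The point of the sketch is that nothing errors out: the instruction simply falls outside the window, so from the model's perspective the stop command was never given.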
In this case, the assistant may never have registered the stop command and simply continued acting on its earlier instructions. Yue admitted that she had tested the tool only on small data sets before applying it to her main mailbox, which left her overconfident.
Many commenters on social networks argue that control commands should not be relied on as the sole safety mechanism, since AI models can misunderstand or ignore instructions in complex situations.
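One common alternative to trusting in-band stop commands is to gate destructive actions behind a dry run and explicit per-batch confirmation that happens outside the model. The sketch below is a hypothetical illustration of that pattern; the function names and batch size are assumptions, not part of any real tool.

```python
# Hypothetical safeguard sketch: instead of trusting the agent to obey
# a stop command, every destructive batch requires human approval.

def plan_deletions(emails, should_delete):
    """Dry run: return the emails the agent *would* delete, deleting nothing."""
    return [e for e in emails if should_delete(e)]

def delete_with_confirmation(emails, should_delete, confirm, max_batch=10):
    """Delete in small batches, pausing for approval before each one."""
    planned = plan_deletions(emails, should_delete)
    deleted = []
    for i in range(0, len(planned), max_batch):
        batch = planned[i:i + max_batch]
        if not confirm(batch):  # approval happens outside the model
            break               # a single refusal halts the whole process
        deleted.extend(batch)
    return deleted

emails = [f"newsletter-{i}" for i in range(25)]
approvals = iter([True, False])  # approve the first batch, refuse the second
result = delete_with_confirmation(
    emails,
    should_delete=lambda e: True,
    confirm=lambda batch: next(approvals),
)
print(len(result))  # 10
```

Because the confirmation gate sits outside the agent's context window, it cannot be truncated away along with the rest of the conversation, which is exactly the failure the incident describes.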
Although the full incident cannot yet be verified, the story highlights the risks of deploying AI agents in daily work. At this stage, such tools still need close supervision, especially when handling important data.
Experts believe personal AI assistants have great potential for tasks such as email management and scheduling. However, the technology needs more time to mature before it can be used safely at scale.