Meta is testing a new approach in the artificial intelligence race: using its own employees' operational data to train AI models.
The company plans to collect information from mouse movements, keyboard input and the way users navigate their computers.
This is part of an effort to find new sources of training data, widely considered the core "fuel" that helps AI learn to handle tasks and respond to users more effectively.
Meta representatives said that if the goal is to build virtual assistants capable of supporting everyday work on computers, the models need to be trained on practical examples. Data such as clicks, menu selections and text entry will help AI better understand how people interact with software.
To that end, Meta is deploying an internal tool that collects input data from certain applications.
The company affirms that it has taken measures to protect sensitive information and that the data is used solely for AI training, not for any other purpose.
However, the move also raises privacy concerns across the technology industry. Internal activities once considered private are now becoming resources for AI, a sign that the boundary between personal data and training data is increasingly blurred.
Meta is not alone: the trend of exploiting internal data is spreading across the technology world. Recent reports show that companies, especially startups, are becoming targets for data collection from work platforms such as Slack and project-management systems, information that can be converted into AI training data.
Amid fierce competition, the search for new data sources is understandable. The bigger question is where the reasonable limit lies between technological innovation and protecting user privacy.