Meta has launched a large-scale round of layoffs, cutting roughly 600 employees from its artificial intelligence (AI) division and more than 100 from its risk assessment team, the staff responsible for monitoring user privacy and ensuring compliance with global regulations.
According to an internal memo that Alexandr Wang, Meta's chief AI officer, sent to employees, the cuts are intended to help the company make decisions faster and develop new products at a higher pace.
Wang said that streamlining the organization will reduce layers of discussion and give AI engineers more flexibility in deploying products.
In parallel with restructuring the AI division, Meta also laid off more than 100 employees on its risk and privacy team, the group responsible for ensuring that all of the company's products complied with the regulations of the US Federal Trade Commission (FTC) and international security agencies.
This group played an important role in the aftermath of the Cambridge Analytica scandal, after which the FTC fined Facebook (as Meta was then known) 5 billion USD in 2019 for violating user privacy.
In an internal message, Michel Protti, Meta's chief privacy officer, said the company will switch to an automated risk assessment model, replacing the manual review of each product that it performed previously.
Protti said this approach ensures more accurate and consistent compliance results across the system.
However, many current and former employees are skeptical of the automated system's effectiveness, especially in handling complex and sensitive issues involving personal data.
Internal sources revealed that those laid off worked mainly in the London office, while TBD Lab, the artificial general intelligence (AGI) research group led by Wang, was kept intact and continues to receive heavy investment.
Meanwhile, veteran members of the FAIR research team, such as Yuandong Tian, a research director who had been with Meta for eight years, were also laid off.
Analysts say this double cut reflects two competing pressures inside Meta: the drive to innovate rapidly to compete with OpenAI and other AI rivals, and the obligation to comply with regulations and protect users, an obligation that keeps Meta under close scrutiny from regulators in the US and Europe.
A Meta spokesperson said the company regularly adjusts its organizational structure to reflect the maturity of its programs and to innovate faster, while remaining committed to high compliance standards.
However, many experts warn that replacing humans with algorithms in risk monitoring could mean Meta is trading user privacy for product development speed, raising the risk of repeating mistakes that have already cost the social network dearly.