Meta has begun testing facial recognition technology with about 50,000 celebrities around the world. The company's goal is to detect fake ads that use celebrities' images to scam other users.
The automated image recognition system compares the images used in ads with the profile pictures these public figures have on Facebook and Instagram, and from there identifies which advertisements are fake. If an ad's images are found to be fake, the ad is removed.
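Meta has not published implementation details, but the comparison step can be illustrated at a high level. The sketch below is illustrative only: the `embed_face` helper stands in for whatever face-embedding model is actually used, and the similarity threshold is an assumed value, not Meta's.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # illustrative value, not Meta's actual threshold


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def ad_uses_celebrity_face(ad_image, profile_images, embed_face) -> bool:
    """Compare a face in an ad against a public figure's profile photos.

    `embed_face` is a placeholder for a face-embedding model that maps an
    image to a fixed-length vector; it is an assumption for this sketch.
    """
    ad_embedding = embed_face(ad_image)
    try:
        # Flag the ad if any of the public figure's profile photos
        # matches the ad image closely enough.
        return any(
            cosine_similarity(ad_embedding, embed_face(img)) >= SIMILARITY_THRESHOLD
            for img in profile_images
        )
    finally:
        # Per Meta's statement, facial data is discarded right after the check,
        # regardless of the result.
        del ad_embedding
```

A real system would also need to check whether a matched ad was actually authorised by the person pictured before removing it; that step is omitted here.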
Meta also said that participants in this test will be notified in advance.
In addition, the "tech giant" is stepping up its use of facial recognition technology to help users recover stolen accounts.
The trial comes as fake images of many celebrities, including Australian billionaires Andrew Forrest and Gina Rinehart, have been circulating in fraudulent investment advertisements.
Australia's National Anti-Scam Centre said the number of technology scams in 2023 was 18.5% higher than in 2022, costing Australians 2.74 billion AUD. Scams on social media platforms in particular are on the rise.
The centre welcomed the use of facial recognition technology to detect and remove fake advertisements, saying the move would support its ongoing efforts to combat these scams.
Meta's facial recognition trial could help detect and handle scams on its platforms, but it has also raised concerns about privacy and about how Meta uses the biometric data it collects in the process.
Mr. David Agranovich, Meta's global threat prevention director, sought to allay these concerns, affirming that the facial data generated is deleted immediately after the comparison is complete, regardless of whether or not there is a match.