Meta CEO Mark Zuckerberg holds a smartphone as he delivers a keynote speech during the annual Meta Connect event at the company’s headquarters in Menlo Park, California, U.S., September 25, 2024.
Manuel Orbegozo | Reuters
Three years after Meta shut down facial recognition software on Facebook amid a groundswell of privacy and regulatory backlash, the social media giant said Tuesday it is reviving the service as part of a crackdown on “celeb bait” scams.
Meta said it will enroll about 50,000 public figures in a trial that will automatically compare their Facebook profile photos to images used in suspected scam ads. If the images match and Meta determines the ad is a scam, the ad will be blocked.
The celebrities will be notified of their registration and can opt out if they do not wish to participate, the company said.
The company plans to roll out the trial globally from December, excluding some major jurisdictions where it lacks regulatory approval, such as Britain, the European Union, South Korea and the U.S. states of Texas and Illinois, it added.
Monika Bickert, Meta’s vice president of content policy, said in a briefing with journalists that the company was targeting public figures it identified as having been used in scam ads.
“The idea here is: provide as much protection for them as possible. They can opt out if they want, but we want to be able to make these protections available to them and easy for them,” Bickert said.
The test shows a company trying to use potentially invasive technology to address regulators’ concerns about rising scams while minimizing complaints about the handling of user data, an issue that has dogged social media companies for years. When Meta shut down its facial recognition system in 2021 and deleted the facial scan data of a billion users, it cited “growing societal concerns.”
In August this year, the company was ordered to pay Texas $1.4 billion to settle a state lawsuit accusing it of illegally collecting biometric data.
At the same time, Meta is facing lawsuits accusing it of not doing enough to stop celeb-bait scams, which use images of famous people, often generated by artificial intelligence, to trick users into giving money to non-existent investment schemes.
Under the new trial, the company said it will immediately delete any facial data generated by comparisons with suspected scam ads, regardless of whether it detects a scam.
The tool being tested was put through Meta’s “robust privacy and risk assessment process” internally and discussed externally with regulators, policymakers and privacy experts before testing began, Bickert said.
Meta said it also plans to test the use of facial recognition data to let non-famous users of Facebook and Instagram regain access to accounts that have been compromised by a hacker or locked because of a forgotten password.