Meta Expands Anti-Scam Facial Recognition Test to UK Amid AI Push

Last October, Meta dipped its toe into the world of facial recognition with an international test of two new tools: one to stop scams based on likenesses of famous people, and a second feature to help people get back into compromised Facebook or Instagram accounts. Now, that test is expanding to one more notable country.
After initially keeping its facial recognition test off in the United Kingdom, Meta on Wednesday began to roll out both of the tools there, too. In other countries where the tools have already launched, the “celeb bait” protection is being extended to more people, the company said.

Meta said it got the green light in the U.K. after “engaging with regulators” in the country, which itself has doubled down on embracing AI. No word yet on the rest of Europe, the other key region where Meta has yet to launch the facial recognition tool “test.”

“In the coming weeks, public figures in the U.K. will start seeing in-app notifications letting them know they can now opt-in to receive the celeb-bait protection with facial recognition technology,” Meta said in a statement. Both this and the new “video selfie verification” for all users will be optional tools, Meta said.

Meta has a long history of tapping user data to train its algorithms, but when it first rolled out the two new facial recognition tests in October 2024, the company said the features were not being used for anything other than the purposes described: fighting scam ads and user verification.

“We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose,” wrote Monika Bickert, Meta’s VP of content policy, in a blog post.

The developments come at a time when Meta is betting the farm on AI.

In addition to building large language models and using AI across its products, Meta is also reportedly working on a standalone AI app. It has also stepped up lobbying efforts around the technology, and given its two cents on what it deems to be risky AI applications — such as those that can be weaponized (the implication being that what Meta builds is not risky, never!).

Given Meta’s checkered history with facial recognition, building tools that fix immediate problems on its apps is probably the best way for the company to win acceptance for any new features in the area.

This test fits that bill: As we’ve said before, Meta has long been accused of failing to stop scammers from misappropriating famous people’s faces to spin up scam ads promoting the likes of dubious crypto investments.

Facial recognition has been one of the thornier areas for Meta over the years. Most recently, the company in 2024 agreed to pay $1.4 billion to settle a long-running lawsuit that alleged inappropriate biometric data collection related to its facial recognition technology.

Before that, Facebook in 2021 shut down its decade-old facial recognition tool for photos, which had faced multiple regulatory and legal problems across several jurisdictions. Interestingly, the company chose at the time to retain one part of the technology, the DeepFace model, saying it would incorporate it into future products. That model may well be part of what today’s tools are built on.
