Meta is testing the use of facial recognition to fight deepfake celeb ads

Meta’s latest move to combat celebrity scam ads with facial recognition technology is both intriguing and potentially effective. The evolution of cybercrime tactics has made clear that these criminals have become increasingly sophisticated, often exploiting loopholes in social media platforms for their gain.


Meta, the social media giant boasting almost 4 billion users, is experimenting with facial recognition tech to tackle misleading advertisements impersonating celebrities, a persistent issue on their platforms.

According to Meta, preliminary tests involving a small group of famous individuals have yielded “encouraging outcomes.” The company plans to extend the trial to roughly 50,000 celebrities and public figures in the coming weeks.

Meta explained that the system compares faces in ad images against the profile pictures on a celebrity’s Facebook and Instagram accounts to detect fraudulent ads. If a match is confirmed and the ad is found to be deceptive, the company will block it, it said in a statement on October 21st.
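To illustrate the kind of comparison Meta describes, here is a minimal sketch of face matching via embedding similarity. All names, the toy 3-dimensional vectors, and the 0.9 threshold are illustrative assumptions, not Meta’s actual implementation; production systems extract high-dimensional embeddings with a trained face-recognition model.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # close to 1.0 means the faces look alike to the model.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_likely_match(ad_embedding, profile_embeddings, threshold=0.9):
    # Flag the ad if the face in it closely resembles any of the
    # celebrity's profile pictures (hypothetical threshold).
    return any(cosine_similarity(ad_embedding, p) >= threshold
               for p in profile_embeddings)

# Toy embeddings; real systems use hundreds of dimensions.
profile = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
print(is_likely_match([0.9, 0.12, 0.02], profile))  # True  (similar face)
print(is_likely_match([0.0, 0.1, 0.99], profile))   # False (unrelated face)
```

A confirmed match would then feed into a separate policy check that decides whether the ad itself is deceptive before it is blocked.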

Notable figures including Tesla CEO Elon Musk, American TV personality Oprah Winfrey, and Australian mining billionaires Andrew Forrest and Gina Rinehart have previously been impersonated in fraudulent advertisements.

Meta said this action is part of its broader strategy to combat “imposter scams” by cybercriminals, who have grown more sophisticated in their methods for extracting personal data or funds from unsuspecting victims.

“This scheme, commonly called ‘celeb-bait,’ violates our policies and is bad for people that use our products.”


Meta, led by Mark Zuckerberg, plans to send in-app alerts to the selected celebrities, informing them that they have been enrolled in the safety feature and giving them the option to opt out if they prefer.

However, Meta may want to tread carefully: it recently reached a $1.4 billion settlement with Texas over the company’s unauthorized use of biometric data belonging to millions of the state’s residents.

Meta has said it will promptly delete any facial data gathered while identifying potentially fraudulent celebrity ads, a move intended to address privacy and security concerns.

The company also uses facial recognition to verify identities and restore access to compromised accounts.

Meta has disputed allegations by Australia’s consumer watchdog that around 60% of cryptocurrency investment ads seen on Facebook in August were fraudulent, maintaining that the majority are not scams.

Researchers tracking digital fraud have noted a pattern: many victims of cryptocurrency investment scams are lured in by AI-crafted deepfake presentations, convincing simulations designed to entice them into investing in non-existent or fraudulent crypto opportunities.


2024-10-23 05:50