Facebook builds AI tool to trick facial recognition systems and prevent privacy breaches

Social media giant Facebook has rolled out a new tool that tricks facial recognition systems into wrongly identifying a person in a video.

The Facebook logo. Robert Galbraith/Reuters

Even as it is embroiled in a multi-million-dollar lawsuit over its facial recognition practices, social media giant Facebook has launched a new tool that can trick facial recognition systems into incorrectly identifying a person in a video.

As per a report published in VentureBeat, Facebook AI Research (FAIR) has developed a 'de-identification' system that uses machine learning to alter key facial features of a subject in a video, thereby tricking facial recognition systems into misidentifying the subject. De-identification technology has existed before, but it worked mostly on still images; the new system also works on live video.

The VentureBeat report further stated that existing face recognition technology can lead to a loss of privacy, and that face replacement technology may be misused to create misleading videos.

"Recent world events concerning advances in and abuse of face recognition technology invoke the need to understand methods that deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents quality that far surpasses the literature methods," the researchers wrote.

Facebook does not intend to use this technology in any of its commercial products, but reports say the research may influence future tools developed to protect individuals' privacy.

The Artificial Intelligence (AI) industry is currently working on ways to combat the spread of deepfakes and the increasingly sophisticated tools used to create them. Apart from Facebook, lawmakers and tech companies are also building tools to fight fake videos, images and audio; Microsoft and Amazon are among the companies developing solutions to the problem.

What are Deepfakes?

Deepfake is a technique for human image synthesis based on artificial intelligence. It is mainly used to combine and superimpose existing images and videos onto source images and videos using a machine learning technique known as a generative adversarial network (GAN). Because of these capabilities, deepfakes have been used to create fake celebrity pornographic videos and revenge porn. Deepfakes can also be used to create fake news and malicious hoaxes.
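The adversarial idea behind GANs can be illustrated with a toy sketch (a minimal, made-up example for illustration only, not the deepfake method itself): a one-parameter "generator" learns to mimic samples from a target distribution while a logistic "discriminator" tries to tell real samples from fakes. As the two networks compete, the generator's output drifts toward the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator tries to imitate (mean 4.0 is arbitrary).
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

w_g, b_g = 0.1, 0.0   # generator: fake = w_g * z + b_g
w_d, b_d = 0.0, 0.0   # discriminator: D(x) = sigmoid(w_d * x + b_d)

lr, n = 0.05, 64
for step in range(500):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    z = rng.normal(size=n)
    fake = w_g * z + b_g
    real = sample_real(n)
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    # Cross-entropy gradients (labels: real = 1, fake = 0).
    w_d -= lr * ((p_real - 1) * real + p_fake * fake).mean()
    b_d -= lr * ((p_real - 1) + p_fake).mean()

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(size=n)
    fake = w_g * z + b_g
    p = sigmoid(w_d * fake + b_d)
    dfake = -(1 - p) * w_d          # d(-log D(fake)) / d fake
    w_g -= lr * (dfake * z).mean()
    b_g -= lr * dfake.mean()

# After training, the generator's offset b_g has drifted toward the real mean.
print(round(b_g, 1))
```

Real deepfake systems use deep convolutional networks on images rather than a single scalar parameter, but the training loop (alternate discriminator and generator updates against each other) has this same shape.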

The new work is scheduled to be presented at the International Conference on Computer Vision (ICCV), in Seoul, South Korea, next week.

Currently, Facebook is facing a $35 million lawsuit for allegedly misusing facial recognition data in Illinois. A US court has denied Facebook's request to dismiss the lawsuit.
