Popular micro-blogging site Twitter has appealed to its users to help curb the menace of fabricated and manipulated content, popularly known as 'deepfakes', on its platform. It has asked users to participate in a survey before formulating a new policy to ban the spread of deepfakes.

The survey asked a range of questions about media, such as images, posts and photographs, that can be altered to deceive or confuse others. It also asked a series of questions on the measures Twitter could take to prevent the proliferation of deepfakes.

Twitter is considering placing a notice near tweets that share manipulated content. In its statement, Twitter said: "Your individual responses are entirely confidential and will not be shared outside Twitter, but we may share common themes and overall results."

It added: "In addition, if a tweet including synthetic or manipulative media is misleading and could threaten someone's physical safety or lead to other serious harm, we may remove it."

What are Deepfakes?

Deepfake is a technique for human image synthesis based on artificial intelligence. It is mainly used to combine and superimpose existing images and videos onto source images or videos using a machine-learning technique known as a generative adversarial network (GAN), in which a generator learns to produce fakes that a discriminator can no longer tell apart from real data. Because of these capabilities, deepfakes have been used to create fake celebrity pornographic videos or revenge porn. Deepfakes can also be used to create fake news and malicious hoaxes.
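As a rough illustration of the adversarial idea behind GANs, the sketch below pits a tiny affine generator against a logistic-regression discriminator on toy one-dimensional data. All names, hyperparameters and model choices here are illustrative assumptions for the sake of the example and bear no relation to any real deepfake system, which uses deep neural networks on images rather than a two-parameter model on numbers.

```python
import numpy as np

# Minimal GAN sketch on 1-D toy data (illustrative only).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # "Real" data the generator must learn to imitate: a Gaussian.
    return rng.normal(loc=4.0, scale=1.25, size=n)

# Generator: an affine map from noise z to a fake sample.
g_a, g_b = 1.0, 0.0
# Discriminator: logistic regression, outputs P(sample is real).
d_w, d_c = 0.1, 0.0

lr = 0.01
for step in range(2000):
    n = 32
    z = rng.normal(size=n)
    fake = g_a * z + g_b
    real = real_samples(n)

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    d_real = sigmoid(d_w * real + d_c)
    d_fake = sigmoid(d_w * fake + d_c)
    d_w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    d_c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator ascent step: maximize log D(fake) (non-saturating loss),
    # i.e. push fakes toward what the discriminator calls "real".
    d_fake = sigmoid(d_w * fake + d_c)
    g_a += lr * np.mean((1 - d_fake) * d_w * z)
    g_b += lr * np.mean((1 - d_fake) * d_w)

# Draw samples from the trained generator.
samples = g_a * rng.normal(size=1000) + g_b
```

The same tug-of-war, scaled up to convolutional networks over pixels, is what lets deepfake tools synthesize faces the discriminator, and often a human viewer, cannot distinguish from real footage.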

Apart from Twitter, Facebook is also stepping up efforts to curb the spread of deepfakes. The social media giant has joined hands with the Partnership on AI, Microsoft and academics from Cornell Tech, MIT, the University of Oxford, the University of California, Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY to build the Deepfake Detection Challenge (DFDC).

[Image: Twitter asked users to participate in a survey before formulating a new policy to ban the spread of deepfakes. Kacper Pempel/Reuters]

Facebook AI Research (FAIR) has developed a 'de-identification' system that uses machine learning to alter key facial features of a subject in a video, thus tricking facial recognition systems into misidentifying the subject.

Apart from Facebook, lawmakers and tech companies are trying to come up with other tools, such as deepfake detection software, and regulatory frameworks to control the spread of fake videos, images and audio.

Microsoft and Amazon are also developing new tools to fight this menace.