China makes it a criminal offence to publish deepfakes without disclosure

Failing to provide a disclosure that the post in question was created with AI or VR technology is now a criminal offence, according to the Chinese government

In a bid to tackle the spread of fake news and misleading videos created using artificial intelligence (AI) and bots, China has released new rules that bar online video and audio providers from using deep-learning technology to produce fake news and that require a clear disclosure for content created with such technology.

Under the rules, failing to disclose that a post was created with AI or VR technology is a criminal offence, according to the Chinese government. The regulation goes into effect on January 1, 2020, and will be enforced by the Cyberspace Administration of China, The Verge reported on Friday.

The regulation comes about one-and-a-half months after California introduced legislation to make political deepfakes illegal, outlawing the creation or distribution, within 60 days of an election, of videos, images or audio of politicians doctored to resemble real footage.

According to the South China Morning Post, the new regulation states that both providers and users of online video news and audio information services are prohibited from using new technologies such as deep learning and virtual reality to create, distribute or broadcast fake news.

Crackdown on deepfakes across the world

Earlier, in April, the European Union released a strategy to tackle online disinformation, including deepfakes. Major technology companies are also moving to curb deepfakes: most recently, Twitter this month asked users to complete a survey before it formulates a new policy on handling deepfake and shallowfake content.

Social networking giant Facebook partnered with Microsoft, the Massachusetts Institute of Technology (MIT) and other institutions in September to fight deepfakes, committing $10 million towards creating open-source tools that can better detect whether a video has been doctored. Deepfake techniques, which produce realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online.
