YouTube Announces New Rules and Guidelines for Creators to Tackle Deepfakes

YouTube warned that creators who consistently neglect to disclose this information may face consequences.

In an effort to strike a balance between fostering creativity and safeguarding user interests, YouTube, which is owned by Google, has introduced new rules governing the use of generative AI technology on its platform. The guidelines aim to address concerns about the unauthorized use of individuals' likenesses and to ensure responsible content creation.


YouTube announced on Tuesday that it will soon require creators to disclose when they have made altered or synthetic content that appears realistic, including content produced with generative AI tools. When uploading a video, creators will be given new options to indicate whether it contains realistically altered or synthetic material.

"For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do," the company said.

Amid growing concern over deepfakes and synthetic audio, YouTube says it will make it possible to request the removal of AI-generated content that simulates an identifiable individual, whether by face or by voice. The company acknowledges the complexity of such requests, however, and says it will weigh nuanced factors, such as whether the content is parody or satire and whether it involves a public figure.

YouTube warned that creators who consistently neglect to disclose this information may face consequences, such as having their content removed, being suspended from the YouTube Partner Program, or incurring other penalties.


The company noted that some synthetic media, even when labelled, may still be removed from the platform if it breaches the Community Guidelines, particularly in situations where a label alone may not adequately reduce the potential harm.

In the world of music, YouTube is giving labels that represent artists participating in its AI experiments the ability to request the removal of content that mimics their performers' voices. Factors such as whether the content is part of news reporting, analysis, or critique will be weighed when evaluating removal requests.

To enhance transparency, YouTube is introducing new disclosure requirements and content labels specifically for generative AI-created content. This is particularly emphasized when the content realistically depicts complex geopolitical events, elections, or other matters of public concern. All content generated using YouTube's own AI tools will be distinctly labelled.

These regulations come at a crucial juncture when concerns about generative AI have permeated various industries, including Hollywood, with industry organizations advocating for stringent regulations. Additionally, music labels have expressed apprehensions about unauthorized use of their performers' voices on social platforms.

YouTube's commitment to responsible AI deployment is evident in its cautious approach. Despite the excitement surrounding the technology's potential, the platform emphasizes the need to ensure the safety of its community. The company is poised to collaborate with creators, artists, and other stakeholders in the creative industries to shape a future that not only leverages AI benefits but also prioritizes user well-being.
