YouTube Now Requires Creators To Label AI Content

As AI-generated content continues to encroach on everything from advertising to the voice acting profession, YouTube is adding a requirement that creators flag their videos when they include anything made with an AI program. However, judging by the guidelines, the video hosting site doesn’t appear to have any way to actually detect undisclosed AI content or enforce the rule.

YouTube’s Product Management Vice Presidents, Jennifer Flannery O’Connor and Emily Moxley, broke down the new policy in a blog post on November 14. First and foremost, any video that contains AI-generated content will require disclosure and content labels in the video description that make it clear aspects of the video were created by AI. The examples given include a video that “realistically depicts an event that never happened” as well as deepfakes showing an individual “saying or doing something they didn’t actually do.”

The blog post says the new policy is meant to help combat misinformation, especially around real-world issues like elections and ongoing health and world crises. It also states that some AI-generated content, labeled or not, may be removed from YouTube if a disclaimer “may not be enough to mitigate the risk of harm.” The example YouTube gives is a realistic portrayal of violence that exists solely to shock or disgust, as opposed to violence included in a historical video of the educational or informative sort.

Alongside the disclaimer, YouTube is rolling out community guidelines that will allow people affected by AI-generated content to request that videos be removed on those grounds. So if someone is using AI to simulate you doing something you didn’t do, you can request to have those videos removed, with YouTube offering the specific example of musicians whose voices are being mimicked by AI software.

One distinction the post makes is that if AI-generated voices are part of an analysis, such as a creator discussing the trend of AI covers and including audio that sounds like a singer performing someone else’s song, the video may not be taken down. But it sounds like videos that are simply songs performed by an AI imitating someone’s voice can be taken down at the artist’s request. Parody and satire are also, apparently, fair game.

The big question here is whether YouTube actually has any means of enforcing this beyond the threat of consequences, including “content removal, suspension from the YouTube Partner Program, or other penalties” for those who consistently fail to disclose. Presumably the “other penalties” could extend to an eventual ban from the platform, but even so, it sounds as if the whole thing currently relies on self-reporting and operates on the honor system.

While there may be some kinks to work out here, it’s a relief to see huge platforms doing some work to combat the misinformation brought on by AI tools. I spend a lot of time on TikTok, where AI covers and other AI-generated audio have become prominent, and I’ve anecdotally seen plenty of users build entire accounts that do nothing but churn out AI content without disclosing it at all. I’m a chronic scroller, so I’ve learned the signs to look and listen for, but as AI tools become more widespread, it grows ever more likely that people who don’t know better will take these videos at face value.

