
YouTube will tell you when the video you're watching was generated by AI


YouTube says it will take steps to ensure that generative AI has a place on the video platform while also being used responsibly. In a blog post this week, Jennifer Flannery O'Connor and Emily Moxley, vice presidents of product management at YouTube, shared several AI detection tools that will help the platform highlight AI-generated content.

The two said that YouTube is still in the early stages of this work, but that it plans to evolve the approach as the team learns more. For now, though, they've shared a few different ways the platform will detect AI-generated content and alert users about it so it can be consumed responsibly.


The first measure will require creators to disclose whenever something is made using AI. If something in a video was created with AI, it should carry a disclosure, along with one of several new content labels, helping you identify what was made with AI and what wasn't.

An example of the labels YouTube will apply to videos with AI-generated content. Image source: YouTube

This specific issue will be addressed through a system that tells viewers when something they are watching is “synthetic” or AI-generated. If any AI tools were used in a video, that will have to be disclosed, which YouTube notes should help reduce the potential spread of misinformation and other serious issues.

YouTube says it won't limit itself to labels and disclosures, though; it will also use AI detection tools to help identify videos that violate its community guidelines. Additionally, the two say that anything created using YouTube's generative AI products will be clearly labeled as altered or synthetic.

Elsewhere, YouTube's future AI detection tools will allow users to request the removal of altered or synthetic content that simulates an identifiable individual, including their face or voice. This will be handled through the privacy request process, and YouTube says that not every request will result in removal; each will be evaluated based on several factors.

AI-generated content is here to stay, especially with ChatGPT continuing to offer so much to so many. And while it's unlikely we'll ever see AI leave the entertainment medium completely, at least YouTube is taking some steps to help mitigate the risks it could pose in the long term. Of course, YouTube's track record with community policing hasn't been the best in the past, so it'll be interesting to see how this all plays out going forward.
