YouTube takes action against AI to stop deepfakes of top creators

YouTube announced that it is staunchly supporting the No Fakes Act of 2025 in an effort to prevent deepfakes from being used against its top content creators.

In a blog post published on Wednesday, April 9, YouTube discussed what the future of AI will look like on its platform. While it remains enthusiastic about AI as a helpful tool in influencers’ arsenals, it is also well aware of the risks the technology can pose.

For example, deepfakes have already proven themselves to be dangerous, as seen when a number of top female Twitch streamers discovered a website using their likeness – without permission – in inappropriate content back in 2023.

YouTube is hoping to combat this issue by publicly supporting the No Fakes Act of 2025, legislation that aims to “protect the voice and likeness of all individuals from unauthorized, computer-generated recreations from generative artificial intelligence (AI) and other technologies.”

YouTube is taking action to help stop the spread of nonconsensual deepfakes of creators on its platform.

YouTube supports legislation to stop malicious deepfakes

In its blog post, YouTube outlined several key points in its quest to keep creators from having their likeness unfairly used by malicious actors. For one, YouTube has updated its privacy policy to allow users to submit a removal request for any altered or synthetic videos that use their likeness.

On top of this, YouTube is also implementing new “likeness management tools” to help creators suss out and manage how AI is used to depict them on its platform, and it has already rolled out a pilot program for just such a purpose.

And finally, YouTube remains committed to supporting legislation that safeguards creators against having their likenesses copied by deepfakes on its platform, backing both the No Fakes Act and the Take It Down Act, which criminalizes the publishing of nonconsensual intimate imagery of another person.

AI is already toeing the copyright line in the creator space as more and more influencers discover chatbots of themselves on websites like Character.AI, a platform that allows users to create and interact with AI-powered characters.

While influencers have done their best to combat such technology, it’s a tough road ahead. QTCinderella, one of the victims of the deepfake website in 2023, lamented that it was nearly impossible to sue those responsible for the website hosting deepfaked content of her and other female streamers.

While most states in the US have laws prohibiting the sharing of sexually intimate content without consent, very few have laws that actually protect individuals from having deepfaked images of themselves made or shared online. With a massive corporation like YouTube backing legislation like the No Fakes Act, creators, and people in general, are one step closer to having legal protection against these tools being used maliciously against them.

