Meta isn’t the only company grappling with the rise of AI-generated content and how it affects its platform. In June, YouTube also quietly updated its policies to let people request the removal of AI-generated or other synthetic content that simulates their face or voice. The change lets affected people file these requests through YouTube’s existing privacy request process, expanding on the approach to responsible AI the company first outlined in November.
Rather than having the content flagged as misleading, such as a deepfake, YouTube wants the people affected by it to request its removal directly as a privacy violation. First-party claims are required unless the affected person is a minor, lacks access to a computer, is deceased, or falls under one of the other exceptions listed in YouTube’s newly updated Help documentation.
Submitting a takedown request doesn’t guarantee the content will be removed. YouTube cautions that it will weigh a variety of factors in deciding how to act on a complaint.
Those factors include whether the content is clearly altered or AI-generated, whether it uniquely identifies a person, and whether it could be considered parody, satire, or otherwise valuable and in the public interest. The company also says it may consider whether the content features a celebrity or public figure and whether it depicts them engaging in “sensitive behavior,” such as criminal activity, violence, or endorsing a product or political candidate. The latter is especially concerning in an election year, when AI-generated endorsements could sway voters.
YouTube also says the uploader will have 48 hours to act on the complaint. If the content is removed within that window, the complaint is closed; if not, YouTube will initiate a review. The company also notes that removal means fully taking the video off the site and deleting the individual’s name and any personal information from the video’s title, description, and tags. Uploaders can also blur the faces of people in their videos, but they can’t simply make the video private to satisfy the removal request, since it could be switched back to public at any time.
The company didn’t widely publicize the policy change, but in March it added a tool to Creator Studio that lets creators disclose when realistic-looking content was made with altered or synthetic media, including generative AI. More recently, it began testing a feature that lets users add crowdsourced notes giving more context about videos, such as whether they’re intended as parody or are misleading in some way.
It’s not that YouTube is opposed to AI; the company has already experimented with generative AI itself, including a comments summarizer and a conversational tool for asking questions about a video or getting recommendations. But YouTube has previously said that labeling AI content as such won’t necessarily shield it from removal, since it still has to comply with YouTube’s Community Guidelines.
When AI-generated content is flagged over privacy concerns, YouTube won’t immediately penalize the original creator.
“If you get a notice of a privacy complaint, keep in mind that privacy violations are different from Community Guidelines strikes, and receiving a privacy complaint will not automatically result in a strike,” the company told creators last month on the YouTube Community site, where it shares updates on new policies and features.
In other words, YouTube’s Privacy Guidelines are separate from its Community Guidelines, and content may be removed in response to a privacy request even if it doesn’t violate the Community Guidelines. While YouTube says it may take action against accounts that repeatedly violate its privacy rules, a removal prompted by a privacy complaint won’t restrict a creator’s ability to share videos.
Updated on July 1, 2024, at 4:17 p.m. ET with more information about the actions YouTube may take on privacy violations.