After reviewing how Meta handles AI-generated explicit images, the Oversight Board, the company's semi-independent watchdog, is urging Meta to tighten its rules on this kind of content. Specifically, the Board wants Meta to change the word "derogatory" to "non-consensual" and to move its rules on these images from the "Bullying and Harassment" section to the "Sexual Exploitation" Community Standards.
Meta's current rules on AI-generated explicit images stem from the "derogatory sexualized Photoshop" rule in its Bullying and Harassment section. The Board also told Meta to replace the word "Photoshop" with a more general term for manipulated media.
Meta also prohibits non-consensual images that are "non-commercial or produced in a private setting." The Board said that meeting this clause should not be a requirement for removing or banning images that were generated by AI or manipulated without consent.
These recommendations follow two high-profile cases in which AI-generated explicit images of public figures, shared on Instagram and Facebook, landed Meta in serious trouble.
In one case, an AI-generated nude image of an Indian public figure was posted on Instagram. Several users reported the image, but Meta did not take it down; in fact, the report was closed within 48 hours without further review. Users appealed the decision, but the ticket was closed again. The company acted only after the Oversight Board took up the case, removing the content and banning the account.
The other image, an AI-generated likeness of a U.S. public figure, was posted on Facebook. Because the case had already drawn media coverage, Meta had the image in its Media Matching Service (MMS) repository, a bank of rule-breaking images used to detect matching content, so when another user posted it on Facebook the company took it down quickly.
Notably, Meta added the image of the Indian public figure to the MMS bank only after prompting from the Oversight Board. The company told the Board that the image had not been added earlier because there had been no media reports about the incident.
“This is worrying because many victims of deepfake intimate images are not in the public eye and are either forced to let their non-consensual images spread or will have to report every case,” the Board wrote in its note.
Breakthrough Trust, an Indian organization that works to reduce online gender-based violence, noted that these issues and Meta's policies have cultural implications. In comments submitted to the Oversight Board, Breakthrough said that non-consensual imagery is often trivialized as an identity-theft problem rather than treated as gender-based violence.
"People who report these crimes to police or courts are often victimized again when they do so ('why did you put your picture out there, etc.?' even if the picture isn't really theirs and is a deepfake)," Chakraborty, the organization's head of media, wrote to the Oversight Board. Once posted online, she added, the image spreads well beyond the original site: "Taking it down on the source platform is not enough because it quickly spreads to other platforms."
Speaking to TechCrunch by phone, Chakraborty said that users often don't realize their reports have been marked "resolved" within 48 hours, and that Meta shouldn't apply the same timeline to every case. She also said the company should do more to raise users' awareness of these kinds of problems.
Devika Malik, a platform policy expert who previously worked on Meta's South Asia policy team, told TechCrunch earlier this year that platforms largely rely on user reports to take down non-consensual imagery, an approach that may not be reliable for AI-generated media.
"This places an unfair burden on the affected user to prove their identity and the lack of consent (as Meta's policy does)," Malik said, adding that errors are more likely with synthetic media and that the time it takes to catch and verify these external signals allows the content to spread harmfully.
Aparajita Bharti, founding partner at the Delhi-based think tank The Quantum Hub (TQH), said Meta should let users provide more context when reporting content, since they may not be familiar with the different categories of rule violations Meta defines.
"We hope Meta goes above and beyond the final decision [of the Oversight Board] to enable flexible, user-focused channels for reporting this kind of material," she said.
"We acknowledged that users can't be expected to fully understand the subtle differences between types of reports, and we pushed for systems that make sure genuine problems don't get missed because of nuances in Meta's content moderation policies," she added.