A misleading seven-second video of President Biden could change Facebook's rules on manipulated media before the 2024 election, but the platform and the voters it reaches are running out of time.
The Oversight Board, an outside advisory group that Meta set up to review its moderation decisions on Facebook and Instagram, ruled Monday on an altered video of Biden that circulated on social media last year.
The original video showed the president accompanying his granddaughter Natalie Biden to cast her ballot during early voting in the 2022 midterm elections. In that footage, the president's granddaughter receives an "I Voted" sticker and a kiss on the cheek.
A short, edited version of the clip removes any visual evidence of the sticker and loops the footage, set to a song with sexualized lyrics, to make it appear that Biden touched the young woman inappropriately. A caption on the seven-second video, posted to Facebook in May 2023, called Biden a "sick pedophile."
Meta's Oversight Board announced in October of last year that it would take on the case, after a Facebook user reported the video and escalated the complaint when the platform declined to remove it.
The Oversight Board said in its decision released Monday that Meta’s decision to leave the video online was in line with the platform’s rules. However, it found the policy in question to be “incoherent.”
"As it stands, the policy makes little sense," Oversight Board Co-Chair Michael McConnell said. "It bans altered videos that show people saying things they didn't say, but doesn't prohibit posts showing someone doing something they didn't do. It only applies to video created through AI, letting other fake content off the hook."
McConnell also said that the policy didn’t do anything about manipulated audio, which he called “one of the most powerful forms of electoral disinformation.”
Rather than focusing on how a piece of content was made, the Oversight Board's ruling says, Meta's rules should be grounded in the harms they are designed to prevent. The ruling adds that any changes should be made "urgently" in light of elections taking place around the world.
Instead of relying on third-party fact-checkers, a process the group criticized as "asymmetric depending on language and market," the Oversight Board recommended that Meta begin labeling altered videos to make clear they have been manipulated.
By labeling more content rather than removing it, the Oversight Board argues, Meta can protect freedom of expression, reduce the risk of harm, and give users more information.
A Meta spokesperson told TechCrunch that the company is “reviewing the Oversight Board’s guidance” and will give the public an answer within 60 days.
The altered video is still circulating on X, formerly Twitter. Last month, a verified X account with 267,000 followers shared the clip with the caption "The media just pretend this isn't happening"; the video has racked up more than 611,000 views.
This isn't the first time the Oversight Board has pushed Meta to go back to the drawing board on its rules, as it did with the Biden video. When the group weighed in on Facebook's ban of former President Trump, it called the indefinite suspension "vague and standardless," even as it upheld the decision to suspend his account. The Oversight Board has generally urged Meta to make its policies clearer and more specific.
As the Oversight Board noted when it accepted the Biden "cheap fake" case, Meta stood by its decision to leave the altered video online because its manipulated media policy, which covers misleadingly altered photos and videos, applies only when AI is involved or when a video's subject is portrayed saying something they didn't say.
The policy on altered media was made with deepfakes in mind. It only applies to “videos that have been edited or synthesized… in ways that are not obvious to the average person and would likely lead the average person astray.”
Critics of Meta's content moderation have dismissed the company's self-designed review board as "too little, far too late."
Meta may now have a formal system for reviewing its content moderation decisions, but misinformation and other dangerous content spread far faster than that appeals process can handle, and far faster than anyone could have anticipated just two presidential election cycles ago.
As the 2024 presidential race heats up, researchers and watchdog groups are bracing for a flood of false claims and AI-generated fakes. Even as new technologies make dangerous lies easier to spread at scale, social media companies have quietly slashed their investments in trust and safety and retreated from what once looked like a coordinated effort to stamp out misinformation.
McConnell said, “The amount of misleading content is growing, and the tools used to make it are quickly getting better.”