The board, which is funded by Meta but run independently, took
on the Biden video case in October in response to a user
complaint about an altered seven-second video of the president
posted on Meta's flagship social network.
Its ruling on Monday is the first to address Meta's "manipulated
media" policy, which bars certain types of doctored videos, amid
rising concerns about the potential use of new AI technologies
to sway elections this year.
The policy "is lacking in persuasive justification, is
incoherent and confusing to users, and fails to clearly specify
the harms it is seeking to prevent", the board said.
The board suggested Meta update the rule to cover both audio and
video content, regardless of whether AI was used, and to apply
labels identifying it as manipulated.
It stopped short of calling for the policy to apply to
photographs, cautioning that doing so may make the policy too
difficult to enforce at Meta's scale.
Meta, which also owns Instagram and WhatsApp, informed the board
in the course of the review that it was planning to update the
policy "to respond to the evolution of new and increasingly
realistic AI", according to the ruling.
The company said in a statement on Monday that it was reviewing
the ruling and would respond publicly within 60 days.
The clip on Facebook manipulated real footage of Biden
exchanging "I Voted" stickers with his granddaughter during the
2022 U.S. midterm elections and kissing her on the cheek.
Versions of the same altered video clip had started going viral
as far back as January 2023, the board said.
In its ruling, the Oversight Board said Meta was right to leave
the video up under its current policy, which bars misleadingly
altered videos only if they were produced by artificial
intelligence or if they make people appear to say words they
never actually said.
The board said non-AI altered content "is prevalent and not
necessarily any less misleading" than content generated by AI
tools.
It said the policy should also apply to audio-only content, as
well as to videos depicting people doing things they never
actually did.
Enforcement, it added, should consist of applying labels to the
content rather than Meta's current approach of removing the
posts from its platforms.
(Reporting by Katie Paul; editing by Jason Neely)
© 2024 Thomson Reuters. All rights reserved. This material may
not be published, broadcast, rewritten or redistributed.
Thomson Reuters is solely responsible for this content.