Under the updated disclosure rules in the political content policy, marketers must select a checkbox in the
"altered or synthetic content" section of their campaign
settings.
The rapid growth of generative AI, which can create text, images
and video in seconds in response to prompts, has raised concerns
about its potential misuse.
The rise of deepfakes, content convincingly manipulated to
misrepresent someone, has further blurred the line between the
real and the fake.
Google said it will generate an in-ad disclosure for feeds and
shorts on mobile phones and in-streams on computers and
television. For other formats, advertisers will be required to
provide a "prominent disclosure" that is clearly noticeable to users.
The "acceptable disclosure language" will vary according to the
context of the ad, Google said.
In April, during the ongoing general election in India, fake
videos of two Bollywood actors criticizing Prime
Minister Narendra Modi went viral online. Both AI-generated
videos asked people to vote for the opposition Congress party.
Separately, Sam Altman-led OpenAI said in May that it had
disrupted five covert influence operations that sought to use
its AI models for "deceptive activity" across the internet, in
an "attempt to manipulate public opinion or influence political
outcomes."
Meta Platforms said last year that it would require advertisers
to disclose if AI or other digital tools were used to alter or
create political, social or election-related advertisements on
Facebook and Instagram.
(Reporting by Jaspreet Singh in Bengaluru; Editing by Mohammed
Safi Shamsi and Alan Barona)
© 2024 Thomson Reuters. All rights reserved.