The artificial intelligence firm said the threat actors used its
AI models to generate short comments and longer articles in a
range of languages, as well as fake names and bios for social
media accounts, over the past three months.
These campaigns, which involved threat actors from Russia,
China, Iran and Israel, focused on issues including Russia's
invasion of Ukraine, the conflict in Gaza, the Indian elections,
and politics in Europe and the United States, among others.
The deceptive operations were an "attempt to manipulate public
opinion or influence political outcomes," OpenAI said in a
statement.
The San Francisco-based firm's report is the latest to stir
safety concerns about the potential misuse of generative AI
technology, which can quickly and easily produce human-like
text, imagery and audio.
Microsoft-backed OpenAI said on Tuesday it formed a Safety and
Security Committee that would be led by board members, including
CEO Sam Altman, as it begins training its next AI model.
The deceptive campaigns did not gain increased audience
engagement or reach as a result of using the firm's services,
OpenAI said in the statement.
OpenAI said these operations did not rely solely on AI-generated
material but also included manually written texts and memes
copied from across the internet.
Separately, Meta Platforms, in its quarterly security report on
Wednesday, said it had found "likely AI-generated" content used
deceptively on its Facebook and Instagram platforms, including
comments praising Israel's handling of the war in Gaza published
below posts from global news organizations and U.S. lawmakers.
(Reporting by Jaspreet Singh in Bengaluru; Editing by Alan
Barona)
© 2024 Thomson Reuters. All rights reserved. This material may
not be published, broadcast, rewritten or redistributed.