Unveiled late last year, Microsoft-backed OpenAI's ChatGPT has
become the fastest-growing consumer application in history and
set off a race among tech companies to bring generative AI
products to market.
Concerns, however, are mounting about potential abuse of the
technology and the possibility that bad actors, and even
governments, may use it to produce far more disinformation than
before.
"Signatories who integrate generative AI into their services
like Bingchat for Microsoft, Bard for Google should build in
necessary safeguards that these services cannot be used by
malicious actors to generate disinformation," Jourova told a
press conference.
"Signatories who have services with a potential to disseminate
AI generated disinformation should in turn put in place
technology to recognise such content and clearly label this to
users," she said.
Companies such as Google, Microsoft and Meta Platforms that have
signed up to the EU Code of Practice to tackle disinformation
should report in July on the safeguards they have put in place,
Jourova said.
She warned Twitter, which quit the Code last week, to expect
more regulatory scrutiny.
"By leaving the Code, Twitter has attracted a lot of attention
and its actions and compliance with EU law will be scrutinised
vigorously and urgently," Jourova said.
(Reporting by Foo Yun Chee; Editing by Kirsten Donovan)
© 2023 Thomson Reuters. All rights reserved. This material may not be published,
broadcast, rewritten or redistributed.