"Today, deepfakes often involve sexual imagery, fraud, or
political disinformation. Since AI is progressing rapidly and
making deepfakes much easier to create, safeguards are needed,"
the group said in the letter, which was put together by Andrew
Critch, an AI researcher at UC Berkeley.
Deepfakes are realistic but fabricated images, audio and video
created by AI algorithms, and recent advances in the technology
have made them increasingly difficult to distinguish from
human-created content.
The letter, titled "Disrupting the Deepfake Supply Chain," makes
recommendations on how to regulate deepfakes, including full
criminalization of deepfake child pornography, criminal
penalties for any individual knowingly creating or facilitating
the spread of harmful deepfakes and requiring AI companies to
prevent their products from creating harmful deepfakes.
As of Wednesday morning, over 400 individuals from various
industries including academia, entertainment and politics had
signed the letter.
Signatories included Steven Pinker, a Harvard psychology
professor, Joy Buolamwini, founder of the Algorithmic Justice
League, two former Estonian presidents, researchers at Google
DeepMind and a researcher from OpenAI.
Ensuring AI systems do not harm society has been a priority for
regulators since Microsoft-backed OpenAI unveiled ChatGPT in
late 2022, which wowed users by engaging them in human-like
conversation and performing other tasks.
There have been multiple warnings from prominent individuals
about AI risks, notably a letter signed by Elon Musk last year
that called for a six-month pause in developing systems more
powerful than OpenAI's GPT-4 AI model.
(Reporting by Anna Tong in San Francisco; editing by Miral Fahmy)
[© 2024 Thomson Reuters. All rights reserved.]