The
letter, issued a week before the international AI Safety Summit
in London, lists measures that governments and companies should
take to address AI risks.
"Governments should also mandate that companies are legally
liable for harms from their frontier AI systems that can be
reasonably foreseen and prevented," according to the letter,
signed by three Turing Award winners, a Nobel laureate, and more
than a dozen top AI academics.
Currently there are no broad-based regulations focused on AI
safety, and the European Union's first set of AI legislation has
yet to become law, as lawmakers have yet to agree on several
issues.
"Recent state of the art AI models are too powerful, and too
significant, to let them develop without democratic oversight,"
said Yoshua Bengio, one of the three people known as the
godfathers of AI.
"It (investments in AI safety) needs to happen fast, because AI
is progressing much faster than the precautions taken," he said.
Signatories to the letter include Geoffrey Hinton, Andrew Yao,
Daniel Kahneman, Dawn Song and Yuval Noah Harari.
Since the launch of OpenAI's generative AI models, top academics
and prominent CEOs such as Elon Musk have warned about the risks
of AI, including calling for a six-month pause in the development
of powerful AI systems.
Some companies have countered such calls, saying they would face
high compliance costs and disproportionate liability risks.
"Companies will complain that it's too hard to satisfy
regulations - that 'regulation stifles innovation' - that's
ridiculous," said British computer scientist Stuart Russell.
"There are more regulations on sandwich shops than there are on
AI companies."
(Reporting by Supantha Mukherjee in Stockholm; editing by Miral
Fahmy)
© 2023 Thomson Reuters. All rights reserved. This material may not be published,
broadcast, rewritten or redistributed.