"I left so that I could talk about the dangers of AI without
considering how this impacts Google," Geoffrey Hinton wrote on
Twitter.
In an interview with the New York Times, Hinton said he was
worried about AI's capacity to generate convincing false images
and text, creating a world where people will "not be able to
know what is true anymore".
"It is hard to see how you can prevent the bad actors from using
it for bad things," he said.
The technology, he said, could quickly displace workers and
become a greater danger as it learns new behaviours.
“The idea that this stuff could actually get smarter than people
— a few people believed that,” he told the New York Times. “But
most people thought it was way off. And I thought it was way
off. I thought it was 30 to 50 years or even longer away.
Obviously, I no longer think that.”
In his tweet, Hinton said Google itself had "acted very
responsibly" and denied that he had quit so that he could
criticise his former employer.
Google, part of Alphabet Inc., did not immediately reply to a
request for comment from Reuters. The Times quoted Google’s
chief scientist, Jeff Dean, as saying in a statement: “We remain
committed to a responsible approach to A.I. We’re continually
learning to understand emerging risks while also innovating
boldly.”
(Reporting by Jyoti Narayan in Bengaluru; Additional reporting
by Chandni Shah; Editing by Peter Graff)
© 2023 Thomson Reuters. All rights reserved. This material may
not be published, broadcast, rewritten or redistributed.