Biden meets Microsoft, Google CEOs on AI dangers
May 05, 2023
By Nandita Bose and David Shepardson
WASHINGTON (Reuters) - President Joe Biden met with CEOs of top
artificial intelligence companies including Microsoft and Alphabet's
Google on Thursday and made clear they must ensure their products are
safe before they are deployed.
Generative artificial intelligence has become a buzzword this year, with
apps such as ChatGPT capturing the public's fancy, sparking a rush among
companies to launch similar products they believe will change the nature
of work.
Millions of users have begun testing such tools, which supporters say
can make medical diagnoses, write screenplays, create legal briefs and
debug software, leading to growing concern about how the technology
could lead to privacy violations, skew employment decisions, and power
scams and misinformation campaigns.
Biden, who has used and experimented with ChatGPT, told the officials
they must mitigate current and potential risks AI poses to individuals,
society and national security, the White House said.
The meeting included a "frank and constructive discussion" on the need
for companies to be more transparent with policymakers about their AI
systems; the importance of evaluating the safety of such products; and
the need to protect them from malicious attacks, the White House added.
Thursday's two-hour meeting, which began at 11:45 a.m. ET (1545 GMT),
included Google's Sundar Pichai, Microsoft Corp's Satya Nadella,
OpenAI's Sam Altman and Anthropic's Dario Amodei, along with Vice
President Kamala Harris and administration officials including Biden's
Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan,
Director of the National Economic Council Lael Brainard and Secretary of
Commerce Gina Raimondo.
Harris said in a statement the technology has the potential to improve
lives but could pose safety, privacy and civil rights concerns. She told
the chief executives they have a "legal responsibility" to ensure the
safety of their artificial intelligence products and that the
administration is open to advancing new regulations and supporting new
legislation on artificial intelligence.
In response to a question about whether companies are on the same page
on regulations, Altman told reporters after the meeting "we're
surprisingly on the same page on what needs to happen."
[Photo: An exterior view of building BV100 during a tour of Google's new Bay View Campus in Mountain View, California, U.S., May 16, 2022. REUTERS/Peter DaSilva]
The administration also announced a $140 million investment from the
National Science Foundation to launch seven new AI research
institutes and said the White House's Office of Management and
Budget would release policy guidance on the use of AI by the federal
government. Leading AI developers, including Anthropic, Google,
Hugging Face, NVIDIA Corp, OpenAI, and Stability AI, will
participate in a public evaluation of their AI systems.
Shortly after Biden announced his reelection bid, the Republican
National Committee produced a video, built entirely with AI imagery,
depicting a dystopian future during a second Biden term.
Such political ads are expected to become more common as AI
technology proliferates.
United States regulators have fallen short of the tough approach
European governments have taken on tech regulation and on crafting
strong rules against deepfakes and misinformation.
"We don't see this as a race," a senior administration official
said, adding that the administration is working closely with the
U.S.-EU Trade & Technology Council on the issue.
In February, Biden signed an executive order directing federal
agencies to eliminate bias in their AI use. The Biden administration
has also released an AI Bill of Rights and a risk management
framework.
Last week, the Federal Trade Commission and the Department of
Justice's Civil Rights Division also said they would use their legal
authorities to fight AI-related harm.
Tech giants have vowed many times to combat propaganda around
elections, fake news about the COVID-19 vaccines, pornography and
child exploitation, and hateful messaging targeting ethnic groups.
But they have been unsuccessful, research and news events show.
(Reporting by Nandita Bose in Washington; Editing by Heather
Timmons, Gerry Doyle and Jonathan Oatis)
© 2023 Thomson Reuters. All rights reserved.
This material may not be published, broadcast, rewritten or redistributed.
Thomson Reuters is solely responsible for this content.