Biden administration to host international AI safety meeting in San
Francisco after election
September 18, 2024
By MATT O'BRIEN
Government scientists and artificial intelligence experts from at least
nine countries and the European Union will meet in San Francisco after
the U.S. elections to coordinate on safely developing AI technology and
averting its dangers.
President Joe Biden's administration on Wednesday announced a two-day
international AI safety gathering planned for November 20 and 21. It
will happen just over a year after delegates at an AI Safety Summit in
the United Kingdom pledged to work together to contain the potentially
catastrophic risks posed by AI advances.
U.S. Commerce Secretary Gina Raimondo told The Associated Press it will
be the “first get-down-to-work meeting” after the UK summit and a May
follow-up in South Korea that sparked a network of publicly backed
safety institutes to advance research and testing of the technology.
Among the urgent topics likely to confront experts are a steady rise of
AI-generated fakery and the tricky problem of how to know when an AI
system is so widely capable or dangerous that it needs guardrails.
“We’re going to think about how do we work with countries to set
standards as it relates to the risks of synthetic content, the risks of
AI being used maliciously by malicious actors,” Raimondo said in an
interview. “Because if we keep a lid on the risks, it’s incredible to
think about what we could achieve.”
Situated in a city that's become a hub of the current wave of generative
AI technology, the San Francisco meetings are designed as a technical
collaboration on safety measures ahead of a broader AI summit set for
February in Paris. They will occur about two weeks after a presidential
election between Vice President Kamala Harris — who helped craft the
U.S. stance on AI risks — and former President Donald Trump, who has
vowed to undo Biden's signature AI policy.
Raimondo and Secretary of State Antony Blinken announced that their
agencies will co-host the convening, which taps into a network of newly
formed national AI safety institutes in the U.S. and UK, as well as
Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the
27-nation European Union.
The biggest AI powerhouse missing from the list of participants is
China, which isn't part of the network, though Raimondo said “we’re
still trying to figure out exactly who else might come in terms of
scientists.”
“I think that there are certain risks that we are aligned in wanting to
avoid, like AIs applied to nuclear weapons, AIs applied to
bioterrorism,” she said. “Every country in the world ought to be able to
agree that those are bad things and we ought to be able to work together
to prevent them.”
Many governments have pledged to safeguard AI technology, but they've
taken different approaches, with the EU the first to enact a sweeping AI
law that sets the strongest restrictions on the riskiest applications.
Biden last October signed an executive order on AI that requires
developers of the most powerful AI systems to share safety test results
and other information with the government. It also directed the
Commerce Department to create standards to ensure AI tools are safe and
secure before public release.
San Francisco-based OpenAI, maker of ChatGPT, said last week that before
releasing its latest model, called o1, it granted early access to the
U.S. and UK national AI safety institutes. The new product goes beyond
the company's famous chatbot in being able to “perform complex
reasoning” and produce a “long internal chain of thought” when answering
a query, and poses a “medium risk” in the category of weapons of mass
destruction, the company has said.
Since generative AI tools began captivating the world in late 2022, the
Biden administration has been pushing AI companies to commit to testing
their most sophisticated models before they’re let out into the world.
“That is the right model,” Raimondo said. “That being said, right now,
it’s all voluntary. I think we probably need to move beyond a voluntary
system. And we need Congress to take action.”
Tech companies have mostly agreed, in principle, on the need for AI
regulation, but some have chafed at proposals they argue could stifle
innovation. In California, Gov. Gavin Newsom on Tuesday signed three
landmark bills to crack down on political deepfakes ahead of the 2024
election, but has yet to sign or veto a more controversial measure
that would regulate extremely powerful AI models that don't yet exist
but could pose grave risks if they're built.
All contents © copyright 2024 Associated Press. All rights reserved.