US, Britain, other countries ink agreement to make AI 'secure by design'
November 27, 2023
By Raphael Satter and Diane Bartz
WASHINGTON (Reuters) - The United States, Britain and more than a dozen
other countries on Sunday unveiled what a senior U.S. official described
as the first detailed international agreement on how to keep artificial
intelligence safe from rogue actors, pushing for companies to create AI
systems that are "secure by design."
In a 20-page document unveiled Sunday, the 18 countries agreed that
companies designing and using AI need to develop and deploy it in a way
that keeps customers and the wider public safe from misuse.
The agreement is non-binding and carries mostly general recommendations
such as monitoring AI systems for abuse, protecting data from tampering
and vetting software suppliers.
Still, the director of the U.S. Cybersecurity and Infrastructure
Security Agency, Jen Easterly, said it was important that so many
countries put their names to the idea that AI systems needed to put
safety first.
"This is the first time that we have seen an affirmation that these
capabilities should not just be about cool features and how quickly we
can get them to market or how we can compete to drive down costs,"
Easterly told Reuters, saying the guidelines represent "an agreement
that the most important thing that needs to be done at the design phase
is security."
The agreement is the latest in a series of initiatives - few of which
carry teeth - by governments around the world to shape the development
of AI, whose weight is increasingly being felt in industry and society
at large.
In addition to the United States and Britain, the 18 countries that
signed on to the new guidelines include Germany, Italy, the Czech
Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and
Singapore.
[Photo: AI (Artificial Intelligence) letters placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo]
The framework deals with questions of how to keep AI technology from
being hijacked by hackers and includes recommendations such as only
releasing models after appropriate security testing.
It does not tackle thorny questions around the appropriate uses of
AI, or how the data that feeds these models is gathered.
The rise of AI has fed a host of concerns, including the fear that
it could be used to disrupt the democratic process, turbocharge
fraud, or lead to dramatic job loss, among other harms.
Europe is ahead of the United States on regulations around AI, with
lawmakers there drafting AI rules. France, Germany and Italy also
recently reached an agreement on how artificial intelligence should
be regulated that supports "mandatory self-regulation through codes
of conduct" for so-called foundation models of AI, which are
designed to produce a broad range of outputs.
The Biden administration has been pressing lawmakers for AI
regulation, but a polarized U.S. Congress has made little headway in
passing effective legislation.
The White House sought to reduce AI risks to consumers, workers, and
minority groups while bolstering national security with a new
executive order in October.
(Reporting by Raphael Satter and Diane Bartz; Editing by Alexandra
Alper and Deepa Babington)
© 2023 Thomson Reuters. All rights reserved. This material
may not be published, broadcast, rewritten or redistributed.
Thomson Reuters is solely responsible for this content.