U.S. commission cites 'moral imperative' to explore AI weapons
January 27, 2021
By Jeffrey Dastin and Paresh Dave
(Reuters) - The United States should not
agree to ban the use or development of autonomous weapons powered by
artificial intelligence (AI) software, a government-appointed panel said
in a draft report for Congress.
The panel, led by former Google Chief Executive Eric Schmidt, on Tuesday
concluded two days of public discussion about how the world's biggest
military power should consider AI for national security and
technological advancement.
Its vice chairman, Robert Work, a former deputy secretary of defense,
said autonomous weapons are expected to make fewer mistakes than humans
do in battle, leading to fewer casualties and fewer skirmishes caused by
target misidentification.
"It is a moral imperative to at least pursue this hypothesis," he said.
The discussion waded into a controversial frontier of human rights and
warfare. For about eight years, a coalition of non-governmental
organizations has pushed for a treaty banning "killer robots," saying
human control is necessary to judge attacks' proportionality and assign
blame for war crimes. Thirty countries including Brazil and Pakistan
want a ban, according to the coalition's website, and a United Nations
body has held meetings on the systems since at least 2014.
While autonomous weapon capabilities are decades old, concern has
mounted with the development of AI to power such systems, along with
research finding biases in AI and examples of the software's abuse.
The U.S. panel, called the National Security Commission on Artificial
Intelligence, in meetings this week acknowledged the risks of autonomous
weapons. A member from Microsoft Corp for instance warned of pressure to
build machines that react quickly, which could escalate conflicts.
Activists from the Campaign to Stop Killer Robots, a coalition of
non-governmental organisations opposing lethal autonomous weapons or
so-called 'killer robots', stage a protest at Brandenburg Gate in
Berlin, Germany, March 21, 2019. REUTERS/Annegret Hilse
The panel only wants humans to make decisions on launching nuclear
warheads.
Still, the panel prefers anti-proliferation work to a treaty banning
the systems, which it said would be against U.S. interests and
difficult to enforce.
Mary Wareham, coordinator of the eight-year Campaign to Stop Killer
Robots, said the commission's "focus on the need to compete with
similar investments made by China and Russia ... only serves to
encourage arms races."
Beyond AI-powered weapons, the panel's lengthy report recommended
use of AI by intelligence agencies to streamline data gathering and
review; $32 billion in annual federal funding for AI research; and
new bodies including a digital corps modeled after the army's
Medical Corps and a technology competitiveness council chaired by
the U.S. vice president.
The commission is due to submit its final report to Congress in
March, but the recommendations are not binding.
(Reporting By Jeffrey Dastin in San Francisco and Paresh Dave in
Oakland; Editing by Cynthia Osterman)
© 2021 Thomson Reuters. All rights reserved. This material may not be published,
broadcast, rewritten or redistributed.
Thomson Reuters is solely responsible for this content.