AI summit a start but global agreement a distant hope
November 06, 2023
By Martin Coulter and Paul Sandle
LONDON (Reuters) - British Prime Minister Rishi Sunak championed a series
of landmark agreements after hosting the first artificial intelligence
(AI) safety summit but a global plan for overseeing the technology
remains a long way off.
Over two days of talks between world leaders, business executives and
researchers, tech CEOs such as Elon Musk and OpenAI's Sam Altman rubbed
shoulders with the likes of U.S. Vice President Kamala Harris and
European Commission chief Ursula von der Leyen to discuss the future
regulation of AI.
Leaders from 28 nations – including China – signed the Bletchley
Declaration, a joint statement acknowledging the technology's risks; the
U.S. and Britain both announced plans to launch their own AI safety
institutes; and two more summits were announced to take place in South
Korea and France next year.
But while some consensus was reached on the need to regulate AI,
disagreements remain over exactly how that should happen – and who will
lead such efforts.
Risks around rapidly developing AI have been an increasingly high
priority for policymakers since Microsoft-backed OpenAI released
ChatGPT to the public last year.
The chatbot’s unprecedented ability to respond to prompts with
human-like fluency has led some experts to call for a pause in the
development of such systems, warning they could gain autonomy and
threaten humanity.
Sunak talked of being "privileged and excited" to host Tesla founder
Musk, but European lawmakers warned of too much technology and data
being held by a small number of companies in one country, the United
States.
"Having just one single country with all of the technologies, all of the
private companies, all the devices, all the skills, will be a failure
for all of us," French Minister of the Economy and Finance Bruno Le
Maire told reporters.
The UK has also diverged from the EU by proposing a light-touch approach
to AI regulation, in contrast to Europe's AI Act, which is close to
being finalized and will bind developers of what are deemed "high risk"
applications to stricter controls.
"I came here to sell our AI Act," said Vera Jourova, Vice President of
the European Commission.
Jourova said that, while she did not expect other countries to copy the
bloc's laws wholesale, some agreement on global rules was required.
"If the democratic world will not be rule-makers, and we become
rule-takers, the battle will be lost," she said.
While projecting an image of unity, attendees said the three main power
blocs in attendance – the U.S., the EU, and China – tried to assert
their dominance.
British Prime Minister Rishi Sunak attends an in-conversation event
with Tesla and SpaceX's CEO Elon Musk in London, Britain, Thursday,
Nov. 2, 2023. Kirsty Wigglesworth/Pool via REUTERS/File Photo
Some suggested Harris had upstaged Sunak when the U.S. government
announced its own AI safety institute – just as Britain had a week
earlier – and she delivered a speech in London highlighting the
technology’s short-term risks, in contrast to the summit’s focus on
existential threats.
"It was fascinating that just as we announced our AI safety
institute, the Americans announced theirs," said attendee Nigel Toon,
CEO of British AI firm Graphcore.
China's presence at the summit and its decision to sign off on the
"Bletchley Declaration" was trumpeted as a success by British
officials.
China’s vice minister of science and technology said the country was
willing to work with all sides on AI governance.
Signaling tension between China and the West, however, Wu Zhaohui
told delegates: "Countries regardless of their size and scale have
equal rights to develop and use AI."
The Chinese minister participated in the ministerial roundtable on
Thursday, his ministry said, but did not take part in the public
events on the second day.
A recurring theme of the behind-closed-door discussions, highlighted
by a number of attendees, was the potential risks of open-source AI,
which gives members of the public free access to experiment with the
code behind the technology.
Some experts have warned that open-source models could be used by
terrorists to create chemical weapons, or even create a
super-intelligence beyond human control.
Speaking with Sunak at a live event in London on Thursday, Musk
said: "It will get to the point where you’ve got open-source AI that
will start to approach human-level intelligence, or perhaps exceed
it. I don’t know quite what to do about it."
Yoshua Bengio, an AI pioneer appointed to lead a "state of the
science" report commissioned as part of the Bletchley Declaration,
told Reuters the risks of open-source AI were a high priority.
He said: "It could be put in the hands of bad actors, and it could
be modified for malicious purposes. You can't have the open-source
release of these powerful systems, and still protect the public with
the right guardrails."
(Reporting by Martin Coulter and Paul Sandle; Additional Editing by
Matt Scuffham and Louise Heavens)
© 2023 Thomson Reuters. All rights reserved. This material may not be
published, broadcast, rewritten or redistributed. Thomson Reuters is
solely responsible for this content.