In
a pair of blog posts due to be published Wednesday, Britain's
National Cyber Security Centre (NCSC) said that experts had not
yet got to grips with the potential security problems tied to
algorithms that can generate human-sounding interactions -
dubbed large language models, or LLMs.
The AI-powered tools are seeing early use as chatbots that some
envision displacing not just internet searches but also customer
service work and sales calls.
The NCSC said that could carry risks, particularly if such
models were plugged into other elements organization's business
processes. Academics and researchers have repeatedly found ways
to subvert chatbots by feeding them rogue commands or fool them
into circumventing their own built-in guardrails.
For example, an AI-powered chatbot deployed by a bank might be
tricked into making an unauthorized transaction if a hacker
structured their query just right.
"Organizations building services that use LLMs need to be
careful, in the same way they would be if they were using a
product or code library that was in beta," the NCSC said in one
its blog posts, referring to experimental software releases.
"They might not let that product be involved in making
transactions on the customer's behalf, and hopefully wouldn't
fully trust it. Similar caution should apply to LLMs."
Authorities across the world are grappling with the rise of LLMs,
such as OpenAI's ChatGPT, which businesses are incorporating
into a wide range of services, including sales and customer
care. The security implications of AI are also still coming into
focus, with authorities in the U.S. and Canada saying they have
seen hackers embrace the technology.
(Reporting by Raphael Satter; Editing by Alex Richardson)
[© 2023 Thomson Reuters. All rights
reserved.] Copyright 2022 Reuters. All rights reserved. This material may not be published,
broadcast, rewritten or redistributed.
Thompson Reuters is solely responsible for this content.
|
|