YouTube case at US Supreme Court could shape protections for ChatGPT and
AI
[April 24, 2023]
By Andrew Goudsward
WASHINGTON (Reuters) - When the U.S. Supreme Court decides in the coming
months whether to weaken a powerful shield protecting internet
companies, the ruling also could have implications for rapidly
developing technologies like artificial intelligence chatbot ChatGPT.
The justices are due to rule by the end of June on whether Alphabet Inc's
YouTube can be sued over its video recommendations to users. That case
tests whether a U.S. law that protects technology platforms from legal
responsibility for content posted online by their users also applies
when companies use algorithms to target users with recommendations.
What the court decides about those issues is relevant beyond social
media platforms. Its ruling could influence the emerging debate over
whether companies that develop generative AI chatbots like ChatGPT from
OpenAI, a company in which Microsoft Corp is a major investor, or Bard
from Alphabet's Google should be protected from legal claims like
defamation or privacy violations, according to technology and legal
experts.
That is because algorithms that power generative AI tools like ChatGPT
and its successor GPT-4 operate in a somewhat similar way to those that
suggest videos to YouTube users, the experts added.
"The debate is really about whether the organization of information
available online through recommendation engines is so significant to
shaping the content as to become liable," said Cameron Kerry, a visiting
fellow at the Brookings Institution think tank in Washington and an
expert on AI. "You have the same kinds of issues with respect to a
chatbot."
Representatives for OpenAI and Google did not respond to requests for
comment.
During arguments in February, Supreme Court justices expressed
uncertainty over whether to weaken the protections enshrined in the law,
known as Section 230 of the Communications Decency Act of 1996. While
the case does not directly relate to generative AI, Justice Neil Gorsuch
noted that AI tools that generate "poetry" and "polemics" likely would
not enjoy such legal protections.
The case is only one facet of an emerging conversation about whether
Section 230 immunity should apply to AI models trained on troves of
existing online data but capable of producing original works.
Section 230 protections generally apply to third-party content from
users of a technology platform and not to information a company helped
to develop. Courts have not yet weighed in on whether a response from an
AI chatbot would be covered.
'CONSEQUENCES OF THEIR OWN ACTIONS'
Democratic Senator Ron Wyden, who helped draft that law while in the
House of Representatives, said the liability shield should not apply to
generative AI tools because such tools "create content."
[Photo: The United States Supreme Court is seen in Washington, U.S., March 27, 2023. REUTERS/Evelyn Hockstein]
"Section 230 is about protecting users and sites for hosting and
organizing users' speech. It should not protect companies from the
consequences of their own actions and products," Wyden said in a
statement to Reuters.
The technology industry has pushed to preserve Section 230 despite
bipartisan opposition to the immunity. Industry representatives say tools like ChatGPT
operate like search engines, directing users to existing content in
response to a query.
"AI is not really creating anything. It's taking existing content
and putting it in a different fashion or different format," said
Carl Szabo, vice president and general counsel of NetChoice, a tech
industry trade group.
Szabo said a weakened Section 230 would present an impossible task
for AI developers, threatening to expose them to a flood of
litigation that could stifle innovation.
Some experts forecast that courts may take a middle ground,
examining the context in which the AI model generated a potentially
harmful response.
In cases in which the AI model appears to paraphrase existing
sources, the shield may still apply. But chatbots like ChatGPT have
been known to create fictional responses that appear to have no
connection to information found elsewhere online, a situation
experts said would likely not be protected.
Hany Farid, a technologist and professor at the University of
California, Berkeley, said that it stretches the imagination to
argue that AI developers should be immune from lawsuits over models
that they "programmed, trained and deployed."
"When companies are held responsible in civil litigation for harms
from the products they produce, they produce safer products," Farid
said. "And when they're not held liable, they produce less safe
products."
The case before the Supreme Court was brought by the family of Nohemi
Gonzalez, a 23-year-old college student from California who was fatally
shot in a 2015 rampage by Islamist militants in Paris. The family is
appealing a lower court's dismissal of its lawsuit against YouTube.
The lawsuit accused Google of providing "material support" for
terrorism and claimed that YouTube, through the video-sharing
platform's algorithms, unlawfully recommended videos by the Islamic
State militant group, which claimed responsibility for the Paris
attacks, to certain users.
(Reporting by Andrew Goudsward; Editing by Will Dunham)
[© 2023 Thomson Reuters. All rights reserved.] This material may not be
published, broadcast, rewritten or redistributed. Thomson Reuters is
solely responsible for this content.