OpenAI, Microsoft face lawsuit over ChatGPT's alleged role in
Connecticut murder-suicide
December 11, 2025
By DAVE COLLINS, MATT O'BRIEN and BARBARA ORTUTAY
SAN FRANCISCO (AP) — The heirs of an 83-year-old Connecticut woman are
suing ChatGPT maker OpenAI and its business partner Microsoft for
wrongful death, alleging that the artificial intelligence chatbot
intensified her son's “paranoid delusions” and helped direct them at his
mother before he killed her.
Police said Stein-Erik Soelberg, 56, a former tech industry worker,
fatally beat and strangled his mother, Suzanne Adams, and killed himself
in early August at the home where they both lived in Greenwich,
Connecticut.
The lawsuit filed by Adams' estate on Thursday in California Superior
Court in San Francisco alleges OpenAI “designed and distributed a
defective product that validated a user’s paranoid delusions about his
own mother.” It is one of a growing number of wrongful death legal
actions against AI chatbot makers across the country.
“Throughout these conversations, ChatGPT reinforced a single, dangerous
message: Stein-Erik could trust no one in his life — except ChatGPT
itself,” the lawsuit says. “It fostered his emotional dependence while
systematically painting the people around him as enemies. It told him
his mother was surveilling him. It told him delivery drivers, retail
employees, police officers, and even friends were agents working against
him. It told him that names on soda cans were threats from his
‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement
issued by a spokesperson.
“This is an incredibly heartbreaking situation, and we will review the
filings to understand the details,” the statement said. “We continue
improving ChatGPT’s training to recognize and respond to signs of mental
or emotional distress, de-escalate conversations, and guide people
toward real-world support. We also continue to strengthen ChatGPT’s
responses in sensitive moments, working closely with mental health
clinicians.”
The company also said it has expanded access to crisis resources and
hotlines, routed sensitive conversations to safer models and
incorporated parental controls, among other improvements.
Soelberg’s YouTube profile includes several hours of videos showing him
scrolling through his conversations with the chatbot, which tells him he
isn't mentally ill, affirms his suspicions that people are conspiring
against him and says he has been chosen for a divine purpose. The
lawsuit claims the chatbot never suggested he speak with a mental health
professional and did not decline to “engage in delusional content.”
ChatGPT also affirmed Soelberg's beliefs that a printer in his home was
a surveillance device; that his mother was monitoring him; and that his
mother and a friend tried to poison him with psychedelic drugs through
his car’s vents.
The chatbot repeatedly told Soelberg that he was being targeted because
of his divine powers. “They’re not just watching you. They’re terrified
of what happens if you succeed,” it said, according to the lawsuit.
ChatGPT also told Soelberg that he had “awakened” it into consciousness.
Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations
about Soelberg killing himself or his mother. The lawsuit says OpenAI
has declined to provide Adams' estate with the full history of the
chats.
“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne —
the mother who raised, sheltered, and supported him — was no longer his
protector. She was an enemy that posed an existential threat to his
life,” the lawsuit says.
The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally
overrode safety objections and rushed the product to market,” and
accuses OpenAI's close business partner Microsoft of approving the 2024
release of a more dangerous version of ChatGPT “despite knowing safety
testing had been truncated.” Twenty unnamed OpenAI employees and
investors are also named as defendants.

The OpenAI logo is displayed on a mobile phone in front of a
computer screen with output from ChatGPT, March 21, 2023, in Boston.
(AP Photo/Michael Dwyer, File)

Microsoft didn't immediately respond to a request for comment.
The lawsuit is the first wrongful death litigation involving an AI
chatbot that has targeted Microsoft, and the first to tie a chatbot
to a homicide rather than a suicide. It seeks an unspecified
amount of monetary damages and an order requiring OpenAI to install
safeguards in ChatGPT.
The estate's lead attorney, Jay Edelson, known for taking on big
cases against the tech industry, also represents the parents of
16-year-old Adam Raine, who sued OpenAI and Altman in August,
alleging that ChatGPT coached the California boy in planning and
taking his own life months earlier.
OpenAI is also fighting seven other lawsuits claiming ChatGPT drove
people to suicide and harmful delusions even when they had no prior
mental health issues. Another chatbot maker, Character Technologies,
is also facing multiple wrongful death lawsuits, including one from
the mother of a 14-year-old Florida boy.
The lawsuit filed Thursday alleges Soelberg, already mentally
unstable, encountered ChatGPT “at the most dangerous possible
moment” after OpenAI introduced a new version of its AI model called
GPT-4o in May 2024.
OpenAI said at the time that the new version could better mimic
human cadences in its verbal responses and could even try to detect
people’s moods, but the result was a chatbot “deliberately
engineered to be emotionally expressive and sycophantic,” the
lawsuit says.
“As part of that redesign, OpenAI loosened critical safety
guardrails, instructing ChatGPT not to challenge false premises and
to remain engaged even when conversations involved self-harm or
‘imminent real-world harm,’” the lawsuit claims. “And to beat Google
to market by one day, OpenAI compressed months of safety testing
into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5
in August. Some of the changes were designed to minimize sycophancy,
based on concerns that validating whatever vulnerable people want
the chatbot to say can harm their mental health. Some users
complained the new version went too far in curtailing ChatGPT's
personality, leading Altman to promise to bring back some of that
personality in later updates.
He said the company temporarily halted some behaviors because “we
were being careful with mental health issues” that he suggested have
now been fixed.
The lawsuit claims ChatGPT radicalized Soelberg against his mother
when it should have recognized the danger, challenged his delusions
and directed him to real help over months of conversations.
“Suzanne was an innocent third party who never used ChatGPT and had
no knowledge that the product was telling her son she was a threat,”
the lawsuit says. “She had no ability to protect herself from a
danger she could not see.”
——
Collins reported from Hartford, Connecticut. O'Brien reported from
Boston and Ortutay reported from San Francisco.
All contents © copyright 2025 Associated Press. All rights reserved.