Regulators struggle to keep up with the fast-moving and complicated
landscape of AI therapy apps
[September 29, 2025]
By DEVI SHASTRI
In the absence of stronger federal regulation, some states have begun
regulating apps that offer AI “therapy” as more people turn to
artificial intelligence for mental health advice.
But the laws, all passed this year, don't fully address the
fast-changing landscape of AI software development. And app developers,
policymakers and mental health advocates say the resulting patchwork of
state laws isn't enough to protect users or hold the creators of harmful
technology accountable.
“The reality is millions of people are using these tools and they’re not
going back,” said Karin Andrea Stephan, CEO and co-founder of the mental
health chatbot app Earkick.
___
EDITOR’S NOTE — This story includes discussion of suicide. If you or
someone you know needs help, the national suicide and crisis lifeline in
the U.S. is available by calling or texting 988. There is also an online
chat at 988lifeline.org.
___
The state laws take different approaches. Illinois and Nevada have
banned the use of AI to treat mental health. Utah placed certain limits
on therapy chatbots, including requiring them to protect users’ health
information and to clearly disclose that the chatbot isn’t human.
Pennsylvania, New Jersey and California are also considering ways to
regulate AI therapy.

The impact on users varies. Some apps have blocked access in states with
bans. Others say they're making no changes as they wait for more legal
clarity.
And many of the laws don't cover generic chatbots like ChatGPT, which
are not explicitly marketed for therapy but are used by an untold number
of people for it. Those bots have attracted lawsuits in horrific
instances where users lost their grip on reality or took their own lives
after interacting with them.
Vaile Wright, who oversees health care innovation at the American
Psychological Association, said the apps could fill a need,
noting a nationwide shortage of mental health providers, high costs for
care and uneven access for insured patients.
Mental health chatbots that are rooted in science, created with expert
input and monitored by humans could change the landscape, Wright said.
“This could be something that helps people before they get to crisis,”
she said. “That’s not what’s on the commercial market currently.”
That's why federal regulation and oversight are needed, she said.
Earlier this month, the Federal Trade Commission announced it was
opening inquiries into seven AI chatbot companies — including the parent
companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot
on X), Character.AI and Snapchat — on how they "measure, test and
monitor potentially negative impacts of this technology on children and
teens.” And the Food and Drug Administration is convening an advisory
committee Nov. 6 to review generative AI-enabled mental health devices.
Federal agencies could consider restrictions on how chatbots are
marketed, limit addictive practices, require disclosures to users that
they are not medical providers, require companies to track and report
suicidal thoughts, and offer legal protections for people who report bad
practices by companies, Wright said.
Not all apps have blocked access
From "companion apps” to “AI therapists” to “mental wellness” apps, AI’s
use in mental health care is varied and hard to define, let alone write
laws around.

That has led to different regulatory approaches. Some states, for
example, take aim at companion apps that are designed just for
friendship, but don't wade into mental health care. The laws in Illinois
and Nevada ban products that claim to provide mental health treatment
outright, threatening fines up to $10,000 in Illinois and $15,000 in
Nevada.
But even a single app can be tough to categorize.
Earkick's Stephan said there is still a lot that is “very muddy” about
Illinois' law, for example, and the company has not limited access
there.
Stephan and her team initially held off calling their chatbot, which
looks like a cartoon panda, a therapist. But when users began using the
word in reviews, they embraced the terminology so the app would show up
in searches.
Last week, the company backed away from therapy and medical terms again.
Earkick’s website had described its chatbot as “Your empathetic AI
counselor, equipped to support your mental health journey,” but now
calls it a “chatbot for self care.”
The "Speaker's gavel" is seen in the House of Representatives at the
Illinois State Capitol Tuesday, March 19, 2013, in Springfield, Ill.
(AP Photo/Seth Perlman, File)
 Still, “we’re not diagnosing,”
Stephan maintained.
Users can set up a “panic button” to call a trusted loved one if
they are in crisis and the chatbot will "nudge” users to seek out a
therapist if their mental health worsens. But it was never designed
to be a suicide prevention app, Stephan said, and police would not
be called if someone told the bot about thoughts of self-harm.
Stephan said she's happy that people are looking at AI with a
critical eye, but worried about states' ability to keep up with
innovation.
"The speed at which everything is evolving is massive,” she said.
Other apps blocked access immediately. When Illinois users download
the AI therapy app Ash, a message urges them to email their
legislators, arguing “misguided legislation” has banned apps like
Ash "while leaving unregulated chatbots it intended to regulate free
to cause harm.”
A spokesperson for Ash did not respond to multiple requests for an
interview.
Mario Treto Jr., secretary of the Illinois Department of Financial
and Professional Regulation, said the goal was ultimately to make
sure licensed therapists were the only ones doing therapy.
“Therapy is more than just word exchanges,” Treto said. "It requires
empathy, it requires clinical judgment, it requires ethical
responsibility, none of which AI can truly replicate right now.”
One chatbot company is trying to fully replicate therapy
In March, a Dartmouth College-based team published the first
known randomized clinical trial of a generative AI chatbot for
mental health treatment.
The goal was to have the chatbot, called Therabot, treat people
diagnosed with anxiety, depression or eating disorders. It was
trained on vignettes and transcripts written by the team to
illustrate an evidence-based response.

The study found users rated Therabot similar to a therapist and had
meaningfully lower symptoms after eight weeks compared with people
who didn't use it. Every interaction was monitored by a human who
intervened if the chatbot’s response was harmful or not
evidence-based.
Nicholas Jacobson, a clinical psychologist whose lab is leading the
research, said the results showed early promise but that larger
studies are needed to demonstrate whether Therabot works for large
numbers of people.
“The space is so dramatically new that I think the field needs to
proceed with much greater caution than is happening right now,” he
said.
Many AI apps are optimized for engagement and are built to support
everything users say, rather than challenging people’s thoughts the
way therapists do. Many walk the line between companionship and
therapy, blurring intimacy boundaries that therapists ethically would
not cross.
Therabot’s team sought to avoid those issues.
The app is still in testing and not widely available. But Jacobson
worries about what strict bans will mean for developers taking a
careful approach. He noted Illinois had no clear pathway to provide
evidence that an app is safe and effective.
“They want to protect folks, but the traditional system right now is
really failing folks,” he said. “So, trying to stick with the status
quo is really not the thing to do.”
Regulators and advocates of the laws say they are open to changes.
But today's chatbots are not a solution to the mental health
provider shortage, said Kyle Hillman, who lobbied for the bills in
Illinois and Nevada through his affiliation with the National
Association of Social Workers.
“Not everybody who's feeling sad needs a therapist,” he said. But
for people with real mental health issues or suicidal thoughts,
"telling them, ‘I know that there’s a workforce shortage but here's
a bot' — that is such a privileged position.”
All contents © copyright 2025 Associated Press. All rights reserved