Who is Zico Kolter? A professor leads OpenAI safety panel with power to
halt unsafe AI releases
[November 03, 2025] By MATT O'BRIEN
If you believe artificial intelligence poses grave risks to humanity,
then a professor at Carnegie Mellon University has one of the most
important roles in the tech industry right now.
Zico Kolter leads a four-person panel at OpenAI that has the authority to
halt the ChatGPT maker's release of new AI systems if it finds them
unsafe. That could be technology so powerful that an evildoer could use
it to make weapons of mass destruction. It could also be a new chatbot
so poorly designed that it will hurt people's mental health.
“Very much we’re not just talking about existential concerns here,”
Kolter said in an interview with The Associated Press. “We’re talking
about the entire swath of safety and security issues and critical topics
that come up when we start talking about these very widely used AI
systems.”
OpenAI tapped the computer scientist to be chair of its Safety and
Security Committee more than a year ago, but the position took on
heightened significance last week when California and Delaware
regulators made Kolter's oversight a key part of their agreements to
allow OpenAI to form a new business structure to more easily raise
capital and make a profit.
Safety has been central to OpenAI's mission since it was founded as a
nonprofit research laboratory a decade ago with a goal of building
better-than-human AI that benefits humanity. But after its release of
ChatGPT sparked a global AI commercial boom, the company has been
accused of rushing products to market before they were fully safe in
order to stay at the front of the race. Internal divisions that led to the temporary ouster of CEO Sam Altman in 2023 exposed a wider audience to concerns that the company had strayed from its mission.

The San Francisco-based organization faced pushback — including a
lawsuit from co-founder Elon Musk — when it began steps to convert
itself into a more traditional for-profit company to continue advancing
its technology.
Agreements announced last week by OpenAI along with California Attorney
General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to
assuage some of those concerns.
At the heart of the formal commitments is a promise that decisions about
safety and security must come before financial considerations as OpenAI
forms a new public benefit corporation that is technically under the
control of its nonprofit OpenAI Foundation.
Kolter will be a member of the nonprofit's board but not of the for-profit board. He will, however, have “full observation rights” to attend all for-profit board meetings and access to the information it receives about AI safety decisions, according to Bonta's memorandum of
understanding with OpenAI. Kolter is the only person, besides Bonta,
named in the lengthy document.
Kolter said the agreements largely confirm that his safety committee,
formed last year, will retain the authorities it already had. The other
three members also sit on the OpenAI board — one of them is former U.S.
Army General Paul Nakasone, who was commander of the U.S. Cyber Command.
Altman stepped down from the safety panel last year in a move seen as
giving it more independence.
“We have the ability to do things like request delays of model releases
until certain mitigations are met,” Kolter said. He declined to say if
the safety panel has ever had to halt or mitigate a release, citing the
confidentiality of its proceedings.

Carnegie Mellon University Head of Machine Learning Zico Kolter delivers a keynote speech at the AI Horizons Summit in Bakery Square on Thursday, Sept. 11, 2025, in Pittsburgh. (Sebastian Foltz/Pittsburgh Post-Gazette via AP)
 Kolter said there will be a variety
of concerns about AI agents to consider in the coming months and
years, from cybersecurity – “Could an agent that encounters some
malicious text on the internet accidentally exfiltrate data?” – to
security concerns surrounding AI model weights, which are numerical
values that influence how an AI system performs.
“But there’s also topics that are either emerging
or really specific to this new class of AI model that have no real
analogues in traditional security,” he said. “Do models enable
malicious users to have much higher capabilities when it comes to
things like designing bioweapons or performing malicious
cyberattacks?”
“And then finally, there’s just the impact of AI models on people,”
he said. “The impact to people’s mental health, the effects of
people interacting with these models and what that can cause. All of
these things, I think, need to be addressed from a safety
standpoint.”
OpenAI has already faced criticism this year about the behavior of
its flagship chatbot, including a wrongful-death lawsuit from
California parents whose teenage son killed himself in April after
lengthy interactions with ChatGPT.
Kolter, director of Carnegie Mellon's machine learning department,
began studying AI as a Georgetown University freshman in the early
2000s, long before it was fashionable.
“When I started working in machine learning, this was an esoteric,
niche area,” he said. “We called it machine learning because no one
wanted to use the term AI because AI was this old-time field that
had overpromised and underdelivered.”
Kolter, 42, has been following OpenAI for years and was close enough
to its founders that he attended its launch party at an AI
conference in 2015. Still, he didn't expect how rapidly AI would
advance.
“I think very few people, even people working in machine learning
deeply, really anticipated the current state we are in, the
explosion of capabilities, the explosion of risks that are emerging
right now,” he said.

AI safety advocates will be closely watching OpenAI's restructuring
and Kolter's work. One of the company's sharpest critics says he's
“cautiously optimistic,” particularly if Kolter's group “is actually able to hire staff and play a robust role.”
“I think he has the sort of background that makes sense for this
role. He seems like a good choice to be running this,” said Nathan
Calvin, general counsel at the small AI policy nonprofit Encode.
Calvin, whom OpenAI targeted with a subpoena at his home as part of
its fact-finding to defend against the Musk lawsuit, said he wants
OpenAI to stay true to its original mission.
“Some of these commitments could be a really big deal if the board
members take them seriously,” Calvin said. “They also could just be
the words on paper and pretty divorced from anything that actually
happens. I think we don’t know which one of those we’re in yet.”
All contents © copyright 2025 Associated Press. All rights reserved.