Ex-OpenAI workers ask California and Delaware AGs to block for-profit
conversion of ChatGPT maker
April 23, 2025 | By MATT O'BRIEN
Former employees of OpenAI are asking the top law enforcement officers
in California and Delaware to stop the company from shifting control of
its artificial intelligence technology from a nonprofit charity to a
for-profit business.
They’re concerned about what happens if the ChatGPT maker fulfills its
ambition to build AI that outperforms humans while no longer being
accountable to its public mission to safeguard that technology from
causing grievous harms.
“Ultimately, I’m worried about who owns and controls this technology
once it’s created,” said Page Hedley, a former policy and ethics adviser
at OpenAI, in an interview with The Associated Press.
Backed by three Nobel Prize winners and other advocates and experts,
Hedley and nine other ex-OpenAI workers sent a letter this week to the
two state attorneys general.
The coalition is asking California Attorney General Rob Bonta and
Delaware Attorney General Kathy Jennings, both Democrats, to use their
authority to protect OpenAI's charitable purpose and block its planned
restructuring. OpenAI is incorporated in Delaware and operates out of
San Francisco.
OpenAI said in response that “any changes to our existing structure
would be in service of ensuring the broader public can benefit from AI.”
It said its for-profit will be a public benefit corporation, similar to
other AI labs such as Anthropic and tech billionaire Elon Musk's xAI,
but that OpenAI will also preserve a nonprofit arm.
“This structure will continue to ensure that as the for-profit succeeds
and grows, so too does the nonprofit, enabling us to achieve the
mission,” the company said in a statement.

The letter is the second petition to state officials this month. The
first came from a group of labor leaders and nonprofits focused on
protecting OpenAI's billions of dollars in charitable assets.
Jennings said last fall she would “review any such transaction to ensure
that the public’s interests are adequately protected.” Bonta’s office
sought more information from OpenAI late last year but has said it can’t
comment, even to confirm or deny whether it is investigating.
OpenAI's co-founders, including current CEO Sam Altman and Musk,
originally started it as a nonprofit research laboratory on a mission to
safely build what's known as artificial general intelligence, or AGI,
for humanity's benefit. Nearly a decade later, OpenAI has reported its
market value as $300 billion and counts 400 million weekly users of
ChatGPT, its flagship product.
OpenAI already has a for-profit subsidiary but faces a number of
challenges in converting its core governance structure. One is a lawsuit
from Musk, who accuses the company and Altman of betraying the founding
principles that led the Tesla CEO to invest in the charity.

[Photo: The OpenAI logo appears on a mobile phone in front of a screen
showing part of the company website in this photo taken on Nov. 21,
2023, in New York. (AP Photo/Peter Morgan, File)]
While some of the signatories of
this week's letter support Musk's lawsuit, Hedley said others are
“understandably cynical” because Musk also runs his own rival AI
company.
The signatories include two Nobel-winning economists, Oliver Hart
and Joseph Stiglitz, as well as AI pioneers and computer scientists
Geoffrey Hinton, who won last year's Nobel Prize in physics, and
Stuart Russell.
“I like OpenAI’s mission to ‘ensure that artificial general
intelligence benefits all of humanity,’ and I would like them to
execute that mission instead of enriching their investors,” Hinton
said in a statement Wednesday. “I’m happy there is an effort to hold
OpenAI to its mission that does not involve Elon Musk.”
Conflicts over OpenAI's purpose have long simmered at the San
Francisco institute, contributing to Musk quitting in 2018, Altman's
short-lived ouster in 2023 and other high-profile departures.
Hedley, a lawyer by training, worked for OpenAI in 2017 and 2018, a
time when the nonprofit was still navigating the best ways to
steward the technology it wanted to build. As recently as 2023,
Altman said advanced AI held promise but also warned of
extraordinary risks, from drastic accidents to societal disruptions.
In recent years, however, Hedley said he watched with concern as
OpenAI, buoyed by the success of ChatGPT, was increasingly cutting
corners on safety testing and rushing out new products to get ahead
of business competitors.
“The costs of those decisions will continue to go up as the
technology becomes more powerful,” he said. “I think that in the new
structure that OpenAI wants, the incentives to rush to make those
decisions will go up and there will no longer be anybody really who
can tell them not to, tell them this is not OK.”
Software engineer Anish Tondwalkar, who was on OpenAI’s technical
team until last year, said an important assurance in OpenAI’s
nonprofit charter is a “stop-and-assist clause” that directs OpenAI
to stand down and help if another organization is nearing the
achievement of better-than-human AI.
“If OpenAI is allowed to become a for-profit, these safeguards, and
OpenAI’s duty to the public can vanish overnight,” Tondwalkar said
in a statement Wednesday.
Another former worker who signed the letter put it more bluntly.
“OpenAI may one day build technology that could get us all killed,”
said Nisan Stiennon, an AI engineer who worked at OpenAI from 2018
to 2020. “It is to OpenAI’s credit that it’s controlled by a
nonprofit with a duty to humanity. This duty precludes giving up
that control.”
All contents © copyright 2025 Associated Press. All rights reserved.