- Rosie Campbell, a safety expert, said she is leaving OpenAI.
- Campbell said one reason she quit was OpenAI's decision to disband a safety-focused team.
- She is the latest safety-focused researcher to leave the company this year.
Another safety expert is leaving OpenAI.
Rosie Campbell, a policy researcher at OpenAI, wrote on Substack on Saturday that the past week had been her last at the company.
She said her decision followed the October departure of Miles Brundage, the senior policy adviser who led the AGI Readiness team. When he left, the team was disbanded and its members were reassigned to other parts of the company.
The AGI Readiness team advised the company on how prepared the world is to handle AGI, a hypothetical form of artificial intelligence that could one day match or surpass human intelligence.
Campbell said in her post that she was leaving for much the same reason Brundage did: she wanted more freedom to work on issues that affect the industry as a whole.
“I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI and after Miles’s departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally,” she said.
She also said that OpenAI remains at the cutting edge of research, particularly safety research that matters.
“During my time here I’ve worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I’m so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI.”
She added, however, that she has “been unsettled by some of the shifts” in the company’s direction over the past year.
OpenAI announced in September that it was restructuring its governance to become a for-profit company, almost a decade after it was founded as a nonprofit whose mission was to create artificial general intelligence.
Some former employees criticized the change, saying it abandoned the company’s mission of developing technology that benefits humanity in favor of rushing products to market. Since June, the company has hired about 100 new salespeople to win business and capitalize on what its sales chief called a “paradigm shift” toward AI.
Sam Altman, OpenAI’s CEO, has said the changes will help the company raise the money it needs to pursue its goals, which include building AI that is smarter than humans and benefits everyone.
In a May interview at Harvard Business School, Altman said, “The simple fact was we just needed a lot more money than we thought we could get—not that we thought, we tried—than we could get as a nonprofit.”
More recently, he has said that decisions about how AI is used should not be left to OpenAI alone.
“It should be a question for society,” he said in an interview aired Sunday on Fox News Sunday with Shannon Bream. “It should not be OpenAI to decide on its own how ChatGPT, or how the technology in general, is used or not used.”
A number of well-known researchers have left OpenAI since Altman was unexpectedly and briefly ousted as CEO last year, including cofounder Ilya Sutskever, Jan Leike, and John Schulman, some of whom raised concerns about the company’s commitment to safety.
OpenAI did not immediately respond to Parhlo World’s request for comment.