OpenAI Researcher Quits Amid Growing AGI Safety Concerns

In a significant development for the AI research community, OpenAI policy researcher Rosie Campbell announced her resignation from the company, citing the dissolution of its AGI Readiness team as a primary reason. Campbell’s departure marks the latest in a series of resignations from OpenAI by researchers concerned about the organization’s evolving priorities and approach to artificial general intelligence (AGI) safety.

A Shift in Priorities at OpenAI

The AGI Readiness team, once tasked with assessing global readiness to safely manage AGI—a theoretical form of AI capable of surpassing human intelligence—was disbanded following the resignation of its leader, Miles Brundage, in October. Members of the team were reassigned to other roles within the company, signaling a shift in OpenAI’s operational focus.

Campbell expressed her concerns in a Substack post, stating, “I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI, and after Miles’s departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally.”

OpenAI’s Changing Mission

OpenAI’s transformation from a nonprofit into a for-profit entity has sparked debate about its long-term commitment to safety. The organization initially launched as a nonprofit with the mission of developing AGI in a manner that benefits humanity. However, in recent years, it has restructured to secure the significant funding required to achieve its ambitious goals.

CEO Sam Altman defended the shift, explaining, “We just needed vastly more capital than we thought we could attract as a nonprofit.” He also emphasized that OpenAI alone cannot dictate the standards for AI safety, suggesting that these decisions should involve broader societal input.

Critics, however, worry that the pursuit of profit and accelerated product development could undermine the company’s original mission. Since September, OpenAI has expanded its sales staff significantly to target business clients and capitalize on the growing demand for AI solutions.

Resignations Reflect Growing Unease

Rosie Campbell is not alone in her concerns. High-profile researchers, including co-founder Ilya Sutskever, Jan Leike, and John Schulman, have also departed OpenAI, expressing doubts about the organization’s dedication to safety protocols.

Campbell noted that while OpenAI continues to lead in critical safety research, she has been “unsettled by some of the shifts” in its direction over the past year. During her tenure, she worked on issues such as evaluating dangerous AI capabilities and governing agentic systems—topics she believes remain essential as the world edges closer to transformative AI technologies.

The Path Forward

Campbell’s departure and the broader exodus of safety researchers highlight a pivotal moment for OpenAI and the AI industry. As OpenAI accelerates its efforts to commercialize AI technologies, the question remains whether safety and ethical considerations can keep pace.

The challenge of ensuring AGI development aligns with humanity’s best interests will likely require collaboration across organizations, governments, and independent researchers. For Campbell, this collaborative approach represents a new opportunity outside OpenAI to continue advocating for responsible AGI development.