OpenAI's AI Safety Exodus: Nearly Half of AGI Researchers Depart

In a startling revelation, Daniel Kokotajlo, a former governance researcher at OpenAI, has disclosed that nearly half of the company's staff focused on long-term risks of advanced AI have departed in recent months. This exodus raises questions about the company's commitment to its original mission of developing artificial general intelligence (AGI) safely and ethically.

The Departure Wave

According to Kokotajlo, who left OpenAI in April 2024, the company has seen a steady stream of resignations throughout 2024. Of the approximately 30 staff members originally working on AGI safety issues, only about 16 remain. Notable departures include Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and co-founder John Schulman.

These exits follow the high-profile May 2024 resignations of chief scientist Ilya Sutskever and researcher Jan Leike, who co-led OpenAI's "superalignment" team. Leike cited concerns that safety had "taken a backseat to shiny products" at the San Francisco-based AI company.

Shifting Priorities

Kokotajlo attributes the departures to OpenAI's increasing focus on commercial products and a diminishing emphasis on research aimed at ensuring AGI's safe development. He suggests that the company may be "fairly close" to developing AGI but is not adequately prepared "to handle all that entails."

The former researcher also points to a "chilling effect" within the company on publishing research about AGI risks and an "increasing amount of influence by the communications and lobbying wings of OpenAI" over what is deemed appropriate to publish.

Implications for AI Safety

The mass exodus of AGI safety researchers from OpenAI is significant because it may indicate a shift in the company's approach to managing potential risks associated with advanced AI systems. As one of the leading organizations in AI development, OpenAI's actions and priorities can have far-reaching implications for the entire field.

Kokotajlo expresses disappointment in OpenAI's recent stance against California's SB 1047, a bill aimed at regulating the development of advanced AI models. He views this as a "betrayal" of the company's earlier plans to evaluate long-term AGI risks and use those findings to inform legislation and regulation.

Looking Ahead

While Kokotajlo doesn't regret his time at OpenAI, he cautions against groupthink in the race to develop AGI. He warns that companies may naturally conclude that winning the AGI race is good for humanity, as this aligns with their incentives.

As the AI industry continues to evolve rapidly, the departure of safety-focused researchers from a leading company like OpenAI raises important questions about the balance between innovation and responsible development in the pursuit of artificial general intelligence.