OpenAI Superalignment Team Members Resign Over Resource and Priority Disputes

OpenAI’s Superalignment team, tasked with developing methods to control “superintelligent” AI systems, was allotted 20% of the company’s compute resources. However, requests for even a fraction of that were often denied, hindering their efforts, according to a team member.

This issue, among others, led to several resignations from the team this week, including that of co-lead Jan Leike, a former DeepMind researcher who was involved in the development of ChatGPT, GPT-4, and InstructGPT.

Leike publicly explained his resignation on Friday. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” he posted on X. He emphasized that more focus should be on preparing for future AI models, addressing security, monitoring, safety, alignment, societal impact, and related issues.

OpenAI did not immediately comment on the resources promised and allocated to the Superalignment team.

Formed last July and led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned this week, the Superalignment team aimed to solve the core technical challenges of controlling superintelligent AI within four years. It included scientists and engineers from OpenAI’s previous alignment division, as well as researchers from other divisions across the company, and it contributed to the safety of both in-house and external models, sharing its work with the broader AI industry through research grants.

Despite publishing safety research and funding millions in grants, the Superalignment team struggled to secure critical investments as product launches dominated OpenAI leadership’s focus.

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike said. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

An added distraction was Sutskever’s conflict with OpenAI CEO Sam Altman. Last year, Sutskever and the old board moved to abruptly fire Altman over concerns regarding his transparency with the board. Under investor and employee pressure, Altman was reinstated, much of the board resigned, and Sutskever reportedly never returned to work.

Sutskever was vital to the Superalignment team, contributing research and connecting with other divisions within OpenAI. He also underlined the importance of the team’s work to key decision-makers.

Following Leike’s departure, Altman acknowledged on X that there is more work to be done and said the company is committed to doing it. Co-founder Greg Brockman elaborated, emphasizing the importance of a tight feedback loop, rigorous testing, and a balance between safety and capabilities.

John Schulman, another OpenAI co-founder, will now oversee the type of work the Superalignment team was doing, though the team will no longer exist as a dedicated unit. Instead, a loosely associated group of researchers will be integrated into various divisions. An OpenAI spokesperson described it as “integrating [the team] more deeply.”

The concern is that, as a result, AI development at OpenAI won’t be as safety-focused as it could have been.

Kyle Wiggers
Kyle Wiggers is a senior reporter with a special interest in AI. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself.
