In July of last year, OpenAI announced a new research team to prepare for the emergence of highly intelligent artificial intelligence that could potentially outsmart its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's founders, was named colead of the team, which was promised 20 percent of the company's computing power.
However, OpenAI has now confirmed that the "superalignment team" no longer exists. The confirmation follows the departure of several of the team's researchers, the news that Sutskever is leaving the company, and the resignation of the team's other colead. The team's work will be absorbed into OpenAI's other research efforts.
Sutskever's departure drew attention because of his role in founding OpenAI and in the research that led to ChatGPT. He was also among the board members who fired CEO Sam Altman in November, a decision reversed after a staff revolt and negotiations in which Sutskever and other directors left the board.
Shortly after Sutskever's departure was announced, Jan Leike, a former DeepMind researcher who was the superalignment team's other colead, announced his resignation. Neither Sutskever nor Leike has publicly explained his reasons for leaving OpenAI, though Sutskever did express confidence in the company's current path in a post praising its progress.
The disbanding of OpenAI's superalignment team adds to the internal upheaval that has followed November's governance crisis. Two of the team's members were dismissed for leaking company information, and another member left in February. Additional researchers working on AI policy and governance have also left the company in recent months.
OpenAI has declined to comment on the departures or on the future of its work on long-term AI risks. Responsibility for researching the risks associated with more powerful AI models now falls to John Schulman, whose team specializes in refining AI models after training.