The co-founder of OpenAI resigned to join a rival founded by other defectors.
OpenAI has lost another AI safety scientist.
August 17, 2024
OpenAI has lost one of its co-founders.
John Schulman, an AI safety researcher, left the company that created ChatGPT and joined Anthropic, a competitor AI startup founded by fellow OpenAI defectors.
Schulman revealed that he had “made the difficult decision to leave” the AI startup he helped found on Monday in an X-formerly-Twitter post. He gave his reasons for wanting to “deepen [his] focus on AI alignment” and go back to a more hands-on position.
In the AI sector, “alignment” refers to making sure that an AI model’s goals, morality, and logic are consistent with those of humans. It is, in essence, the skill of ensuring that AI does not rebel against humanity or cause us harm in any other way.
A “Superalignment” safety team, formerly based at OpenAI, was tasked with maintaining alignment even if AI becomes intelligent enough to outsmart humans.
However, in May, the unit’s leaders, alignment researcher Jan Leike and former OpenAI chief scientist Ilya Sutskever, abruptly left, rocking the company.
Schulman, who was already spearheading alignment efforts for products like ChatGPT, was elevated to oversee safety at OpenAI when the Superalignment team swiftly disbanded.
And now he’s out too, a few months later. However, he claims in his X post that OpenAI is not to blame for it.
“To be clear, I’m not leaving due to lack of support for alignment research at OpenAI,” the now-former OpenAI leader wrote. “On the contrary, company leaders have been very committed to investing in this area.”
“My decision is a personal one,” he stated, “based on how I want to focus my efforts in the next phase of my career.”
Following up on Schulman’s farewell letter, Altman publicly thanked the cofounder for his work, describing him as a “brilliant researcher, a deep thinker about product and society,” and a “great friend.”
It’s crucial to remember that, despite Schulman’s unwavering support for OpenAI‘s alignment efforts, the company’s safety protocols have drawn criticism.
This criticism is especially relevant coming from Anthropic, Schulman’s new employer, which was established by former OpenAI scientists who believed that OpenAI wasn’t focusing enough on safety initiatives.
Additionally, in June, a group of whistleblowers made up primarily of OpenAI employees published an open letter advocating for a “Right to Warn” — the right to alert AI firm executives and the general public about potential risks associated with AI.
The letter’s signatories contend that burdensome non-disclosure agreements and inadequate whistleblower channels have impeded efforts to mitigate AI risk.
The letter was released in the wake of a Vox report claiming that OpenAI required departing workers to sign extraordinarily stringent nondisclosure agreements (NDAs), with those who refused facing the possibility of having their vested equity clawed back — papers that Altman and other OpenAI executives acknowledged having signed off on.
Again, though, Schulman maintains that the safety practices of the Microsoft-backed AI company are top-notch. It’s not OpenAI; it’s him.
“I’ll be rooting for you all even when I work somewhere else,” the scientist concluded his parting words.