OpenAI's former chief scientist has raised $1 billion for a new company devoted to developing safe AI.
Ilya Sutskever, an AI researcher, is the most recent former employee of OpenAI to launch a company devoted to AI safety.
The former chief scientist of OpenAI has raised $1 billion for his new company, which aims to create safe AI systems.
Sutskever co-founded Safe Superintelligence (SSI) in June, a month after leaving OpenAI. His departure followed an unsuccessful attempt in November 2023, which he initially supported, to remove CEO Sam Altman.
The investment reportedly values SSI at $5 billion. According to SSI, its investors include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, as well as NFDG, an investment partnership co-run by SSI's CEO and co-founder, Daniel Gross.
SSI currently employs ten people, split between Palo Alto, California, and Tel Aviv, Israel.
Reuters reports that SSI intends to use the funding to acquire the computing power it needs and to hire elite AI researchers and engineers, the two biggest expenses in building AI.
Sutskever initially supported the attempt to oust Altman, which reportedly centered on the tension between shipping practical AI products and maintaining AI safety.
But he quickly reversed course amid the turmoil that engulfed the AI giant. "I sincerely regret my participation in the board's actions," Sutskever wrote in a statement posted to X. "I never intended to harm OpenAI."
Sutskever stated in May that he was “confident that OpenAI will build AGI that is both safe and beneficial” when he announced his departure from the company.
A few weeks later, in mid-June, however, he announced the launch of safety-focused SSI alongside Daniel Levy, another OpenAI alumnus, and Gross, who had previously worked on AI at Apple.
Sutskever previously collaborated with "father of AI" Geoffrey Hinton, who left Google in May 2023 so he could speak more candidly about the dangers of super-intelligent AI and artificial general intelligence (AGI).
SSI is not the first safety-focused business to come out of OpenAI. Dario Amodei and his sister Daniela Amodei, reportedly concerned about the company's direction, departed in 2021 and founded Anthropic to develop safer AI.
Safe Superintelligence’s plans
SSI publicized its launch via a single web page of plain text on a white background.
“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” the company said at the time.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the statement says. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
Gross said in an interview with Reuters not to expect a product for years, a contrast with companies like OpenAI that push out marketable versions of AI to fund wider work on AGI.
“It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” Gross said.