Ilya Sutskever’s $1 Billion Vision for
Safe Superintelligence
A New Chapter Begins After OpenAI
Ilya Sutskever, former Chief Scientist at OpenAI, is stepping into the spotlight once again, this time with a bold new mission. In June, shortly after leaving OpenAI, he launched a new venture called Safe Superintelligence Inc. (SSI).
The goal? To build AI systems that are both powerful and safe, without compromising one for the other.
The $1 Billion Backing for Safer AI
Backed by $1 billion in funding, SSI has already drawn attention from top investors. Its backers include:
Andreessen Horowitz
Sequoia Capital
DST Global
SV Angel
NFDG (co-managed by SSI CEO Daniel Gross)
The company currently operates with a small but elite team based in Palo Alto, California, and Tel Aviv, Israel.
Sutskever’s new company is reportedly valued at $5 billion.
Why Sutskever Left OpenAI
Sutskever’s exit from OpenAI in May 2024 followed internal conflict. In late 2023, he briefly supported a controversial board move to remove CEO Sam Altman, a decision that sparked widespread turmoil within the organization.
Soon after, Sutskever retracted his stance:
“I sincerely regret my participation in the board’s actions… I never intended to harm OpenAI,” he wrote on X.
Despite his exit, he expressed confidence in OpenAI’s vision, saying he believed it would develop AGI (Artificial General Intelligence) that is “safe and beneficial.”
But just a few weeks later, he and Daniel Levy, also a former OpenAI researcher, teamed up with Daniel Gross, a former Apple AI lead, to co-found SSI.
Mission: One Product, One Goal – Safe Superintelligence
Unlike other AI companies aiming to ship quick products, SSI has a single long-term focus: to build a safe superintelligence.
“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” states their homepage.
Their public-facing site is intentionally minimal: a white page with a plain-text mission statement, signaling seriousness and focus.
Engineering Safety and Power in Parallel
SSI takes a unique approach by treating safety and performance as equal priorities, solving them together through breakthrough research and engineering.
“We approach safety and capabilities in tandem… advancing capabilities as fast as possible while making sure our safety always remains ahead,” the company says.
Unlike OpenAI or Google DeepMind, SSI doesn’t plan to rush out consumer-facing products. Instead, it will spend years in R&D before releasing anything to the public.
Strategic Investors and a Long Road Ahead
Daniel Gross emphasized the importance of choosing investors who share their mission:
“It’s important for us to be surrounded by investors who understand, respect, and support our mission,” he told Reuters.
Gross also clarified that the company isn’t aiming for fast profits or marketable demos. The goal is deep research, not short-term gains.
This deliberate pace sharply contrasts with current trends, where many AI firms race to launch products like chatbots, copilots, and automation tools to fuel revenue.
Part of a Growing Trend in AI Safety
Sutskever is not the first to leave OpenAI to pursue safer AI goals. In 2021, Dario Amodei and Daniela Amodei launched Anthropic, citing concerns about OpenAI’s direction.
Even Geoffrey Hinton, often called the “Godfather of AI,” left Google in 2023 to speak openly about the dangers of superintelligent AI.
SSI joins this rising group of AI ethics-focused startups, responding to growing global concerns about how fast and how far AI should go.
A Future-Shaping Vision, Still in the Making
Although the launch of SSI has made waves, its work is only just beginning. The firm aims to redefine what responsible AI development looks like, focusing on alignment, ethics, and control from day one.
While it may be years before we see a product, the message is clear: SSI isn’t here for the short term. It’s building for a future where superintelligent AI can be powerful and safe, without compromise.