In the immortal words of Yogi Berra, “The future ain't what it used to be.” And if Ilya Sutskever has his way, it's going to be a whole lot smarter—hopefully less apocalyptic.
As the former chief scientist at OpenAI, Sutskever is no stranger to pushing the boundaries of artificial intelligence.
After a dramatic exit from OpenAI in May 2024 that shook up Silicon Valley, he's back with a bold new venture: Safe Superintelligence Inc. (SSI).
And here’s the kicker—he’s already secured a cool $1 billion in funding.
Now, you might be wondering, what makes SSI worth a billion-dollar bet? For starters, it’s co-founded by a trio of tech heavyweights.
Besides Sutskever, who’s practically a legend in AI circles, there’s Daniel Gross, who sold his startup to Apple (AAPL) back in 2013, and Daniel Levy, a former OpenAI researcher who knows a thing or two about AI safety.
This dream team has one mission: to develop AI systems that are not just powerful but safe enough to avoid any Terminator-style scenarios.
The company’s mantra is all about balancing safety and capabilities—tackling these twin challenges as if they were two sides of the same Bitcoin. Sutskever puts it this way: SSI aims to advance AI capabilities “as fast as possible while making sure our safety always remains ahead.”
In other words, they want to create AI that’s smarter than us but not smart enough to go rogue.
And let’s talk about that $1 billion. Investors aren’t throwing their cash around for fun—this is serious money from serious players.
Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG (run by Nat Friedman and SSI’s CEO Daniel Gross) are all in. It’s like getting the Avengers of venture capital to back your startup.
But what’s even more impressive is the company’s rumored valuation—$5 billion. That’s not pocket change, even in Silicon Valley.
So, what’s the plan for all this loot? SSI is gearing up to acquire massive computing power and assemble a top-tier team of researchers and engineers.
Right now, they’re operating with a lean crew of 10, split between Palo Alto, California, and Tel Aviv, Israel. But with this kind of funding, you can bet they’re going to grow—and fast.
Now, let’s get into the nitty-gritty of AI safety. It’s the hot topic du jour, with everyone from tech giants to regulators debating how to keep AI from turning into our worst nightmare.
In California, a bill aimed at regulating AI safety (SB 1047) has caused a rift in the industry. OpenAI and Google (GOOGL) are against it, while Anthropic and Elon Musk’s xAI are all for it.
Meanwhile, SSI is keeping its focus on building what they call a “safe superintelligence,” steering clear of the commercial pressures that often lead to shortcuts in safety.
Sutskever, at just 37, is already a big deal in the AI world thanks to his impressive portfolio and having Geoffrey Hinton, known as the “Godfather of AI,” as his mentor.
Alongside Gross and Levy, SSI is poised to become a key player in the race to AGI—Artificial General Intelligence, or as I like to call it, the holy grail of AI.
But here’s the twist: While OpenAI is focused on creating a range of commercial products on the way to AGI, SSI has a singular focus.
They’re all about creating one thing: a superintelligent AI that won’t decide to wipe us out.
But, of course, SSI isn’t alone in this mission. Giants like Alphabet Inc. (GOOGL) and Microsoft Corporation (MSFT) are also deeply entrenched in the AI safety race.
Alphabet, through its subsidiary DeepMind, has been making waves with its groundbreaking research on AI alignment and ethics.
Microsoft, with its Azure AI platform and strategic partnership with OpenAI, has committed to advancing AI technologies with a strong emphasis on fairness, transparency, and accountability.
NVIDIA Corporation (NVDA) is another key player, providing the essential hardware that powers AI advancements.
While its focus is on developing the most powerful GPUs, NVIDIA’s technology is crucial for the safe development and deployment of AI systems.
And let’s not forget IBM (IBM), which has been a pioneer in AI with its Watson platform. IBM’s approach to AI safety revolves around principles of trustworthy AI, emphasizing transparency and explainability.
These companies, like SSI, recognize that AI safety isn’t some buzzword—it’s the cornerstone of responsible innovation.
But let’s be real—AI safety is easier said than done. As AI systems become more powerful, the chances of them going off the rails increase.
Misalignment between AI and human values could lead to outcomes straight out of a sci-fi horror flick. But despite the risks, venture capitalists are still willing to pour money into companies like SSI that promise to push the envelope.
And speaking of pushing the envelope, Sutskever has always been a big believer in the power of scaling—using vast amounts of computing power to supercharge AI models.
This idea was central to the rise of generative AI, like the now-ubiquitous ChatGPT.
Just to be clear though, SSI isn’t copying the OpenAI playbook. Sutskever hints that they’ll be approaching scaling in a “new” way, though he’s keeping the details under wraps for now.
“Everyone just says ‘scaling hypothesis.’ Everyone neglects to ask, what are we scaling?” Sutskever quipped in an interview. It’s a fair question.
Scaling without a clear direction is like flooring the gas pedal without knowing where you’re headed. SSI plans to chart a different course, and if they pull it off, it could be something special.
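For readers who want a feel for what the “scaling hypothesis” actually claims, here’s a minimal sketch. Published scaling-law research (the Chinchilla-style power-law fits) models loss as a power law in parameter count; the constants below are illustrative values in that spirit, not SSI’s numbers or any particular lab’s, and the function is a toy assumption for this article, not real code from any of these companies.

```python
# Toy illustration of the scaling hypothesis: loss falls as a power law
# in model size. Constants are illustrative, chosen only to show the
# curve's shape -- they describe no real model.

def scaling_law_loss(n_params: float, a: float = 406.4,
                     alpha: float = 0.34,
                     irreducible: float = 1.69) -> float:
    """Toy power-law loss curve: L(N) = a / N^alpha + L_inf."""
    return a / (n_params ** alpha) + irreducible

# Each 10x jump in parameters shrinks the reducible part of the loss by
# a fixed factor (10^-alpha): predictable gains, but diminishing ones.
for n in (1e8, 1e9, 1e10):
    print(f"{n:.0e} params -> loss ~ {scaling_law_loss(n):.3f}")
```

The diminishing-returns shape is exactly why Sutskever’s question lands: if gains per dollar of compute keep shrinking, *what* you scale starts to matter as much as *how much*.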
As SSI moves forward, they’re laser-focused on hiring people who not only have the skills but also the right mindset.
Gross mentioned that they spend hours vetting candidates for “good character” and are more interested in people who are passionate about the work rather than the hype surrounding AI. It’s a refreshing approach in an industry where hype can often overshadow substance.
With $1 billion in the bank and a mission to make AI both powerful and safe, Safe Superintelligence Inc. is a company to watch. They’ve got the talent, the funding, and the vision.
Now, it’s up to them to deliver on the promise of creating AI that won’t just change the world—but do so without burning it down.