Former OpenAI star Sutskever aims for superintelligent AI with a new company

Ilya Sutskever gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.

On Wednesday, former OpenAI chief scientist Ilya Sutskever announced that he is starting a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building “superintelligence,” a hypothetical form of artificial intelligence that surpasses human intelligence, perhaps to an extreme.

“We will pursue safe superintelligence head-on, with one focus, one goal, and one product,” Sutskever wrote on X. “We will do this through revolutionary breakthroughs made by a small crack team.”

Sutskever was a founding member of OpenAI and previously served as the company’s chief scientist. Joining Sutskever at SSI initially are two others: Daniel Levy, who previously led the optimization team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple from 2013 to 2017. The trio released a statement on the company’s new website.

Screenshot of Safe Superintelligence’s formation announcement, taken on June 20, 2024.

Sutskever and several of his associates resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about his departure from OpenAI—and OpenAI executives like Altman wished him well in his new ventures—another departing member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike later joined OpenAI competitor Anthropic in May.

A hazy concept

OpenAI is currently working to create AGI, or artificial general intelligence, which would hypothetically match human intelligence in performing a wide range of tasks without specific training. Sutskever hopes to leapfrog past that goal, attempting a direct moonshot at superintelligence without distractions along the way.

“This company is special in that its first product will be a safe superintelligence, and it will not do anything else until then,” Sutskever said in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and being stuck in a competitive rat race.”

During his earlier tenure at OpenAI, Sutskever was part of the “Superalignment” team, which studied how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial superintelligence,” so that it benefits humanity.

As you can imagine, it’s difficult to align something that doesn’t exist, so Sutskever’s quest has been met with skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”

Like AGI, superintelligence is a nebulous concept. Because the mechanics of human intelligence are still poorly understood—and because human intelligence is difficult to quantify or define, since there is no single type of human intelligence—identifying superintelligence when it emerges may be tricky.

Computers already far surpass humans in many forms of information processing (such as basic mathematics), but are they superintelligent? Many proponents of superintelligence envision a sci-fi scenario of “alien intelligence” with some form of consciousness that operates independently of humans, and that’s more or less what Sutskever hopes to achieve and safely control.

“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”
