Sam Altman ouster spotlights rift over extinction threat posed by AI

Industry leaders warn of major risk while others balk.

November 21, 2023, 4:26 PM

Months before OpenAI board member Ilya Sutskever would gain notoriety for his key role in the ouster of CEO Sam Altman, Sutskever co-authored a little-noticed but apocalyptic warning about the threat posed by artificial intelligence.

Superintelligent AI, Sutskever co-wrote on a company blog, could lead to "the disempowerment of humanity or even human extinction," since engineers are unable to prevent AI from "going rogue." The message echoed OpenAI's charter, which calls for avoiding AI uses if they "harm humanity."

The cry for caution from Sutskever, however, arrived at a period of breakneck growth for OpenAI. A $10 billion investment from Microsoft at the outset of this year helped fuel the development of GPT-4, the model behind ChatGPT, a viral chatbot that the company says now boasts 100 million weekly users.

The forced exit of Altman arose in part from frustration between him and Sutskever over a tension at the heart of the company: heightened awareness of the risks posed by AI, on the one hand, and explosive growth in the release and commercialization of new products on the other, The New York Times reported.

PHOTO: Ilya Sutskever, the Russian-born Israeli-Canadian computer scientist who is a co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv, June 5, 2023.
Jack Guez/AFP via Getty Images

To be sure, details remain scant about the reason for Altman's departure. The move came after a review undertaken by the company's board of directors, OpenAI said on Friday.

"Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," the company said in a statement.

Altman was hired by Microsoft days after his exit, eliciting a letter on Monday signed by nearly all of the employees at OpenAI that called for the resignation of the company's board and the return of Altman, according to a copy of the letter obtained by ABC News.

The OpenAI board, the letter said, "informed the leadership team that allowing the company to be destroyed 'would be consistent with the mission' of the company."

Stuart Russell, an AI researcher at the University of California, Berkeley, who co-authored a study on the societal-scale dangers of the technology, said OpenAI faces a tension centered on its mission of developing artificial general intelligence, or AGI, a form of AI that could mimic human intelligence and potentially surpass it.

"If you're funding a company with multiple billions of dollars to pursue AGI, that just seems like a built-in conflict with the goal of ensuring that AI systems are safe," Russell told ABC News, emphasizing that it remains unclear why exactly Altman left the company.

The divide over the existential threat posed by AI looms industry-wide as the technology sweeps across institutions from manufacturing to mass entertainment, prompting disagreement about the pace of development and the focus of possible regulation.

An open letter written in May by the Center for AI Safety warned that AI poses a "risk of extinction" akin to pandemics or nuclear war, featuring signatures from hundreds of researchers and industry leaders like Altman and Demis Hassabis, the CEO of Google DeepMind, the tech giant's AI division.

For his part, Altman has said rapid deployment of AI allows for stress-testing of products and offers the best way to avert considerable harm.

Other AI luminaries, however, have balked at the purported risk. Yann LeCun, chief AI scientist at Meta, told the MIT Technology Review that fear of an AI takeover is "preposterously ridiculous."

Warnings from industry titans about the risks of AI have arisen alongside an increasingly competitive industry in which the speedy development of products requires massive investment, which in turn pressures firms to pursue commercial uses for the technology, Anjana Susarla, a professor at Michigan State University's Broad College of Business who studies the responsible deployment of AI, told ABC News.

"The very large investments needed to build these kinds of technologies means the companies have a tradeoff between the profits they would generate from these investments and thinking about some abstract benefit from artificial intelligence," Susarla said.

The multibillion-dollar investment from Microsoft earlier this year deepened a longstanding relationship between Microsoft and OpenAI, which began with a $1 billion investment from the tech giant four years ago.

PHOTO: Microsoft's CEO Satya Nadella speaks at the Asia-Pacific Economic Cooperation CEO Summit in San Francisco, Nov. 15, 2023.
Carlos Barria/Reuters

OpenAI was founded as a nonprofit in 2015. As of last month, the company was set to bring in more than $1 billion in revenue over a year-long period through the sale of its artificial intelligence products, The Information reported.

In addition to uniting OpenAI employees behind Altman, his recent ouster appears to have resolved some of the tension with Sutskever.

"I deeply regret my participation in the board's actions," Sutskever, a longtime AI researcher and co-founder of OpenAI, posted on X on Monday. "I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

The choice of Altman's replacement, meanwhile, could offer a hint of the company's future approach to safety.

OpenAI appointed Emmett Shear, the former chief executive of the video game streaming platform Twitch, as interim CEO.

In a July podcast interview on "The Logan Bartlett Show," Shear described AI as "pretty inherently dangerous" and placed the odds of a massive AI-related disaster between 5% and 50% -- an estimate he called the "probability of doom."

In September, Shear said on X that he favors "slowing down" the development of AI.

"If we're at a speed of 10 right now, a pause is reducing to 0," Shear wrote. "I think we should aim for a 1-2 instead.