OpenAI CEO warns Senate: 'If this technology goes wrong, it can go quite wrong'

OpenAI CEO Sam Altman called for government regulation in a Senate hearing.

May 16, 2023, 1:46 PM

OpenAI CEO Sam Altman on Tuesday warned federal lawmakers that artificial intelligence could cause "significant harm to the world" if the technology goes awry.

"I think if this technology goes wrong, it can go quite wrong," Altman, whose company developed the widely used AI-driven conversation program ChatGPT, told a Senate committee.

In turn, Altman called for government intervention to protect against the worst effects and abuses of AI.

Like other AI-enabled chatbots, ChatGPT can respond instantly to prompts from users on a wide range of subjects, generating an essay on Shakespeare or a set of travel tips for a given destination.

Microsoft launched a version of its Bing search engine in March that offers responses delivered by GPT-4, OpenAI's latest model, which also powers ChatGPT. Rival search company Google in February announced its own AI chatbot, called Bard.

Programs like ChatGPT and Bard, however, have raised concerns about bias, misinformation and other potentially harmful responses.

"GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability," Altman said on Tuesday.

PHOTO: Samuel Altman, CEO of OpenAI, during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law oversight hearing to examine artificial intelligence, on Capitol Hill in Washington, D.C., on May 16, 2023.
Andrew Caballero-Reynolds/AFP via Getty Images

"However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," he added, suggesting the adoption of licenses or safety requirements necessary for the operation of AI models.

The risks posed by AI have drawn greater attention in recent months in response to major breakthroughs like ChatGPT.

Hundreds of tech leaders, including billionaire entrepreneur Elon Musk and Apple co-founder Steve Wozniak, signed an open letter in March calling for a six-month pause in the development of AI systems and a major expansion of government oversight.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter said.

In comments last month to Fox News host Tucker Carlson, Musk raised further alarm: "There's certainly a path to AI dystopia, which is to train AI to be deceptive."

OpenAI, which launched in 2015 as a nonprofit, has grown dramatically since then and restructured to include a for-profit arm.

In January, Microsoft announced it was investing $10 billion in OpenAI. The move deepened a longstanding relationship between Microsoft and OpenAI, which began with a $1 billion investment four years ago.

"This is a remarkable time to be working on artificial intelligence but as this technology advances, we understand that people are anxious about how it could change the way we live," Altman said on Tuesday. "We are too."