WASHINGTON -- Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.
President Joe Biden briefly dropped by the meeting in the White House's Roosevelt Room, saying he hoped the group could “educate us” on what is most needed to protect and advance society.
“What you're doing has enormous potential and enormous danger,” Biden told the CEOs, according to a video posted to his Twitter account.
The popularity of AI chatbot ChatGPT — even Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.
But the ease with which it can mimic humans has prompted governments around the world to consider how it could take away jobs, trick people and spread disinformation.
The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.
In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.
But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.
“We’re at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms,” Conner said.
The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.
Harris said in a statement after the closed-door meeting that she told the executives that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products.”
ChatGPT has led a flurry of new “generative AI” tools adding to ethical and societal concerns about automated systems trained on vast pools of data.
Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That's made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it’s stealing from copyrighted works.
Companies worried about being liable for something in their training data might also not have incentives to rigorously track it in a way that would be useful “in terms of some of the concerns around consent and privacy and licensing,” said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.
“From what I know of tech culture, that just isn’t done,” she said.
Some have called for disclosure laws to force AI providers to open their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won’t be easy to provide greater transparency after the fact.
“It’s really going to be up to the governments to decide whether this means that you have to trash all the work you’ve done or not,” Mitchell said. “Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it’s already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over.”
While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.
The companies also face potentially tighter rules in the European Union, where negotiators are putting finishing touches on AI regulations that could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.
When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people’s safety or rights, such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.
But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems such as those built by OpenAI. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.
A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.
Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and Britain’s competition watchdog said Thursday it’s opening a review of the AI market.
In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though not likely as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.
Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.
“This would be a way for very skilled and creative people to do it in one kind of big burst,” Frase said.
O'Brien reported from Cambridge, Massachusetts. AP writers Seung Min Kim in Washington and Kelvin Chan in London contributed to this report.
Follow the AP's coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence.