What to know about landmark AI regulations proposed in California
A proposed bill aims to address potential extreme risks posed by the technology.
Sweeping advances in artificial intelligence have elicited warnings from industry leaders about the potential for grave risks, including weapon systems going rogue and massive cyberattacks.
A state legislator in California, home to many of the largest AI companies, proposed a landmark bill this week that would impose regulations to address those dangers.
The bill requires mandatory testing of wide-reaching AI products before they reach users. It also requires that every major AI model be equipped with a means of shutting the technology down if something goes wrong.
"When we're talking about safety risks related to extreme hazards, it's far preferable to put protections in place before those risks occur as opposed to trying to play catch up," state Sen. Scott Wiener, the sponsor of the bill, told ABC News. "Let's get ahead of this."
Here's what to know about what the bill does and how it could impact AI regulation nationwide:
What would the bill do to police the risks of AI?
The bill would heighten the scrutiny faced by large AI models before they gain wide adoption, ensuring that state officials test the products prior to their release.
In addition to mandating an emergency off-switch, the bill would implement hacking protections to make AI less vulnerable to bad actors.
To bolster enforcement, the measure would establish the Frontier Model Division within the California Department of Technology as a means of carrying out the regulations.
Since the legislation focuses on extreme risks, it will not apply to small-scale AI products, Wiener said.
"Our goal is to foster innovation with safety in mind," Wiener added.
The bill would also promote AI development by creating CalCompute, a publicly owned initiative that would provide shared computing power to businesses, researchers and community groups.
The effort would help lower the technical threshold for small firms or organizations that may lack the immense computing capacity enjoyed by large companies, Teri Olle, the director of nonprofit Economic Security California, told ABC News.
"By expanding that access, it will allow for there to be research and innovation and AI development that is aligned with the public interest," said Olle, whose organization helped develop this feature of the bill.
Sarah Myers West, managing director of AI Now Institute, a nonprofit group that supports AI regulation, applauded the preventative approach taken by the measure.
"It's great to see the focus on addressing and mitigating harms before they go into the market," Myers West told ABC News.
However, she added, many current risks posed by AI remain unaddressed, including bias in algorithms used to set worker pay or grant access to healthcare.
"There are so many places where AI is already being used to affect people," Myers West said.
For his part, Wiener said the California legislature has taken up other bills to address some of the ongoing harms caused by AI. "We're not going to solve every problem in one bill," Wiener added.
How could the bill impact AI legislation nationwide?
The California measure on extreme AI risk comes amid a surge of AI-related bills in statehouses nationwide.
As of September, state legislatures had introduced 191 AI-related bills in 2023, amounting to a 440% increase over the full previous year, according to BSA | The Software Alliance, an industry group.
Legislation proposed in California carries special weight, however, since many of the largest AI companies are based in the state, said Olle, of Economic Security California.
"Regulations in California set the standard," Olle said. "In complying with these standards in California, you affect the market."
Despite recent policy discussion and hearings, Congress has achieved little progress toward a comprehensive measure to address AI risks, Myers West said.
"Congress has been kind of stuck," Myers West added. "That does mean there's a really important role for the states."
Dylan Hoffman, executive director for California and the Southwest at industry lobbying group TechNet, emphasized the importance of U.S.-based AI regulation that shapes global rules surrounding the technology.
"America must set the standards for the responsible development and deployment of AI for the world," Hoffman told ABC News in a statement. "We look forward to reviewing the legislation and working with Senator Wiener to ensure any AI policies benefit all Californians, address any risks, and strengthen our global competitiveness."
While crafting the bill, Wiener borrowed some concepts from an executive order issued by President Joe Biden in October, such as the threshold used to determine whether an AI model reaches a large enough scale to warrant regulation, Wiener said.
Still, Wiener said he remains skeptical of the likelihood for federal legislation that would mimic the approach taken by the California bill.
"I would love for Congress to pass a strong, pro-innovation, pro-safety AI law," Wiener added. "I don't have extreme confidence that Congress will be able to do anything in the near future. I hope they prove me wrong."