Biden executive order imposes new rules for AI. Here's what they are.
Move aims to protect against AI use for devastating weapons or cyberattacks.
President Joe Biden issued a wide-ranging executive order on Monday that aims to safeguard against threats posed by artificial intelligence, ensuring that bad actors do not use the technology to develop devastating weapons or mount supercharged cyberattacks.
The move stakes out a role for the federal government in a nearly half-trillion-dollar industry at the center of fierce competition between some of the nation's largest companies, including Google and Amazon.
The Biden administration also calls on Congress to pass data privacy legislation, an achievement that has eluded lawmakers for years despite multiple attempts.
The executive order establishes oversight of the safety tests that companies use to evaluate chatbots such as ChatGPT and introduces industry standards, such as watermarks that identify AI-generated content, among other regulations.
The batch of reforms amounts to "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust," White House deputy chief of staff Bruce Reed said in a statement.
Here's what's in the executive order that seeks to rein in AI:
AI companies must conduct safety tests and share the results with the federal government
A key rule established under the executive order demands that AI companies conduct tests of some of their products and share the results with government officials before the new capabilities become available to consumers.
The safety tests undertaken by developers, known as "red teaming," are meant to ensure that new products do not pose a major threat to users or the wider public.
If a safety assessment returns concerning results, the federal government could force a company to either make product improvements or abandon a given initiative.
The new government powers are permitted under the Defense Production Act, a 1950 law that grants the White House a broad role in overseeing industries tied to national security, the Biden administration said.
"These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public," the White House added.
A new set of standards establishes AI industry norms
The executive order lays out a sprawling set of industry standards in the hope of creating transparent products secure from dangerous outcomes, such as AI-concocted biological material or cyberattacks.
One high-profile new standard would codify the use of watermarks that alert consumers when they encounter a product enabled by AI, which could limit the threat posed by impostor content such as deepfakes.
Another rule would ensure that biotechnology firms take appropriate precautions when using AI to create or manipulate biological material.
The industry guidance will function as suggestions rather than mandates, leaving firms free to set aside the government recommendations.
The federal government will use its leverage as a key funder of scientific research to advocate for compliance on the warning around biological material, the White House said. To bolster the push for watermarks, meanwhile, the White House will require federal agencies to use the markers when deploying AI products.
Still, the executive order risks pairing an ambitious vision for the future of AI with insufficient power to bring about an industry-wide shift, Sarah Kreps, professor of government and director of the Tech Policy Institute at Cornell University, said in a statement.
"The new executive order strikes the right tone by recognizing both the promise and perils of AI," Kreps said. "What's missing is an enforcement and implementation mechanism. It's calling for a lot of action that's not likely to receive a response."
Government agencies face strict oversight of their use of AI
The executive order instructs a wide swath of government agencies to implement changes in their use of AI, elevating federal institutions as examples of practices that the administration ultimately hopes will be adopted by the private sector.
Federal benefits programs and contractors, for instance, will take steps to ensure that AI does not worsen racial bias in their activities, the White House said. Similarly, the Department of Justice will establish rules around how best to investigate AI-related civil rights abuses.
Meanwhile, the Department of Energy and the Department of Homeland Security will take steps to address the threat that AI poses to critical infrastructure.
Robert Weissman, the president of Washington D.C.-based consumer advocacy group Public Citizen, commended the executive order while acknowledging its limitations.
"Today's executive order is a vital step by the Biden administration to begin the long process of regulating rapidly advancing AI technology," Weissman said. "But it's only a first step."