Microsoft, Google and other leading artificial intelligence companies committed Friday to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.
The pledges are part of a series of voluntary commitments agreed to by the White House and seven leading AI companies -- which also include Amazon, Meta, OpenAI, Anthropic and Inflection -- aimed at making AI systems and products safer and more trustworthy while Congress and the White House develop more comprehensive regulations to govern the rapidly growing industry. President Joe Biden met with top executives from all seven companies at the White House on Friday.
In a speech Friday, Biden called the companies' commitments "real and concrete," adding that they will help fulfill their "fundamental obligations to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values."
"We'll see more technology change in the next ten years, or even in the next few years, than we've seen in the last 50 years. That has been an astounding revelation," Biden said.
White House officials acknowledge that some of the companies have already enacted some of the commitments but argue that, taken together, they will raise "the standards for safety, security and trust of AI" and serve as a "bridge to regulation."
"It's a first step, it's a bridge to where we need to go," White House deputy chief of staff Bruce Reed, who has been managing the AI policy process, said in an interview. "It will help industry and government develop the capacities to make sure that AI is safe and secure. And we pushed to move so quickly because this technology is moving farther and faster than anything we've seen before."
While most of the companies already conduct internal "red-teaming" exercises, the commitments will mark the first time they have all committed to allow outside experts to test their systems before they are released to the public. A red team exercise is designed to simulate what could go wrong with a given technology -- such as a cyberattack or its potential to be used by malicious actors -- and allows companies to proactively identify shortcomings and prevent negative outcomes.
Reed said the external red-teaming "will help pave the way for government oversight and regulation," potentially laying the groundwork for that outside testing to be carried out by a government regulator or licenser.
The commitments could also lead to widespread watermarking of AI-generated audio and visual content with the aim of combating fraud and misinformation.
The companies also committed to investing in cybersecurity and "insider threat safeguards," in particular to protect AI model weights -- the learned numerical parameters that encode an AI system's capabilities; creating a robust mechanism for third parties to report system vulnerabilities; prioritizing research on the societal risks of AI; and developing and deploying AI systems "to help address society's greatest challenges," according to the White House.
All of the commitments are voluntary and White House officials acknowledged that there is no enforcement mechanism to ensure the companies stick to the commitments, some of which also lack specificity.
Common Sense Media, a child internet-safety organization, commended the White House for taking steps to establish AI guardrails, but warned that "history would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations."
"If we've learned anything from the last decade and the complete mismanagement of social media governance, it's that many companies offer a lot of lip service," Common Sense Media CEO James Steyer said in a statement. "And then they prioritize their profits to such an extent that they will not hold themselves accountable for how their products impact the American people, particularly children and families."
The federal government's failure to regulate social media companies at their inception -- and the resistance from those companies -- has loomed large for White House officials as they have begun crafting potential AI regulations and executive actions in recent months.
"The main thing we stressed throughout the discussions with the companies was that we should make this as robust as possible," Reed said. "The tech industry made a mistake in warding off any kind of oversight, legislation and regulation a decade ago and I think that AI is progressing even more rapidly than that and it's important for this bridge to regulation to be a sturdy one."
The commitments were crafted during a monthslong back-and-forth between the AI companies and the White House that began in May when a group of AI executives came to the White House to meet with Biden, Vice President Kamala Harris and White House officials. The White House also sought input from non-industry AI safety and ethics experts.
White House officials are working to move beyond voluntary commitments, readying a series of executive actions, the first of which is expected to be unveiled later this summer. Officials are also working closely with lawmakers on Capitol Hill to develop more comprehensive legislation to regulate AI.
"This is a serious responsibility. We have to get it right. There's an enormous, enormous potential upside as well," Biden said.
In the meantime, White House officials say the companies will "immediately" begin implementing the voluntary commitments and hope other companies sign on in the future.
"We expect that other companies will see how they also have an obligation to live up to the standards of safety, security and trust. And they may choose -- and we would welcome them choosing -- joining these commitments," a White House official said.