President Biden will issue an executive order on Monday outlining the federal government’s first regulations on artificial intelligence systems. They include requirements that the most advanced AI products be tested to ensure they cannot be used to produce biological or nuclear weapons, with the results of those tests reported to the federal government.
The testing requirements are a small but central part of what Mr. Biden, in a speech scheduled for Monday afternoon, is expected to describe as the most sweeping government action to protect Americans from the potential risks posed by the enormous progress AI has made over the past several years.
The regulations will include recommendations, but not requirements, that photos, videos and audio files developed by such systems be watermarked to clearly indicate that they were created by AI. This reflects a growing fear that AI will make it much easier to create “deep fakes” and convincing disinformation, especially as the 2024 presidential campaign accelerates.
The United States recently restricted exports of high-performance chips to China to slow its ability to produce so-called large language models, the systems trained on massive amounts of data that have made programs like ChatGPT so effective at answering questions and speeding up tasks. Likewise, the new regulations will require companies that operate cloud services to notify the government about their foreign customers.
Mr. Biden’s order will be issued days before a gathering of world leaders on AI security hosted by British Prime Minister Rishi Sunak. On the issue of AI regulation, the United States is lagging behind the European Union, which is drafting new laws, and other nations, like China and Israel, which have published regulatory proposals. Ever since ChatGPT, the AI-powered chatbot, exploded in popularity last year, lawmakers and regulators around the world have wrestled with how artificial intelligence could change jobs, spread misinformation and potentially develop its own kind of intelligence.
“President Biden is implementing the most robust set of measures on AI safety, security and trust that any government in the world has ever taken,” said Bruce Reed, the White House deputy chief of staff. “This is the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks.”
The new U.S. rules, some of which are expected to take effect within the next 90 days, will likely face numerous challenges, some legal and some political. But the order is aimed at the most advanced future systems and does not, to a large extent, address the immediate threats from existing chatbots that could be used to spread disinformation linked to Ukraine, Gaza or the presidential campaign.
The administration did not release the text of the order Sunday, but officials said some of the steps in the order would require approval from independent agencies, like the Federal Trade Commission.
The order only affects U.S. companies, but because software development takes place all over the world, the United States will face diplomatic challenges in enforcing the regulations. That’s why the administration is trying to encourage its allies and adversaries to develop similar rules. Vice President Kamala Harris is representing the United States at the conference in London on the subject this week.
The regulations also aim to influence the technology sector by establishing, for the first time, standards for safety, security and consumer protection. Using the power of its purse strings, the White House is directing federal agencies to require companies to comply with those standards as a condition of doing business with the government.
“This is an important first step and, importantly, the executive order sets standards,” said Lauren Kahn, a senior research analyst at the Center for Security and Emerging Technology at Georgetown University.
The order directs the Department of Health and Human Services and other agencies to create clear safety standards for the use of AI and to streamline systems to make it easier to purchase AI tools. It directs the Department of Labor and the National Economic Council to study the effect of AI on the labor market and develop possible regulations. And it calls on agencies to provide clear guidance to landlords, government contractors and federal benefits programs to prevent discrimination by algorithms used in AI tools.
But the White House’s authority is limited and some directives are not enforceable. For example, the order calls on agencies to strengthen their internal guidelines to protect consumers’ personal data, but the White House has also recognized the need for privacy legislation to fully ensure data protection.
To encourage innovation and strengthen competition, the White House will direct the FTC to strengthen its role in monitoring consumer protections and antitrust violations. But the White House does not have the authority to order the FTC, an independent agency, to create regulations.
Lina Khan, chair of the Federal Trade Commission, has already signaled her intention to act more aggressively as an AI watchdog. In July, the commission opened an investigation into OpenAI, the creator of ChatGPT, over possible violations of consumer privacy and accusations of spreading false information about individuals.
“While these tools are new, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market,” Ms. Khan wrote in a guest essay in The New York Times in May.
The tech industry has said it supports regulation, even though companies disagree on the level of government oversight. Microsoft, OpenAI, Google and Meta are among 15 companies that have accepted voluntary safety and security commitments, including having third parties test their systems for vulnerabilities.
Mr. Biden called for regulations that support the opportunities offered by AI to contribute to medical and climate research, while creating safeguards to protect against abuse. He emphasized the need to balance regulation with support for U.S. companies in the global race for AI leadership. And to that end, the order directs agencies to streamline the visa process for highly skilled immigrants and nonimmigrants with expertise in AI who want to study and work in the United States.
Central regulations aimed at protecting national security will be outlined in a separate document, called the National Security Memorandum, which will be produced by next summer. Some of these regulations will be public, but many are expected to remain confidential – particularly those regarding measures to prevent foreign nations or non-state actors from exploiting AI systems.
A senior Energy Department official said last week that the National Nuclear Security Administration has already begun exploring how such systems could accelerate nuclear proliferation by solving the complex problems of building a nuclear weapon. And many officials have focused on how these systems could allow a terrorist group to assemble what is needed to produce biological weapons.
Still, lawmakers and White House officials have cautioned against moving too quickly to draft laws for rapidly evolving AI technologies. The EU's first legislative drafts, for example, did not account for large language models.
“If you move too fast, you risk ruining everything,” Senator Chuck Schumer, Democrat of New York and majority leader, said last week.