US Government to Receive AI Models from OpenAI and Anthropic for Safety Testing
Government Access to AI Models
On Thursday, leading AI developers OpenAI and Anthropic agreed to give the U.S. government access to their latest generative AI models for in-depth safety testing and evaluation.
The agreement was made with the U.S. AI Safety Institute, a division of the National Institute of Standards and Technology (NIST). Since the launch of ChatGPT, the regulation of artificial intelligence has become a hot topic. In response, tech companies have started to advocate for voluntary oversight by government agencies.
Collaboration for Safer AI
The U.S. AI Safety Institute will work closely with its U.K. counterpart. The agencies will provide feedback to OpenAI and Anthropic both before and after the public release of new models, suggesting improvements and flagging potential safety concerns.
Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized the importance of these agreements. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” she said.
Support from Industry Leaders
OpenAI and Anthropic have pledged to support responsible innovation. The U.S. AI Safety Institute said these evaluations are part of a broader goal to ensure the safe development of AI.
“Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” said Jack Clark, co-founder and head of policy at Anthropic. “This strengthens our ability to identify and mitigate risks, advancing responsible AI development.”
Tied to the White House Executive Order
This partnership supports the goals of the White House AI Executive Order, signed in October 2023. The order aims to create a legal framework for the rapid yet safe deployment of AI technologies in the United States.
Diverging Global Approaches
Unlike the European Union, which has passed a comprehensive AI Act to regulate the technology, the U.S. prefers a more flexible approach. Washington is encouraging tech companies to experiment while maintaining a degree of oversight.
However, California lawmakers are taking a more cautious stance. On Wednesday, they passed a state-level AI safety bill, SB 1047, which now awaits the governor’s signature. The legislation could introduce stricter controls and penalties for misuse.
Industry Reaction
Sam Altman, CEO of OpenAI, expressed support for regulation at the national level, posting on social media that such oversight is “important” and welcoming the agreement with the federal government. This stance appeared to contrast with OpenAI’s earlier opposition to the California bill, which the company argued could limit innovation and hinder AI research.