The US government will receive AI models from OpenAI and Anthropic

As part of agreements made on Thursday, top generative AI developers OpenAI and Anthropic have agreed to grant the US government access to their new models for safety testing.

The agreements were made with the US AI Safety Institute, a division of the federal National Institute of Standards and Technology (NIST).
Since the release of OpenAI's ChatGPT, artificial intelligence regulation has become a hot topic, with tech companies advocating for a voluntary approach to exposing their systems to government oversight.

Working closely with its counterpart, the UK AI Safety Institute, the agency said it will provide feedback to both companies on potential safety improvements to their models, both before and after their public release.


Elizabeth Kelly, director of the US AI Safety Institute, stated, “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”


The agency said the evaluations would build on the voluntary safety commitments that leading AI model developers, such as OpenAI and Anthropic, have already made.


Jack Clark, co-founder and head of policy at Anthropic, stated, “Our collaboration with the US AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment.”

“This strengthens our ability to identify and mitigate risks, advancing responsible AI development,” he stated.
The partnership is part of efforts under the White House AI executive order, unveiled in 2023, which was intended to establish a legal framework for the rapid deployment of AI models in the US.


In contrast to the European Union, where lawmakers passed an ambitious AI Act to regulate the industry more tightly, Washington has been willing to give tech companies broad freedom to explore and experiment with AI.


However, pro-regulatory legislators in California, the home of Silicon Valley, passed a state bill on AI safety on Wednesday; the bill now needs the governor’s signature to become law.

Sam Altman, the CEO of OpenAI, said on social media that regulation at the national level is "important" and welcomed his company's deal with the US government.


The remark was a veiled critique of the newly passed California bill, which OpenAI opposed on the grounds that it would impede innovation and research and impose fines for violations.
