The US government wants Big Tech to demonstrate the security of its cloud and AI services
The Department of Commerce could demand proof of cybersecurity from cloud and AI firms under a proposed rule.
The US government wants technology companies to demonstrate the security of their AI and cloud computing systems and disclose their capabilities, in a bid to prevent those services from being misused.
The Department of Commerce has put forward a proposal that would require companies with powerful AI models and compute clusters to submit detailed reports to the federal government.
The Bureau of Industry and Security (BIS) says the goal is to evaluate security protocols as well as to monitor the “defense-relevant capabilities” of emerging technologies.
“AI holds tremendous promise and risk as it advances rapidly,” said Secretary of Commerce Gina Raimondo. “This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”
Regulation and reporting requirements are the typical government response to new technologies, according to Alex Kearns, head of consulting at Ubertas Consulting.
“The US Government’s proposal appears to focus reporting requirements on the providers of AI capability (i.e. model developers) rather than the consumers of the models,” he told ITPro.
“The EU Artificial Intelligence Act, which is presently in effect, focuses more on how businesses utilize AI and whether or not it is equitable and responsible. It is crucial to manage both of these factors.”
Dual-use risk
The BIS reporting would cover development activities and security measures, notably the results of red-teaming. That usually refers to testing an organization’s response to an attack in order to strengthen its security.
In this case, however, BIS noted that it wants to see the results of testing for potentially dangerous capabilities in the technologies themselves, such as the ability to assist in cyber attacks, or whether they could help “non-experts” develop serious weapons, including chemical, biological, and nuclear ones.
According to BIS, the ultimate goal is to guarantee that non-state actors or foreign adversaries cannot abuse “dual-use” foundation models.
Under the proposed definition, a “dual-use foundation model” is one trained on vast amounts of data and containing tens of billions of parameters. It can be applied across a range of contexts and can be adapted to perform tasks that could pose a serious risk to security, including economic security or public safety.
However, the government is also interested in learning how those technologies may help with its defense initiatives.
Smart move or government meddling?
According to Crystal Morin, cybersecurity expert at Sysdig, businesses should already be thinking about these risks, so disclosing that information shouldn’t be a “challenge.”
“We should think about misuse or potential security risks right from the start for advanced technologies that have huge potential, like AI,” she told ITPro.
“By promoting a secure-first approach to software design lifecycles and encouraging enterprises to be transparent about their security practices, this legislation will foster the development of innovative technology with responsible security in mind from the outset.
“In these situations, I think of Jeff Goldblum’s character in Jurassic Park: ‘Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should’.”
However, Kashif Nazir, technical manager at Cloudhouse, cautioned that the burden of regulatory reporting would hit smaller businesses harder.
“While these measures tackle important security concerns, they could come at the cost of slowing down innovation, particularly for smaller companies that might struggle with the regulatory burden,” he stated.
To whom will the reporting regulations apply?
The proposed rule primarily affects US-based companies that are developing, or planning to develop, large dual-use foundation models, along with those holding the substantial computing hardware required to build them.
If the rule is adopted in its current form, companies would have to submit quarterly reports detailing their training and development efforts, as well as the security measures protecting that work, particularly around red-team testing and model weights.
Companies will have 30 days from the publication of the proposed rule, expected this week, to respond with comments. BIS ran a pilot project earlier this year.
The move follows an executive order signed by President Biden last year, which sought to ensure AI is developed safely.
“This proposed reporting requirement would help us understand the capabilities and security of our most advanced AI systems,” said Under Secretary of Commerce for Industry and Security Alan F. Estevez.
“It would build on BIS’s long history of conducting defense industrial base surveys to inform the American government about emerging risks in the most important US industries.”