California AI bill passes State Assembly, pushing AI fight to Newsom
The bill, which seeks to make companies liable if their artificial intelligence harms people, is at the center of a debate over how to regulate the technology.
The California State Assembly passed a bill Wednesday that would enact the nation’s strictest regulations on artificial intelligence companies, pushing the fierce fight over how to regulate AI toward Gov. Gavin Newsom’s desk.
Under the proposed law, companies developing artificial intelligence would have to test their technology for “catastrophic” risks, such as the ability to teach users how to make biological weapons or launch cyberattacks, before releasing it.
If businesses disregard the new rule and their technology is used to harm people, the California attorney general may file a lawsuit against them.
The measure’s author, Democratic state Sen. Scott Wiener, says it won’t affect smaller start-ups hoping to take on Big Tech, because the law applies only to companies developing massive, expensive AI models.
The bill passed by a vote of 41 to 9. It now returns to the state Senate, where it originated and which is expected to send it to Newsom (D) soon. With Congress largely consumed by the upcoming presidential election, that would leave the high-profile governor to either embrace or veto sweeping and contentious tech regulations.
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, known as S.B. 1047, has deepened a divide among researchers, developers, and entrepreneurs over the technology’s potential dangers. On one side are AI researchers aligned with the effective altruism movement, which argues for stricter limits on AI development to keep the technology from being exploited to launch cyberattacks, build lethal weapons, or even become sentient on its own.
On the other side are AI researchers who argue that the technology is still far from advanced enough for those risks to materialize, along with tech founders, executives, and investors who fear that government regulation could cripple AI innovation or allow other nations to surpass the United States technologically.
Dan Hendrycks, who heads the Center for AI Safety, a think tank funded by effective altruism leader Dustin Moskovitz, consulted on early drafts of the bill and testified in its favor before the California Assembly.
Pioneering AI researchers Geoffrey Hinton and Yoshua Bengio have endorsed the bill, and just this week Elon Musk, the owner of X, who has long warned about the dangers of highly advanced AI, lent it his support as well.
Industry groups have mounted a major lobbying effort against the bill, including a website that generates letters urging California legislators to vote against it. In recent months, representatives of tech trade associations, such as the Software Alliance, have met with Newsom’s administration to raise concerns about the proposal.
“We will be sending a veto letter if it reaches his desk,” said Todd O’Boyle, the tech policy lead at Chamber of Progress, a left-leaning group funded by Amazon, Apple, and Google.
In an unusual move, federal lawmakers have waded into the Sacramento fight. Rep. Nancy Pelosi (D-Calif.) joined other prominent federal officials from California in opposing the bill earlier this month, calling it “well-intentioned but ill-informed” in a statement.
Tech giants such as Google and Meta have opposed the plan, and key figures in the AI industry argue that it is shortsighted to regulate AI technology in general, which could have countless applications, rather than specific harmful uses of it.
Andrew Ng, the CEO of an AI start-up who formerly led AI teams at Google and the Chinese tech giant Baidu, wrote on X last month that “this proposed law makes a fundamental mistake of regulating AI technology instead of AI applications.” Dozens of other start-up founders also oppose the bill.
Regulators and lawmakers have been studying AI and debating whether to regulate it since OpenAI debuted ChatGPT in November 2022, setting off a new AI race among tech companies. Even the industry’s most influential executives have called for regulation: OpenAI CEO Sam Altman testified before Congress in 2023 and recommended that the government establish a new agency to oversee AI.
Yet despite numerous hearings and proposed measures, lawmakers in Washington have passed no AI legislation. Wiener and other California legislators say that means they must step in to fill the gap, acting as the country’s de facto tech regulators, as they have with privacy and internet-safety laws.
S.B. 1047 would codify many of the recommendations in President Joe Biden’s 2023 executive order on AI, which former president Donald Trump has vowed to revoke and replace if reelected. Opponents worry that California is overstepping its authority by pursuing measures aimed at the national security risks AI poses, and they are wary of a single state wielding so much control over the direction of an emerging technology.
“It’s another important step,” Wiener said of the Assembly vote in an interview. “Innovation and safety are not mutually exclusive when it comes to AI.”
Adding intrigue to the fight, Democratic Party watchers consider Wiener a strong contender to succeed Pelosi in Congress should she decide to retire.
For months, Wiener has maintained that the bill would not outlaw AI development or impose onerous new restrictions that could hinder innovation. The measure, he says, aims to boost public confidence in AI at a time when people are broadly wary of the tech sector, and it simply holds tech companies to safety practices they have already put in place.
Companies such as Google, Meta, Microsoft, OpenAI, and Anthropic already test their AI chatbots internally for whether they spread false information, encourage people to harm themselves, or exhibit racist and sexist biases. The companies have voluntarily committed to such testing, and their CEOs regularly speak about the need to develop new AI technology responsibly.
But many in the AI field see the bill’s threat of legal penalties as going too far. Under the proposed law, the attorney general could bring a civil suit against AI companies that fail to test their products if those products are then used for malicious ends.
Tech CEOs argue that the legal framework that has shielded them from liability for users’ conduct on their platforms since the early days of the web has been crucial to the internet’s growth.
Some AI experts counter that the new technology should be treated differently: companies that make chatbots, image generators, and other AI products should be regulated like the pharmaceutical or auto industries and face penalties if their tools harm people.