California Needs to Take the Lead on AI Regulation
The state that is home to Silicon Valley has previously taken the lead with first-of-its-kind legislation on net neutrality, data privacy, and children’s online safety in the absence of federal action.
The introduction of OpenAI’s ChatGPT in late 2022 was like the firing of a starter pistol, setting off a race among major tech firms to build ever-more-powerful generative AI systems. As billions of dollars in venture funding poured into artificial intelligence companies, industry giants such as Microsoft, Google, and Meta rushed to roll out new products and features.
At the same time, a growing number of AI researchers and practitioners began sounding the alarm: the field was advancing faster than anyone had predicted, and there was concern that, in their haste to capture the market, companies would release products before they were safe.
In the spring of 2023, more than a thousand academics and business executives called for a six-month pause on developing the most advanced AI systems, arguing that AI labs were racing to deploy “digital minds” that not even their creators could understand, predict, or reliably control.
They warned that the technology poses “profound risks to society and humanity.” Tech company leaders urged Congress to write rules to prevent harm.
It was in this context that Sen. Scott Wiener, a Democrat from San Francisco, began talking with industry leaders about crafting what would become Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
The measure is a crucial first step toward responsible AI development.
While state legislators filed numerous bills addressing various AI-related issues, such as preventing election misinformation and protecting artists’ creative works, Wiener took a different tack.
His bill aims to prevent catastrophic harm in the event that AI systems are misused.
Under SB 1047, developers of the most powerful AI models would have to adopt testing protocols and safeguards to prevent the technology from being used to carry out major cyberattacks, enable the development of biological weapons, bring down the electrical grid, or cause other serious harms.
Developers could face legal action from the state attorney general if they fail to take reasonable precautions against catastrophic harm. The bill would also protect whistleblowers at AI companies and establish CalCompute, a public cloud computing cluster to help researchers, universities, and startups develop AI models.
The measure is endorsed by major AI safety advocates, including some of the so-called godfathers of AI, who argued in a letter to Gov. Gavin Newsom that “this is a remarkably light-touch piece of legislation, relative to the scale of risks we are facing.”
That hasn’t stopped a flood of opposition from researchers, investors, and tech corporations who argue the measure wrongly holds model makers liable for anticipating how users might misuse their technology. They contend that the threat of liability would discourage developers from sharing their models, stifling California’s spirit of innovation.
Last week, eight members of Congress from California wrote to Newsom urging him to veto SB 1047 if the Legislature approves it. They argued that the bill is premature, with a “misplaced emphasis on hypothetical risks,” and that lawmakers should instead focus on regulating uses of AI that are already causing harm, such as deepfakes in election ads and revenge porn.
There are good bills that specifically and immediately address AI misuse. But their existence doesn’t lessen the obligation to anticipate potential harms and work to prevent them, especially when industry experts themselves have called for action. SB 1047 has forced legislators and the tech industry to grapple with hard questions:
When is the right time to regulate an emerging technology?
What is the right balance between supporting innovation and protecting the public, who must live with the technology’s consequences? And once the technology is released, can the genie be put back in the bottle?
There are risks to sitting on the sidelines too long. Lawmakers are still playing catch-up on protecting user privacy and limiting harms on social media. And Big Tech executives have publicly professed support for regulating their products while actively lobbying to kill specific proposals.
Ideally, the federal government would take the lead on AI regulation, avoiding a patchwork of state laws. But Congress has been unable, or unwilling, to rein in Big Tech; legislation to safeguard data privacy and reduce online harms to children has been in the works for years, with little progress to show for it.
In the absence of congressional action, California, the birthplace of Silicon Valley, has stepped up before, enacting first-of-its-kind laws on net neutrality, data privacy, and children’s online safety. AI should be no different, especially since House Republicans have already declared that they will oppose any new AI regulations.
By passing SB 1047, California can pressure the federal government to set standards and rules that could preempt state laws. In the meantime, the bill can serve as a crucial safety net.