California AI Bill Passes State Assembly, Pushing AI Fight to Newsom
A Landmark AI Regulation Gains Momentum
California has taken a bold step toward regulating artificial intelligence. On Wednesday, the State Assembly passed a groundbreaking bill that could become the nation’s strictest AI law. If it clears a final vote in the State Senate, the legislation will move to Governor Gavin Newsom’s desk, where he will decide whether to sign or veto it.
What the Bill Demands from AI Companies
The bill requires companies developing large-scale AI models to perform risk assessments before releasing their tools. These assessments must evaluate whether the systems could be misused, for example to help users build biological weapons or carry out cyberattacks.
If a company fails to conduct proper testing and its AI causes harm, the California Attorney General can sue. Notably, the bill applies only to developers of the largest, most computationally expensive models, roughly those costing more than $100 million to train, not to smaller startups.
Senate Support and Legislative Process
The bill passed the Assembly by a vote of 41 to 9. It now returns to the State Senate, where it originated, for that final vote. Lawmakers expect Governor Newsom to act soon on a measure that could make California the first U.S. state to impose major AI regulations. With Congress largely preoccupied by the 2024 presidential election, state-level action in California may effectively set national AI policy.
Deep Divisions in the AI Community
The bill has sparked sharp debate. Supporters argue that strong rules are essential to prevent dangerous outcomes. Critics warn that excessive regulation could hamper innovation or give other countries a competitive edge.
Proponents include members of the effective altruism movement, who believe AI must be tightly controlled to avert risks such as autonomous weapons or systems that escape human control. Dan Hendrycks of the Center for AI Safety helped draft the bill and testified in its favor.
Prominent researchers, including Geoffrey Hinton and Yoshua Bengio, also support the measure, as does Elon Musk, a vocal critic of AI risks who publicly backed the bill this week.
Industry Pushback and Lobbying
Despite high-profile endorsements, the bill faces intense opposition. Tech industry lobbyists have launched campaigns to sway public opinion and lawmakers. One group created a website that helps people send letters opposing the legislation.
Todd O’Boyle of the Chamber of Progress, an industry group funded in part by Amazon, Apple, and Google, said it would urge Newsom to veto the bill if it reaches his desk.
Even federal politicians have weighed in. House Speaker Emerita Nancy Pelosi called the bill “well-intentioned but ill-informed,” joining other California leaders in voicing concerns.
Tech Giants and Founders Raise Concerns
Companies like Google and Meta oppose the bill. Critics say it regulates AI technology broadly rather than focusing on specific harmful applications.
Andrew Ng, a leading AI researcher who formerly led AI efforts at Google and Baidu, posted on X (formerly Twitter) that regulating the underlying technology, rather than its applications, is misguided. Many startup founders echo his concern.
A Vacuum in Federal Regulation
Since OpenAI launched ChatGPT in November 2022, the U.S. has seen a rapid AI arms race. Executives like Sam Altman, CEO of OpenAI, have testified before Congress, calling for the creation of a federal AI oversight body.
Yet no national AI laws have passed. Frustrated by the delay, California lawmakers say they must step in. This mirrors past actions the state has taken on internet privacy and safety when federal action lagged.
SB 1047: The Safe and Secure Innovation Act
The bill, officially titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), would codify many of the AI-related provisions in President Joe Biden’s 2023 executive order. Former President Donald Trump has vowed to repeal those rules if re-elected.
Opponents argue that California is overstepping its authority, and they fear that a single state could gain outsized influence over the future of a global technology.
Wiener’s Vision: Innovation with Responsibility
State Senator Scott Wiener, the bill’s sponsor, says the goal is not to restrict innovation. Instead, the law seeks to build public trust in AI by holding companies accountable for the safety measures they claim to follow.
Companies like OpenAI, Anthropic, Microsoft, Google, and Meta already conduct internal safety testing, evaluating whether their models spread misinformation or exhibit harmful bias. Their CEOs have also publicly stressed the importance of developing AI responsibly.
Legal Accountability Raises the Stakes
Under SB 1047, companies could face lawsuits if negligent safety testing allows their models to be misused. Many AI leaders contend that the internet thrived precisely because platforms were not held liable for their users’ actions, a protection codified in Section 230 of the Communications Decency Act; others counter that AI’s risks demand stricter standards.
Some experts argue that AI should be regulated like pharmaceuticals or automobiles, industries in which causing harm carries legal consequences. On that view, chatbots, image generators, and other AI tools would warrant similar scrutiny to ensure user safety.