Artificial intelligence: why reject the AI law in California?
Big Tech says it wants artificial intelligence regulated. So why does it reject the AI law in California? Explained.
California's SB 1047 seeks to rein in AI through kill switches and safety testing. Lawmakers and tech companies oppose it over concerns about its impact on open-source models and AI development. We take a closer look.
Despite strong opposition from many tech firms, California lawmakers are scheduled to vote as early as this week on a bill that would broadly regulate how artificial intelligence is developed and used in the state.
The following provides background on the measure, known as SB 1047, and explains why some lawmakers and Silicon Valley figures have opposed it:
What Does the Bill Do?
The bill, introduced by Democratic state senator Scott Wiener, would require safety testing for many of the most advanced AI models, those that cost more than $100 million to develop or that require a defined amount of computing power. Developers of AI software operating in the state would also have to outline a way to turn off the AI models, essentially a kill switch, in the event that they malfunction.
If developers do not comply, the state attorney general would have the authority to sue, which would be especially relevant in the event of an ongoing threat, such as AI taking over government systems like the electric grid.
The measure would also require developers to hire independent auditors to evaluate their safety practices and provide additional protections to whistleblowers who report AI abuses.
What Have Lawmakers Said?
SB 1047 has already cleared the state Senate by a vote of 32-1. Last week it was approved by the state Assembly's appropriations committee, clearing the way for a vote by the full Assembly. If it passes by the end of the legislative session on August 31, it will go to Governor Gavin Newsom to sign or veto by September 30.
Wiener, whose district covers San Francisco, home to OpenAI and numerous startups developing the powerful software, has said regulation is needed to protect the public before AI advances become unmanageable or uncontrollable.
Nonetheless, several California congressional Democrats oppose the bill, including Nancy Pelosi of San Francisco, Ro Khanna, whose congressional district covers much of Silicon Valley, and Zoe Lofgren of San Jose.
This week, Pelosi called SB 1047 ill-informed and suggested it would do more harm than good. In an open letter last week, the Democrats argued the measure could drive developers out of the state and jeopardize "open-source" AI models, which rely on publicly available code that anyone can use or modify.
What Do Tech Industry Leaders Say?
Tech companies developing AI, which can respond to prompts with fully formed text, images, or audio and perform repetitive tasks with minimal human intervention, have generally called for stronger regulation of its use.
Among other concerns, they have cited the possibility that the software could one day evade human control and cause cyberattacks. Even so, they have largely objected to SB 1047.
Wiener amended the bill to appease tech companies, partly based on suggestions from Anthropic, the AI startup backed by Amazon and Alphabet. Among other changes, he dropped the creation of a government AI oversight committee.
Wiener also eliminated criminal penalties for perjury, though civil lawsuits remain permitted.
Alphabet's Google and Meta have both written to Wiener expressing concerns. Meta said the measure poses risks to the state's ability to foster the development and deployment of AI. Yann LeCun, chief AI scientist at Facebook parent company Meta, described the law in a July post on X as potentially detrimental to research efforts.
OpenAI has said AI should be regulated by the federal government and that SB 1047 creates an uncertain legal environment. OpenAI's ChatGPT is widely credited with sparking the frenzy over AI after its release in late 2022.
In a letter to Wiener, OpenAI said it opposes SB 1047 because the bill could drive engineers and entrepreneurs out of the state and threaten the development of AI.
Whether the bill would apply to open-source AI models is a particular concern. Many technologists believe open-source models are important for building less risky AI applications more quickly, but Meta and others worry they could be held liable for policing open-source models if the law passes.
Wiener says he supports open-source models, and subsequent amendments to the bill have raised the threshold that determines whether open-source models are covered.
The bill also has supporters in the tech sector. Geoffrey Hinton, widely regarded as a "godfather of AI," and fellow AI researcher Yoshua Bengio both back the measure.
Disclaimer: This article was published from a wire agency feed; edits have been made only to its language.