
Why a California Senate Bill is Angering Silicon Valley Over Proposed AI Regulations

In California, a controversial bill to regulate how AI models are developed and trained is inching closer to law, and many involved in the sector aren’t happy.

California Senate Bill 1047, which has passed the state Senate and is now before the Assembly, would require AI companies working on models that cost more than $100 million to build a robust safety framework into those models.

The tech industry, many of whose businesses are based in Silicon Valley, has reportedly been debating the impact the bill would have on its work.

SB 1047 would require AI developers to include a kill switch, undertake an annual audit for safety compliance, and not produce, use, or distribute a model that is potentially dangerous.

Elon Musk, whose Grok AI platform has recently been criticized for spreading disinformation, has come out in support of the bill. 

“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” Musk said in a post to X on Monday.

The billionaire tech entrepreneur also noted he has been pushing for greater regulatory oversight, claiming he has broadly advocated for AI regulation for roughly 20 years.

Others, however, are vehemently opposed to the bill, including the company Musk co-founded, OpenAI. 

The San Francisco tech company responsible for creating the popular AI chatbot ChatGPT penned a letter to the bill’s author, Scott Wiener (D-San Francisco), last week claiming it would harm Silicon Valley’s ability to be a global leader in AI.

Andrew Ng, the former head of Google Brain, the company’s deep learning research project, also took aim at the bill in June, claiming it would “make builders of large AI models liable if someone uses their models.”

“I’m deeply concerned about California’s proposed law SB-1047,” Ng tweeted at the time. “It’s a long, complex bill with many parts that require safety assessments, shutdown capability for models, and so on.”

If the bill becomes law, AI developers would have to follow five key rules, including ensuring they can quickly shut down a model and creating a written safety and security plan. They must keep an unedited copy of that plan for as long as the model is available, plus an additional five years, and maintain records of any updates.

Starting January 1, 2026, developers would be required to hire an independent auditor annually to check compliance with the law and keep the full audit report for the same duration as the safety plan. 

Developers must also provide the Attorney General with access to the safety plan and the audit report upon request. Additionally, developers are prohibited from using or releasing a model for commercial or public use if it poses a significant risk of causing severe harm.

The bill passed a key committee in the Assembly and will be voted on by the full Assembly later this week. The Senate already passed it with strong support in May.

If the Assembly approves it, the bill will go to Governor Gavin Newsom, who will have until September 30 to decide whether to veto it or sign it into law.

Edited by Sebastian Sinclair




Source: https://decrypt.co/246739/california-senate-bill-angering-silicon-valley-over-proposed-ai-regulations
