California lawmakers to vote on AI safety bill amid opposition from tech companies


A bill aimed at preventing the risk of artificial intelligence being used for nefarious purposes such as cyber attacks or advancing biological weapons will be voted on by California state legislators this week.

California Senate Bill 1047, authored by state senator Scott Wiener, would be the first bill of its kind in the US to require AI companies that create large-scale models to test them for safety.

California lawmakers are considering dozens of AI-related bills this session. But Wiener's proposal, called the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” has drawn national attention because of vocal resistance from Silicon Valley, the epicenter of U.S. AI development. Opponents say the bill's burdensome technical requirements and potential fines would stifle innovation and undermine the country's global competitiveness.

OpenAI is the latest AI developer to voice opposition, arguing in a letter Wednesday that AI regulation should be left to the federal government and claiming companies would leave California if the proposed law passes.

The state Assembly is now set to vote on the bill, which Wiener recently revised in response to criticism from the tech industry, though he acknowledges the new language does not address every concern the industry raised.

“This is a reasonable, simple bill that would in no way impede innovation, but rather help us get ahead of the risks that exist with any powerful technology,” Wiener told reporters at a news conference Monday.

What will this bill do?

The bill would require companies that build large AI models costing more than $100 million to train to mitigate any vulnerabilities they discover through safety testing. This would include building in “full shutdown” capabilities — a way to shut down a potentially unsafe model in critical situations.

Developers would also be required to create a technical plan to address security risks and retain a copy of that plan for as long as the model is available, plus five years. Firms with large AI operations such as Google, Meta and OpenAI have already made voluntary commitments to the Biden administration to manage AI risks, but the California law would introduce legal liability and enforcement for them.

Each year, a third-party auditor would assess whether the company is complying with the law. Companies would also have to document their compliance and report any safety incidents to the California Attorney General. The Attorney General's office could impose civil fines of up to $50,000 for a first violation and an additional $100,000 for each subsequent violation.

What are the criticisms?

Much of the tech industry has criticized the proposed bill as too cumbersome. Anthropic, a prominent AI firm that describes itself as safety-focused, argued that an earlier version of the law would create complex legal constraints that would stifle AI innovation, including allowing the California attorney general to sue developers for negligence before any safety disaster had actually occurred.

OpenAI suggested that if the bill passes, companies would leave California to avoid its requirements. It also argued that AI regulation should be left to Congress to prevent a confusing patchwork of state laws.

Wiener called the idea of companies fleeing California a “tired argument,” and said the bill's provisions would still apply to businesses that provide services to Californians, even if they aren't headquartered there.

Last week, eight members of the U.S. Congress urged Governor Gavin Newsom to veto SB-1047 because it would impose liability on companies that create and use AI. Representative Nancy Pelosi also joined her colleagues in opposing the measure, calling it “well-intentioned but ill-informed.” (Wiener is eyeing the House seat of the Speaker Emerita, which could one day put him face-to-face with her daughter, Christine Pelosi, according to Politico.)

Pelosi and fellow members of Congress are siding with Stanford University computer scientist and former Google researcher Dr. Fei-Fei Li, the “Godmother of AI.” In a recent op-ed, Li said the legislation would “harm our emerging AI ecosystem,” particularly smaller developers who “are already at a disadvantage compared to today’s tech giants.”

What do the supporters say?

The bill has received support from various AI startups, Notion co-founder Simon Last, and AI “godfathers” Yoshua Bengio and Geoffrey Hinton. Bengio said the law would be “a positive and reasonable step” to make AI safer while encouraging innovation.

Supporters of the bill fear that, without adequate safeguards, uncontrolled AI could pose grave existential threats, such as heightened risks to critical infrastructure and the development of biological or nuclear weapons.

Wiener defended his “common sense, light touch” legislation, saying it would require only the largest AI companies to adopt safety measures. He also pointed to California's record of leadership on U.S. tech policy, expressing doubt that Congress will pass any substantive AI legislation in the near future.

“California has stepped up again and again to protect our residents and fill the void left by congressional inaction,” Wiener responded, citing a lack of federal action on data privacy and social media regulation.

What will happen next?

Wiener said in his latest statement on the bill that the most recent amendments take into account many of the concerns expressed by the AI industry. The current version proposes civil penalties, rather than the criminal penalties originally in the bill, for lying to the government. It also drops the proposal for a new state regulatory body to oversee AI models.

Anthropic said in a letter to Newsom that the benefits of the revised law outweigh the potential harm to the AI industry, with the main benefits being transparency with the public about AI safety and incentives for companies to invest in risk mitigation. But Anthropic remains wary of the potential for overly broad enforcement and detailed reporting requirements.

“We believe it’s important to have some framework for managing leading-edge AI systems that broadly meets these three requirements,” Anthropic CEO Dario Amodei told the governor, whether that framework is SB-1047 or not.

California lawmakers have until the end of the session, Aug. 31, to pass the bill. If it is approved, it will be sent to Governor Gavin Newsom for final approval by the end of September. The governor has not indicated whether he plans to sign the bill.
