4 minute read | September.30.2024
TAKEAWAY: California Gov. Gavin Newsom has vetoed SB 1047 – the controversial AI regulation bill that both chambers of the California legislature passed by wide margins. The veto is a good outcome for the nascent AI industry. In his veto message, Gov. Newsom indicated that he was concerned about levying compliance burdens and stifling beneficial innovation in a way that is untethered to specific, articulated risks. California continues to evaluate appropriate guardrails for generative AI, both through a consultation with a trio of AI experts and through the wave of AI legislation Gov. Newsom signed in the last month. We anticipate that California and other states will continue to debate AI safety legislation throughout 2025.
The California legislature passed the bill, designed to regulate large-scale frontier artificial intelligence models, by a wide margin.
In his veto message, Gov. Newsom acknowledged the importance of regulating AI proactively but disagreed with the bill’s scope, which applied only to the most expensive, large-scale models without regard to risks arising from smaller AI systems deployed in high-risk environments, such as those involving critical decision-making or the use of sensitive data.
Gov. Newsom pointed out that California is home to 32 of the world’s top 50 AI companies. He raised concerns that the bill's implementation would curtail the innovation of technologies that advance the public good.
The veto highlights the ongoing challenge of balancing innovation with regulation in the rapidly evolving field of artificial intelligence. California Sen. Scott Wiener, a co-author of the bill, criticized the veto and expressed concern about leaving the industry to police itself.
The veto underscores a trend in AI legislation enacted in California that focuses on particular threats posed by AI technologies: Gov. Newsom signed 17 bills addressing generative AI technologies in the 30 days before he vetoed SB 1047.
In connection with the veto, Gov. Newsom announced a partnership with a trio of AI experts to develop an “empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks,” with an eye towards advising the legislature of their findings.
The experts include Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino (Tino) Cuéllar, President of the Carnegie Endowment for International Peace and member of the National Academy of Sciences’ Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, Dean of the College of Computing, Data Science and Society at UC Berkeley.
While the legislature has the power to override Gov. Newsom’s veto, we do not expect it to pursue that option. Instead, it is possible that Sen. Wiener will introduce a modified version of SB 1047 at the start of the 2025 legislative session in January with updates based on Gov. Newsom’s concerns and the findings of the trio of AI experts.
According to Orrick partner Jeremy Kudon, who heads the firm’s state legislative practice, 8 to 12 state legislatures are expected to consider bills that would subject AI businesses to many of the same regulations, oversight and liability contemplated in SB 1047. Speaking on a September 10 panel, Kudon predicted that many of the same states that passed data privacy legislation will be at the forefront of comprehensive AI regulation and legislation.
The veto of SB 1047 avoids imposing barriers to entry and compliance burdens that are untethered to specific, articulated risks. As Sen. Wiener noted, the debate over SB 1047 furthered the discussion of AI safety in California and more broadly and resulted in meaningful commitments and protections from AI developers. As a result of this focus, and of the commitments from Gov. Newsom, legislators and industry groups, we anticipate more debate and legislative proposals on this topic in California and other states throughout the 2025 legislative cycle.