California Looks to Regulate Cutting-Edge Frontier AI Models: 5 Things to Know About SB-1047


6 minute read | July 19, 2024

On May 21, 2024, the California Senate passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047), and referred the bill to the State Assembly. Though the bill has already been amended in the Assembly and may be subject to further amendment, SB-1047 would impose substantial compliance obligations on entities involved in training the most powerful and well-funded artificial intelligence models.

The bill seeks to:

  1. Regulate AI based on the amount of money and computing power used to train a model.
  2. Impose an array of compliance obligations on developers of covered models.
  3. Regulate operators of computing clusters used to train a covered model.
  4. Grant expansive labor protections for employees of developers, contractors and subcontractors.
  5. Confer oversight powers to new and existing regulators.


In More Detail

1. The bill would regulate AI models based on the amount of money and computing power used to train the model.

The bill defines a “covered model” as an AI model “trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.” The definition also encompasses covered models fine-tuned using computing power equal to or greater than three times 10^25 integer or floating-point operations.

A floating-point operation is any mathematical operation performed on floating-point numbers, i.e., numbers represented with a fractional component rather than as integers. Note that the bill’s thresholds count the total number of operations performed during training, whereas floating-point operations per second, or FLOPS, is a rate used to quantify the computing power of a computer or a processor.

The compute threshold for models trained using a computing power greater than 10^26 integer or floating-point operations is the same threshold used to impose reporting requirements under the Biden Administration’s executive order on AI.
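For a sense of scale, total training compute can be approximated as accelerator count × per-accelerator throughput × utilization × training time. The sketch below illustrates this arithmetic against the bill's thresholds; the hardware figures (accelerator count, peak throughput, utilization, duration) are hypothetical assumptions for illustration, not values drawn from the bill.

```python
# Illustrative back-of-the-envelope check against SB-1047's compute thresholds.
# Hardware figures used below are hypothetical, not from the bill.

COVERED_MODEL_THRESHOLD = 1e26  # total integer or floating-point operations
FINE_TUNE_THRESHOLD = 3e25      # threshold for fine-tuning a covered model

def total_training_operations(num_accelerators: int,
                              peak_flops_per_accelerator: float,
                              utilization: float,
                              training_days: float) -> float:
    """Estimate total operations as rate x time.

    peak_flops_per_accelerator is a per-second rate (FLOPS); multiplying
    by the training duration in seconds yields the total operation count,
    which is the quantity the bill's thresholds are written in.
    """
    seconds = training_days * 24 * 60 * 60
    return num_accelerators * peak_flops_per_accelerator * utilization * seconds

# Hypothetical run: 10,000 accelerators at 1e15 FLOPS each,
# 40% utilization, for 90 days.
ops = total_training_operations(10_000, 1e15, 0.40, 90)
print(f"{ops:.2e} total operations")
print("exceeds covered-model threshold:", ops > COVERED_MODEL_THRESHOLD)
print("exceeds fine-tuning threshold:", ops > FINE_TUNE_THRESHOLD)
```

On these assumed figures the run lands at roughly 3.1 × 10^25 operations, above the fine-tuning threshold but below the covered-model threshold, which illustrates how far beyond typical training runs the 10^26 bar sits.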

The bill also seeks to regulate “covered model derivatives,” which include copies of covered models, modified versions of covered models, and covered models combined with other software.

The bill would create a state agency called the Frontier Model Division, a new division of the California Government Operations Agency. Beginning in 2027, the bill would grant the Frontier Model Division the ability to change the thresholds for regulation either up or down.

2. Developers of covered models would face an array of compliance obligations.

The bill defines a “developer” as an individual or entity that performs the initial training of a covered model or fine-tunes a covered model using the levels of computing power described above.

Before training a covered model, a developer would have to implement:

  • Administrative, technical and physical cybersecurity protections appropriate for the risks associated with the covered model.
  • The capability to promptly enact a full shutdown of the AI.
  • A comprehensive written safety and security protocol that the developer would also have to provide to the state.

Before using a covered model or covered model derivative—or making one available for commercial or public use—a developer would have to:

  • Assess whether the model is reasonably capable of causing or enabling critical harm.
  • Implement reasonable safeguards to prevent the model and its derivatives from causing or enabling critical harm.
  • Ensure, to the extent reasonably possible, that the covered model’s actions and the actions of covered model derivatives, as well as critical harms resulting from their actions, can be accurately and reliably attributed to them.

No more than 30 days after first making a covered model or covered model derivative available for commercial or public use, a developer would have to submit a certificate of compliance to the Frontier Model Division under penalty of perjury.

The bill directs the Frontier Model Division to issue guidance, standards and best practices on these controls and the other duties imposed on developers.

3. SB-1047 would regulate operators of computing clusters used to train a covered model.

The bill defines a “computing cluster” as “a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training artificial intelligence.”
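Note that this definition turns on a per-second rate (capacity), unlike the training thresholds above, which count total operations. A minimal sketch of the capacity arithmetic, using hypothetical accelerator figures not drawn from the bill:

```python
# Illustrative check against SB-1047's computing-cluster capacity threshold.
# Accelerator specs below are hypothetical assumptions, not from the bill.

CLUSTER_THRESHOLD_FLOPS = 1e20  # theoretical maximum operations per second

def cluster_peak_flops(num_accelerators: int, peak_flops_each: float) -> float:
    """Theoretical maximum capacity is a per-second rate (FLOPS),
    not a total operation count."""
    return num_accelerators * peak_flops_each

# Hypothetical: 100,000 accelerators at 1e15 FLOPS each -> 1e20 FLOPS total.
capacity = cluster_peak_flops(100_000, 1e15)
print("regulated as a computing cluster:", capacity >= CLUSTER_THRESHOLD_FLOPS)
```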

The bill would require operators of a computing cluster to:

  • Obtain a prospective customer’s identifying information and business purpose for using the computing cluster.
  • Assess whether a prospective customer intends to use the computing cluster to train a covered model.
  • Retain records of customers’ IP addresses used for access and administrative action, as well as the date and time of each access or administrative action.
  • Implement the capability to promptly enact a full shutdown of any resources used to train or operate models under customers’ control.

4. SB-1047 would grant expansive labor protections for employees of developers, contractors and subcontractors.

SB-1047 would require developers, contractors and subcontractors to allow employees to disclose information to the California Attorney General or Labor Commissioner if an employee has reasonable cause to believe an artificial intelligence model poses an unreasonable risk of enabling a critical harm.

The bill does not limit these protections to situations involving covered models; employees may raise such concerns about any kind of artificial intelligence model. It is also unclear whether these protections would apply only to information about models created by the developer or extend to information about any model the developer uses.

The bill also requires developers, contractors and subcontractors to:

  • Implement internal processes through which an employee may anonymously disclose information to the developer if the employee believes in good faith that the information indicates that the developer has violated any law, made false or materially misleading statements related to its safety and security protocol, or failed to disclose known risks to employees.
  • Update disclosing employees on the status of the employee’s disclosure and the actions taken in response.
  • Maintain records of any disclosures and responses for seven years.
  • Provide clear notice to all employees working on covered models of their rights under SB-1047.

The bill empowers the Labor Commissioner to enforce these rules.

5. Regulators would receive new oversight powers.

SB-1047 would create the Frontier Model Division to oversee compliance, issue guidance, advise state officials on certain aspects of AI, and annually revise the thresholds for what constitutes a covered model. The Frontier Model Division would be overseen by a new Board of Frontier Models composed of representatives from the open-source community, the AI industry, academia, and government.

The bill would also empower the Attorney General or Labor Commissioner to seek civil penalties and monetary damages and, in special cases, to order the modification, shutdown, or deletion of covered models or covered model derivatives controlled by developers.

What does this mean for your business?

The definition of covered models under SB-1047 sets a high threshold for regulation. To date, no publicly disclosed models surpass the compute threshold necessary to satisfy the definition of a covered model. Accordingly, the bill contemplates regulating only those entities involved with the training or fine-tuning of the most powerful and well-funded AI models. While existing laws and emerging legislation may address other aspects of the development, internal use, and deployment of AI models, SB-1047 is unlikely to impact most businesses developing or using AI.

Nonetheless, businesses should monitor the bill as it makes its way through the legislative process, as the regulatory thresholds are subject to change. Moreover, should the bill as written become law, the Frontier Model Division would be required to revisit the definition of covered models annually. Consequently, updated definitions may expand the applicability of the bill.

Even if SB-1047 fails to become law, California is often a first mover in the realm of technology regulation and has proposed over 140 bills relating to AI in the current legislative session. Businesses should monitor whether California or other states introduce bills like SB-1047, as they may set lower applicability thresholds and thereby regulate more broadly.

If you have questions about this development, please contact Shannon Yavorsky, Jeremy Kudon, Matthew Coleman, Peter Graham or other members of the Orrick team.