6 minute read | July.19.2024
On May 21, 2024, the California Senate passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047), and referred the bill to the State Assembly. Though the bill has already been amended in the Assembly and may be subject to further amendment, SB-1047 would impose substantial compliance obligations on entities involved in training the most powerful and well-funded artificial intelligence models.
The bill seeks to regulate the developers of the most powerful AI models ("covered models"), impose duties on operators of large computing clusters, protect employees who report AI safety concerns, and establish a new state regulator, the Frontier Model Division, to oversee compliance.
The bill defines a “covered model” as an AI model “trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.” The definition also encompasses covered models fine-tuned using computing power equal to or greater than three times 10^25 integer or floating-point operations.
A floating-point operation is an arithmetic operation performed on floating-point numbers, i.e., numbers represented with a fractional component rather than as integers. Floating-point operations per second, or FLOPS, is a unit of measurement used to quantify the computing power of a computer or a processor; the bill's training thresholds, by contrast, are expressed as a total number of operations performed, not as a rate.
The 10^26-operation compute threshold for covered models is the same threshold used to impose reporting requirements under the Biden Administration's executive order on AI.
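To make the arithmetic concrete, the sketch below checks a hypothetical training run against the numeric thresholds quoted above. It is purely illustrative: the variable names, the example figures, and the reading of the quoted definition as requiring both the compute and cost thresholds to be exceeded are assumptions for demonstration, not language from the bill.

```python
# Illustrative sketch of SB-1047's numeric thresholds. All names and example
# figures are assumptions for demonstration only, not text from the bill.

TRAINING_FLOP_THRESHOLD = 1e26             # more than 10^26 integer or floating-point operations
TRAINING_COST_THRESHOLD_USD = 100_000_000  # compute cost exceeding $100 million
FINE_TUNE_FLOP_THRESHOLD = 3e25            # at least 3 x 10^25 operations to fine-tune a covered model


def is_covered_model(training_flop: float, compute_cost_usd: float) -> bool:
    """Reads the quoted definition as requiring both thresholds to be exceeded."""
    return training_flop > TRAINING_FLOP_THRESHOLD and compute_cost_usd > TRAINING_COST_THRESHOLD_USD


def is_covered_fine_tune(fine_tune_flop: float) -> bool:
    """Fine-tuning of an existing covered model at or above 3 x 10^25 operations."""
    return fine_tune_flop >= FINE_TUNE_FLOP_THRESHOLD


# Hypothetical run: 2 x 10^26 operations at an estimated $150 million of cloud compute.
print(is_covered_model(2e26, 150_000_000))  # True
print(is_covered_fine_tune(1e25))           # False (below the 3 x 10^25 fine-tuning threshold)
```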
The bill also seeks to regulate “covered model derivatives,” which include copies of covered models, modified versions of covered models, and covered models combined with other software.
The bill would create a new state agency, the Frontier Model Division, within the California Government Operations Agency. Beginning in 2027, the bill would authorize the Frontier Model Division to raise or lower the thresholds for regulation.
The bill defines a “developer” as an individual or entity that performs the initial training of a covered model or fine-tunes a covered model using the levels of computing power described above.
Before training a covered model, a developer would have to implement:
Before using a covered model or covered model derivative—or making one available for commercial or public use—a developer would have to:
No more than 30 days after first making a covered model or covered model derivative available for commercial or public use, a developer would have to submit a certificate of compliance to the Frontier Model Division under penalty of perjury.
The bill directs the Frontier Model Division to issue guidance, standards and best practices on these controls and the other duties imposed on developers.
The bill defines a “computing cluster” as “a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training artificial intelligence.”
The bill would require operators of a computing cluster to:
SB-1047 would require developers, contractors and subcontractors to allow employees to disclose information to the California Attorney General or Labor Commissioner if an employee has reasonable cause to believe an artificial intelligence model poses an unreasonable risk of enabling a critical harm.
The bill does not limit these protections to situations involving covered models; employees may raise such concerns about any kind of artificial intelligence model. It is unclear, however, whether the protections apply only to information about models created by the developer or extend to information about any model the developer uses.
The bill also requires developers, contractors and subcontractors to:
The bill empowers the Labor Commissioner to enforce these rules.
SB-1047 would create the Frontier Model Division to oversee compliance, issue guidance, advise state officials on certain aspects of AI, and annually revise the thresholds for what constitutes a covered model. The Frontier Model Division would be overseen by a new Board of Frontier Models composed of representatives from the open-source community, the AI industry, academia, and government.
The bill would also empower the Attorney General or Labor Commissioner to seek civil penalties and monetary damages and, in special cases, orders requiring the modification, shutdown, or deletion of covered models or covered model derivatives controlled by developers.
The definition of covered models under SB-1047 sets a high threshold for regulation. To date, no publicly disclosed models surpass the compute threshold necessary to satisfy the definition of a covered model. Accordingly, the bill contemplates regulating only those entities involved with the training or fine-tuning of the most powerful and well-funded AI models. While existing laws and emerging legislation may address other aspects of the development, internal use, and deployment of AI models, SB-1047 is unlikely to impact most businesses developing or using AI.
Nonetheless, businesses should monitor the bill as it makes its way through the legislative process, as the regulatory thresholds are subject to change. Moreover, should the bill as written become law, the Frontier Model Division would be required to revisit the definition of covered models annually. Consequently, updated definitions may expand the applicability of the bill.
Even if SB-1047 fails to become law, California is often a first mover in the realm of technology regulation and has proposed over 140 bills relating to AI in the current legislative session. Businesses should monitor whether California or other states introduce bills like SB-1047, as they may set lower applicability thresholds and thereby regulate more broadly.
If you have questions about this development, please contact Shannon Yavorsky, Jeremy Kudon, Matthew Coleman, Peter Graham or other members of the Orrick team.