17 minute read | February 18, 2025
The rapid rise of artificial intelligence (AI) across numerous sectors of the economy has garnered the attention of state attorneys general (AGs). As various industries increasingly adopt AI to streamline or enhance their operations, state AGs from both parties are focusing on how AI affects consumers. The growing interest in AI among state AGs is evident in the slate of AI-focused panels at nearly every attorney general conference over the past year.[1]
State AGs are focusing on four primary concerns related to AI:
As the Trump administration plans to roll back the Biden administration’s artificial intelligence regulatory framework, state AGs will likely fill the void by using existing consumer protection or Unfair or Deceptive Acts or Practices (UDAP) authority to bring enforcement actions. States are already passing AI legislation, which often includes AG enforcement authority.
California Attorney General Rob Bonta and New Jersey Attorney General Matt Platkin offer two recent examples of state AGs using their existing statutory authority to regulate AI. Both recently issued legal advisories providing guidance on the application of existing state laws to AI. We summarize the key takeaways from these advisories below.
In the first of two advisories, Attorney General Bonta notes the “great potential” for AI to produce greater economic growth and scientific breakthroughs but also the potential risks associated with AI, such as exacerbating bias, discrimination, and fraud.
According to the advisory, the exponential proliferation of AI systems appears in “nearly all aspects of everyday life,” including consumer credit risk evaluations, tenant screenings, and targeted advertising. Given the rapid acceleration of AI, Attorney General Bonta cautions that AI developers and entities that use AI systems must comply with California law, including regulations that safeguard consumers from fraud, anticompetitive harm, discrimination, bias, and abuse of their data.
Under California law, the AG, local prosecutors, and plaintiffs’ attorneys have authority to enforce existing statutes in most instances. Below is an analysis of the AG’s legal advisory focusing on existing California laws that may be used to regulate AI and protect consumers in certain contexts.
California’s Competition Laws
According to Attorney General Bonta’s guidance, AI developers and users should be aware of any risks to fair competition created by AI systems. Inadvertent harm to competition resulting from AI systems may violate one or more of California’s competition laws. Anticompetitive actions by dominant AI companies may also harm competition in AI markets and violate both state and federal competition laws, such as:
California’s Unfair Competition Law (UCL) was written with broad, sweeping language to protect consumers from “obvious and familiar forms of fraud and deception as well as new, creative, and cutting-edge forms of unlawful, unfair, and misleading behavior.”[4]
Attorney General Bonta cites the following examples of how the use of AI could run afoul of the California UCL:
California’s False Advertising Law
California’s False Advertising Law protects California consumers against deceptive advertising.[11] According to Attorney General Bonta, this includes false advertising regarding the capabilities, availability, and utility of AI products, the use of AI in connection with a good or service, as well as false advertising regarding any topic, whether it is generated by AI or not.
California’s Civil Rights Laws
The AG notes that his office has “seen AI systems incorporate societal and other biases into their decision-making,” citing an investigation into racial and ethnic bias in healthcare algorithms. California has several protections against bias and discrimination, including:
Attorney General Bonta further points to laws that require entities taking “adverse action” against consumers to provide specific reasons for those adverse actions, including when AI was used to make the determination. For example:
California Consumer Privacy Act
Attorney General Bonta’s advisory states that the California Consumer Privacy Act (CCPA) provides consumers further protection from AI by regulating the collection, use, sale, and sharing of their personal information. Personal information may also include inferences about consumers made by AI systems.[17]
Thus, according to the advisory, “AI developers and users that collect and use Californians’ personal information must comply with CCPA’s protections for consumers, including by ensuring that their collection, use, retention, and sharing of consumer personal information is reasonably necessary and proportionate to achieve the purposes for which the personal information was collected and processed.”[18]
Additionally, new California legislation signed into law in September 2024 “confirms that the protections for personal information in the CCPA apply to personal information in AI systems that are capable of outputting personal information.”[19] A second bill expands the definition of “sensitive personal information” to include “neural data,”[20] the information generated by measuring the activity of a consumer’s central or peripheral nervous system. Attorney General Bonta further warns that “AI developers and users should also be aware that using personal information for research” is “subject to several requirements and limitations.”[21]
California Invasion of Privacy Act
According to Attorney General Bonta, the California Invasion of Privacy Act (CIPA) “may also impact AI training data, inputs, or outputs.”
CIPA restricts recording or listening to private electronic communication, including wiretapping, eavesdropping on or recording communications without the consent of all parties, and recording or intercepting cellular communications without the consent of all parties.[22]
CIPA also prohibits the use of systems that examine or record voice prints to determine the truth or falsity of statements without consent.[23] Thus, according to the advisory, “developers and users should ensure that their AI systems, or any data used by the system, do not violate CIPA.”
Student Online Personal Information Protection Act
California’s Student Online Personal Information Protection Act (SOPIPA) broadly “prohibits education technology service providers from selling student data, engaging in targeted advertising using student data, and amassing profiles about students, except for specified school purposes.”[24]
SOPIPA applies to services and apps used primarily for “K-12 school purposes.” This includes services and apps for home or remote instruction, as well as those intended for use at a public or private school.
Thus, according to Attorney General Bonta’s guidance, “developers and users should ensure any educational AI systems comply with SOPIPA, even if they are marketed directly to consumers.”
Attorney General Bonta issued a second legal advisory providing guidance to “healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use” AI and “other automated decision systems” about their obligations under California law.
The advisory notes that AI systems are already guiding medical diagnosis and treatment decisions and have the potential to “improve patient and population health, increase health equity, reduce administrative burdens, and facilitate appropriate information sharing.”
However, the advisory further warns of AI “risks causing discrimination, denials of needed care and other misallocations of healthcare resources, and interference with patient autonomy and privacy.” Thus, according to Attorney General Bonta, healthcare-related entities that develop, sell, or use AI systems must “ensure that their systems comply with laws protecting consumers.” This includes “understanding how AI systems are trained, what information the systems consider, and how the systems generate output.”
The legal advisory further calls on developers, researchers, providers, and insurers to test and validate AI systems “to ensure that their use is safe, ethical, and lawful, and reduces, rather than replicates or exaggerates, human error and biases.”
According to the advisory, it may be unlawful in California to:
Attorney General Bonta explains the legal advisory applies only to existing California law and does not encompass all possible federal laws and regulations. Below is a summary of existing California laws that apply to AI in the healthcare space.
California’s Patient Privacy Laws
The Confidentiality of Medical Information Act (CMIA) governs the use and disclosure of Californians’ medical information and applies to businesses that offer software or hardware to consumers for the purposes of managing medical information or for diagnosis, treatment, or management of medical conditions, including mobile applications or other related devices.[25]
According to Attorney General Bonta, the rise of mental health and reproductive apps led to recent amendments to clarify that digital services for mental health and reproductive or sexual health, such as apps and websites, are subject to the requirements of CMIA, as are any AI systems used in direct-to-consumer healthcare services.
California’s Competition Laws
As noted above, California’s Unfair Competition Law protects the state’s consumers against unlawful, unfair, or fraudulent business acts or practices, including business practices used in the practice of medicine.[26]
According to the legal advisory, using AI or other automated decision tools to make decisions about patients’ medical treatment or to override licensed care providers’ determinations regarding a patient’s medical needs may violate California’s ban on the practice of medicine by corporations and other “artificial legal entities,”[27] in addition to constituting an “unlawful” or “unfair” business practice under the Unfair Competition Law.
Thus, Attorney General Bonta notes that the scope of the Unfair Competition Law incorporates numerous other California laws that may apply to AI in a variety of healthcare contexts, such as the protections against false advertising and anticompetitive practices described above. Practices that deceive or harm consumers fall squarely within the purview of the Unfair Competition Law, and traditional consumer legal protections apply equally in the AI context. This includes creating, marketing, or disseminating an AI system that does not comply with civil rights, privacy, false advertising, competition, and other laws.
California’s Consumer Protection Laws
The advisory further highlights recent amendments to the Knox-Keene Act and California Insurance Code that limit health care service plans’ ability to use AI or other automated decision systems to deny coverage.[28] Thus, Attorney General Bonta warns that when employed for utilization review or management purposes, a plan cannot use these types of tools to “deny, delay, or modify health care services based, in whole or in part, on medical necessity.”[29]
Instead, the AG notes that plans must ensure that AI and other software:
California’s Anti-Discrimination Laws
The advisory further provides that California law prohibits discrimination by any entity or individual receiving “any state support,” including an “entity principally engaged in the business of providing […] health care.”[31]
According to Attorney General Bonta, these rules prohibit the types of discriminatory practices likely to be caused by AI, including disparate impact discrimination (also known as “discriminatory effect” or “adverse impact”) and denial of full and equal access.[32]
For example, an AI system that makes less accurate predictions about demographic groups of people who have historically faced barriers to healthcare (and whose information may be underrepresented in large datasets), though facially neutral, may have a disproportionate negative impact on members of protected groups.[33] The advisory further explains that protected classifications under California Gov. Code § 11135 may be intersectional and overlap with socioeconomic marginalization. Therefore, “even if such models are applied to all patients regardless of race, they may still cause disparate impact discrimination because ‘identical treatment may be discriminatory.’”[34]
Furthermore, a “disparate impact is permissible only if the covered entity can show that the AI system’s use is necessary for achieving a compelling, legitimate, and nondiscriminatory purpose, and supported by evidence that is not hypothetical or speculative.”[35]
The use of AI in healthcare is subject to additional state laws prohibiting discrimination against healthcare consumers in various settings, such as:
Developers, vendors, and users should take proactive steps when designing, acquiring, and implementing health AI to ensure that these systems do not have a discriminatory impact.
California’s Patient Privacy and Autonomy Laws
Vast quantities of patient data underlie the massive growth in the health AI sector. Data is used to build and train AI and render decisions impacting health services. Developers and entities that use AI in healthcare should carefully monitor training data, inputs, and outputs to ensure respect for Californians’ rights to medical privacy.
The same week Attorney General Bonta issued his legal advisories, New Jersey Attorney General Platkin published guidance addressing algorithmic discrimination, noting that New Jersey’s existing Law Against Discrimination (LAD) applies to it in the same manner that it applies to other forms of discriminatory conduct.
New Jersey’s Law Against Discrimination (LAD)
According to the advisory, employers, housing providers, places of public accommodation, and other entities covered by the New Jersey LAD have started using automated tools to make decisions that affect consumers. Specifically, the LAD “prohibits algorithmic discrimination” in employment, housing, places of public accommodation, credit, and contracting on the basis of actual or perceived race, religion, color, national origin, sexual orientation, sex, disability, and other protected characteristics. If these tools are not designed and deployed responsibly, “they can result in algorithmic discrimination.” According to Attorney General Platkin, the LAD “prohibits all forms of discrimination, irrespective of whether discriminatory conduct is facilitated by automated decision-making tools or driven purely by human practices.”
The guidance provides examples of how “automated decision-making” tools can “find and leverage correlations in the datasets they analyze in ways that may contribute to or amplify discriminatory outcomes.” Furthermore, Attorney General Platkin explains that a covered entity or business can violate the LAD “even if it has no intent to discriminate, and even if a third-party was responsible for developing the automated decision-making tool.”
Attorney General Platkin concludes the memo by providing the following advice to businesses operating in New Jersey:
“It is critical that employers, housing providers, places of public accommodation, and other covered entities—as well as developers and vendors of automated decision-making tools used by these entities—carefully consider and evaluate the design and testing of automated decision-making tools before they are deployed, and carefully analyze and evaluate those tools on an ongoing basis after they are deployed.”
Creation of Civil Rights and Technology Initiative
The same day Attorney General Platkin released the AI guidance, his office announced the launch of a new Civil Rights and Technology Initiative to address the risks of discrimination and bias-based harassment stemming from the use of AI and other advanced technologies.
According to the announcement, the Civil Rights and Technology Initiative “will identify and develop technology to enhance the [Division of Civil Rights’] enforcement, outreach, and public education work, and will develop protocols to facilitate the responsible deployment of this technology.”
The Initiative will also establish the Civil Rights Innovation Lab, which will develop “technology and tools to enhance … enforcement, outreach, and public education work across the State.” This includes “improving the complaint process, aiding in investigations and enforcement work, and updating internal policies and procedures to help the Division better serve all New Jerseyans.”
Some have likened the rise of artificial intelligence to a Fourth Industrial Revolution. AI is rapidly evolving and accelerating. While federal and state governments work to keep pace, state AGs have broad authority under existing consumer protection and data privacy laws to address potential consumer harm caused by AI. State AGs are increasingly concerned with the rise of AI and how it affects their consumers, and they will continue to use their existing powers to regulate AI.
[1] National Association of Attorneys General, “Capital Forum Event Explores the Brave New World of Artificial Intelligence,” Jan. 29, 2024.
[2] Cal. Bus. & Prof. Code, § 16720.
[3] Cal. Bus. & Prof. Code, § 17000 et seq.
[4] People ex rel. Mosk v. Nat’l Research Co., 201 Cal. App.2d 765, 772 (1962).
[5] Citing Bus. & Prof. Code, § 17500 et seq.; Civ. Code, § 1770 [The Consumer Legal Remedies Act].
[6] Citing Civ. Code, §§ 3344, 3344.1; see also Civ. Code, § 1708.86 (prohibiting the creation and disclosure of sexually explicit material without the depicted person’s consent).
[7] Cal. Pen. Code, § 528.5.
[8] Cal. Pen. Code, § 530; see also Cal. Pen. Code, § 529 (false personation of another in private or official capacity while doing specified acts).
[9] See Pen. Code, § 538d (impersonating a peace officer); Pen. Code, § 146a (impersonating a state officer while committing specified acts); Pen. Code, § 538f (impersonating a public utility officer); Pen. Code, § 538g (impersonating a state/county/city/special district/city or county officer or employee).
[10] See, e.g., People v. Toomey, 157 Cal.App.3d 1, 15 (1984) (liability under section 17200 can be imposed for aiding and abetting).
[11] Bus. & Prof. Code, § 17500 et seq.
[12] Cal. Civ. Code, § 51.
[13] Gov. Code, § 12900 et seq.
[14] See Raines v. U.S. Healthworks Medical Grp., 15 Cal.5th 268, 291 (2023).
[15] Cal. Code of Regs., tit. 2, § 14027.
[16] See 15 U.S.C. § 1681 et seq.; 15 U.S.C. § 1691 et seq.; Civ. Code, § 1785.1 et seq.
[17] See Cal. Civ. Code, § 1798.140(v).
[18] Id. § 1798.100.
[19] Cal. Civ. Code, § 1798.140, added by AB 1008, Stats. 2024, ch. 804.
[20] Cal. Civ. Code, § 1798.140, added by SB 1223, Stats. 2024, ch. 887.
[21] Id. § 1798.140(ab).
[22] Cal. Pen. Code, § 630 et seq.
[23] Id. § 637.3.
[24] Cal. Bus. & Prof. Code, § 22584 et seq.
[25] Civ. Code, § 56 et seq.
[26] Bus. & Prof. Code, § 17200 et seq.
[27] Bus. & Prof. Code, § 2400 et seq.
[28] See Sen. Bill No. 1120 (2023-2024).
[29] Health & Saf. Code, § 1367.01, subd. (k)(1); Ins. Code, § 10123.135, subd. (j)(2).
[30] Health & Saf. Code, § 1367.01, subd. (k)(1) (A-K); Ins. Code, § 10123.135, subd. (j)(1) (A-K).
[31] Gov. Code, § 11135; Cal. Code Regs., tit. 2, § 14020, subd. (m)(6)(B); see also id. at (ii) [covered programs or activities include provision of health services].
[32] Cal. Code Regs., tit. 2, § 14027, subd. (b)(3).
[33] Citing Ziad Obermeyer et al., Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, Science 366, 447-453 (Oct. 25, 2019) (finding that a model intended to predict patient risk disproportionately underestimated the needs of Black patients in comparison to white patients).
[34] Id. § 14025, subd. (a)(3).
[35] Id. § 14029, subd. (c)(1)-(2).
[36] Cal. Civ. Code, § 51, subd. (b); Ins. Code § 1861.03 (applying Unruh Act to insurance).
[37] Cal. Health & Saf. Code, § 1317.3, subd. (b).
[38] Cal. Gov. Code, § 12900 et seq.