State Attorneys General on Applying Existing State Laws to AI


17-minute read | February 18, 2025

The rapid rise of artificial intelligence (AI) across numerous sectors of the economy has garnered the attention of state attorneys general (AGs). As various industries increasingly adopt AI to streamline or enhance their operations, state AGs from both parties are focusing on how AI affects consumers. The growing interest in AI among state AGs is evident in the slate of AI-focused panels at nearly every attorney general conference over the past year.[1]

State AGs are focusing on four primary concerns related to AI:

  1. Algorithmic bias or disparate impacts that can perpetuate discrimination.
  2. Data sources being used to train AI.
  3. Potential consumer harm caused by misleading information generated by AI.
  4. Exploitation of children through child sexual abuse material (CSAM).

As the Trump administration plans to roll back the Biden administration’s artificial intelligence regulatory framework, state AGs will likely fill in the void by using existing consumer protection or Unfair or Deceptive Acts or Practices (UDAP) authority to bring enforcement actions. States are already passing AI legislation, which often includes AG enforcement authority.

Two state AGs recently illustrated how existing statutory authority may be applied to AI: California Attorney General Rob Bonta and New Jersey Attorney General Matt Platkin each issued legal advisories providing guidance on the application of existing state laws to AI. We summarize the key takeaways from these advisories below.

Applying Existing California Law to AI Systems and Their Use

In the first of two advisories, Attorney General Bonta notes the “great potential” for AI to produce greater economic growth and scientific breakthroughs but also the potential risks associated with AI, such as exacerbating bias, discrimination, and fraud.

According to the advisory, the exponential proliferation of AI systems appears in “nearly all aspects of everyday life,” including consumer credit risk evaluations, tenant screenings, and targeted advertising. Given the rapid acceleration of AI, Attorney General Bonta cautions that AI developers and entities that use AI systems must comply with California law, including regulations that safeguard consumers from fraud, anticompetitive harm, discrimination, bias, and abuse of their data.

Under California law, the AG, local prosecutors, and plaintiffs’ attorneys have authority to enforce existing statutes in most instances. Below is an analysis of the AG’s legal advisory, focusing on existing California laws that may be used to regulate AI and protect consumers in certain contexts.

California’s Competition Laws

According to Attorney General Bonta’s guidance, AI developers and users should be aware of any risks to fair competition created by AI systems. Inadvertent harm to competition resulting from AI systems may violate one or more of California’s competition laws. Anticompetitive actions by dominant AI companies may also harm competition in AI markets and violate both state and federal competition laws, such as:

  • The California Cartwright Act, which prohibits anticompetitive trusts.[2]
  • The Unfair Practices Act, which regulates practices such as below-cost sales and loss leaders to protect California’s economy from anticompetitive behavior.[3]

California’s Unfair Competition Law (UCL) was written with broad, sweeping language to protect consumers from “obvious and familiar forms of fraud and deception as well as new, creative, and cutting-edge forms of unlawful, unfair, and misleading behavior.”[4]

Attorney General Bonta cites the following examples of how the use of AI could run afoul of the California UCL:

  • Falsely advertising the accuracy, quality, or utility of AI systems, including:
    • Claiming that an AI system has a capability that it does not.
    • Representing that a system is completely powered by AI when humans are responsible for performing some of its functions.
    • Representing that humans are responsible for performing some of a system’s functions when AI is responsible instead.
    • Claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.[5]
  • Using AI to foster or advance deception, for example:
    • Creating deepfakes, chatbots, or voice clones that appear to represent people, events, or utterances that never existed or occurred would likely be deceptive.
    • Likewise, in many contexts, it would likely be deceptive to fail to disclose that AI has been used to create a piece of media.
  • Using AI to create and knowingly use another person’s name, voice, signature, photograph, or likeness without that person’s prior consent.[6]
  • Using AI to impersonate a real person for any unlawful purpose, including harming, intimidating, threatening, or defrauding another person.[7]
  • Using AI to impersonate a real person for purposes of receiving money or property.[8]
  • Using AI to impersonate a government official in the execution of official duties.[9]
  • Using AI in a manner that is unfair, including using AI in a manner that results in negative impacts that outweigh its utility, offends public policy, is immoral, unethical, oppressive, unscrupulous, or causes substantial injury.
  • Creating, marketing, or disseminating an AI system that does not comply with federal or state laws, including the false advertising, civil rights, and privacy laws described below, as well as laws governing specific industries and activities.
  • Businesses may also be liable for supplying AI products when they know, or should have known, that AI will be used to violate the law.[10]

California’s False Advertising Law

California’s False Advertising Law protects California consumers against deceptive advertising.[11] According to Attorney General Bonta, this includes false advertising regarding the capabilities, availability, and utility of AI products, the use of AI in connection with a good or service, as well as false advertising regarding any topic, whether it is generated by AI or not.

California’s Civil Rights Laws

The AG notes that his office has “seen AI systems incorporate societal and other biases into their decision-making,” citing an investigation into racial and ethnic bias in healthcare algorithms. California has several protections against bias and discrimination, including:

  • The Unruh Civil Rights Act, which protects the freedom and equality of all people within the state, “no matter what their sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, sexual orientation, citizenship, primary language, or immigration status.”[12]
  • The California Fair Employment and Housing Act (FEHA), which protects Californians from harassment or discrimination in employment or housing based on several protected characteristics, including sex, race, disability, age, criminal history, and veteran or military status.[13] Businesses may be liable for FEHA-prohibited discriminatory screening carried out by an agent, and the agents themselves may be directly liable to the individuals who were discriminated against.[14]
  • California Gov. Code § 11135 prohibits discrimination in state-funded programs and activities. This includes practices that, regardless of intent, have an adverse or disproportionate impact on members of a protected class or create, reinforce, or perpetuate discrimination or segregation of members of a protected class.[15]

Attorney General Bonta further notes that certain laws require entities that take “adverse action” against consumers to provide specific reasons for those adverse actions, including when AI was used to make the determination. For example:

  • The federal Fair Credit Reporting Act and Equal Credit Opportunity Act, as well as the California Consumer Credit Reporting Agencies Act, require such specific reasons be provided to Californians who receive adverse actions based on their credit scores.[16] Additionally, the Consumer Financial Protection Bureau (CFPB) recently clarified that creditors who use AI or complex credit models must still provide individuals with specific reasons when they deny or take another adverse action against an individual. Whether the CFPB under the new administration will continue with this interpretation remains to be seen.

California Consumer Privacy Act

Attorney General Bonta’s advisory states that the California Consumer Privacy Act (CCPA) provides consumers further protection from AI by regulating the collection, use, sale, and sharing of their personal information. Personal information may also include inferences about consumers made by AI systems.[17]

Thus, according to the advisory, “AI developers and users that collect and use Californians’ personal information must comply with CCPA’s protections for consumers, including by ensuring that their collection, use, retention, and sharing of consumer personal information is reasonably necessary and proportionate to achieve the purposes for which the personal information was collected and processed.”[18]

Additionally, new California legislation signed into law in September 2024 “confirms that the protections for personal information in the CCPA apply to personal information in AI systems that are capable of outputting personal information.”[19] A second bill expands the definition of “sensitive personal information” to include “neural data,”[20] the information generated by measuring the activity of a consumer’s central or peripheral nervous system. Attorney General Bonta further warns that “AI developers and users should also be aware that using personal information for research” is “subject to several requirements and limitations.”[21]

California Invasion of Privacy Act

According to Attorney General Bonta, the California Invasion of Privacy Act (CIPA) “may also impact AI training data, inputs, or outputs.”

CIPA restricts recording or listening to private electronic communication, including wiretapping, eavesdropping on or recording communications without the consent of all parties, and recording or intercepting cellular communications without the consent of all parties.[22]

CIPA also prohibits the use of systems that examine or record voice prints to determine the truth or falsity of statements without consent.[23] Thus, according to the advisory, “developers and users should ensure that their AI systems, or any data used by the system, do not violate CIPA.”

Student Online Personal Information Protection Act

California’s Student Online Personal Information Protection Act (SOPIPA) broadly “prohibits education technology service providers from selling student data, engaging in targeted advertising using student data, and amassing profiles about students, except for specified school purposes.”[24]

SOPIPA applies to services and apps used primarily for “K-12 school purposes.” This includes services and apps for home or remote instruction, as well as those intended for use at a public or private school.

Thus, according to Attorney General Bonta’s guidance, “developers and users should ensure any educational AI systems comply with SOPIPA, even if they are marketed directly to consumers.”

Applying Existing California Law to AI in Healthcare

Attorney General Bonta issued a second legal advisory providing guidance to “healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use” AI and “other automated decision systems” about their obligations under California law.

The advisory notes that AI systems are already guiding medical diagnosis and treatment decisions and have the potential to “improve patient and population health, increase health equity, reduce administrative burdens, and facilitate appropriate information sharing.”

However, the advisory further warns of AI “risks causing discrimination, denials of needed care and other misallocations of healthcare resources, and interference with patient autonomy and privacy.” Thus, according to Attorney General Bonta, healthcare-related entities that develop, sell, or use AI systems must “ensure that their systems comply with laws protecting consumers.” This includes “understanding how AI systems are trained, what information the systems consider, and how the systems generate output.”

The legal advisory further calls on developers, researchers, providers, and insurers to test and validate AI systems “to ensure that their use is safe, ethical, and lawful, and reduces, rather than replicates or exaggerates, human error and biases.”

According to the advisory, it may be unlawful in California to:

  • Deny health insurance claims using AI or other automated decision-making systems in a manner that overrides doctors’ views about necessary treatment.
  • Use generative AI or other automated decision-making tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications.
  • Determine patient access to healthcare using AI or other automated decision-making systems that make predictions based on patients’ past healthcare claims data in a way that disadvantages certain patients or groups. Disadvantage could result from select groups with robust past healthcare access being provided enhanced services while other groups are denied access based on limited past access to healthcare.
  • Double-book a patient’s appointment, or create other administrative barriers, because AI or other automated decision-making systems predict that the patient is the “type of person” more likely to miss an appointment.
  • Conduct cost/benefit analysis of medical treatments for patients with disabilities using AI or other automated decision-making systems that are based on stereotypes that undervalue the lives of people with disabilities.

Attorney General Bonta explains the legal advisory applies only to existing California law and does not encompass all possible federal laws and regulations. Below is a summary of existing California laws that apply to AI in the healthcare space.

California’s Patient Privacy Laws

The Confidentiality of Medical Information Act (CMIA) governs the use and disclosure of Californians’ medical information and applies to businesses that offer software or hardware to consumers for the purposes of managing medical information, or for the diagnosis, treatment, or management of medical conditions, including mobile applications and other related devices.[25]

According to Attorney General Bonta, the rise of mental health and reproductive apps led to recent amendments to clarify that digital services for mental health and reproductive or sexual health, such as apps and websites, are subject to the requirements of CMIA, as are any AI systems used in direct-to-consumer healthcare services.

California’s Competition Laws

As noted above, California’s Unfair Competition Law protects the state’s consumers against unlawful, unfair, or fraudulent business acts or practices, including business practices used in the practice of medicine.[26]

According to the legal advisory, using AI or other automated decision tools to make decisions about patients’ medical treatment or to override licensed care providers’ determinations regarding a patient’s medical needs may violate California’s ban on the practice of medicine by corporations and other “artificial legal entities,”[27] in addition to constituting an “unlawful” or “unfair” business practice under the Unfair Competition Law.

Thus, Attorney General Bonta notes that the scope of the Unfair Competition Law incorporates numerous other California laws that may apply to AI in a variety of healthcare contexts, such as the protections against false advertising and anticompetitive practices described above. Practices that deceive or harm consumers fall squarely within the purview of the Unfair Competition Law, and traditional consumer legal protections apply equally in the AI context. This includes creating, marketing, or disseminating an AI system that does not comply with civil rights, privacy, false advertising, competition, and other laws.

California’s Consumer Protection Laws

The advisory further highlights recent amendments to the Knox-Keene Act and the California Insurance Code that limit health care service plans’ ability to use AI or other automated decision systems to deny coverage.[28] Thus, Attorney General Bonta warns that when employed for utilization review or management purposes, a plan cannot use these types of tools to “deny, delay, or modify health care services based, in whole or in part, on medical necessity.”[29]

Instead, the AG notes that plans must ensure that AI and other software:

  • Does not supplant a licensed health care provider’s decision-making.
  • Bases decisions on individual enrollees’ own medical history and clinical circumstances.
  • Does not discriminate and is applied fairly and equitably.
  • Is open to inspection and audit by relevant state agencies.
  • Is periodically reviewed and revised to maximize accuracy and reliability.
  • Does not use patient data beyond its intended and stated purpose.
  • Does not directly or indirectly cause harm to the plan enrollee.[30]

California’s Anti-Discrimination Laws

The advisory further provides that California law prohibits discrimination by any entity or individual receiving “any state support,” including an “entity principally engaged in the business of providing […] health care.”[31]

According to Attorney General Bonta, these rules prohibit the types of discriminatory practices likely to be caused by AI, including disparate impact discrimination (also known as “discriminatory effect” or “adverse impact”) and denial of full and equal access.[32]

For example, an AI system that makes less accurate predictions about demographic groups of people who have historically faced barriers to healthcare (and whose information may be underrepresented in large datasets), though facially neutral, may have a disproportionate negative impact on members of protected groups.[33] The advisory further explains that protected classifications under California Gov. Code § 11135 may be intersectional and overlap with socioeconomic marginalization. Therefore, “even if such models are applied to all patients regardless of race, they may still cause disparate impact discrimination because ‘identical treatment may be discriminatory.’”[34]

Furthermore, a “disparate impact is permissible only if the covered entity can show that the AI system’s use is necessary for achieving a compelling, legitimate, and nondiscriminatory purpose, and supported by evidence that is not hypothetical or speculative.”[35]

The use of AI in healthcare is subject to additional state laws prohibiting discrimination against healthcare consumers in various settings, such as:

  • California’s Unruh Civil Rights Act, which prohibits arbitrary and intentional discrimination by businesses, including those providing healthcare services.[36]
  • The rights of people with disabilities to access healthcare, which are protected through additional specific disability rights statutes.
  • California’s Insurance Code, which prohibits discrimination regarding ratemaking, claims handling, and reviewing insurance applications.
  • California’s Health and Safety Code, which requires that licensed California hospitals have a policy of nondiscrimination in access to emergency healthcare services.[37]
  • California’s Fair Employment and Housing Act (FEHA) also protects Californians from harassment or discrimination in healthcare employment, including discrimination carried out or facilitated by AI.[38]

Developers, vendors, and users should take proactive steps when designing, acquiring, and implementing health AI to ensure that these systems do not have a discriminatory impact.

California’s Patient Privacy and Autonomy Laws

Vast quantities of patient data underlie the massive growth in the health AI sector. Data is used to build and train AI and render decisions impacting health services. Developers and entities that use AI in healthcare should carefully monitor training data, inputs, and outputs to ensure respect for Californians’ rights to medical privacy.

New Jersey Attorney General Guidance on Algorithmic Discrimination and Creation of Civil Rights and Technology Initiative

The same week Attorney General Bonta issued his legal advisories, New Jersey Attorney General Platkin published guidance to address algorithmic discrimination, noting that New Jersey’s existing Law Against Discrimination (LAD) applies to AI in the same manner that it applies to other forms of discriminatory conduct.

New Jersey’s Law Against Discrimination (LAD)

According to the advisory, employers, housing providers, places of public accommodation, and other entities covered by the New Jersey LAD have started using automated tools to make decisions that affect consumers. If these tools are not designed and deployed responsibly, “they can result in algorithmic discrimination.” The LAD “prohibits algorithmic discrimination” in employment, housing, places of public accommodation, credit, and contracting on the basis of actual or perceived race, religion, color, national origin, sexual orientation, sex, disability, and other protected characteristics. According to Attorney General Platkin, the LAD “prohibits all forms of discrimination, irrespective of whether discriminatory conduct is facilitated by automated decision-making tools or driven purely by human practices.”

The guidance provides examples of how “automated decision-making” tools can “find and leverage correlations in the datasets they analyze in ways that may contribute to or amplify discriminatory outcomes.” Furthermore, Attorney General Platkin explains that a covered entity or business can violate the LAD “even if it has no intent to discriminate, and even if a third-party was responsible for developing the automated decision-making tool.”

Attorney General Platkin concludes the memo by providing the following advice to businesses operating in New Jersey:

“It is critical that employers, housing providers, places of public accommodation, and other covered entities—as well as developers and vendors of automated decision-making tools used by these entities—carefully consider and evaluate the design and testing of automated decision-making tools before they are deployed, and carefully analyze and evaluate those tools on an ongoing basis after they are deployed.”

Creation of Civil Rights and Technology Initiative

The same day Attorney General Platkin released the AI guidance, his office announced the launch of a new Civil Rights and Technology Initiative to address the risks of discrimination and bias-based harassment stemming from the use of artificial intelligence (AI) and other advanced technologies.

According to the announcement, the Civil Rights and Technology Initiative “will identify and develop technology to enhance the [Division of Civil Rights’] enforcement, outreach, and public education work, and will develop protocols to facilitate the responsible deployment of this technology.”

The Initiative will also establish the Civil Rights Innovation Lab, which will develop “technology and tools to enhance … enforcement, outreach, and public education work across the State.” This includes “improving the complaint process, aiding in investigations and enforcement work, and updating internal policies and procedures to help the Division better serve all New Jerseyans.”

Conclusion

Some have likened the rise of artificial intelligence to the Fourth Industrial Revolution. AI is rapidly evolving and accelerating. While federal and state governments struggle to keep pace, state AGs have broad authority under existing consumer protection and data privacy laws to pursue potential consumer harm caused by AI. State AGs are becoming increasingly concerned with the rise of AI and how it affects consumers, and they will continue to utilize their existing powers to regulate AI.


[1] National Association of Attorneys General, “Capital Forum Event Explores the Brave New World of Artificial Intelligence,” Jan. 29, 2024.

[2] Cal. Bus. & Prof. Code, § 16720.

[3] Cal. Bus. & Prof. Code, § 17000 et seq.

[4] People ex rel. Mosk v. Nat’l Research Co., 201 Cal. App.2d 765, 772 (1962).

[5] Citing Bus. & Prof. Code, § 17500 et seq.; Civ. Code, § 1770 [the Consumer Legal Remedies Act].

[6] Citing Civ. Code, §§ 3344, 3344.1; see also Civ. Code, § 1708.86 (prohibiting the creation and disclosure of sexually explicit material without the depicted person’s consent).

[7] Cal. Pen. Code, § 528.5.

[8] Cal. Pen. Code, § 530; see also Cal. Pen. Code, § 529 (false personation of another in private or official capacity while doing specified acts).

[9] See Pen. Code, § 538d (impersonating a peace officer); Pen. Code, § 146a (impersonating a state officer while committing specified acts); Pen. Code, § 538f (impersonating a public utility officer); Pen. Code, § 538g (impersonating a state/county/city/special district/city or county officer or employee).

[10] See, e.g., People v. Toomey, 157 Cal.App.3d 1, 15 (1984) (liability under section 17200 can be imposed for aiding and abetting).

[11] Bus. & Prof. Code, § 17500 et seq.

[12] Cal. Civ. Code, § 51.

[13] Gov. Code, § 12900 et seq.

[14] See Raines v. U.S. Healthworks Medical Grp., 15 Cal.5th 268, 291 (2023).

[15] Cal. Code of Regs., tit. 2, § 14027.

[16] See 15 U.S.C. § 1681 et seq.; 15 U.S.C. § 1691 et seq.; Civ. Code, § 1785.1 et seq.

[17] See Cal. Civ. Code, § 1798.140(v).

[18] Id. § 1798.100.

[19] Cal. Civ. Code, § 1798.140, added by AB 1008, Stats. 2024, ch. 804.

[20] Cal. Civ. Code, § 1798.140, added by SB 1223, Stats. 2024, ch. 887.

[21] Id. § 1798.140(ab).

[22] Cal. Pen. Code, § 630 et seq.

[23] Id. § 637.3.

[24] Cal. Bus. & Prof. Code, § 22584 et seq.

[25] Civ. Code, § 56 et seq.

[26] Bus. & Prof. Code, § 17200 et seq.

[27] Bus. & Prof. Code, § 2400 et seq.

[28] See Sen. Bill No. 1120 (2023-2024).

[29] Health & Saf. Code, § 1367.01, subd. (k)(1); Ins. Code, § 10123.135, subd. (j)(2).

[30] Health & Saf. Code, § 1367.01, subd. (k)(1) (A-K); Ins. Code, § 10123.135, subd. (j)(1) (A-K).

[31] Gov. Code, § 11135; Cal. Code Regs., tit. 2, § 14020, subd. (m)(6)(B); see also id. at (ii) [covered programs or activities include provision of health services].

[32] Cal. Code Regs., tit. 2, § 14027, subd. (b)(3).

[33] Citing Ziad Obermeyer et al., Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, Science 366, 447-453 (Oct. 25, 2019) (finding that a model intended to predict patient risk disproportionately underestimated the needs of Black patients in comparison to white patients).

[34] Id. § 14025, subd. (a)(3).

[35] Id. § 14029, subd. (c)(1)-(2).

[36] Cal. Civ. Code, § 51, subd. (b); Ins. Code § 1861.03 (applying Unruh Act to insurance).

[37] Cal. Health & Saf. Code, § 1317.3, subd. (b).

[38] Cal. Gov. Code, § 12900 et seq.