3 minute read | October 3, 2024
The health care industry has utilized artificial intelligence for decades, but the recent expansion of AI’s capabilities into generative AI models is raising questions about how and to what extent to regulate the technology.
Traditionally, AI in health care has been used for analysis and prediction. Generative AI, however, is capable – in theory – of making clinical decisions, such as diagnosing conditions, identifying health risks and developing patient-specific treatment options.
A few states have recently taken measures specifically to regulate generative AI in health care – and more may follow suit. In tackling the issue, states must balance embracing innovative and potentially life-saving technologies with protecting patients from generative AI’s limitations.
One emerging trend involves disclosures to patients when communications about their care involve generative AI.
California Gov. Gavin Newsom recently signed into law a bill regarding AI that can “generate derived synthetic content, including images, videos, audio, text, and other digital content.”
The law applies to health care facilities, clinics, physician’s offices and group practice offices that use generative AI to create written or verbal patient communications pertaining to “patient clinical information.” It requires them to include disclaimers in those communications indicating that the communication was generated by AI, along with clear instructions describing how a patient may contact a human health care provider.
The new California law is similar to a 2024 Utah law addressing generative AI disclosures.
That law applies to AI that is trained on data, interacts with a person by text, audio or visual communication and “generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.”
It in part requires providers of regulated occupations – physicians, for example – to “prominently disclose” when a patient interacts with generative AI.
The statute does not outline required content for disclosures but does require they be made verbally at the start of a spoken interaction or by electronic messaging at the start of a written interaction.
These developments align with trends and guidance from industry organizations, such as the Federation of State Medical Boards.
Its recent report, “Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice,” emphasizes the need for transparency and informed consent from patients, while allowing for more automated operational workflows for providers.
The report also details other key considerations related to AI in health care, such as physician accountability and concerns surrounding equity and bias.
As states and regulators across the country focus increasingly on regulating AI in health care, stakeholders at all levels should pay close attention to these trends and their implementation to ensure their own practices meet all applicable requirements.
If you have any questions, please contact the authors (Amy Joseph, Emily Brodkin, Melania Jankowski and Jeremy Sherer) or another Orrick Team member.