4 minute read | September.13.2024
This update is part of our EU AI Act Series. Learn more about the EU AI Act here.
Life sciences and digital health companies face obligations under the AI Act that vary depending on how they use AI – and the level of risk involved.
The Act classifies some AI use cases in life sciences and digital health as high risk, such as when companies use AI in medical devices or in vitro diagnostics. Those uses subject companies to heightened regulation.
Companies that use AI in ways that carry lower risk face fewer obligations. Examples include using AI in drug discovery, non-clinical research and development, and earlier-stage clinical trials.
Medical devices incorporating AI systems may create risks not addressed by traditional regulatory requirements. The AI Act aims to fill some of those gaps, particularly in high-risk scenarios.
Whether an AI system is classified as high risk depends largely on its intended use: for example, whether it is used in the clinical management of patients, such as diagnosing patients and informing therapeutic decisions, or in precision medicine.
These contexts typically fall under medical device regulation and require third-party conformity assessment, which gives rise to the high-risk classification. Conformity assessment requirements under the device regulations may incorporate the requirements of the AI Act, but the Act will not itself impose additional requirements.
Other AI or machine learning uses likely fall under lower-risk classifications. Examples include companies that use AI in:

- Drug discovery.
- Non-clinical research and development.
- Earlier-stage clinical trials.
The Act’s rules governing general purpose AI models (GPAIMs) and systems may affect other use cases. From a developer’s perspective, these largely involve heightened transparency obligations, risk assessment and mitigation.
As AI/ML tools increasingly make their way into life sciences and digital health, those deploying the tools and the companies using them must keep in mind the importance of responsible AI/ML practices. AI/ML users should adopt internal governance systems that put those practices into effect.
At a smaller scale, AI/ML tool deployers frequently must contend with pharma or biotech companies concerned that their data will be used to train models that competitors may later use.
How can AI/ML tool deployers continue providing services in life sciences and digital health despite these concerns?
They may consider sandboxing and firewalling data sets for individual engagements, or running data sets through pre-trained models so that the specific inputs are not used to train the overall model. The AI Act does not speak to these commercial or competitive considerations, adding another element for AI/ML deployers to navigate.
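For illustration only, the sketch below shows what the "pre-trained model" approach can look like in practice; the model, dimensions and function names are hypothetical placeholders rather than anything prescribed by the Act or used by any particular vendor. The point is simply that a client's inputs are scored by a frozen model and are never used to update its weights or added to a shared training corpus.

```python
# Hypothetical sketch: inference-only use of a frozen, pre-trained model so
# that a client's data cannot fine-tune the model or reach a training corpus.
import torch
import torch.nn as nn

# Placeholder for whatever pre-trained model is actually in use.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()                      # disable training-time behavior
for param in model.parameters():
    param.requires_grad = False   # freeze weights: client inputs cannot retrain the model


def score(client_batch: torch.Tensor) -> torch.Tensor:
    """Run inference only; no gradients are computed and the inputs are not
    written to any shared training data set."""
    with torch.no_grad():
        return model(client_batch)


# Example: a client-specific batch stays within this engagement's environment.
predictions = score(torch.randn(8, 128))
```

In practice, technical measures like this would typically sit alongside contractual commitments on data segregation and use, reflecting the sandboxing and firewalling considerations described above.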
The Act does, however, provide for “regulatory sandboxes” where companies can test novel technologies under a regulator’s supervision. The aim is to create controlled environments where companies can test and on-ramp technologies while regulators gain insight into how the technologies function prior to more widespread adoption by consumers.
The Act and input from the European Medicines Agency (EMA) helped clarify some high-risk, high-regulation scenarios and use cases involving AI/ML in life sciences and digital health. Yet many questions remain from legal, regulatory and commercial perspectives.
Developers of AI/ML technologies should determine whether their technologies fall under the Act. They may also consider using regulatory sandboxes to ensure their product and service deployment aligns with regulators’ evolving expectations.
Finally, given the increasing importance of AI, stakeholders should monitor legislative developments across jurisdictions as sector-specific laws begin to emerge.
Want to know more? Reach out to a member of our team.