The EU AI Act: What Life Sciences and Digital Health Companies Should Know


4 minute read | September 13, 2024

This update is part of our EU AI Act Series. Learn more about the EU AI Act here.

Life sciences and digital health companies face obligations under the AI Act that vary depending on how they use AI – and the level of risk involved.

The Act classifies some AI use cases in life sciences and digital health as high risk, such as when companies use AI in medical devices or in-vitro diagnostics. Those uses subject companies to heightened regulation.

Companies that use AI in ways that carry lower risk face fewer obligations. Examples include using AI in drug discovery, non-clinical research and development, and earlier stage clinical trials.

Higher Risk Uses Trigger Heightened Regulation

Medical devices incorporating AI systems may create risks not addressed by traditional regulatory requirements. The AI Act aims to fill some of those gaps, particularly in high-risk scenarios.

Whether AI systems are classified as high risk largely depends on their intended use – for example, whether they are used in the clinical management of patients, such as diagnosing patients and informing therapeutic decisions, or in precision medicine.

These contexts typically fall under medical device regulations that require third-party conformity assessment, which gives rise to the high-risk classification. Conformity assessment requirements under those device regulations may incorporate the requirements of the AI Act, but the Act itself will not impose additional requirements.

Lower Risk Classifications

Other AI or machine learning uses likely fall under lower risk classifications. Examples include companies that use AI in:

  • Drug discovery applications, such as identifying potential targets and therapeutic pathways.
  • Non-clinical research and development – using AI/ML modeling techniques to augment or replace animal studies, for example.
  • Earlier stage clinical trials, where companies may use AI to analyze data and model future studies. (The European Medicines Agency takes a similar view in its draft guidance on using AI/ML in drug development.)

The Act’s rules governing general purpose AI models (GPAIMs) and systems may affect other use cases. From a developer’s perspective, these rules largely involve heightened transparency obligations, along with risk assessment and mitigation.

Governance Considerations

As AI/ML tools increasingly make their way into life sciences and digital health, developers of these tools and the companies that use them must keep in mind the importance of responsible AI/ML practices. AI/ML users should adopt internal governance systems to ensure they:

  • Obtain rights to the data used to train models and adhere to any confidentiality obligations that apply to those training data sets.
  • Rely on diverse and reliable data sets to train models, particularly when there is higher potential for risk.
  • Use AI/ML tools to augment and automate processes while maintaining appropriate human oversight.

At a smaller scale, AI/ML tool developers frequently must contend with pharma or biotech companies concerned that their data will be used to train models that competitors may later use.

How can AI/ML tool developers continue providing services in life sciences and digital health despite these concerns?

They may consider sandboxing and firewalling data sets for individual engagements, or running data sets through pre-trained models where the specific inputs are not used to train the underlying model. The AI Act does not speak to these commercial or competitive considerations, adding another element for AI/ML developers to navigate.

The Act does, however, provide for “regulatory sandboxes” where companies can test novel technologies under a regulator’s supervision. The aim is to create controlled environments where companies can test and on-ramp technologies while regulators gain insight into how the technologies function prior to more widespread adoption by consumers.

What’s Next?

The Act and input from the EMA have helped clarify some of the high-risk, heavily regulated scenarios and use cases involving AI/ML in life sciences and digital health. Yet many questions remain from legal, regulatory and commercial perspectives.

Developers of AI/ML technologies should determine whether their technologies fall under the Act. They may also consider using regulatory sandboxes to ensure their product and service deployment aligns with regulators’ evolving expectations.

Finally, given the increasing importance of AI, stakeholders should monitor legislative developments across jurisdictions as sector-specific laws begin to emerge.

Want to know more? Reach out to a member of our team.
