The EU AI Act: Open-Source Exceptions and Considerations for Your AI Strategy


8 minute read | May 20, 2024

This update is part of our EU AI Act Essentials Series.

The treatment of open-source AI technologies was a source of heated debate leading up to the adoption of the EU AI Act. In the end, legislators adopted an Act that:

  • Does not apply to many AI systems released under free and open-source licenses.
  • Creates a limited open-source exception for general-purpose AI models (GPAIMs).

Given the EU’s status as an early mover in tech regulation, AI developers and users should consider the law’s open-source exceptions as a guidepost for how such requirements are likely to develop elsewhere in the near term.

Overview of the Open-Source Exceptions

AI Systems

  • The Act as a whole “does not apply to AI systems released under free and open-source licenses, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.” Those articles cover AI systems that are prohibited or interact directly with people.
  • A fact-specific analysis will determine whether a given AI system falls into one of the categories of prohibited or high-risk AI systems.

GPAIMs

  • The Act creates a limited open-source exception for GPAIMs. It allows for “the access, usage, modification, and distribution of the model…” where the model’s parameters “…including the weights, the information on the model architecture, and the information on model usage, are made publicly available.” GPAIMs will not qualify for the exception if they “present systemic risks” (described below).
  • Qualification: Providers seeking to qualify a GPAIM under the exception should:
    • Make the GPAIM available on conditions that satisfy the requirements stated above.
    • Account for systemic risk. Under the Act, GPAIMs will be classified as presenting “systemic risks” if they:
      • Have “high-impact capabilities” evaluated on the basis of technical tools and methodologies. (This is presumed when the cumulative amount of computation used to train the model, measured in floating point operations, is greater than 10^25; see the back-of-the-envelope sketch after this list.) or
      • Are otherwise designated as such by the Commission based on the criteria set out in Annex XIII of the Act.
  • Benefits of the Exception: If an open-source GPAIM otherwise meets the exception requirements, the Act exempts the provider from certain transparency obligations. That includes an exemption from the obligation to create and maintain up-to-date technical documentation and information intended for providers that plan to integrate the GPAIM into their AI systems.
    • The provider of the open-source GPAIM nonetheless must:
      • Share, at distribution, a detailed summary of the content used for model training, with a level of specificity regulators will determine.
      • Establish a policy with respect to EU copyright law, including for identifying and respecting a rightsholder’s reservation of rights through “the use of state of the art technologies,” pursuant to Article 4(3) of Directive (EU) 2019/790.
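
For rough orientation on the compute presumption, providers sometimes estimate training compute with the common heuristic of roughly 6 floating point operations per parameter per training token for dense transformer models. The Python sketch below uses that heuristic, which the Act does not prescribe, together with hypothetical model figures, to illustrate a sanity check against the 10^25 presumption.

```python
# Back-of-the-envelope check against the Act's 10^25 FLOP presumption of
# "high-impact capabilities." The 6 * parameters * training-tokens
# heuristic is a common rough estimate for dense transformer training
# compute; it is not a methodology prescribed by the Act, and the model
# figures below are hypothetical.

PRESUMPTION_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer model."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimate_training_flops(n_parameters=70e9, n_training_tokens=2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
if flops > PRESUMPTION_THRESHOLD_FLOPS:
    print("Presumed to have high-impact capabilities (systemic risk).")
else:
    print("Below the 10^25 presumption threshold.")
```

Keep in mind that the compute presumption is only one trigger: as noted above, the Commission can also designate a model as presenting systemic risks under the Annex XIII criteria regardless of its training compute.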

What Does This Mean for Your Open-Source AI Strategy?

Developers, providers and users (called “deployers” under the Act) should keep a few initial considerations in mind as they develop, use and distribute AI technologies.

What types of AI technologies do you use or share? Why?

  • The first step of any AI compliance program is to inventory how your company uses AI; a minimal machine-readable inventory is sketched after this list. Does your company receive a license to the technology, or grant one to third parties? Make sure to review:
    • Proprietary AI systems and GPAIMs.
    • Open-source AI systems and GPAIMs.
    • Commercially licensed AI systems and GPAIMs.
    • Other commercially licensed services that include AI-powered functionality.
  • Companies often use several types of models or AI-powered tools without realizing it, given how quickly AI technologies (including open-source ones) are integrated into commonly used software services. A one-size-fits-all approach can therefore quickly become obsolete, and the pace of change makes internal tracking and oversight a cornerstone of managing AI-related risk.
  • Companies should also assess whether they are complying with license terms (including terms defining the scope of the license).
    • Complying with license terms is critical to maintaining a legal right to use such technology.
    • Licensors can often immediately terminate a licensee’s use if the licensee breaches applicable terms.
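
One way to operationalize this inventory is to keep it in a machine-readable form that legal and technical teams can review together. A minimal Python sketch follows; the record fields, categories and example entries are illustrative assumptions for this sketch, not categories or terms defined by the Act.

```python
from dataclasses import dataclass

# Minimal, machine-readable inventory record for AI technologies in use.
# Fields and example entries are illustrative assumptions, not terms
# defined by the Act.
@dataclass
class AIAsset:
    name: str
    category: str          # e.g. "open-source GPAIM", "commercial AI system"
    license_name: str      # e.g. "Apache-2.0" or a vendor agreement ID
    licensed_out_to_third_parties: bool
    commercial_use_permitted: bool
    notes: str = ""

inventory = [
    AIAsset("internal-chat-assistant", "open-source GPAIM",
            "Apache-2.0", False, True),
    AIAsset("research-image-model", "open-source AI system",
            "CC-BY-NC-4.0", False, False,
            notes="non-commercial license; research use only"),
    AIAsset("vendor-analytics-suite", "commercially licensed service with AI features",
            "MSA-2024-017", False, True,
            notes="AI functionality added by the vendor in a recent release"),
]

for asset in inventory:
    print(f"{asset.name}: {asset.category}, licensed under {asset.license_name}")
```

A record like this also gives compliance reviews a single source of truth when AI functionality appears in software the company already uses.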

What are the pros and cons of a “provider” licensing AI systems or GPAIMs to third parties under an open-source license that respects the Act’s requirements?

  • A provider might find it beneficial to make an AI system or GPAIM available under an open-source license. Doing so could bolster innovation and development, enhance the provider’s reputation, market the availability of its more advanced solutions or help recruit technical personnel.
  • The Act exempts GPAIM providers from a number of transparency obligations. This could lessen the compliance burden.
  • The Act still requires disclosures related to architecture, weights and training, which could involve information a provider intends to maintain as confidential or as a trade secret. To date, few GPAIMs meet the requirements of the open-source exception, which could limit the initial open-source GPAIM options available to users.

What are the pros and cons of a “deployer” (user) licensing AI systems or GPAIMs from providers under the exceptions?

  • Open-source AI systems and GPAIMs are free, easily accessible and already developed, potentially saving companies time and money. Users benefit from the information GPAIM providers must disclose as part of the license terms, such as parameters, weights and architecture. That kind of information goes beyond what is often made available under a traditional open-source software license.
  • Open-source AI systems and GPAIMs also carry potential drawbacks: users may be wary of security vulnerabilities or copyleft license effects resulting from offerings with an unclear testing or training regimen. That is especially true if users do not receive representations, warranties or indemnities from a third party backing the training, suitability and security of such AI technologies. Users may also want a provider to be contractually obligated to:
    • Provide ongoing implementation support or maintenance.
    • Maintain a detailed understanding of how the AI technologies were trained and operated in case regulators or customers raise questions.

What safeguards should a company implement when using open-source AI technologies?

  • Beyond corporate oversight, companies should consider extensive operational precautions when evaluating, implementing and operating AI technologies.
  • Some precautions might already be in place for existing technical systems, including internal and external testing, quality management procedures, employee/contractor policies concerning AI, technical documentation of AI usage, log maintenance, maintaining a “human-in-the-loop” for AI operations, and tracking and labeling output from AI models.
  • Legal and technical personnel should evaluate the legal terms that apply to new open-source AI technologies before use. (This might already be a part of a company’s open-source software policy or procedures.)
  • Companies should ensure no license terms (such as commercial use restrictions) or outright prohibitions conflict with the business use case.
  • A company should track such terms and periodically audit the use of associated code to ensure company use cases remain consistent with license terms; a lightweight audit routine is sketched below.
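
Building on the AIAsset records and the inventory sketched earlier, a periodic audit can flag entries whose recorded use conflicts with recorded license restrictions. The single rule below (commercial use of an asset under a non-commercial license) is an illustrative check, not a complete compliance review.

```python
# Illustrative periodic audit over the inventory sketched earlier: flag
# assets used commercially whose recorded license terms do not permit
# commercial use. This single rule is an example, not a complete review.

def audit_license_conflicts(inventory, assets_used_commercially: set) -> list:
    findings = []
    for asset in inventory:
        if asset.name in assets_used_commercially and not asset.commercial_use_permitted:
            findings.append(
                f"{asset.name}: used commercially, but license "
                f"{asset.license_name} does not permit commercial use")
    return findings

# Hypothetical usage record gathered during the audit period.
for finding in audit_license_conflicts(inventory, {"research-image-model"}):
    print(finding)
```

Running such a check on a regular cadence, and whenever a new AI technology is adopted, helps keep actual use aligned with the license terms recorded in the inventory.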

If you have questions about the AI Act, open-source AI systems, GPAIMs or legal issues in artificial intelligence, reach out to the authors or other members of Orrick’s AI team.