The EU AI Act: 6 Steps to Take in 2024


5 minute read | September 13, 2024

This update is part of our EU AI Act Series. Learn more about the EU AI Act here.

The EU AI Act has been formally adopted and the countdown to the start of enforcement has begun.

Here are six steps you can take to jump start your compliance efforts:

  1. Identify the team responsible for AI governance and compliance.
  2. Develop an AI governance framework.
  3. Foster AI knowledge and literacy across the organization.
  4. Inventory the organization’s AI.
  5. Assess EU AI Act applicability and classification.
  6. Start developing compliance measures that require a longer lead time.

1. Identify the team responsible for AI governance and compliance.

Addressing the AI Act’s obligations requires a combination of legal, technical and operational expertise.

Organizations that are most successful in addressing AI obligations and risks typically designate a diverse, cross-disciplinary group responsible for the company’s AI governance and compliance. Team members are often drawn from the legal, compliance, information technology, engineering and product departments. This core team is then supported by AI champions within departments across the organization who act as the first point of contact for new policies, processes and tools.

2. Develop an AI governance framework.

To successfully navigate the evolving AI legal environment, organizations should develop and implement internal policies and procedures to align their use and development of AI with their mission, risk profile, legal obligations and ethical priorities.

While the AI Act may dictate some of the components of this AI governance framework, the most successful companies will build a foundational program to which specific compliance obligations can attach. This approach allows for greater flexibility in adapting to new laws and technologies and helps to integrate AI governance into related compliance and operational frameworks, such as those for data protection and cybersecurity.

Most organizations start by developing an AI Responsible Use Policy to guide their initial AI-related compliance efforts.

3. Foster AI knowledge and literacy across the organization.

Starting February 2, 2025, the AI Act will require covered organizations to ensure that employees and other workers dealing with AI on their behalf have a sufficient level of AI knowledge and competence to support appropriate development and use of AI and compliance with applicable law.

Organizations should consider implementing standard, organization-wide training that gives all employees and relevant workers a baseline understanding of the technology and related compliance requirements, supplemented by role-based training for individuals more deeply involved in AI development or use.

4. Inventory the organization’s AI.

The EU AI Act regulates only certain AI-related technologies.

As a first step in assessing the Act’s applicability and corresponding obligations, organizations should inventory the AI they are developing, using, distributing, or selling and document:

  • Whether the AI is internally developed or provided by a third party.
  • Where applicable, the party providing the AI or developing the AI on behalf of the organization (along with any applicable contractual terms).
  • What the AI is designed to do, and in which settings and jurisdictions the AI will be used.
  • What types of content the AI takes as input, and what it produces as output.

Organizations should consider adopting a department-by-department approach to the inventory process, as this will often help identify less visible AI use cases.
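
For organizations that want to track this inventory in a structured, machine-readable form, a minimal sketch in Python follows. The `AIInventoryRecord` class, its field names, and the example values are illustrative assumptions on our part (the Act does not prescribe an inventory format); the fields simply mirror the documentation points listed above:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryRecord:
    """Illustrative inventory entry mirroring the documentation points above.
    Field names are our own assumptions, not terms prescribed by the AI Act."""
    name: str                         # internal name of the AI system or model
    internally_developed: bool        # in-house vs. provided by a third party
    provider: str | None = None       # third-party provider or development partner, if any
    contract_refs: list[str] = field(default_factory=list)  # applicable contractual terms
    intended_purpose: str = ""        # what the AI is designed to do
    settings: list[str] = field(default_factory=list)       # settings where it will be used
    jurisdictions: list[str] = field(default_factory=list)  # jurisdictions of use (e.g., "EU")
    input_types: list[str] = field(default_factory=list)    # content the AI takes as input
    output_types: list[str] = field(default_factory=list)   # content the AI produces as output
    owning_department: str = ""       # supports a department-by-department inventory

# Hypothetical example entry:
record = AIInventoryRecord(
    name="resume-screening-tool",
    internally_developed=False,
    provider="ExampleVendor Inc.",    # hypothetical vendor
    contract_refs=["MSA-2023-114"],   # hypothetical contract reference
    intended_purpose="Rank job applicants for recruiter review",
    settings=["HR recruiting"],
    jurisdictions=["EU", "US"],
    input_types=["resumes", "job descriptions"],
    output_types=["candidate rankings"],
    owning_department="Human Resources",
)
```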

5. Assess EU AI Act applicability and classification.

After the organization’s AI inventory is completed, it can be used to help determine whether the organization’s AI qualifies as an in-scope technology and, if it does, how that AI is classified under the Act (e.g., as Prohibited AI, High-Risk AI, a General-Purpose AI Model, or Individual-User-Facing AI subject to transparency obligations).

Given that the Act’s Prohibited AI restrictions take effect on February 2, 2025, we recommend companies prioritize determining whether any of these prohibitions may apply. The other obligations and restrictions under the AI Act take effect later (see here for more details).
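
To illustrate how that prioritization might be operationalized alongside the inventory, here is a minimal sketch of a triage checklist generator. The function name `aia_triage_checklist` and the question wording are our own assumptions; the output is a starting point for legal review, not a legal determination:

```python
def aia_triage_checklist(jurisdictions: list[str]) -> list[str]:
    """Illustrative checklist generator, not a legal test.

    The prohibited-practices question is listed first because those
    prohibitions take effect earliest (February 2, 2025).
    """
    if "EU" not in jurisdictions:
        return ["Confirm the AI is genuinely outside the Act's territorial scope."]
    return [
        "Prohibited AI: could any use case fall within the Act's prohibited practices?",
        "High-Risk AI: is the system used in a context the Act treats as high risk?",
        "GPAI: is a general-purpose AI model being placed on the EU market?",
        "Transparency: does the AI interact directly with individuals or expose "
        "them to AI-generated content?",
    ]

# Example: an AI system used in both the EU and the US.
for question in aia_triage_checklist(["EU", "US"]):
    print("-", question)
```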

6. Start developing compliance measures that require a longer lead time.

As shown in our compliance timeline, the effective dates for the AI Act’s obligations are spread out over several years. Nonetheless, the AI Act is already influencing legislation in other countries (see our U.S. AI Law Tracker), with some of those laws slated to take effect before the AI Act’s corresponding provisions. In addition, compliance measures for many of the Act’s obligations and restrictions can require meaningful lead time to implement.

As a result, organizations should consider:

  • Promptly establishing the AI team and governance framework outlined in Point #1 and Point #2 above.
  • Developing and implementing the AI knowledge and literacy measures outlined in Point #3 above.
  • Completing the inventory and assessment process outlined in Point #4 and Point #5 above in Q4 2024.
  • Developing a general AI risk assessment framework designed to help keep the organization’s AI inventory up to date, assess whether the AI is subject to the Act or other legislation and provide a foundation for identifying and minimizing AI-related risk.
  • Updating existing vendor management and contracting procedures to ensure diligence processes identify and document AI-related risks and obligations, and contractual provisions appropriately allocate compliance obligations and liability between the parties.
  • For Prohibited AI: Identifying any AI that may qualify as prohibited AI and either discontinuing its use in the EU or altering its use to qualify for an exception or fall outside of the Prohibited AI designation.
  • For High-Risk AI:
    • Developing a risk management framework for identifying, assessing and addressing risks presented by these higher risk systems; and
    • Determining when the system should be placed on the market or put into service in the EU (as the AI Act’s High-Risk AI System obligations trigger on different dates depending on these events).
  • For General-Purpose AI Models:
    • Preparing technical documentation that can be leveraged for the Act’s documentation requirements;
    • Understanding how choices today may impact the information included in such documentation (including data used to train the model and risk assessments performed in connection with the model); and
    • Determining when the model should be placed on the market in the EU (as the AI Act’s GPAIM obligations trigger on different dates depending on when the model is placed on the market).
  • For AI systems interacting directly with individuals or exposing individuals to AI-generated content (Individual-User-Facing AI), preparing product design elements to address applicable transparency obligations.

Check back in January for next steps to implement in 2025.

Want to know more? Reach out to a member of our team.
