Artificial Intelligence Act
Status: In force
- The AI Act applies in general from 2 August 2026 (two years after its entry into force), including the obligations for high-risk systems defined in Annex III (list of high-risk use cases). Certain obligations apply earlier or later:
- 2 February 2025: prohibited AI practices must be phased out and companies must comply with the AI literacy requirement;
- 2 August 2025: obligations for new general-purpose AI models become applicable;
- 2 August 2027: obligations for high-risk systems defined in Annex I (list of Union harmonisation legislation) apply.
Level II legislation and guidance
Expected:
- Commission guidelines on the definition of ‘AI system’ and on prohibited practices: expected in January 2025
- Code of Practice for providers of General Purpose AI models (including those with systemic risks): first draft published on 14 November 2024, second draft expected w/c 16 December 2024, third draft expected w/c 17 February 2025, final draft expected in April/May 2025
- Commission template for the training-content summary of GPAI models: expected in July 2025
- Commission guidance on serious-incident reporting for providers of high-risk AI systems: expected by 2 August 2025
- Harmonised standards for the requirements on high-risk AI systems, to be published by the European standardisation organisation CEN-CENELEC: expected by end of 2025
- Commission guidance on the classification of high-risk AI systems: expected in February 2026
No date yet:
- Commission guidelines on:
  - obligations for high-risk AI systems and obligations along the AI value chain,
  - the transparency obligations for certain AI systems (AI systems interacting directly with natural persons; AI systems generating synthetic audio, image, video or text content; emotion recognition and biometric categorisation systems; deep fakes; and AI systems generating or manipulating text published to inform the public on matters of public interest),
  - the provisions related to substantial modification, and
  - the interplay of the AI Act and the product safety legislation listed in Annex I of the AI Act.
- Codes of Practice for providers and deployers of AI systems on the obligations regarding the detection and labelling of artificially generated or manipulated content.
- Commission templates for the post-market monitoring plan and the fundamental rights impact assessment.
Summary
The AI Act introduces EU-wide minimum requirements for AI systems and imposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rules. AI systems posing an ‘unacceptable level of risk’ are strictly prohibited, while systems considered ‘high-risk’ are permitted but subject to the most stringent obligations. The AI Act also regulates foundation models and generative AI systems, under the label of ‘General Purpose AI’, with a specific set of obligations.
Scope
Applies in varying degrees to providers, deployers, product manufacturers, importers and distributors of AI systems, depending on the risk.
Key elements
- Risk-based approach to AI systems: the higher the perceived risk, the stricter the rules. AI systems posing an ‘unacceptable level of risk’ to European fundamental rights, such as social scoring by governments, are strictly prohibited. ‘High-risk’ systems, such as automated recruitment software, are subject to the most stringent obligations, while limited-risk systems, such as chatbots and deep fakes, are subject to transparency rules. Minimal-risk systems, such as AI-enabled video games or spam filters, remain free to use.
- Specific regulation of General Purpose AI (foundation models): a tiered approach with baseline obligations for all General Purpose AI (GPAI) systems and models, plus add-on obligations for GPAI models with ‘systemic risks’.
- Providers of high-risk AI systems must conduct a conformity assessment, in most cases as a self-assessment. Stand-alone high-risk AI systems must be registered in an EU database.
- Fines (in each case up to the higher of a fixed amount and a percentage of global annual turnover; see the sketch after this list):
- up to €35m or 7% of global annual turnover for infringements of the prohibited practices;
- up to €15m or 3% of global annual turnover for non-compliance with other requirements or obligations of the AI Act, including the rules on general-purpose AI models;
- up to €7.5m or 1% of global annual turnover for supplying incorrect, incomplete or misleading information.
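To make the ‘whichever is higher’ cap mechanic concrete, here is a minimal Python sketch. The tier labels and the max_fine helper are illustrative names of our own, not terms from the AI Act, and the figures simply mirror the list above.

# Illustrative only: the AI Act caps fines at a fixed amount or a share of
# global annual turnover, whichever is higher. Tier labels are hypothetical.
FINE_TIERS_EUR = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    # Return the higher of the fixed cap and the turnover-based cap.
    fixed_cap, turnover_share = FINE_TIERS_EUR[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Example: a company with €1bn global annual turnover engaging in a
# prohibited practice faces a maximum fine of €70m (7% of €1bn > €35m).
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0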
Challenges
- Legal uncertainty from self-conformity assessment
- High administrative burden from documentation obligations, including:
- Risk management system
- Registration of stand-alone AI systems in EU database
- Declaration of conformity needs to be signed
- For generative AI: a sufficiently detailed summary of the copyrighted material in the training data, plus safeguards to ensure the legality of output
- Overlaps and redundancies with the GDPR
Key Freshfields contact(s):
Dr. Theresa Ehlen, Partner
Düsseldorf, Frankfurt am Main
Dr. Christoph Werkmeister, Partner
Düsseldorf
Dr. Lutz Riede, Partner
Vienna, Düsseldorf
Giles Pratt, Partner
London
Rachael Annear, Partner
London
Matthias Hofer, Senior Associate
Vienna
Zofia Aszendorf, Senior Associate
London
Tochukwu Egenti, Associate
London