EU AI Act Compliance Requirements by Risk Tier

Type: Decision Rule · Confidence: 0.92 · Sources: 8 · Verified: 2026-03-02 · Applies to: compliance > ai | EU jurisdiction

Rule

Organizations that develop, deploy, import, or distribute AI systems in the European Union must classify their systems into one of four risk tiers — unacceptable, high, limited, or minimal — and meet the corresponding obligations under Regulation (EU) 2024/1689 (the EU AI Act). Prohibited practices carry fines up to EUR 35 million or 7% of global annual turnover, whichever is higher. High-risk AI systems require conformity assessments, technical documentation, human oversight, and EU database registration before market placement. General-purpose AI (GPAI) model providers face separate, parallel obligations regardless of the downstream risk tier. [src1, src2]
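The "whichever is higher" penalty rule can be sketched as a small lookup. This is an illustrative sketch only: the tier names, function, and field names are ours, while the amounts and percentages come from the rule above (Article 99's tiered caps).

```python
# Illustrative sketch: upper bound of the administrative fine under the
# EU AI Act's tiered penalty structure. Amounts/percentages per the rule above;
# names and structure are our own, not the Act's.

PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # EUR 35M or 7% of global turnover
    "high_risk_violation": (15_000_000, 0.03),   # EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.01),  # EUR 7.5M or 1%
}

def max_fine_cap(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the fine cap: the higher of the fixed amount and the
    turnover percentage (the 'whichever is higher' rule)."""
    fixed, pct = PENALTY_TIERS[violation]
    return max(fixed, pct * global_annual_turnover_eur)

# A provider with EUR 1 billion turnover: 7% = EUR 70M exceeds the EUR 35M floor.
print(max_fine_cap("prohibited_practice", 1_000_000_000))  # 70000000.0
```

Note that these are ceilings set by the regulation; the actual fine imposed by a market surveillance authority depends on the circumstances of the infringement.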

Evidence

The EU AI Act entered into force on 1 August 2024 and is being enforced in phases: prohibited practices became enforceable on 2 February 2025, GPAI provider obligations on 2 August 2025, and most high-risk system obligations apply from 2 August 2026.

The penalty structure is the most aggressive of any EU digital regulation: up to EUR 35 million or 7% of global turnover for prohibited practices (compared to GDPR's 4%), EUR 15 million or 3% for high-risk violations, and EUR 7.5 million or 1% for supplying incorrect information.

The Commission published guidelines on prohibited practices in February 2025, and the AI Office became operationally active on 2 August 2025 with exclusive jurisdiction over GPAI enforcement. As of January 2026, the GPAI Code of Practice has been finalized by independent experts, providing practical compliance guidance for model providers. Finland became the first EU Member State with full AI Act enforcement powers on 22 December 2025. Italy enacted Law 132/2025, the first national implementing legislation, establishing fines up to EUR 774,685 with business disqualification measures. [src1, src3, src4, src5, src7]

Rationale

The EU AI Act exists to create a harmonised legal framework for trustworthy AI across the single market, balancing innovation with fundamental rights protection. The risk-based approach is deliberately tiered: the higher the potential harm to individuals, the stricter the obligations. This prevents a one-size-fits-all regulatory burden that would stifle low-risk AI innovation while ensuring that AI systems used in critical domains — employment decisions, law enforcement, credit scoring, healthcare triage — meet safety and transparency standards proportionate to their impact. [src1, src8]

Framework Selection Decision Tree

START — User needs AI regulation compliance guidance
├── Which jurisdiction?
│   ├── European Union → EU AI Act ← YOU ARE HERE
│   ├── United States → US Executive Order on AI / NIST AI RMF
│   ├── United Kingdom → UK AI Safety Framework (pro-innovation approach)
│   └── Multiple jurisdictions → Cross-jurisdictional AI compliance comparison
├── What type of AI system?
│   ├── General-purpose AI model (foundation model / LLM)
│   │   ├── Trained with >10^25 FLOPs → GPAI with systemic risk obligations
│   │   └── Below threshold → Standard GPAI obligations (transparency, copyright, documentation)
│   ├── Specific-purpose AI system → Classify by risk tier (see below)
│   └── Open-source model → Exempt unless high-risk or GPAI with systemic risk
├── Risk tier classification?
│   ├── Prohibited (Article 5) → STOP: system cannot be deployed in EU
│   ├── High-risk (Annex I/III) → Full compliance: conformity assessment, documentation, monitoring, registration
│   ├── Limited risk → Transparency obligations: disclose AI use, label deepfakes/generated content
│   └── Minimal risk → No specific obligations (voluntary codes of conduct encouraged)
└── Compliance maturity?
    ├── Existing AI governance program → Audit against EU AI Act requirements
    └── No existing program → Start with risk classification, then build compliance framework
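The risk-tier branch of the decision tree above can be sketched as a classification function. The category names mirror the tree; the booleans are assumed inputs that a legal review must supply (whether a system falls under Article 5 or Annex I/III is a legal determination, not something code can compute), and the `AISystem` type is our own illustration.

```python
# Hedged sketch of the decision tree's risk-tier branch. The legal
# determinations (Article 5, Annex I/III) are assumed inputs from legal review.

from dataclasses import dataclass

SYSTEMIC_RISK_FLOPS = 1e25  # the Act's presumption threshold for GPAI systemic risk

@dataclass
class AISystem:
    is_gpai: bool = False
    training_flops: float = 0.0
    article5_prohibited: bool = False     # legal review outcome, not computed
    annex_high_risk: bool = False         # Annex I/III classification
    interacts_with_humans: bool = False   # proxy for limited-risk transparency duties

def classify(system: AISystem) -> str:
    # GPAI models carry their own obligation track, as in the tree above.
    if system.is_gpai:
        if system.training_flops > SYSTEMIC_RISK_FLOPS:
            return "GPAI with systemic risk"
        return "GPAI (standard obligations)"
    if system.article5_prohibited:
        return "prohibited"                 # STOP: cannot be deployed in the EU
    if system.annex_high_risk:
        return "high-risk"                  # full compliance track
    if system.interacts_with_humans:
        return "limited risk (transparency)"
    return "minimal risk"
```

A GPAI model can additionally be embedded in a high-risk system, in which case both obligation sets apply; this sketch returns only the primary track.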

Application Checklist

Step 1: Classify the AI system by risk tier

Step 2: Identify role-specific obligations

Step 3: Implement required controls and documentation

Step 4: Register, deploy, and establish ongoing monitoring
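Step 2 (role-specific obligations) can be sketched as a lookup table. The obligation lists below are a simplified summary of the high-risk provisions for each operator role, not an exhaustive legal enumeration; the structure and names are our own.

```python
# Simplified sketch: operator role -> high-level high-risk obligations.
# A condensed summary for illustration, not an exhaustive legal enumeration.

ROLE_OBLIGATIONS = {
    "provider": [
        "risk management system",
        "technical documentation",
        "conformity assessment + CE marking",
        "EU database registration",
        "post-market monitoring",
    ],
    "deployer": [
        "use per provider instructions",
        "human oversight",
        "input data quality control",
        "incident reporting",
    ],
    "importer": [
        "verify conformity assessment was performed",
        "verify CE marking and documentation",
    ],
    "distributor": [
        "verify CE marking",
        "withhold non-conforming systems",
    ],
}

def obligations_for(role: str) -> list[str]:
    """Return the summarized obligation list for a role (empty if unknown)."""
    return ROLE_OBLIGATIONS.get(role.lower(), [])
```

An organization can hold multiple roles at once (e.g. a deployer that substantially modifies a high-risk system can become its provider), in which case the obligation sets combine.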

Anti-Patterns

Wrong: Treating the EU AI Act as a future concern because full enforcement is in 2026

Organizations delay compliance planning, assuming they have time until August 2026. In reality, prohibited practices have been enforceable since February 2025 and GPAI obligations since August 2025. Companies using social scoring, emotion recognition in workplaces, or manipulative AI techniques are already in violation. [src3, src7]

Correct: Map current AI systems against Article 5 immediately; begin GPAI compliance now

Conduct an inventory of all AI systems in use, classify them against the prohibited practices list (effective since Feb 2025), and ensure GPAI model providers you rely on have met their August 2025 obligations. Build the high-risk compliance program in parallel for the August 2026 deadline. [src1, src4]

Wrong: Assuming open-source AI models are exempt from all obligations

Some organizations believe that because the Act provides exemptions for free and open-source models, they can use any open-source AI without compliance obligations. The exemption does not apply to high-risk AI systems or GPAI models with systemic risk. Deployers of open-source high-risk systems still bear deployer obligations. [src2, src6]

Correct: Evaluate open-source models against high-risk and GPAI criteria before claiming exemption

Check whether the open-source model is classified as high-risk (Annex III) or is a GPAI model exceeding the 10^25 FLOPs threshold. If so, full obligations apply regardless of the license. The open-source exemption covers only minimal- and limited-risk models that are not GPAI with systemic risk. [src1, src6]
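The check described above can be sketched as a predicate: open-source status alone is never decisive. The inputs (Annex III classification, FLOPs) are assumed known, and the function name is our own illustration.

```python
# Sketch of the open-source exemption check described above. License status
# alone is not decisive; high-risk and systemic-risk GPAI criteria override it.

def open_source_exempt(is_open_source: bool,
                       is_annex_iii_high_risk: bool,
                       is_gpai: bool,
                       training_flops: float = 0.0) -> bool:
    """True only if the free/open-source exemption can apply."""
    if not is_open_source:
        return False
    if is_annex_iii_high_risk:
        return False          # full obligations apply regardless of license
    if is_gpai and training_flops > 1e25:
        return False          # GPAI with systemic risk: full obligations
    return True
```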

Wrong: Equating GDPR compliance with AI Act compliance

Organizations with mature GDPR programs assume their data protection framework satisfies AI Act requirements. While there is overlap (data quality, impact assessments), the AI Act introduces distinct requirements: conformity assessment, CE marking, risk management systems specific to AI, and EU database registration — none of which exist under GDPR. [src4]

Correct: Treat the AI Act as a separate compliance stream that intersects with GDPR

Build dedicated AI Act compliance processes: risk classification, conformity assessment, technical documentation, and post-market monitoring. Leverage existing GDPR data protection impact assessments as inputs to the AI Act's fundamental rights impact assessment, but do not treat them as substitutes. [src1, src4]

Common Misconceptions

Misconception: The EU AI Act only applies to companies based in the EU.
Reality: The Act has extraterritorial reach. Any provider whose AI system is placed on the EU market or whose system's output is used in the EU is subject to the regulation, regardless of where the provider is established. This mirrors GDPR's extraterritorial scope. [src1, src2]

Misconception: All AI systems require conformity assessment under the EU AI Act.
Reality: Only high-risk AI systems require conformity assessment. Minimal-risk systems (the majority of AI applications) face no specific obligations. Limited-risk systems only need to meet transparency requirements. The Act is deliberately tiered to avoid burdening low-risk innovation. [src1, src2]

Misconception: The EU AI Act bans all facial recognition.
Reality: The Act bans real-time remote biometric identification in public spaces for law enforcement, with specific exceptions (locating missing persons, preventing imminent threats, identifying serious crime suspects). Other biometric systems may be permitted but classified as high-risk with corresponding obligations. Untargeted facial image scraping from the internet or CCTV is separately prohibited. [src3, src8]

Misconception: GPAI obligations only apply if the model is deployed as a high-risk system.
Reality: GPAI model providers have standalone obligations (technical documentation, training data summaries, copyright compliance) regardless of the downstream application's risk tier. These apply as a separate layer on top of any high-risk deployment obligations. [src6]

Comparison with Similar Rules

| Rule/Framework | Key Difference | When to Use |
| --- | --- | --- |
| EU AI Act (this unit) | Legally binding regulation with tiered risk classification, conformity assessment, penalties up to 7% of turnover | AI systems placed on or used in the EU market |
| GDPR (EU) | Data protection regulation; covers personal data processing, not AI system governance specifically | Personal data processing by AI systems; complementary to the AI Act |
| NIST AI Risk Management Framework (US) | Voluntary framework, no legal penalties; provides risk management methodology | US-based AI development seeking best practices without a regulatory mandate |
| UK AI Safety Framework | Pro-innovation, sector-led approach; no single horizontal regulation | AI development and deployment in the UK market |
| US Executive Order on AI (EO 14110, rescinded January 2025) | Executive action requiring federal agency standards; less comprehensive than the EU Act | AI systems used by or supplied to US federal agencies |

When This Matters

Fetch this when a user asks about EU AI regulation, AI Act compliance requirements, AI risk classification, prohibited AI practices in the EU, GPAI model obligations, AI Act penalties, or how to determine if their AI system is high-risk under EU law. Also relevant when a user is building or deploying AI for the European market and needs to understand regulatory obligations before launch.

Related Units