The EU AI Act Compliance Whitepaper for European Enterprises
Executive Summary
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive horizontal law on artificial intelligence. It entered into force on 1 August 2024 and applies in stages; it classifies AI systems by risk and imposes documentation, transparency, governance and conformity-assessment obligations on providers and deployers across all 27 EU Member States.
Key takeaways
- The Act defines four risk classes: unacceptable (prohibited), high (regulated), limited (transparency obligations) and minimal (voluntary).
- High-risk AI system rules become fully applicable on 2 August 2026, with major obligations for providers including risk management, data governance and conformity assessment.
- General-Purpose AI (GPAI) models face dedicated rules. Models above a compute threshold of around 10^25 FLOPs are presumed to pose systemic risk and face stricter obligations.
- Maximum fines: EUR 35 million or 7% of global annual turnover for prohibited practices; EUR 15 million or 3% for high-risk non-compliance.
- Sovereign on-premise AI infrastructure addresses the AI Act, GDPR Chapter V and NIS2 simultaneously — a structurally compliant alternative to US cloud LLMs.
Who should read this
- CISOs and CIOs evaluating AI deployment options under the new regulatory framework.
- DPOs and compliance officers mapping AI use cases against the four risk classes.
- AI governance leads designing the inventory, classification and documentation processes.
- Procurement teams negotiating AI vendor contracts and assessing systemic risk exposure.
What Is the EU AI Act?
The EU AI Act is the first major horizontal regulation on artificial intelligence anywhere in the world. It applies to AI systems placed on the market, put into service or used within the European Union, regardless of where the provider is established. Like GDPR, it has explicit extraterritorial reach.
The regulation adopts a risk-based approach: rather than regulating AI uniformly, it classifies systems into four tiers and imposes proportionate requirements at each level. Most enterprise AI use falls into the limited or minimal categories with only transparency obligations, but a meaningful subset — anything used in employment, credit, education, law enforcement, biometric identification and several other sensitive domains — triggers the full high-risk regime.
Beyond the risk tiers, the Act introduces a dedicated chapter for general-purpose AI models, recognising that foundation models present risks that are not tied to a specific use case but to the capabilities of the model itself. Providers of GPAI models above a compute threshold face additional model-evaluation, adversarial-testing and incident-reporting obligations.
1 Aug 2024 — AI Act enters into force · 2 Feb 2025 — Prohibited AI bans take effect · 2 Aug 2025 — GPAI rules apply · 2 Aug 2026 — High-risk AI obligations apply (the most consequential deadline for most enterprises) · 2 Aug 2027 — Full applicability, including pre-existing high-risk systems.
The Four Risk Classes
The Act's defining feature is its risk-based architecture. Every AI system within scope must be classified into one of four tiers, and the obligations scale accordingly. The exact distribution varies by sector, but minimal-risk systems account for the vast majority of enterprise AI use.
Unacceptable risk (Chapter II, Article 5). Practices considered fundamentally incompatible with EU values, including social scoring, real-time remote biometric identification in public spaces and emotion recognition at workplaces or schools. Outright banned.
High risk (Chapter III + Annex III). AI systems in eight sectors including employment, education, essential services, law enforcement and biometric identification. Subject to a comprehensive set of obligations under Articles 8-15.
Limited risk (Chapter IV, Article 50). AI systems that interact with humans (chatbots), generate synthetic content (deepfakes) or perform emotion recognition. Must inform users that they are interacting with AI.
Minimal risk (Article 95). Spam filters, video games, inventory management and the vast majority of enterprise AI use. Voluntary codes of conduct only.
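As a first-pass triage, the four-tier logic can be sketched in code. The domain lists below are simplified illustrations, not a legal mapping; real classification requires assessing each system against Article 5, Annex III and the transparency provisions.

```python
# Illustrative first-pass triage of AI use cases into the Act's four tiers.
# The domain sets are simplified examples, not a legal classification.
PROHIBITED = {"social_scoring", "workplace_emotion_recognition",
              "untargeted_face_scraping"}
HIGH_RISK = {"employment", "education", "credit_scoring",
             "law_enforcement", "biometric_identification"}
LIMITED = {"chatbot", "deepfake_generation", "emotion_recognition"}

def classify(domain: str) -> str:
    """Return the AI Act risk tier for a use-case domain."""
    if domain in PROHIBITED:
        return "unacceptable"   # banned outright (Article 5)
    if domain in HIGH_RISK:
        return "high"           # full Articles 8-15 regime
    if domain in LIMITED:
        return "limited"        # transparency obligations only
    return "minimal"            # voluntary codes of conduct

print(classify("employment"))   # high
print(classify("spam_filter"))  # minimal
```

A helper like this is only a screening aid: any system near a tier boundary needs a documented legal assessment, not a lookup table.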
Prohibited AI Practices
Article 5 of the AI Act lists categories of AI systems that are prohibited outright across the European Union. The bans took effect on 2 February 2025 and apply regardless of any commercial or operational benefit. These are not high-risk systems with obligations — they are flatly forbidden.
- Subliminal techniques beyond consciousness or purposefully manipulative techniques that materially distort behaviour and cause harm.
- Exploitation of vulnerabilities of specific groups (age, disability, socio-economic situation) to materially distort behaviour.
- Social scoring by public authorities or on their behalf, leading to detrimental treatment in unrelated contexts.
- Predictive policing based solely on profiling or assessment of personality traits.
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
- Emotion recognition in workplaces and educational institutions, except for medical or safety reasons.
- Biometric categorisation systems that infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, with narrow and tightly controlled exceptions.
The prohibitions are absolute: there is no risk-management exception and no commercial-benefit defence. Any organisation operating one of these practices in the EU faces the maximum fine of EUR 35 million or 7% of global annual turnover, whichever is higher.
High-Risk AI Systems (Annex III)
High-risk AI systems are listed in Annex III of the Act. These cover eight sectors where AI use carries significant risks to health, safety or fundamental rights. From 2 August 2026, providers of high-risk systems must comply with the full Article 8-15 obligations before placing them on the EU market.
Article 9 mandates a comprehensive risk management system for every high-risk AI system, operated as a continuous iterative process throughout the system's entire lifecycle. This is the most demanding single obligation in the Act and requires substantial documentation.
High-Risk Obligations (Articles 8-15)
High-risk system providers face eight categories of substantive obligations. They are not optional. They must be in place before the system is placed on the market or put into service, and they must be maintained continuously throughout the system's lifecycle. Conformity assessment under Article 43 verifies that the obligations are met.
- Risk management (Article 9): a continuous, iterative process across the lifecycle covering identification, estimation, evaluation and mitigation of risks to health, safety and fundamental rights.
- Data and data governance (Article 10): training, validation and testing datasets must meet quality criteria including relevance, representativeness, freedom from errors and statistical properties appropriate to the intended purpose.
- Technical documentation (Article 11): comprehensive documentation drawn up before placing on the market, kept up to date, and demonstrating conformity with the requirements of the Act.
- Record-keeping (Article 12): automatic logging of events ("logs") over the entire lifetime of the high-risk AI system, ensuring traceability of operation and supporting post-market monitoring.
- Transparency (Article 13): instructions for use that enable deployers to understand the system's capabilities, limitations, expected accuracy and circumstances of use.
- Human oversight (Article 14): human-machine interface tools that enable effective oversight by natural persons during the period in which the system is in use.
- Accuracy, robustness and cybersecurity (Article 15): an appropriate level of accuracy, robustness and cybersecurity throughout the lifecycle, with declared performance metrics and protection against adversarial attacks.
- Quality management system (Article 17): a documented QMS covering compliance strategies, design controls, data management, post-market monitoring and serious-incident reporting.
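To make the record-keeping duty concrete, here is a minimal sketch of the kind of append-only event log Article 12 envisages. The field names and file format are illustrative assumptions, not anything mandated by the Act.

```python
import json
import time

class AuditLog:
    """Minimal append-only event log sketch for Article 12-style
    traceability. Field names are illustrative, not prescribed."""

    def __init__(self, path: str):
        self.path = path

    def record(self, event_type: str, system_id: str, detail: dict) -> dict:
        entry = {
            "ts": time.time(),       # when the event occurred
            "system_id": system_id,  # which high-risk system produced it
            "event": event_type,     # e.g. "inference", "human_override"
            "detail": detail,
        }
        # Append as one JSON line so the log is never rewritten in place.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

log = AuditLog("audit.jsonl")
log.record("inference", "sys-001", {"outcome": "approved"})
```

A production log would additionally be tamper-evident and retained for the periods the Act prescribes for providers and deployers.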
General-Purpose AI Models (Chapter V)
The AI Act creates a parallel regime for General-Purpose AI (GPAI) models — foundation models trained on large datasets with broad capabilities not tied to a specific use case. The rules apply to providers of the models themselves, regardless of how downstream deployers use them. There are two tiers: all GPAI providers owe transparency and documentation duties, while models presumed to pose systemic risk (above roughly 10^25 FLOPs of training compute) face additional model-evaluation, adversarial-testing and incident-reporting obligations.
A voluntary Code of Practice on GPAI is being developed by the AI Office to operationalise these obligations. Providers can demonstrate compliance through the code or through an alternative adequate means.
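The systemic-risk presumption reduces to a simple compute check, as sketched below; the 10^25 figure comes from the Act, while the function and its name are illustrative.

```python
def presumed_systemic_risk(training_flops: float,
                           threshold: float = 1e25) -> bool:
    """GPAI systemic-risk presumption: cumulative training compute
    at or above the Act's 10^25 FLOP threshold."""
    return training_flops >= threshold

print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(5e23))  # False
```

The presumption is rebuttable, and the Commission can also designate models as systemic-risk on other grounds, so the threshold is a trigger for scrutiny rather than the whole test.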
Penalties & Enforcement
The AI Act establishes a three-tier penalty structure: up to EUR 35 million or 7% of worldwide annual turnover for prohibited practices, EUR 15 million or 3% for most other violations including high-risk non-compliance, and EUR 7.5 million or 1% for supplying incorrect or misleading information to authorities. Member State authorities issue fines under their national law, but the maxima are set at the EU level. The structure mirrors GDPR, but the ceilings are higher.
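Because each maximum is the higher of a fixed cap or a turnover share, the percentage dominates for large firms. A minimal sketch, with the caps and percentages taken from the Act and the function itself an illustrative assumption:

```python
def max_fine(turnover_eur: float, tier: str) -> float:
    """Maximum AI Act fine: the higher of a fixed cap or a share of
    worldwide annual turnover (Article 99 structure)."""
    caps = {
        "prohibited":      (35_000_000, 0.07),  # Article 5 violations
        "high_risk":       (15_000_000, 0.03),  # most other obligations
        "misleading_info": (7_500_000, 0.01),   # incorrect info to authorities
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * turnover_eur)

# A provider with EUR 2 bn turnover: 7% = EUR 140 m exceeds the fixed cap.
print(max_fine(2_000_000_000, "prohibited"))  # 140000000.0
```

For SMEs the Act inverts the rule to the lower of the two amounts, which a real calculator would need to handle.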
Why Sovereign AI Matters
For organisations operating in Europe, the AI Act does not exist in isolation. It overlaps with GDPR (data protection), NIS2 (cybersecurity) and the upcoming Cyber Resilience Act. When an AI system processes personal data through a non-EU cloud provider, the entity must satisfy all four regimes simultaneously. Sovereign on-premise AI is the only architecture that satisfies all four by construction.
AI Act, GDPR and NIS2 form a regulatory triangle. Sovereign AI infrastructure is the only architecture that addresses all three without compromise. Cloud-based AI can be made compliant through contractual and technical controls, but the compliance burden is permanent and the failure modes are external.
Implementation Roadmap
Most enterprises are still early in their AI Act journey. The phases below describe what should be in place today, what should be operational by August 2026 and what continuous practice looks like beyond that.
Phase 1 — Inventory & classification (Now → April 2026)
- Inventory all AI systems in development, in production and procured from third parties.
- Classify each system against the four AI Act risk classes.
- Identify any prohibited practices currently in use and discontinue them immediately.
- Assign accountability for each in-scope system to a named owner.
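The Phase 1 inventory can start as a simple structured record per system. The schema below is an illustrative starting point, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of an AI Act inventory. Fields are an illustrative
    starting point, not a mandated schema."""
    name: str
    owner: str                       # named accountable person
    source: str                      # "in-house" or vendor name
    risk_tier: str = "unclassified"  # one of the four Act tiers
    in_production: bool = False
    notes: list = field(default_factory=list)

inventory = [
    AISystemRecord("cv-screening", owner="HR Lead", source="vendor-x",
                   risk_tier="high", in_production=True),
    AISystemRecord("support-chatbot", owner="CS Lead", source="in-house",
                   risk_tier="limited", in_production=True),
]

# Phase 1 output: every high-risk system, each with a named owner.
high_risk = [r.name for r in inventory if r.risk_tier == "high"]
print(high_risk)  # ['cv-screening']
```

Even a spreadsheet with these columns is enough to begin; the point is that every system has a tier, an owner and a source before the documentation work of Phase 2 starts.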
Phase 2 — Documentation & risk management (Apr → Aug 2026)
- Document the risk management system for every high-risk AI system per Article 9.
- Build the technical documentation file required by Article 11 and Annex IV.
- Validate training data quality against Article 10 requirements.
- Define and test human-oversight controls for every high-risk system.
Phase 3 — Conformity & deployment (Aug 2026)
- Complete the Article 43 conformity assessment for each high-risk system.
- Affix the CE marking and register in the EU database.
- Issue Article 13 instructions for use to all deployers.
- Stand up post-market monitoring and serious-incident reporting.
Phase 4 — Operate and improve (Beyond Aug 2026)
- Continuous post-market monitoring and serious-incident reporting.
- Annual review of the risk management system per Article 9.
- Update technical documentation as the system evolves.
- Track regulatory and standards developments — the Act is a moving target.
How Orizon AI Supports AI Act Compliance
Orizon AI is a sovereign, on-premise generative AI platform built around the European compliance triangle: AI Act, GDPR and NIS2. Where US cloud LLMs require continuous contractual and technical controls to remain compliant, Orizon AI is compliant by architecture.
Next steps
- Book an AI Act readiness assessment with our compliance team.
- Request a demo of the Orizon AI on-premise appliance and its compliance reporting features.
- Get a copy of our Orizon AI security and compliance documentation pack.
Contact: [email protected] · orizon.one/orizonai · EU headquarters, sovereign infrastructure.