The Four Enforcement Waves Are Real, and the Clock Is Running
The EU AI Act entered into force in August 2024. Most enterprises missed it because early attention focused on AI literacy obligations. But enforcement reality hits in four waves, and the timeline is non-negotiable. If you're operating AI systems in the EU (or serving EU customers), here's what actually happens, and when:
| Enforcement Wave | Date | What Goes Live | Impact |
|---|---|---|---|
| Wave 1: Prohibited Uses | February 2025 | Ban on eight categories of AI practice (social scoring, untargeted facial-image scraping, emotion recognition in workplaces and schools, real-time remote biometric identification in public spaces, etc.) | Fines up to 7% of global annual turnover or 35 million EUR, whichever is higher. Non-negotiable. Any deployment of these systems becomes illegal. |
| Wave 2: High-Risk Requirements | August 2026 | 24 categories of high-risk AI systems must meet 18 mandatory compliance requirements | Operational compliance becomes mandatory: risk management, data governance, technical documentation, human oversight, transparency disclosures. Non-compliance penalties: up to 15 million EUR or 3% of global turnover, whichever is higher. |
| Wave 3: Foundation Models (GPAI) | August 2025 onward | All general-purpose AI models require transparency and copyright documentation; models trained above 10^25 FLOPs are presumed to pose systemic risk and face additional evaluation and incident-reporting duties | Transparency requirements for deployment. If you're building on GPT-4-scale models, this applies to you. |
| Wave 4: General Compliance Culture | Ongoing | Low-risk systems subject to transparency, documentation, and record-keeping obligations | Administrative burden. Not high penalty risk, but audit-time and resource intensive. |
That's 17 months from now for high-risk system compliance. For reference: 94% of financial services institutions, 87% of healthcare systems, and 72% of insurance firms in scope are under-prepared.
First Question to Answer: What Is Your Role in This?
The EU AI Act assigns obligations by role: providers (who develop AI systems or place them on the market), deployers (who use AI systems professionally), and importers/distributors (who bring third-party systems into the EU market). Your compliance obligations depend entirely on which role(s) you occupy, and many enterprises occupy several simultaneously. Know which applies to you before building your compliance roadmap.
Most enterprises are deployers, not providers. But if you're building AI models internally, training custom models on proprietary data, or releasing AI services to other organizations, you are also a provider. The compliance requirements are additive: deployers + providers = full stack compliance.
Identifying High-Risk Systems: The 24 Categories That Trigger Mandatory Compliance
Not all AI systems are "high-risk" under the EU AI Act. Prohibited practices (eight categories) are outright banned. General-purpose systems and low-risk uses have lighter compliance requirements. But 24 categories trigger mandatory, heavy-duty compliance. These systems require risk management systems, technical documentation, human oversight protocols, and continuous monitoring. Identifying them correctly is your first operational task.
Your audit task is simple: inventory all AI systems in your organization and ask, "Does it fit one of these 24 categories?" If yes, mark it high-risk. If it's decision-supporting (suggestion only, human makes final call) rather than decision-making, risk is lower. But the burden of proof falls on you.
Need clarity on your high-risk systems?
Our AI inventory assessment tool helps you classify systems and identify which compliance obligations apply to your specific systems. No guesswork, no vendor bias.
What High-Risk Compliance Actually Requires: The 18 Mandatory Requirements in Practice
Once you identify high-risk systems, you must implement 18 mandatory requirements. These aren't optional. They're the difference between "compliant" and "facing penalties." Here's what actually has to be done, condensed into operational requirements:
The 90-Day Sprint: Your High-Risk System Compliance Timeline
Compliance doesn't happen overnight, but it needs to happen in 90 days to stay ahead of August 2026 deadlines. Here's the specific week-by-week sprint that works:
Weeks 1-2: Audit and Classify
Inventory all AI systems. Classify by risk tier. Identify high-risk systems. Document current state (what documentation exists, what testing has been done, who owns each system). Output: high-risk system registry with 2-3 page summaries for each.
Weeks 3-4: Risk Assessment Deep Dive
For each high-risk system, conduct formal risk assessment. Identify failure modes, potential misuse, accuracy gaps, bias vectors. Document controls already in place and gaps. Output: risk assessment report per system with prioritized gap list.
Weeks 5-8: Documentation and Process Build
Develop technical documentation templates and complete them for all high-risk systems. Design and document logging infrastructure. Build transparency notices. Create human oversight procedures and escalation protocols. Output: complete technical documentation, logging implementation, and SOPs for human review.
Weeks 9-10: Testing and Validation
Execute testing protocols: accuracy/robustness testing, bias assessment, adversarial testing, security assessment. Update risk assessments based on test results. Output: testing reports with metrics and remediation priorities.
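One concrete bias-assessment metric for the testing step above is the disparate impact ratio between demographic groups. A minimal sketch; the 0.8 "four-fifths" threshold is a common screening heuristic from US employment practice, not a threshold set by the EU AI Act:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of favorable (1) outcomes among a group's decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (always <= 1).

    Values below 0.8 are a common screening flag for adverse impact;
    treat the threshold as a review trigger, not a legal standard.
    """
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high
```

Run this per protected attribute on held-out decisions, record the ratios in the testing report, and feed any flagged group pairs into the remediation priority list.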
Weeks 11-12: Remediation and Sign-Off
Address critical gaps identified in testing. Implement human oversight and incident reporting procedures. Conduct compliance review with legal/compliance teams. Build training for model owners and human reviewers. Output: final compliance sign-off, training completion, incident response procedures live.
This sprint assumes you have 2-3 high-risk systems and one dedicated compliance person. If you have 10+ systems, multiply timelines by 3-5x or build a compliance team.
GPAI Obligations: Do You Deploy Foundation Models?
General-purpose AI models (GPAI) are large foundation models like GPT-4, Claude, Llama, and others: models with broad capability across many tasks rather than a single narrow purpose. Note that the 10^25 FLOPs (floating point operations) training-compute figure does not define GPAI; it is the threshold above which a GPAI model is presumed to pose "systemic risk" and faces additional obligations. If you're deploying these models, you have specific obligations:
- Transparency Register: Maintain model cards describing architecture, training data, capabilities, limitations, and intended uses. Make available to regulators upon request.
- Copyright Documentation: If the model was trained on copyrighted content, document it. This is an emerging compliance area with ongoing legal uncertainty, but documentation is mandatory.
- Abuse Monitoring: Implement procedures to detect and report misuse of GPAI models, including jailbreak attempts and harmful applications.
- Downstream Risk Assessment: Assess how GPAI models are used in your high-risk systems. If a GPAI model powers a high-risk application, both GPAI obligations and high-risk obligations apply.
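The transparency-register obligation above reduces, in practice, to a maintained model card per deployed model. A minimal sketch of such an entry; the JSON schema and field names here are assumptions for illustration, since the Act prescribes what information is kept, not a particular layout:

```python
import json

# Training-compute threshold above which the Act presumes systemic risk.
SYSTEMIC_RISK_FLOPS = 1e25

def build_model_card(model_name: str, provider: str,
                     training_data_summary: str,
                     training_flops: float) -> str:
    """Return a JSON transparency-register entry for one model.

    Illustrative schema: real entries should follow your regulator's
    and provider's documented formats.
    """
    card = {
        "model": model_name,
        "provider": provider,
        "training_data_summary": training_data_summary,
        "training_compute_flops": training_flops,
        "presumed_systemic_risk": training_flops > SYSTEMIC_RISK_FLOPS,
        "intended_uses": [],            # fill per deployment
        "known_limitations": [],        # fill from provider documentation
        "copyright_policy_documented": False,
    }
    return json.dumps(card, indent=2)
```

Keeping these entries in version control gives you the "available to regulators upon request" property with no extra tooling.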
If you're using Claude, GPT-4, or similar models in production, you already have deployment obligations under the EU AI Act. This is not optional based on your vendor's compliance status; deployers share responsibility.
Sector-Specific Practical Guidance
Financial Services: Credit Scoring, Insurance, Securities
Financial services are heavily regulated under the EU AI Act. Most financial AI systems (credit scoring, insurance underwriting, securities trading, fraud detection above threshold) are high-risk. Requirements:
- Credit scoring and loan decisions: Must include human review override and transparency to applicants on decision factors.
- Insurance underwriting: Bias testing mandatory. Gender-based pricing is already banned under EU insurance law; test for proxy discrimination (zip codes, education level, and similar features correlating with protected characteristics).
- Securities trading and market manipulation detection: Require robust adversarial testing and audit trails.
- Fraud detection: Lower risk if decision-supporting (alerts human reviewers) rather than auto-blocking. If auto-blocking, high-risk requirements apply.
Healthcare: Medical Devices and Patient Risk Assessment
Healthcare AI systems that are also medical devices face dual compliance: EU AI Act high-risk requirements and the Medical Device Regulation (MDR), with MDCG guidance addressing the overlap. Key overlaps:
- Diagnostic support systems (CAD, risk prediction): High-risk. Must include training for clinicians, documented performance on diverse patient populations, and human override capability.
- Clinical decision support: If purely informational (the system supports a human decision), risk is lower than if the system auto-determines the treatment pathway.
- Mental health and psychological assessment: Specifically high-risk. Require explainability and human clinician review.
- Patient data governance: GDPR plus EU AI Act requirements overlap. You need data processing agreements, impact assessments, and bias monitoring.
Three Actions to Start Today
Action 1: Inventory Your AI Systems. List all AI systems in your organization. Categorize by risk (prohibited, high-risk, low-risk). Assign owner to each. This takes 2-4 weeks for most enterprises and is non-negotiable as your starting point.
Action 2: Assign Compliance Ownership. Designate a compliance lead or team. Budget resources (this is not a part-time project). Define reporting structure to leadership. Build cross-functional governance with legal, product, engineering, and operations.
Action 3: Begin Technical Documentation. For your highest-risk systems, start documenting architecture, training data, testing methodology, and known limitations. Don't wait for perfect documentation; start the process and iterate.