Most enterprise AI teams are treating the EU AI Act as a legal and compliance project to be handled by lawyers and risk managers. That is a mistake that is already causing production delays. The EU AI Act imposes specific technical requirements on AI systems: documentation standards, data governance requirements, human oversight mechanisms, and transparency obligations that must be built into your AI architecture, not bolted on as an afterthought by a compliance team.
This guide is written for AI practitioners and AI program leaders, not legal counsel. We cover what the Act actually requires in operational terms, how to determine which of your AI systems are affected, and the 90-day compliance sprint structure we use to bring existing production systems into compliance without shutting them down.
Who Is Actually Affected
The EU AI Act applies to AI systems that are placed on the EU market or put into service in the EU, regardless of where the organization is headquartered. If your AI system is offered to customers in the EU, is operated by an entity with EU operations, or produces output that is used in the EU or affects individuals in the EU, you are likely within scope.
This means that US-headquartered enterprises running AI systems that affect European customers, employees, or partners are within scope. The extraterritorial reach is similar in design to GDPR, and organizations that assumed "we are a US company so this does not apply" are discovering that assumption is incorrect.
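As a rough first pass (an illustration, not legal advice), the scope test can be expressed as a short screening function. The field and function names below are hypothetical, and the three trigger conditions paraphrase the scope provisions described above.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Hypothetical fields; adapt to your own inventory schema.
    placed_on_eu_market: bool      # system offered to customers in the EU
    operated_by_eu_entity: bool    # deployed by an organization with EU operations
    output_used_in_eu: bool        # outputs or decisions affect individuals in the EU

def in_eu_ai_act_scope(system: AISystemProfile) -> bool:
    """First-pass scope screen; confirm any True result with counsel."""
    return (
        system.placed_on_eu_market
        or system.operated_by_eu_entity
        or system.output_used_in_eu
    )
```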
The Four-Tier Risk Classification
The EU AI Act classifies AI systems into four risk tiers, each with different compliance obligations: unacceptable risk (prohibited practices such as social scoring, banned outright), high risk (the full compliance regime described in the next section), limited risk (transparency obligations, such as disclosing that a user is interacting with an AI system), and minimal risk (no new obligations). Correctly classifying your AI systems is the first and most important step in any EU AI Act compliance program.
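As a sketch of how a portfolio inventory might encode the tiers, the enum below follows the Act's tier structure; the example system-to-tier mapping is purely illustrative and is not a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"    # banned outright, e.g. social scoring
    HIGH = "high_risk"             # full compliance regime (next section)
    LIMITED = "transparency_only"  # disclosure obligations, e.g. chatbots
    MINIMAL = "minimal_risk"       # no new obligations, e.g. spam filters

# Illustrative mapping only; real classification needs per-system review.
EXAMPLE_CLASSIFICATIONS = {
    "credit_scoring_model": RiskTier.HIGH,
    "cv_screening_ranker": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "inbox_spam_filter": RiskTier.MINIMAL,
}
```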
What High-Risk Compliance Actually Requires
If any of your AI systems are classified as high-risk, the compliance obligations are substantial. Here is what the Act actually requires, in practical operational terms.
| Requirement | What It Means in Practice |
|---|---|
| Risk management system | Documented, continuous risk management process for each high-risk AI system, covering known and foreseeable risks. Must be updated throughout lifecycle. Cannot be a one-time assessment. |
| Data governance | Training, validation, and test datasets must be documented for relevance, representativeness, and freedom from errors and biases. Lineage documentation required. Protected characteristics handling must be documented. |
| Technical documentation | Minimum 18 categories of documentation covering system purpose, development process, architecture, training data, performance metrics, risk assessment, and intended use. Must be available to market surveillance authorities on request. |
| Logging and audit trails | Automatic logging of system operation sufficient to trace events leading to any high-risk output. Logs must be retained for at least six months under the Act, and longer where sector-specific regulation requires. A minimal logging sketch follows this table. |
| Transparency | Instructions for use must allow deployers to understand system capabilities, limitations, and required human oversight. Not just "user documentation" — specific technical and operational guidance. |
| Human oversight | Technical measures enabling humans to understand, monitor, and override or stop the system. Must be built into the system architecture, not just described in documentation. |
| Accuracy and robustness | Performance measured and documented across the dimensions relevant to the intended use. Accuracy, robustness to errors, and cybersecurity resilience must be appropriate for the risk level. |
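The logging requirement is architectural: every high-risk inference needs a traceable record. Below is a minimal sketch of what such a record might capture, assuming a structured-logging setup; the function and field names are our own, not prescribed by the Act.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")

def log_high_risk_inference(model_id: str, model_version: str,
                            input_ref: str, output: dict,
                            human_reviewer: str | None = None) -> str:
    """Emit one traceable audit record per high-risk inference.

    Stores a reference to the input rather than the raw payload, so the
    audit log does not itself become a personal-data liability.
    """
    event_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_ref": input_ref,           # pointer into your data store
        "output": output,
        "human_reviewer": human_reviewer, # None means no human in the loop
    }))
    return event_id
```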
The 90-Day EU AI Act Compliance Sprint
For organizations with existing production AI systems that need to be brought into EU AI Act compliance, we use a structured 90-day sprint. This is not a comfortable timeline for organizations with large portfolios, but it is achievable for individual systems and provides a replicable template for the rest of the portfolio. The sprint runs in four phases:

1. Inventory and Classification: catalog every AI system, run the scope screen, and assign each system a risk tier (a minimal inventory schema is sketched after this list).
2. Documentation Build: assemble the technical documentation, data lineage records, and instructions for use the Act requires.
3. Technical Remediation: implement the logging, human oversight, and robustness controls in the systems themselves.
4. Conformity and Governance: complete the conformity assessment and stand up the ongoing risk management and monitoring processes.
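A concrete inventory schema forces the questions the later phases depend on. Here is a minimal sketch, assuming a simple dataclass-based registry; every field name is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the compliance inventory built during the first phase."""
    system_id: str
    owner: str                     # accountable team or individual
    purpose: str                   # intended use, in plain language
    risk_tier: str                 # from the four-tier classification
    eu_scope: bool                 # result of the scope screen
    uses_gpai_model: bool          # built on a foundation model?
    gaps: list[str] = field(default_factory=list)  # feeds remediation phases
```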
General Purpose AI Models: What the Act Requires
A significant portion of enterprise GenAI deployments use general purpose AI models (GPAI) — the large foundation models from OpenAI, Anthropic, Google, and others. The EU AI Act introduces a separate regime for GPAI models that affects both providers and enterprise deployers.
For enterprise organizations deploying GPAI models, the key implication is that you are the deployer, and the Act assigns specific obligations to deployers that you cannot delegate entirely to your GPAI vendor. You are responsible for conducting your own risk assessment when a GPAI model is integrated into an application that falls in a high-risk category, even if the underlying model provider has fulfilled their GPAI obligations.
This means that a financial services firm using a commercial LLM for credit decision support needs to apply high-risk AI system requirements to that deployment, regardless of what the LLM provider's EU AI Act compliance posture is. The provider's compliance covers their model. Your deployment of that model in a high-risk context is your compliance obligation.
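In practice, this often means wrapping the vendor model call in deployer-side controls. The sketch below reuses the log_high_risk_inference helper from the logging example above; call_llm and parse_recommendation are hypothetical stand-ins for your vendor's SDK, and the function shows the shape of the control, not a complete compliance implementation.

```python
# Hypothetical stand-ins for your GPAI vendor's client and output parser.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your vendor's client here")

def parse_recommendation(raw: str) -> dict:
    raise NotImplementedError("map raw model output to a structured result")

def credit_decision_support(application_id: str, features: dict,
                            reviewer: str) -> dict:
    """Deployer-side controls around a GPAI model in a high-risk context."""
    raw = call_llm(f"Assess credit application features: {features}")
    recommendation = parse_recommendation(raw)
    # Human oversight: the model output is advisory until a named reviewer
    # confirms or overrides it; the audit record captures both.
    event_id = log_high_risk_inference(
        model_id="vendor-llm",
        model_version="pinned-model-version",
        input_ref=application_id,
        output=recommendation,
        human_reviewer=reviewer,
    )
    return {
        "event_id": event_id,
        "recommendation": recommendation,
        "status": "pending_human_review",
    }
```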
Sector-Specific Considerations
Financial services organizations face the most complex EU AI Act compliance challenge because so many of their AI systems fall squarely in the high-risk category. Credit scoring, creditworthiness assessment, loan decisions, and risk assessment and pricing for life and health insurance are all explicitly high-risk, although the Act carves out AI systems used to detect financial fraud from the credit scoring category. These organizations must also reconcile EU AI Act requirements with existing model risk management frameworks (SR 11-7 and equivalent standards) and DORA's operational resilience requirements.
Healthcare organizations face the intersection of EU AI Act high-risk classification for clinical AI with existing EU Medical Device Regulation requirements for SaMD (Software as a Medical Device). Systems that qualify as SaMD under MDR and as high-risk AI systems under the EU AI Act face a dual compliance burden that requires coordinated regulatory strategy.
HR and employment technology organizations are discovering that systems used for CV screening, candidate ranking, and performance evaluation fall explicitly in the high-risk AI category. Many HR technology vendors are revising their product architectures in response to EU AI Act requirements, and enterprise buyers need to understand how their contracts allocate compliance obligations between vendor and deployer.