AI regulation moved from policy discussion to enforceable obligation in 2026. The EU AI Act is past its transitional period for prohibited practices, and its high-risk system requirements are reaching full applicability. The United States presents a fragmented but expanding mix of federal and state-level requirements. The United Kingdom, Singapore, Canada, and Brazil have all materially advanced their AI governance frameworks. For multinational enterprises, compliance is now genuinely complex, and the cost of ignoring it is rising.
The objective of this article is not to provide legal advice but to give enterprise leaders the strategic context needed to understand what the regulatory landscape requires, which regimes are most immediately material to their operations, and how to build a compliance approach that does not paralyze innovation. Compliance and effective AI deployment are not opposites. The organizations handling this best are those that treat regulatory requirements as a forcing function for governance practices they should be building regardless.
The EU AI Act: What Is Now Enforceable
The EU AI Act has moved past transition periods for its most critical provisions. Enterprises with operations or customers in the European Union need to understand their obligations with specificity, not broad awareness.
The prohibited practices provisions have applied since February 2025, six months after the Act entered into force in August 2024. These include social scoring systems by public authorities, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), AI that exploits psychological vulnerabilities or uses subliminal techniques, and AI that classifies people based on sensitive characteristics. For most commercial enterprises, meeting the prohibited practice obligations means confirming that no deployed or planned system falls into these categories.
The high-risk system requirements are the primary compliance burden for commercial enterprises. Systems used in employment decisions (hiring, promotion, performance monitoring), credit scoring, insurance risk assessment, access to essential services, education, and several other categories are classified as high-risk under Annex III. High-risk systems require conformity assessment, technical documentation, data governance requirements, human oversight design, accuracy and robustness standards, and registration in the EU AI database. Most enterprises in financial services, healthcare, and HR functions have at least some systems in scope.
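A first scoping pass over an AI portfolio can be sketched as a simple lookup against the categories named above. This is an illustrative sketch only; the category names paraphrase the Annex III use cases discussed in this article, and an actual scoping exercise needs legal review against the regulation's text.

```python
# Illustrative EU AI Act risk-tier triage. Category names are paraphrased
# from this article's discussion, not the official Annex III wording.

HIGH_RISK_USE_CASES = {
    "employment",                  # hiring, promotion, performance monitoring
    "credit_scoring",
    "insurance_risk_assessment",
    "essential_services_access",
    "education",
}

PROHIBITED_USE_CASES = {
    "social_scoring",
    "realtime_public_biometric_id",
    "subliminal_manipulation",
    "vulnerability_exploitation",
}

def classify_use_case(use_case: str) -> str:
    """Return a coarse risk tier for a named use case."""
    if use_case in PROHIBITED_USE_CASES:
        return "prohibited"
    if use_case in HIGH_RISK_USE_CASES:
        return "high_risk"
    # Anything else defaults to limited/minimal risk pending proper scoping.
    return "limited_or_minimal"
```

In practice a triage like this is only the first filter: anything returning "high_risk" or "prohibited" goes to counsel for a definitive determination.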
General Purpose AI (GPAI) model providers face separate obligations from August 2025 onwards, with systemic-risk GPAI models (those trained using more than 10^25 floating-point operations of cumulative compute) facing enhanced requirements including adversarial testing and incident reporting. This primarily affects foundation model providers rather than enterprise deployers, though enterprises that fine-tune GPAI models on their own data may take on provider-like obligations that require case-by-case analysis.
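For a back-of-envelope sense of whether a model approaches the 10^25 FLOPs systemic-risk threshold, the common "compute ≈ 6 × parameters × training tokens" heuristic for dense transformers can be used. That heuristic is an industry approximation, not part of the regulation, and the example figures below are hypothetical.

```python
# Rough check against the EU AI Act systemic-risk compute threshold.
# The 6*N*D estimate is a widely used approximation for dense transformer
# training compute; it is not the regulation's measurement method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def exceeds_systemic_risk_threshold(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: 70B parameters trained on 15T tokens
# gives 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 threshold
# under this heuristic.
```

Most enterprise fine-tuning adds a negligible amount of compute on top of the base model, which is one reason the provider-versus-deployer distinction, rather than raw compute, is usually the decisive question for enterprises.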
The Enterprise EU AI Act Compliance Timeline
For enterprises operating in the EU or with EU-exposed operations, the compliance timeline has specific dates that should be driving current program activity.
Prohibited Practices — In Force
All prohibited AI practices (social scoring, real-time biometric ID, subliminal manipulation, vulnerability exploitation, certain biometric categorization) are illegal in the EU. Any systems potentially in scope should have been assessed and confirmed out of scope or discontinued.
GPAI and Governance Obligations — In Force
General Purpose AI model obligations active. AI literacy obligations for all enterprises deploying AI systems: staff using AI must have appropriate understanding. Governance framework requirements for high-risk system deployers becoming applicable. Enterprise AI governance programs should be operational, not in planning.
High-Risk System Full Compliance Required — August 2026
Full conformity assessment obligations for Annex III high-risk systems. Technical documentation, human oversight requirements, accuracy standards, data governance, EU database registration. This is the primary near-term compliance deadline for enterprises with AI systems in employment, credit, insurance, and other regulated use cases. Organizations that have not started their conformity assessment programs for in-scope systems are already behind schedule.
National Authority Enforcement Scaling
EU member state national competent authorities are establishing enforcement capacity throughout 2026 and 2027. Early enforcement is expected to focus on egregious prohibited practices violations and high-profile high-risk system non-compliance. Enterprises with strong documentation and demonstrable governance processes are better positioned for regulatory scrutiny than those that rely on good intent without evidence.
Building Compliance Without Paralyzing Innovation
The enterprises that handle AI regulation best are those that treat compliance as a governance capability rather than a legal project. The distinction is important. A legal project has a completion date and an external objective. A governance capability is an ongoing organizational function that allows the enterprise to deploy AI efficiently while managing risk. Our AI governance advisory practice helps enterprises build the second kind.
The foundational governance practices that the EU AI Act and other frameworks require are largely practices that good AI governance demands regardless of regulation. Maintaining an inventory of AI systems in use, classifying them by risk, documenting their data inputs and intended use, maintaining oversight mechanisms for consequential decisions, and having an incident response plan for when models misbehave: all of these are practices that improve AI program quality independent of their compliance value. Regulations have created the external pressure to formalize what should already exist.
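The inventory practice described above can be made concrete as a structured record per system. The sketch below is a minimal illustration under assumed field names; it is not a regulatory schema, and a real inventory would carry considerably more detail.

```python
# Minimal AI system inventory record covering the governance fields named
# above: risk classification, data inputs, intended use, oversight, and
# incident ownership. Field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    risk_class: str                      # e.g. "high_risk", "limited_or_minimal"
    data_inputs: list[str] = field(default_factory=list)
    oversight_mechanism: str = ""        # who reviews consequential decisions
    incident_contact: str = ""           # owner of the incident response plan

    def is_documented(self) -> bool:
        """True when the core governance fields are populated."""
        return bool(self.intended_use and self.risk_class
                    and self.oversight_mechanism and self.incident_contact)
```

Even a record this simple makes gaps visible: a system with no named oversight mechanism or incident contact fails the check, which is exactly the kind of omission regulators will ask about.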
The compliance overhead that creates the most friction in practice comes from organizations that are trying to retrofit documentation onto systems that were deployed without it. The Enterprise AI Governance Handbook documents the 18 documentation categories that high-risk systems require and the governance operating model that makes maintaining them sustainable. Building that documentation as part of the deployment process, not after the fact, is the practice that separates organizations with manageable compliance burdens from those facing expensive retroactive documentation efforts.
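Building documentation into the deployment process, as described above, is often operationalized as a release gate: a high-risk system cannot ship until its required artifacts exist. The sketch below assumes a small illustrative set of documentation categories; it is not the Handbook's actual list of 18.

```python
# Sketch of a pre-deployment documentation gate. The category names are an
# illustrative subset, not the 18 categories referenced in the Handbook.

REQUIRED_DOCS_HIGH_RISK = {
    "technical_documentation",
    "data_governance_record",
    "human_oversight_design",
    "accuracy_and_robustness_testing",
    "eu_database_registration",
}

def deployment_gate(risk_class: str, docs_present: set[str]) -> tuple[bool, set[str]]:
    """Return (approved, missing_docs) for a proposed deployment."""
    if risk_class != "high_risk":
        return True, set()
    missing = REQUIRED_DOCS_HIGH_RISK - docs_present
    return (not missing), missing
```

Wiring a check like this into the CI/CD or model release process is what turns documentation from a retroactive scramble into a byproduct of shipping.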
Regulatory compliance is not the reason to build good AI governance. Responsible deployment, stakeholder trust, and predictable AI behavior are the reasons. Regulation just makes it non-negotiable. Treat compliance as a floor, not a ceiling.
What Is Coming Next
The regulatory trajectory is toward more specificity, more enforcement, and broader geographic coverage. The pattern established by GDPR is instructive: initial uncertainty about enforcement followed by significant fines that established compliance as genuinely material financial risk. AI regulation is following the same trajectory with compressed timelines.
In the near term, expect national competent authority enforcement actions under the EU AI Act to begin establishing precedent in 2026 and 2027, particularly in financial services and employment AI use cases, which have the clearest regulatory exposure and the most visible high-risk system deployments. Expect US state-level legislation to proliferate, creating a patchwork of requirements that enterprises operating across state lines will need to track. Expect sector-specific guidance from financial regulators in the US, UK, and EU to become increasingly specific about AI model risk management requirements.
The enterprises best positioned for this trajectory are those building governance capabilities that can flex to meet evolving requirements rather than those targeting the minimum viable compliance posture for current requirements. The gap between the two approaches will widen materially over the next 24 months. Our approach to AI governance that does not kill innovation outlines how leading enterprises are building this flexibility into their governance architecture from the start.