Sixty-seven percent of enterprise AI Centers of Excellence fail to scale beyond five use cases. Not fail to launch. Fail to scale. They launch successfully, produce one or two impressive pilots, and then spend the next two years as an expensive internal consultancy answering the same questions from business units who cannot actually execute without them. The CoE becomes a bottleneck dressed up as a center of expertise, and the board eventually asks why the AI budget is not translating into production outcomes.

The setup decisions you make in the first 90 days determine whether your AI CoE becomes a scale enabler or a centralized bottleneck. The critical decisions are not the technology ones. They are the operating model choices: who does the CoE serve, what does it own versus what does it enable, and how does it measure success in ways that cannot be gamed by demonstrating activity rather than outcomes.

The Four Ways AI CoEs Fail

Understanding the failure modes before designing your CoE is more valuable than any template. The four patterns we most commonly see in CoEs that fail to deliver at scale are predictable, and each has a specific design response.

The Ivory Tower Failure
The CoE builds excellent models that business units never adopt because the CoE designed for technical elegance rather than operational integration. The team knows how to train models but does not understand the workflows they are supposed to improve. Fix: Require business unit embeds in every CoE use case from day one.
The Bottleneck Failure
Every AI initiative must go through the CoE, and the CoE team is too small to service the demand. Business units wait 6 to 9 months for CoE capacity. The response to capacity constraints is to add more process (governance gates, priority committees) rather than to enable business units to execute independently. Fix: CoE design must include explicit graduation criteria for business units to move from CoE-led to CoE-enabled execution.
The Pilot Cemetery Failure
The CoE optimizes for starting new use cases rather than taking existing ones to production. The leadership team has an incentive to show a broad portfolio of active work rather than a narrow portfolio of production outcomes. The number of pilots in flight looks good. The number of models in production does not. Fix: Success metrics must weight production outcomes, not pilot starts.
The Technology First Failure
The CoE spends the first 6 months selecting and deploying the ML platform, data infrastructure, and tooling stack before building any models. By the time the technical foundation is ready, the business sponsorship that created the CoE has moved on to the next priority. Fix: Ship something to production in the first 90 days using whatever infrastructure already exists, before investing in platform optimization.
8 months: the average time from AI CoE formation to first model in production, across enterprises that followed a standard setup process. Our 12-month launch roadmap targets first production deployment at month 4, which we achieve in 85% of engagements.

The 12-Month CoE Launch Roadmap

The CoE launch roadmap below is built from 40+ CoE design and launch engagements. It is organized into four phases, each with a specific milestone that must be achieved before the next phase begins. The milestone gates prevent the most common failure pattern: teams that spend 12 months in "setup mode" because there is no defined criterion for completion.

Phase 1: Foundation
Months 1 to 2: Charter, Structure, First Team
CoE charter ratified by C-suite with explicit mandate, operating model decision (hub/spoke/platform), founding team of 4 to 6 people, use case portfolio selection (prioritize 3 to 5 that can reach production within 6 months), and technical foundation assessment. Do not wait for the full team before starting.
Milestone: Charter signed, founding team assembled, first 3 use cases approved
Phase 2: First Use Cases
Months 2 to 4: Development and Governance Foundation
Active development of the first 2 to 3 use cases, model registry and lifecycle process established, governance framework designed (risk classification, review process), and MLOps baseline built. Governance should be designed around the first use cases, not in the abstract before work begins (a minimal registry sketch follows this phase's milestone).
Milestone: First model in shadow mode, governance process operating, model registry live
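To make the Phase 2 deliverables concrete, here is a minimal sketch of what a registry entry and stage lifecycle might look like. The stage names, risk tiers, and fields are illustrative assumptions, not a prescribed schema; a real registry or platform product would carry far more metadata.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class Stage(Enum):
    DEVELOPMENT = "development"
    SHADOW = "shadow"        # runs alongside the current process; outputs not acted on
    PRODUCTION = "production"
    RETIRED = "retired"

class RiskTier(Enum):        # hypothetical three-tier classification
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class RegistryEntry:
    model_name: str
    use_case: str
    business_owner: str      # the accountable owner, not just the builder
    risk_tier: RiskTier
    stage: Stage = Stage.DEVELOPMENT
    last_review: Optional[date] = None

def promote(entry: RegistryEntry, to: Stage, review_date: date) -> None:
    """Record a stage change; in practice this is gated by the review process."""
    entry.last_review = review_date
    entry.stage = to

# Example: the first use case enters shadow mode after governance review.
entry = RegistryEntry("invoice-triage-v1", "AP invoice triage",
                      "Finance Ops", RiskTier.MEDIUM)
promote(entry, Stage.SHADOW, date(2025, 3, 15))
```

The point is the discipline, not the code: from day one, every model carries a stage, a risk tier, an accountable business owner, and a review date.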
Phase 3: First Production
Months 4 to 6: First Models Live and Scaling Design
First model in production with monitoring operational, measurable business outcome tracked, second cohort of 5 to 8 use cases approved, business unit enablement program launched (training, documentation, self-service capabilities), and CoE backlog management process established. The business outcome from the first production model is your funding and mandate for the next phase.
Milestone: First model in production, business outcome measured, 5 use cases in active development
Phase 4: Scale and Embed
Months 6 to 12: Portfolio Expansion and Self-Service
Multiple models in production across 2 to 3 business units, business unit satellite capability launched (embedded AI talent or trained AI champions), platform investment based on proven use case patterns (not speculative), governance expanded to include CoE performance review, and the first CoE annual report delivered to executive leadership.
Milestone: 8 to 14 models in production, business units executing independently on low-risk use cases
Designing your AI CoE and not sure where to start?
Our AI CoE advisory has launched 40+ Centers of Excellence. We can help you select the operating model, design the governance process, and hit first production in 4 months.
Learn About AI CoE Advisory →

The Founding Team: Who to Hire First

The first hire decisions determine the CoE's capability profile for the first 18 months. Most CoEs make the same hiring mistake: they start by hiring the most technically impressive candidates they can find, and end up with a team that can train any model but cannot navigate an enterprise data governance review, cannot present to a skeptical CFO, and cannot establish the trust with business unit leaders that determines whether models get adopted. We have seen CoEs with three PhDs and no production deployment in 12 months.

The optimal founding team sequencing for a CoE targeting 8 to 14 production models in the first year is shown below. The sequence matters. Lead with the person who can operate in the organization, not the person who can train the best model.

Hire 1: Month 1
CoE Lead (AI Strategy + Organizational)
Senior practitioner with both technical credibility and executive communication capability. This person runs the CoE, manages stakeholder relationships, and represents AI outcomes to leadership. Former McKinsey or Accenture AI practice, or ex-FAANG AI program leader. Not a data scientist promoted into the role.
Hire 2 to 3: Month 1 to 2
Senior ML Engineers (Production-Focused)
Engineers who have taken models from development to production in enterprise environments. Comfortable with MLOps, data pipelines, monitoring, and the messy reality of enterprise data. Not research scientists optimizing model architecture.
Hire 4: Month 2
AI Program Manager
Manages the use case portfolio, stakeholder communication, governance processes, and business unit relationships. The organizational glue that keeps the CoE connected to the rest of the enterprise. Former product manager or technology program manager background.
Hire 5 to 6: Month 3 to 4
Data Engineers and Domain Specialist
Data engineers to build production-grade feature pipelines. Domain specialist from the business unit with the first priority use case (their subject matter expertise prevents the ivory tower failure). These hires can wait until you understand the first use case requirements.
The CoE that ships 14 models in its first year typically has 6 people. The CoE that ships 2 models in its first year often has 12. Team size is not the constraint. Operating model clarity and production focus are.

CoE Governance: The Intake and Portfolio Process

The intake process is where most CoEs get captured by the highest-volume requester rather than the highest-value use case. Without a structured intake and portfolio governance process, the CoE backlog is determined by which business unit leader calls the CoE lead most frequently, not by which use cases will generate the most measurable value.

Effective CoE portfolio governance requires four components:

  • A structured intake form that captures the business problem (not the AI solution), expected value with quantification, data availability confirmation, and business owner commitment to participate in development and own the outcome.
  • A scoring process that applies the six-factor use case prioritization framework (Business Value, Data Availability, Implementation Complexity, Organizational Readiness, Regulatory Risk, Strategic Alignment) to rank the backlog; a minimal scoring sketch follows this list.
  • A portfolio review cadence (monthly for the leadership team, quarterly for executive stakeholders) that reviews the active portfolio and backlog against capacity.
  • Explicit graduation criteria: when does a business unit "graduate" from CoE-led to CoE-enabled execution, and what support does it receive after graduation?
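As a concrete illustration of the scoring process, here is a minimal sketch of a weighted backlog ranking. The six factor names come from the framework above; the weights and the 1-to-5 scores are hypothetical placeholders that each organization would calibrate for itself.

```python
from dataclasses import dataclass

# Factor names come from the six-factor framework above; the weights are
# illustrative placeholders, not a recommendation from this article.
WEIGHTS = {
    "business_value": 0.30,
    "data_availability": 0.20,
    "implementation_complexity": 0.15,  # scored so that higher = simpler
    "organizational_readiness": 0.15,
    "regulatory_risk": 0.10,            # scored so that higher = lower risk
    "strategic_alignment": 0.10,
}

@dataclass
class UseCase:
    name: str
    scores: dict  # factor name -> score on a 1-to-5 scale

def priority_score(uc: UseCase) -> float:
    """Weighted sum of the six factor scores (weights sum to 1.0)."""
    return sum(WEIGHTS[f] * uc.scores[f] for f in WEIGHTS)

backlog = [
    UseCase("invoice triage", {
        "business_value": 4, "data_availability": 5,
        "implementation_complexity": 4, "organizational_readiness": 3,
        "regulatory_risk": 5, "strategic_alignment": 3}),
    UseCase("credit scoring", {
        "business_value": 5, "data_availability": 3,
        "implementation_complexity": 2, "organizational_readiness": 4,
        "regulatory_risk": 1, "strategic_alignment": 5}),
]

for uc in sorted(backlog, key=priority_score, reverse=True):
    print(f"{uc.name}: {priority_score(uc):.2f}")
```

Complexity and regulatory risk are scored inversely (higher means simpler or lower risk) so a single weighted sum can rank the whole backlog.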

The governance integration question is the one most organizations get wrong at CoE launch. The CoE governance process should be connected to, not separate from, the enterprise AI governance program. CoE use cases should move through the same risk classification and review process as externally procured AI systems; separate governance tracks for internal development versus vendor deployment create inconsistent risk management and a double documentation burden. See our detailed guide on AI CoE operating models for the structural design options, and the AI Center of Excellence Guide for the complete 50-page reference framework.

Measuring CoE Success in Ways That Cannot Be Gamed

CoE success metrics determine CoE behavior. Metrics that measure activity produce activity. Metrics that measure production outcomes produce production outcomes. The metric set we implement for the CoE programs we design distinguishes sharply between leading indicators (which predict future outcomes) and lagging indicators (which confirm past performance).

The primary production outcome metrics are: number of models in production (not in development), business value attributed to CoE-deployed models (with a defined attribution methodology that finance has reviewed and accepted), time-to-production for new use cases (from intake approval to production go-live), and the ratio of production models to total use cases approved (not the same as "models in development"). Organizations that track only the number of use cases in flight reliably produce a large portfolio of perpetually in-progress work rather than a production AI program.
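To keep these metrics computable rather than aspirational, a minimal sketch follows. The record fields and figures are hypothetical, not a prescribed schema; the attribution methodology behind the value figure is the part finance must review, as noted above.

```python
from datetime import date

# Hypothetical use case records; field names are illustrative, not a
# prescribed schema. production_date is None for work still in flight.
use_cases = [
    {"name": "invoice triage", "approved": date(2025, 1, 15),
     "production_date": date(2025, 5, 1), "annual_value_usd": 1_200_000},
    {"name": "churn model", "approved": date(2025, 2, 1),
     "production_date": None, "annual_value_usd": 0},
    {"name": "demand forecast", "approved": date(2025, 3, 10),
     "production_date": date(2025, 9, 20), "annual_value_usd": 800_000},
]

in_production = [u for u in use_cases if u["production_date"]]

production_ratio = len(in_production) / len(use_cases)
avg_days_to_production = sum(
    (u["production_date"] - u["approved"]).days for u in in_production
) / len(in_production)
attributed_value = sum(u["annual_value_usd"] for u in in_production)

print(f"models in production:    {len(in_production)}")
print(f"production ratio:        {production_ratio:.0%}")
print(f"avg time-to-production:  {avg_days_to_production:.0f} days")
print(f"attributed annual value: ${attributed_value:,}")
```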

The secondary organizational health metrics, which predict future production outcomes, include: business unit Net Promoter Score for CoE service quality, use case backlog wait time (how long approved use cases wait before active development starts), CoE team retention rate (high turnover indicates a team doing more coordination than interesting technical work), and the number of business units that have graduated to independent AI execution capability (the CoE's ultimate goal is to make itself progressively less necessary for routine use cases).
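Of these, backlog wait time is the easiest to instrument. A minimal sketch, again with hypothetical intake records:

```python
from datetime import date

# Hypothetical intake records: approval date and the date active
# development started (None if the use case is still waiting).
backlog = [
    {"name": "claims triage", "approved": date(2025, 4, 1), "started": date(2025, 6, 15)},
    {"name": "pricing model", "approved": date(2025, 5, 20), "started": None},
]

today = date(2025, 8, 1)

# Waiting items accrue wait time up to today; started items stop at their start date.
wait_days = [((u["started"] or today) - u["approved"]).days for u in backlog]
print(f"average backlog wait: {sum(wait_days) / len(wait_days):.0f} days")
```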

Key Takeaways for AI CoE Program Leaders

For CIOs, CDOs, and Chief AI Officers designing or redesigning an AI CoE, the practical implications are clear:

  • Ship something to production in the first 90 days, using existing infrastructure if necessary. The first production outcome justifies continued investment more effectively than any business case document.
  • Design the operating model to enable business units, not to centralize all AI capability in the CoE. Centralization creates the bottleneck failure pattern. The goal is a CoE that makes itself progressively less necessary for routine use cases over time.
  • Hire for organizational effectiveness before technical excellence. The ability to navigate enterprise data governance, communicate to executives, and build trust with business unit leaders determines CoE success more than model accuracy scores.
  • Connect CoE governance to enterprise AI governance from day one. Separate governance tracks create inconsistency and double documentation burden that neither the CoE nor the business can sustain.
  • Measure production outcomes, not activity. CoEs that measure the number of use cases in development produce a large portfolio of perpetually in-progress work. Measure models in production and business value attributed to those models.

The AI CoE is the organizational vehicle for scaling enterprise AI beyond isolated pilots. It works when it is designed as a scale enabler rather than a centralized authority. Take the free AI readiness assessment to understand whether your organization is ready for CoE investment, or contact our AI CoE advisory practice to discuss your specific situation.

AI CoE Advisory: Design to First Production in 4 Months
40+ CoEs designed and launched. Operating model selection, team structure, governance design, and first use case execution. Senior practitioners, not junior analysts.
Learn More →