The 50 questions below represent the most commonly skipped due diligence in enterprise AI programs. They are not theoretical. Each one maps directly to a category of failure we have observed across 200+ enterprise AI deployments.
Most organizations answer fewer than half of these questions before committing to an AI program. The typical discovery sequence is: approve budget, start building, encounter a gap, discover this question exists, then scramble to address it 6 to 18 months later at significantly higher cost and against far greater organizational resistance.
73% of AI program failures trace directly to readiness gaps that were knowable before the program started. The questions below exist to surface those gaps before they become production failures.
Work through these six dimensions honestly. A "no" or "I don't know" is not a reason to abandon your AI initiative. It is a signal about where to invest first. The organizations that answer these questions honestly before starting almost always build more successful AI programs than those that start fast and discover gaps in production.
Dimension 1: Data Readiness (Questions 1 to 10)
01
Do you know where all the data that would be used to train your intended AI model currently lives, what format it is in, and who owns it? (Most common gap)
02
Has the data been labeled for the specific task you intend the model to perform? If not, do you have a plan for labeling it, including who will do it, how long it will take, and what it will cost?
03
Do you have at least 12 months of historical data for the target outcome you are trying to predict or generate? Do you have enough examples of rare but important events?
04
Is the data representative of the conditions the model will encounter in production? If your data is from one time period, geography, or customer segment, will the model generalize?
05
Are there privacy, legal, or contractual restrictions on the data you intend to use for training? Have you confirmed with legal that your proposed data use is compliant? (Often skipped)
06
Do you have a pipeline that can continuously deliver fresh training data as conditions change? Or will the model go stale because retraining requires a new manual data extraction project?
07
Have you measured the quality of the data? Specifically: completeness rate, duplicate rate, inconsistency rate, and field-level accuracy for the most important features?
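The Question 7 metrics lend themselves to a short script. The record layout, key fields, and toy data below are purely illustrative assumptions, not a prescribed schema:

```python
from collections import Counter

def quality_report(records, key_fields):
    """Compute basic data-quality metrics for a list of record dicts.

    The field names and metric definitions here are illustrative
    assumptions, not a standard.
    """
    n = len(records)
    # Completeness: share of records where every key field is present and non-empty
    complete = sum(
        all(r.get(f) not in (None, "") for f in key_fields) for r in records
    )
    # Duplicate rate: extra occurrences of any repeated key-field combination
    key_counts = Counter(tuple(r.get(f) for f in key_fields) for r in records)
    duplicates = sum(c - 1 for c in key_counts.values())
    return {
        "completeness_rate": complete / n,
        "duplicate_rate": duplicates / n,
    }

# Toy data: one duplicate pair, one incomplete record
records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": "a@example.com"},  # duplicate of the first record
    {"id": 2, "email": ""},               # missing email -> incomplete
    {"id": 3, "email": "c@example.com"},
]
report = quality_report(records, ["id", "email"])
```

The same pattern extends to inconsistency and field-level accuracy checks once a reference source of truth is available to compare against.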
08
Is the ground truth label reliable? If the label is derived from a human process (e.g., "was this loan repaid?" or "did the clinician agree with this flag?"), is that process consistent and well-documented?
09
Do you have data governance policies that cover AI-specific requirements: lineage tracking, retention for audit purposes, privacy-preserving techniques for sensitive fields?
10
For RAG or GenAI applications: Is the document corpus that will be indexed for retrieval well-structured, regularly updated, and access-controlled at the document level?
Dimension 2: Infrastructure and MLOps (Questions 11 to 18)
11
Do you have a model serving infrastructure capable of meeting the latency requirements of your target use case? Have you measured baseline latency under realistic production query volumes? (Most common gap)
12
Is there a model registry for tracking model versions, training run parameters, evaluation metrics, and deployment status? Or will version control be handled through file naming conventions?
13
Do you have monitoring in place for production model performance? Specifically: can you detect data drift, model accuracy degradation, and distribution shift in real time?
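One common way to detect the distribution shift Question 13 asks about is the Population Stability Index. The bins, values, and alert thresholds below are a widely used rule of thumb, not a formal standard:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are per-bin proportions that each sum to 1;
    `eps` guards against empty bins. The 0.1 / 0.25 alert thresholds
    used below are a common convention, not a standard.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # feature bin shares at training time
live = [0.40, 0.30, 0.20, 0.10]      # bin shares observed in production
drift = psi(baseline, live)
alert = "significant" if drift > 0.25 else "moderate" if drift > 0.1 else "stable"
```

Running a check like this per feature on a schedule, and alerting when the index crosses the chosen threshold, is a minimal version of the real-time drift monitoring the question describes.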
14
Can you roll back to a previous model version within 30 minutes of detecting a performance problem? Have you tested this? (Often skipped)
15
Have you load-tested your serving infrastructure at 2x expected production peak query volume? Many systems that pass average load testing fail under traffic spikes.
16
Do you have automated retraining pipelines, or does retraining require manual intervention from an ML engineer every time? If manual, how often does retraining need to happen?
17
Is there a fallback mechanism for when the AI system fails or returns low-confidence results? Does the downstream business process have a defined path when AI output is unavailable?
18
Have you assessed whether your cloud infrastructure cost model scales appropriately with production query volume? Have you modeled costs at 10x pilot volume?
Want a professional readiness assessment instead of a self-assessment?
Our senior advisors assess all six dimensions in a structured 3-week engagement and deliver a scored report with industry benchmarks and a prioritized action plan.
Start with a Free Assessment →
Dimension 3: Talent and Capability (Questions 19 to 26)
19
Does your team include ML engineers with production deployment experience (not just model training experience)? These are different skills, and most teams have the latter without the former. (Most common gap)
20
Is there a business translator on the team who understands both AI technical requirements and business domain requirements? This person prevents the most common scope and design errors.
21
Do you have AI governance expertise? Specifically: someone who understands model risk management, EU AI Act requirements, and bias assessment methodology for your sector?
22
Does executive leadership have sufficient AI literacy to make informed decisions about use case prioritization, risk tolerance, and investment allocation? Or are all decisions delegated to the technical team?
23
What is your talent retention plan for AI engineers? The market for production ML engineers is extremely competitive. Have you benchmarked compensation and created development paths that retain key people?
24
Do you have a plan for upskilling the business users who will work with AI outputs? Adoption requires training, not just deployment.
25
Is there a defined ownership structure for each AI use case: a business owner who is accountable for outcomes, a technical owner who is accountable for performance, and a risk owner who is accountable for compliance?
26
Do you have access to senior AI advisory support for architectural decisions and problem-solving, either internally or through an external firm? Most organizations overestimate their internal capability at this level.
Dimension 4: Governance and Risk (Questions 27 to 34)
27
Have you classified your intended AI use case under the EU AI Act? Do you know whether it falls under an Annex III high-risk category, or qualifies as limited-risk or minimal-risk? (Most common gap in regulated industries)
28
Is there a risk classification framework for AI systems at your organization? Does it distinguish between different risk levels with corresponding documentation and approval requirements?
29
What is the model approval process? Who approves a model for production deployment, what documentation is required, and what is the expected timeline from development complete to approval?
30
Have you assessed the model for bias and fairness on the population it will serve? For use cases affecting people (hiring, lending, healthcare), this is not optional regardless of regulatory requirements.
31
Is there an AI incident response process? If the model produces a harmful output, who is notified, what is the escalation path, and how quickly can the system be taken offline?
32
Does the model produce explainable outputs that can be audited? For regulated decisions (credit, insurance, employment), explainability may be a legal requirement under GDPR Article 22 or sector-specific regulations.
33
Is there a human-in-the-loop mechanism for high-stakes decisions? In most jurisdictions, fully automating decisions that significantly affect individuals requires explicit regulatory authorization.
34
Has your legal team reviewed the vendor contracts for any AI platforms or APIs you will use? Specifically: data processing terms, liability for AI errors, and IP ownership of model outputs?
Dimension 5: Use Case and Business Readiness (Questions 35 to 43)
35
Is there a specific, measurable business outcome that this AI system is intended to improve? Not "improve efficiency" but "reduce claim processing time from 4 days to 8 hours" or "increase fraud detection rate from 82% to 94%". (Most common gap)
36
Is there a named business sponsor who is accountable for the outcome, has budget authority, and will champion adoption within their organization? A technical sponsor is not sufficient.
37
Have you defined the minimum acceptable model performance threshold? Below what accuracy, precision, or other metric would you not deploy this model to production?
38
Have you modeled the business value under conservative, base, and optimistic scenarios? Does the business case hold under the conservative scenario?
39
Is the business process the AI will be integrated into well-documented and stable? AI deployed into poorly understood or frequently changing processes typically produces poor outcomes.
40
Have the business users who will interact with AI outputs been involved in designing the system? Solutions designed without input from end users frequently fail on adoption regardless of technical quality.
41
Have you mapped what the business process looks like when the AI system is wrong? Specifically: what is the cost of a false positive versus a false negative for this use case?
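The false positive versus false negative trade-off in Question 41 can be made concrete with a simple expected-cost calculation. All rates, unit costs, and the fraud-screening framing below are hypothetical:

```python
def expected_error_cost(fp_rate, fn_rate, cost_fp, cost_fn, volume):
    """Expected cost of model errors over a given decision volume.

    All inputs are hypothetical; the point is that asymmetric error
    costs change what "good enough" model performance means.
    """
    return volume * (fp_rate * cost_fp + fn_rate * cost_fn)

# Hypothetical fraud-screening example: a false positive triggers a
# manual review (cost 15), a false negative is a missed fraud loss (cost 400)
cost = expected_error_cost(
    fp_rate=0.05, fn_rate=0.02, cost_fp=15.0, cost_fn=400.0, volume=10_000
)
```

Even in this toy example, the rarer error type dominates the total cost, which is why the question asks for both costs explicitly rather than a single accuracy target.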
42
Is there a competitor or comparable organization that has successfully deployed AI for this use case? If yes, what can you learn from their approach? If no, why does your organization believe it is feasible?
43
Have you pressure-tested whether the benefit is truly from AI or from a simpler solution? Sometimes rule-based automation or process improvement delivers 80% of the value at 10% of the cost and risk.
Dimension 6: Organizational Culture (Questions 44 to 50)
44
Does executive leadership have a clear, consistent message on AI strategy that the broader organization has heard? Or is AI a technical initiative that business units see as someone else's problem? (Often the root cause of stalled programs)
45
Have you assessed whether target users are likely to trust and use AI outputs, or whether there is significant skepticism or resistance? Trust needs to be built proactively, not assumed.
46
Are there union or works council considerations for AI deployments that affect job roles? In many jurisdictions, employee representative bodies must be consulted before deploying AI that changes working conditions.
47
Is there a change management plan that addresses how AI will be introduced to affected employees, including what happens to their current tasks and how success will be measured?
48
Is the organization comfortable with a shadow mode deployment where the AI runs in parallel with existing processes before taking over? Or is there pressure to deploy at full scale immediately?
49
Do middle managers understand what is expected of them in supporting AI adoption? Middle management resistance is one of the most commonly cited reasons that technically successful AI deployments fail to achieve adoption.
50
Is there a feedback mechanism for users to report AI errors, poor outputs, and process problems? Organizations that build feedback loops into AI deployments improve significantly faster than those that treat launch as the endpoint.
Research Download
AI Readiness Assessment Framework
44 pages with full scoring rubrics for all six dimensions, industry benchmark scores, gap prioritization methodology, and a 90-day acceleration playbook to close the most important gaps before your program launches.
Download the Readiness Framework →
How to Use Your Results
Count your "no" and "I don't know" answers by dimension. Any dimension with four or more gaps is a blocking constraint. Programs that launch with blocking constraints in data, infrastructure, or governance dimensions fail at a much higher rate than those that resolve them first.
Any dimension with two or three gaps is a risk that needs a mitigation plan before launch. Any dimension with zero or one gap is ready to proceed.
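The scoring rules above can be sketched as a small function. The dimension names and gap counts below are hypothetical self-assessment results:

```python
def classify_dimension(gap_count):
    """Map a dimension's count of "no" / "I don't know" answers to
    the readiness categories described in the text."""
    if gap_count >= 4:
        return "blocking constraint"
    if gap_count >= 2:
        return "risk (needs mitigation plan)"
    return "ready to proceed"

# Hypothetical results from working through the 50 questions
gaps = {
    "Data Readiness": 5,
    "Infrastructure and MLOps": 2,
    "Governance and Risk": 1,
}
summary = {dim: classify_dimension(n) for dim, n in gaps.items()}
```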
The most important insight from this exercise is not the total score. It is the distribution. A single dimension with seven gaps is a worse starting position than five dimensions with two gaps each, because that one dimension will block the program regardless of strength elsewhere.
Bring these results to your steering committee before approving AI program budget. The investment required to close the gaps is almost always lower than the cost of discovering them in production.