The average enterprise AI engagement starts with a compelling pitch and ends with a strategy document that sits on a shelf. The gap between what AI consulting firms promise and what they deliver is not a coincidence. It is structural. Understanding the structural incentives that drive AI consulting behavior is the prerequisite to evaluating firms effectively.
This guide gives you the questions to ask, the answers that reveal the truth, and the red flags that should end the conversation before you waste six months and significant budget.
Why AI Consulting Fails So Predictably
Before evaluating individual firms, understand the forces that shape the industry. Four structural problems affect most AI consulting relationships.
The billing model incentive. Most consulting firms bill by the hour or day. Longer engagements mean more revenue. This creates a systematic incentive to extend scope, add phases and avoid clear production milestones that would end the engagement. An honest advisor whose business model aligns with your outcomes is structurally different from a firm that benefits from complexity.
The vendor partnership problem. A significant portion of consulting revenue at Big 4 and major SIs comes from vendor referral fees, platform implementation fees and joint go-to-market arrangements with cloud providers. An advisor who earns referral fees from the platforms they recommend cannot give you independent advice. This is not a theoretical concern. We have seen this dynamic cost clients tens of millions of dollars in platform decisions driven by advisor incentives rather than client fit.
The staffing model gap. The senior practitioner who wins your engagement and the team that delivers it are often different people. Managing Directors close deals; analysts execute. In AI consulting specifically, this matters because AI implementation requires practitioner experience that analysts two years out of university do not have. Ask to meet the delivery team before signing, not the relationship team.
The strategy-delivery disconnect. Many firms strong in AI strategy have no production delivery capability. Many system integrators strong in delivery have no strategic judgment. The rare firm that can do both credibly is the exception.
67% of enterprise AI engagements produce no production deployment within 18 months of completion, according to our analysis of 200+ enterprise programs. The strategy is written. The model never ships.
10 Red Flags That Should End the Conversation
These are the signals, based on patterns observed across hundreds of enterprise AI programs, that reliably predict a poor outcome.
01
They lead with vendor recommendations
Before understanding your data infrastructure, team capability and use case portfolio, they tell you to use Azure, AWS or a specific platform. Real advisory starts with your situation, not the platform they have a partnership with.
02
They cannot name production deployments
Ask for three specific examples of production AI systems they built. Not strategies. Not roadmaps. Not pilots. Production systems with measurable outcomes. If they hedge, generalize or use client confidentiality as a blanket excuse for every example, they do not have a track record worth trusting.
03
The senior advisor will not commit to delivery hours
If the Managing Director or Principal who sold you the engagement disappears after kickoff, the delivery team is junior. Ask for explicit commitments about which named senior practitioners will be present at which project phases, before signing.
04
They use the word "transformation" more than "production"
Transformation language is the signature of strategy consultants who have rebranded as AI consultants. Ask them to define success for your engagement. If they cannot give you a specific, measurable production outcome, the engagement will end with a presentation, not a deployed system.
05
They do not ask about your data in the first conversation
Any advisor with production experience knows that data is the constraint in most enterprise AI programs. If your first meeting goes 90 minutes without a detailed conversation about data availability, quality and governance, you are talking to a strategy firm, not an AI practitioner.
06
They guarantee specific AI performance numbers before assessment
No responsible practitioner commits to model accuracy, latency or ROI before completing a readiness assessment. Anyone who guarantees 95 percent accuracy in a sales pitch is giving you marketing material, not an honest projection.
07
Their engagement scope keeps expanding
During the sales process, the scope broadens with each conversation and the proposed team grows. This is a sign of scope creep built into the proposal design. An engagement should get more focused as you share more context, not less.
08
They cannot explain your regulatory environment
If your industry is regulated and the advisor cannot speak fluently about SR 11-7, the EU AI Act, FDA SaMD pathways or equivalent frameworks for your sector, their AI experience is in unregulated contexts that do not apply to you.
09
They have a preferred methodology regardless of context
Every engagement gets the same framework with different client names in the boxes. Real practitioners adapt their approach to the specific problem. If you hear about their proprietary methodology in the first meeting rather than questions about your situation, be cautious.
10
They are not willing to work at risk
Senior practitioners who believe in their production track record will accept outcome-linked fee structures. Firms that resist any performance-based component in their contracts are signaling that they do not fully believe in their own results.
The 12 Questions That Reveal the Truth
Ask these directly. The quality of the answers is more revealing than the answers themselves.
Q
Who will be the named senior practitioners on this engagement, and how many hours per week will each commit during delivery?
A vague answer confirms that senior practitioners close deals and analysts do the work.
Q
What are three specific AI systems you have deployed to production in the last 24 months, with measurable outcomes?
If they cannot give you three named examples with numbers, their track record is in strategy, not production.
Q
Do you receive referral fees, implementation fees or any other commercial consideration from any AI platform or technology vendor?
The answer tells you immediately whether their recommendations can be trusted.
Q
What is the definition of a successful outcome for this engagement, and how will we measure it at 90 days, 6 months and 12 months?
A production-focused firm will answer in terms of deployed systems and measurable business outcomes.
Q
What percentage of your AI engagements result in a production system within 12 months of completion?
Industry average is under 40 percent. Any number below 80 percent from a firm claiming production expertise should prompt follow-up questions.
Q
Tell me about an AI engagement that failed or significantly underperformed expectations. What happened and what did you learn?
Practitioners who have done real work have real failure stories. Evasion and claims of a perfect track record are both warning signs.
Want an Independent Assessment Before Selecting an AI Advisor?
Our AI readiness assessment gives you a clear picture of your organization's actual starting point, which makes it much harder for any advisor to oversell scope or complexity.
Take the Free Assessment →
How to Score and Compare Firms
After the initial conversations, score each firm across these six dimensions. Weight them based on your situation.
01 — MOST IMPORTANT
Production Track Record
Named deployments with measurable outcomes in your industry or use case type. This is the single most predictive indicator of whether you will get a production outcome.
02 — CRITICAL
Vendor Independence
No referral fees, implementation partnerships or equity stakes in the platforms they recommend. Without this, every technology recommendation is compromised.
03 — CRITICAL
Senior Practitioner Commitment
Named senior practitioners committed to specific delivery phases, not just advisory oversight. Confirm this in writing before signing.
04 — IMPORTANT
Regulatory Expertise
Demonstrated knowledge of the specific regulatory frameworks applicable to your industry and AI use case type. Generalizing is not enough.
05 — IMPORTANT
Outcome Alignment
Willingness to link at least part of their compensation to measurable outcomes. Even a modest performance component signals confidence in their own results.
06 — STANDARD
Engagement Model Fit
The scope, team structure and timeline proposed match your specific situation rather than a templated engagement model adapted to your context.
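The weighting step above can be sketched as a simple computation. The weights and the 1-to-5 rating scale below are illustrative assumptions for one possible weighting, not values prescribed by this guide; adjust them to your situation.

```python
# Illustrative weights reflecting the priority order above (sum to 1.0).
# These are example values, not a prescribed scheme.
WEIGHTS = {
    "production_track_record": 0.30,       # most important
    "vendor_independence": 0.20,           # critical
    "senior_practitioner_commitment": 0.20,  # critical
    "regulatory_expertise": 0.12,          # important
    "outcome_alignment": 0.10,             # important
    "engagement_model_fit": 0.08,          # standard
}

def score_firm(ratings: dict[str, int]) -> float:
    """Weighted score from 1-5 ratings on each dimension (returns 1.0-5.0)."""
    missing = set(WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"Missing ratings for: {sorted(missing)}")
    return round(sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS), 2)

# Example: a firm strong on track record, weaker on regulatory depth.
firm_a = {
    "production_track_record": 5,
    "vendor_independence": 4,
    "senior_practitioner_commitment": 4,
    "regulatory_expertise": 3,
    "outcome_alignment": 5,
    "engagement_model_fit": 4,
}
print(score_firm(firm_a))
```

Comparing firms then reduces to scoring each on the same rubric and, just as importantly, reviewing where the large gaps between firms come from.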
What a Good Engagement Looks Like
The best AI advisory engagements share a common pattern. They start with an honest readiness assessment that produces a clear picture of where you actually stand, including constraints the firm will need to navigate. They define production outcomes with specific success criteria before scope is fixed. They staff senior practitioners in delivery roles with specific time commitments, not just oversight. They give you a timeline to a measurable production outcome within 12 to 16 weeks for a well-scoped use case.
Most importantly, they are willing to say no. A practitioner who has been in production AI long enough will tell you when a use case is not ready, when the data will not support the outcome you want, and when the timeline is not realistic. An advisor who tells you everything is possible is selling you what you want to hear, not what you need to hear.
Related Research
Enterprise AI Strategy Playbook
The complete framework for building an AI strategy that produces production systems, including how to structure advisory relationships and governance.
Download the playbook →
Talk to a Senior Practitioner Before Deciding
Our advisory team includes former Google, Microsoft, McKinsey and Accenture senior practitioners. We work with 200+ enterprises and hold no vendor partnerships. No referral fees. Ever.
Book a Conversation
Read About Our Approach
Our methodology, track record and the principles that make independent advisory structurally different from vendor-aligned consulting.
About Our Practice