The insurer processed more than 2.1 million claims annually across motor, property, and liability lines in 12 European and Asia-Pacific markets. Its claims operation employed more than 4,200 adjusters and support staff, with average end-to-end cycle times of 14.2 days for motor claims, 22 days for property claims, and 31 days for contested liability claims. Against benchmarks for comparable peers that had invested in intelligent automation, these cycle times were running 2.3 times the leader average.
The Chief Claims Officer had initiated two prior automation programs. The first, a rules-based robotic process automation implementation from 2021, had achieved partial automation for the simplest 22% of claims but created significant technical debt and brittle workflows that failed whenever claim data deviated from narrow expected formats. The second, a machine learning triage pilot from 2023, had been limited to a single country market and a single claims line, with results that the organization's data science team acknowledged could not be replicated at enterprise scale without substantial rearchitecting.
The core gap was a missing integration layer. The insurer had invested in individual AI components in isolation: an optical character recognition vendor for document processing, a separate fraud detection tool, a legacy reserving system, and scattered ML models built by the in-house team. None of these components communicated effectively, and no intelligent orchestration existed to route claims through the right combination of automated and human decision points.
After conducting our structured intake assessment over the first two weeks, we identified five constraints that had prevented prior programs from scaling beyond pilots:
Understanding these five constraints fundamentally changed the architecture: we designed for governance and integration from the start rather than trying to retrofit those capabilities after hitting technical performance targets.
Rather than replacing individual point solutions, we designed an intelligent orchestration layer that unified the existing components and added the missing AI capabilities. The architecture comprised five layers working in sequence for every claim entering the system.
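The five-layer sequence can be sketched as a simple orchestration pipeline in which each layer enriches the claim and passes it on. This is a minimal illustration of the pattern, not the insurer's actual system; all class and function names, fields, and values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Claim:
    claim_id: str
    market: str
    extracted: dict = field(default_factory=dict)       # Layer 1 output
    track: str = ""                                     # Layer 2 output
    fraud_flagged: bool = False                         # Layer 3 output
    recommendation: dict = field(default_factory=dict)  # Layer 4 output

def run_pipeline(claim: Claim, layers: List[Callable[[Claim], Claim]]) -> Claim:
    # Each layer enriches the claim in sequence; none overwrites the
    # outputs of an earlier layer.
    for layer in layers:
        claim = layer(claim)
    return claim

# Minimal stand-in layers showing the enrichment contract.
def document_intelligence(c: Claim) -> Claim:
    c.extracted = {"damage_amount": 1200.0}
    return c

def routing(c: Claim) -> Claim:
    c.track = "straight_through" if c.extracted["damage_amount"] < 5000 else "assisted"
    return c

claim = run_pipeline(Claim("CLM-001", "NL"), [document_intelligence, routing])
```

The value of the pattern is that each layer has a single, testable contract, which is what allowed the existing point solutions to be slotted in behind a uniform interface.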
Layer 1: Multi-Modal Document Intelligence. We replaced the legacy OCR vendor with a fine-tuned vision-language model trained on 340,000 claims documents from the insurer's own archive. The model achieved 94.3% extraction accuracy across all 47 document types, including handwritten content and degraded mobile photos, compared to the legacy system's 71% average. Critically, the model was trained to output structured extraction with confidence scores for each field, enabling downstream systems to route low-confidence extractions to human review rather than passing uncertain data forward into automated decisions.
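The per-field confidence routing described above can be sketched as follows; the threshold and field names are illustrative assumptions, not the insurer's actual calibration.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # illustrative cut-off, not the insurer's figure

@dataclass
class FieldExtraction:
    name: str
    value: str
    confidence: float  # emitted by the model alongside each field

def split_by_confidence(fields, threshold=REVIEW_THRESHOLD):
    """Fields at or above the threshold flow into automated decisions;
    the rest are queued for human review instead of being passed
    forward as uncertain data."""
    auto, review = [], []
    for f in fields:
        (auto if f.confidence >= threshold else review).append(f)
    return auto, review

fields = [
    FieldExtraction("policy_number", "PN-48821", 0.99),
    FieldExtraction("incident_date", "2024-03-11", 0.91),
    FieldExtraction("damage_description", "rear bumper", 0.62),  # handwritten
]
auto_fields, review_fields = split_by_confidence(fields)
```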
Layer 2: Claims Complexity and Routing Classification. A gradient-boosted classification model evaluated 87 features extracted from the document intelligence layer, historical claims patterns, policy data, and third-party data signals to classify each claim into one of four processing tracks: straight-through automated settlement for simple, clear-liability claims; assisted settlement where AI prepares recommendations and an adjuster reviews and approves; complex adjuster-led where AI provides data aggregation and analysis support but a human makes all decisions; and specialist referral for high-value or legally contested claims. This routing model was calibrated separately for each market to reflect regulatory requirements.
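The mapping from classifier output to the four tracks might look like the sketch below. The market-specific cut-offs and the hard specialist-referral rules are invented for illustration; the real calibration was set per market against regulatory requirements.

```python
# Per-market probability cut-offs (values illustrative, not actual).
MARKET_THRESHOLDS = {
    "NL": {"straight_through": 0.90, "assisted": 0.60},
    "SG": {"straight_through": 0.95, "assisted": 0.70},
}

def route(p_simple: float, claim_value: float, contested: bool, market: str) -> str:
    """Map the gradient-boosted classifier's simplicity probability
    onto one of the four processing tracks."""
    if contested or claim_value > 100_000:  # hard rule: specialist referral
        return "specialist"
    t = MARKET_THRESHOLDS[market]
    if p_simple >= t["straight_through"]:
        return "straight_through"
    if p_simple >= t["assisted"]:
        return "assisted"
    return "complex"
```

Note how the same classifier score can route differently by market: a 0.92 simplicity probability clears the straight-through bar in the looser market but only the assisted bar in the stricter one.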
Layer 3: Intelligent Fraud and Leakage Detection. Rather than deploying a standalone fraud tool, we embedded fraud and leakage signals directly into the routing decision. We integrated the existing fraud detection vendor's API but supplemented it with two proprietary models: a network analysis model identifying suspicious claimant-repairer-assessor relationships across claims history, and a severity anomaly model comparing claim characteristics against a peer cohort of 180,000 similar historical claims. Combined, these models flagged 4.2% of claims for enhanced investigation, with an 87% confirmed anomaly rate on flagged claims.
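The severity anomaly check and the combination of the three signals can be sketched as below. The z-score formulation and the cut-off values are assumptions for illustration; the production models were substantially richer.

```python
from statistics import mean, stdev

def severity_anomaly(claim_amount: float, peer_amounts: list) -> float:
    """Z-score of the claim against its peer cohort of similar settled
    claims; large positive values mean the claim is unusually severe
    for its profile."""
    mu, sigma = mean(peer_amounts), stdev(peer_amounts)
    return (claim_amount - mu) / sigma if sigma else 0.0

def flag_for_investigation(vendor_score: float, network_score: float,
                           z: float, score_cut: float = 0.8,
                           z_cut: float = 3.0) -> bool:
    """Escalate if any of the three signals (vendor API score,
    network-analysis score, severity z-score) breaches its cut-off."""
    return vendor_score >= score_cut or network_score >= score_cut or z >= z_cut

peers = [900.0, 1000.0, 1100.0, 1000.0, 950.0, 1050.0]
z_high = severity_anomaly(1500.0, peers)  # well above the cohort
z_norm = severity_anomaly(1000.0, peers)  # exactly at the cohort mean
```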
Layer 4: Automated Settlement and Reserve Recommendation. For claims routed to the straight-through track, a settlement recommendation engine calculated offer amounts by applying the insurer's approved settlement rules, market-specific jurisdictional parameters, and a comparable claims database of 2.4 million settled claims. Every recommendation included a full audit trail showing which rules and comparable claims drove the calculation, satisfying the actuarial governance requirement. The engine generated reserve recommendations for all claims, which were reviewed quarterly by the actuarial team against the population-level reserve accuracy requirement.
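The audit-trail requirement shapes the engine's design: every transformation of the offer amount appends a human-readable record. The sketch below shows that structure; the rule, the comparables cap, and the reserve margin are all illustrative assumptions, not the insurer's approved parameters.

```python
from dataclasses import dataclass, field
from statistics import median

@dataclass
class Recommendation:
    claim_id: str
    offer: float
    reserve: float
    audit_trail: list = field(default_factory=list)  # rules and comparables applied

def recommend(claim_id, base_estimate, market_factor, rules, comparables):
    """Apply the approved settlement rules in order, anchor the result
    against comparable settled claims, and log every step."""
    trail = [f"market factor {market_factor} applied to base {base_estimate}"]
    offer = base_estimate * market_factor
    for name, rule in rules:
        offer, note = rule(offer)
        trail.append(f"rule '{name}': {note}")
    comp = median(comparables)
    trail.append(f"median of {len(comparables)} comparable claims: {comp}")
    offer = min(offer, 1.1 * comp)    # illustrative comparables cap
    reserve = round(offer * 1.05, 2)  # illustrative reserve margin
    return Recommendation(claim_id, round(offer, 2), reserve, trail)

def deductible(amount, excess=250.0):  # hypothetical rule
    return amount - excess, f"policy excess {excess} deducted"

rec = recommend("CLM-002", 2000.0, 1.0, [("excess", deductible)],
                [1500.0, 1600.0, 1700.0, 1800.0, 1900.0])
```

Because the trail is built alongside the calculation rather than reconstructed afterwards, the actuarial team can replay exactly which rules and comparables drove any given offer.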
Layer 5: Adjuster Augmentation Interface. For claims not fully automated, we built a new adjuster workbench replacing the previous interface with an AI-augmented view showing extracted claim data, identified anomalies, comparable settled claims, settlement range recommendations, and regulatory checklist status for the specific market. This layer was designed explicitly to make adjusters faster and more accurate, not to replace them, which was critical to the change management program. Adjuster satisfaction scores with the new interface reached 8.2/10 within 8 weeks of rollout.
Phase 1: Assessment and Architecture Design. Structured audit of all existing AI tools, data assets, and integration points. Claims volume and complexity analysis across 12 markets. Architecture design review with claims operations leadership, IT, actuarial, and legal/compliance teams. Constraint documentation and design-freeze sign-off. Outputs: Architecture blueprint, data dictionary, integration specification, governance framework.
Phase 2: Document Intelligence Build. Vision-language model fine-tuning on 340,000 historical claims documents across all 47 formats. Integration middleware development connecting the new AI layer to the legacy claims management system via data sync. Confidence scoring framework calibration. Parallel OCR accuracy benchmark comparing new model against legacy vendor across stratified document sample.
Phase 3: Routing, Fraud, and Settlement Model Build. Claims complexity routing model training and validation across all 12 markets. Fraud and network analysis models built and tested against historical confirmed-fraud cases. Settlement recommendation engine built with actuarial parameter library for all markets. Adjuster workbench development and internal user testing with pilot adjuster group (24 volunteers across 4 markets).
Phase 4: Parallel Pilot. Live deployment in the Netherlands and Singapore markets processing real claims in parallel with the existing system. Every AI decision tracked against the human decision for a 14-day shadow period. Routing accuracy measured at 96.2%. Settlement recommendation acceptance rate by adjusters: 91.4%. Fraud flag confirmation rate: 87.3%. Actuarial review of reserve recommendations: within approved parameters for 99.1% of claims.
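The shadow-period metrics above reduce to a simple agreement measurement over paired decisions. A minimal sketch, assuming each pair holds the AI decision and the human decision for the same claim:

```python
def agreement_rate(decision_pairs):
    """Fraction of shadow-period claims where the AI decision matched
    the human decision (e.g. the routing track chosen)."""
    matched = sum(1 for ai, human in decision_pairs if ai == human)
    return matched / len(decision_pairs)

# Hypothetical sample of shadowed routing decisions.
pairs = [
    ("straight_through", "straight_through"),
    ("assisted", "assisted"),
    ("assisted", "complex"),       # disagreement: flagged for review
    ("specialist", "specialist"),
]
rate = agreement_rate(pairs)
```

In practice the disagreements, not the headline rate, are the valuable output: each mismatch during the shadow period was reviewed to decide whether the model or the adjuster was right.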
Phase 5: Enterprise Rollout. Sequential rollout to all 12 markets over 3 weeks. Adjuster training completed (6-hour program per team, delivered in-market). Legacy OCR system decommissioned. Monitoring dashboards live for Claims Operations, IT, and Actuarial teams. Performance measurement locked in: 89% straight-through processing rate for simple claims, 73% reduction in average cycle time, $28M annualized savings validated by Finance.
We had spent three years trying to automate claims. Two prior programs left us with more technical debt and less confidence than when we started. What differentiated this engagement was that the advisors understood our governance constraints as well as our operations team did, and designed the architecture around those constraints from the first week. We did not have to compromise between automation rates and regulatory compliance. We achieved both.
Our senior advisors have worked with insurers across motor, property, life, and specialty lines in 28 countries. We can assess your current claims AI maturity and identify where the highest-value automation opportunities are for your specific portfolio and regulatory context.
Senior advisor response within 24 hours.