When AI Makes the Right Decision About the Wrong Thing
Enterprise teams keep hearing the same promise. Add AI, automate judgment, reduce risk, and move faster. Yet results often fail to materialize, and trust breaks in production.
In a presentation at TechEx Global 2026, Senzing Head of Data & AI Strategy Dr. Gurpinder Dhillon explains why “successful” AI can still create customer friction and internal churn. “AI is doing what it is supposed to do,” he says, “but the decisions coming out are not the right decisions.” The model follows its logic. The problem is identity context. The system cannot reliably tell who is who across fragmented records, so the AI ends up making decisions about the wrong underlying entity.
Key Takeaways
- AI outcomes are limited by identity context. Fragmented identities produce fragmented features, histories, and explanations.
- Production failures often trace back to entity problems. Fix the identity first, then tune AI models.
- Trust improves when teams can explain decisions in entity terms, including which records were linked and why.
The Trust Paradox Shows Up as Friction
Dhillon uses fraud as his primary example because the results are easily measurable. AI-driven fraud controls can prevent real losses, yet false positives and false declines drive up costs and frustrate customers.
In his presentation, Dhillon cites figures that illustrate the imbalance:
- $10.5B in prevented fraud
- 61% of customers reduce card usage after a false positive decline
- An 8:1 ratio of false positives to real fraud detected
- An estimated $443B lost annually due to falsely declined transactions
Every false decline erodes customer trust, and every false positive expands the manual review queue.
Why AI Can Be Right and Still Be Wrong
That gap between prevention and friction is what led Dhillon to two diagnostic questions:
- Are the models sound?
- Is the AI making decisions about the correct entity?
He argues that many organizations over-invest in the first question and under-invest in the second. Models can perform well in QA while failing to meet end-user expectations in production. Teams respond by tuning, adding features, or expanding training sets. Those steps help when modeling is the bottleneck.
But often the weak link sits upstream. When one real-world person or organization appears as multiple profiles across systems, the model trains on incomplete, inconsistent data – producing confident, consistent, yet wrong decisions.
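The effect of fragmentation can be seen in a toy sketch. The records, attribute values, and matching rule below are hypothetical and greatly simplified; real entity resolution matches on many attributes with far more sophisticated logic.

```python
# Toy illustration: one real customer appears as three system records.
# Names, emails, and phone formats differ, so exact-field keying sees
# three separate "entities", each with only a thin slice of the history.

records = [
    {"name": "Sarah Chen", "email": "s.chen@mail.com",     "phone": "555-0101",   "txns": 240},
    {"name": "S. Chen",    "email": "s.chen@mail.com",     "phone": "(555) 0101", "txns": 3},
    {"name": "Sarah Chen", "email": "sarah.chen@mail.com", "phone": "5550101",    "txns": 12},
]

def naive_key(r):
    # Exact-match keying: any attribute difference splits the entity.
    return (r["name"], r["email"], r["phone"])

def normalized_key(r):
    # Crude normalization (illustrative only): key on the digits of the
    # phone number so formatting differences no longer split the entity.
    return "".join(ch for ch in r["phone"] if ch.isdigit())

naive_entities = {naive_key(r) for r in records}
resolved_entities = {normalized_key(r) for r in records}

print(len(naive_entities))     # 3 "customers", each with a partial history
print(len(resolved_entities))  # 1 customer with the full 255-transaction history
```

A model trained on the naive view sees three low-activity profiles instead of one long-standing customer, which is exactly the setup for confident but wrong decisions.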
A Fraud Program Example That Maps to Any Workflow
Dhillon shares an anonymized case study from a global financial institution that invested millions in advanced ML fraud detection. Despite that investment, the organization still saw a 12% false-positive rate, roughly double the 6% industry average. In addition:
- $420K per month manual review costs
- 35% increase year over year in customer complaints
- 6.2/10 analyst trust
The organization assumed the models were at fault, so they reviewed them and tried to improve them via tuning and other ML modifications. The outputs did not materially change. The root cause analysis shifted upstream to identity fragmentation across systems, where one person often appeared as multiple profiles with different attributes. “This was not a model performance issue,” Dhillon explains. “It was really about understanding who was who.”
He illustrates the point with a long-time customer, “Sarah Chen,” whose records lived across multiple systems. Because her name, identifiers, and data fields varied across systems, the model treated Sarah as multiple entities and flagged her legitimate activity as suspicious.
The same fragmentation works in reverse, too: common-name variants can merge two or more distinct people into a single profile. Either way, the AI makes the best decision it can, but about the wrong thing.
While fraud makes the stakes obvious, the pattern applies whenever AI depends on identities, including risk scoring, compliance screening, customer onboarding, and investigation workflows. These all suffer when the system cannot first reliably resolve who is who.
The Fix: An Entity Resolution Layer That Makes Identity Persistent
Dhillon’s fix comes upstream of model tuning: resolve identities before training, then continuously resolve identities in real time before decisioning. In the case study, the organization implemented Senzing® entity resolution across its data sources to de-duplicate records, align attributes, and maintain a cohesive, comprehensive entity view as new data arrived.
Dhillon notes that the organization averaged 3.7 identities per person across systems. Entity resolution reduced duplication and reassembled complete, current entity profiles, which improved the context available for downstream ML, fraud, risk, compliance, and investigation workflows.
Senzing is delivered as an SDK and can run in batch or real time. It also supports sidecar and in-parallel evaluations, so teams can validate impact without rebuilding existing systems.
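The resolve-before-decide pattern can be sketched as follows. The `EntityStore` class, its single-attribute matching rule, and the `decide` function are hypothetical stand-ins for a real entity resolution SDK and fraud model, not Senzing's API:

```python
# Sketch of an upstream identity layer: resolve each incoming record to a
# persistent entity ID first, then let the decision logic see the full
# history for that entity instead of a fragment.

from dataclasses import dataclass, field

@dataclass
class EntityStore:
    # Maps a normalized identifier (here, just phone digits) to a stable
    # entity ID; real resolution matches on many attributes at once.
    index: dict = field(default_factory=dict)
    next_id: int = 1

    def resolve(self, record: dict) -> int:
        key = "".join(ch for ch in record["phone"] if ch.isdigit())
        if key not in self.index:
            self.index[key] = self.next_id
            self.next_id += 1
        return self.index[key]

def decide(store: EntityStore, txn: dict, history: dict) -> str:
    entity_id = store.resolve(txn)               # resolve first...
    seen = history.setdefault(entity_id, 0)      # ...then decide with the
    history[entity_id] = seen + 1                # entity's full history
    return "review" if seen == 0 else "approve"  # toy rule: unknown entity -> review

store, history = EntityStore(), {}
print(decide(store, {"phone": "555-0101"}, history))    # review (first sighting)
print(decide(store, {"phone": "(555) 0101"}, history))  # approve (same entity, known history)
```

Without the resolution step, the second transaction would key differently and be treated as a brand-new customer, the false-positive pattern described above.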
What Changes When AI Gets Identity Right
Once the system can reliably resolve who is who, the AI and ML start operating on a complete entity profile instead of fragmented records. Entity resolution consolidates duplicates, aligns attributes across sources, and maintains a single resolved identity as new records arrive. As Dhillon puts it, the breakthrough came when the team shifted from tuning models to “understanding who was who.” Getting identity right improves the inputs, the decisions, and the explanations:
- Cleaner Training Inputs: Fewer duplicates, less noise, and a more complete entity profile feeding features and patterns.
- More Stable Decisions in Production: Incoming transactions are resolved to the correct entity before decisioning, so your AI model does not treat one customer as multiple customers.
- Better Explainability For Analysts: A resolved entity view and an audit trail make it easier for investigators to see what was linked, why it was linked, and why a decision was triggered.
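The explainability point above can be made concrete with a sketch of an entity-level audit trail. The structure and field names are illustrative assumptions, not Senzing's output format:

```python
# Sketch of an entity-level audit trail: each link between records keeps
# the attributes that matched and a human-readable reason, so an analyst
# can see what was linked and why a decision was triggered.

resolution = {
    "entity_id": 1001,
    "records": ["crm:88341", "cards:55-2209", "web:u-7714"],
    "links": [
        {"pair": ("crm:88341", "cards:55-2209"),
         "matched_on": ["email", "phone"],
         "reason": "exact email + normalized phone"},
        {"pair": ("crm:88341", "web:u-7714"),
         "matched_on": ["name", "address"],
         "reason": "name variant + same address"},
    ],
}

def explain(res: dict) -> str:
    # Render the resolution as a short report an investigator can read.
    lines = [f"Entity {res['entity_id']} = {len(res['records'])} records"]
    for link in res["links"]:
        a, b = link["pair"]
        lines.append(f"  {a} <-> {b}: {link['reason']}")
    return "\n".join(lines)

print(explain(resolution))
```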
With identities resolved, these improvements show up where it matters most downstream: fewer false positives, reduced manual review load, and higher analyst trust.
Reported Results After Six Months
After implementing Senzing entity resolution as an upstream identity layer, the organization tracked results for six months:
- 73% reduction in false positives, from 12% to 3.2%
- $308K/month reduction in manual review costs, down from $420K
- Improved fraud detection accuracy by 5 percentage points, from 89% to 94%
- 44% increase in analyst trust, from 6.2 to 8.9 out of 10
Fix Identity First, Then Let AI Shine
AI can only perform as well as the identity context it is trained on. When one person shows up as multiple identities, decisions become unstable, false positives climb, and trust erodes. When identities resolve into a single, persistent view, models have the accurate entity data and context they need to perform consistently in production.
If your AI performs well in testing but disappoints in production, evaluate identity fragmentation as a potential root cause. Then consider an entity resolution layer that can run in batch or real time, including sidecar or in-parallel deployments.
Most organizations have three to five identities for every customer; they just don’t know it yet. Schedule a Demo to see how Senzing entity resolution helps teams unify identities across systems and improve downstream decisioning, investigations, and compliance workflows.
Video Highlights
00:50: The AI investment paradox. AI can behave correctly, yet still drive the wrong outcomes.
01:58: The trust and cost problem. Fraud prevention gains vs. false-positive fallout.
02:55: Why false declines hurt. Customer behavior impact and revenue loss dynamics.
04:55: Case study baseline. 12% false positives, manual review cost, complaints, and analyst trust.
07:20: “Sarah Chen.” One customer becomes multiple entities across systems.
08:43: Root cause analysis. Why model tuning did not fix production performance.
11:22: The solution pattern. Entity resolution before training and before real-time decisions.
12:30: Six-month outcomes. False positives drop, costs fall, analyst trust rises.
Connect Data. Power Intelligence.