New analysis highlights why transparency and partnership — not competition — should guide healthcare’s AI adoption
A peer-reviewed Stanford University analysis published in the January 2026 issue of Health Affairs synthesizes the landscape of AI tools used in utilization review, describing 21 such tools. The analysis offers both a warning and a validation: while healthcare risks an “AI arms race” that could “supercharge flaws” in the system, a collaborative approach can help realize AI’s benefits while minimizing risks.
The Stanford authors review and discuss AI tools used by providers and payers across the full utilization management spectrum — from prior authorization to appeals. Their analysis highlights significant concerns about how AI is being deployed:
- Automation bias: Reviewers may over-rely on AI recommendations
- Anchoring effects: AI-generated summaries can unconsciously influence clinical judgment
- Opacity: Limited transparency in how tools make decisions
- Expertise gaps: Users may not recognize AI limitations or errors
Perhaps most critically, the study found that the current trajectory risks entrenching adversarial dynamics rather than resolving them — with payers and providers deploying AI tools to outmaneuver each other rather than improve care coordination.
A Different Path Forward
Among the 21 tools discussed, the authors identified important distinctions that point toward more responsible AI adoption:
In the authors’ market scan, only three tools offer both predictive and generative AI capabilities.
Predictive AI has long helped healthcare organizations analyze large volumes of structured and unstructured data to inform decisions — it’s the more tested and proven approach. Generative AI, as a complementary capability, accelerates communication and documentation.
This combination matters because predictive models identify cases requiring closer review, while generative capabilities help teams move faster while staying aligned. It’s a powerful pairing: efficiency gains without sacrificing clinical rigor.
The article describes only three vendors for concurrent review processes.
This real-time stage of utilization management represents a critical opportunity to prevent downstream conflicts through early alignment — yet it’s received far less attention than it deserves.
Prior authorization has dominated healthcare AI headlines, driven largely by the CMS mandate requiring affected payers to implement open APIs by January 2027. But as we’ve explored in detail, concurrent review’s lower profile masks its critical importance. Medical necessity denials during concurrent review represent a $2.5 billion annual problem for healthcare organizations — roughly $5 million per provider each year.
Unlike prior authorization, which patients experience directly when scheduling procedures or refilling prescriptions, concurrent review happens behind the scenes while patients are actively receiving care. This invisibility has led to underinvestment — despite the fact that real-time alignment during concurrent review prevents the costly downstream appeals battles that plague prior authorization. Stanford’s inclusion of concurrent review validates its strategic importance in the utilization management continuum.
The authors describe only two platforms as “collaborative.”
This designation reflects a fundamental difference in design philosophy: rather than optimizing for one party’s interests, collaborative platforms align payers and providers around appropriate care decisions based on clinical evidence. In an industry where the relationship between these two groups has historically been adversarial, this distinction represents more than a feature set — it’s a fundamentally different approach to solving utilization management challenges.
The traditional payer-provider dynamic has been characterized by transactional, often contentious relationships. This friction doesn’t just strain administrative staff; it directly affects patient care and revenue growth. Research suggests up to 28% of administrative waste could be eliminated annually if payers and providers worked together rather than operating as adversaries. Yet despite 92% of provider executives expressing desire for greater collaboration with payers, the tools and processes to enable it have lagged behind.
Collaborative platforms bridge this divide through several key mechanisms:
Shared data views.
Rather than each party working from different data sets in proprietary systems, collaborative platforms provide equal access to the same real-time clinical and financial information. This establishes a single source of truth — the foundation for trust and alignment.
Mutually beneficial automation.
When both parties agree on evidence-based thresholds for care decisions, AI can automate straightforward cases (often 80-90% of reviews) and preserve human expertise for genuinely complex situations. This reduces administrative burden for both sides.
Transparent decision-making.
When providers and payers can see how and why decisions are being made — with access to the same predictive analytics and clinical evidence — it facilitates productive conversations and focuses energy on cases that objectively deserve closer review.
As our CEO Joan Butters has explored, these collaborative approaches create tangible benefits for patients: less wasted time, reduced bias in medical necessity determinations, and more frictionless experiences. When payers and providers align around shared goals, patients aren’t caught in the middle of administrative battles.
The Stanford analysis’s identification of only two collaborative platforms among the 21 tools evaluated underscores both how rare this approach remains and how critical it is for the industry’s future.
What Responsible AI Looks Like
The Stanford researchers concluded with clear recommendations for ensuring AI serves patients and the healthcare system:
- Increased transparency in how AI tools make recommendations
- Meaningful human review, not perfunctory oversight
- Staff training on AI limitations and potential biases
- Monitoring for underperformance and disparate impacts
- Governance structures that ensure responsible use
How can your organization ensure responsible AI? Learn why we joined CHAI as an Early Member.
When implemented responsibly, the authors note, AI should:
- “Help insurers approve requests more efficiently”
- “Improve communications with providers and patients”
- “Conserve reviewers’ time for hard decisions”
These aren’t aspirational goals for AI — they’re operational requirements.
The Xsolis Approach to AI-Driven Efficiency and Collaboration
Xsolis was identified in the Stanford analysis in three key ways: as one of only three tools evaluated for concurrent review, one of only three tools offering both predictive and generative AI, and one of only two collaborative AI platforms identified.
This alignment with the study’s recommendations reflects the foundational design principles on which Xsolis has operated for more than a decade:
Transparency by design.
Our platform provides shared visibility into decision-making for both payers and providers, addressing the opacity concerns the researchers identified.
Human expertise at the center.
AI informs clinical decisions but doesn’t replace clinical judgment. Our workflows ensure appropriate oversight while reducing administrative burden.
Collaboration over competition.
With more than two-thirds of the 600 hospitals on Xsolis’ “connected network” linked to payer partners, we’ve built infrastructure for real-time alignment.
Precision, not generalization.
And perhaps most central to our DNA and origin story, Xsolis models are laser-focused on reducing administrative burden and increasing the accuracy of medical necessity decisions during concurrent review — and have been since the company was founded in 2013. This was years before AI became mainstream, which means we’ve amassed and continuously refined volumes of clinical data.
Our models have been trained on more than 350 million patient encounters and have generated over 6.6 billion predictions to date. Throughout this evolution, we’ve maintained rigorous clinical validation and human-in-the-loop practices, while clients have conducted their own independent audits and studies.
The results speak for themselves: three peer-reviewed studies published in 2025 alone — by Baylor Scott & White, Yale New Haven Health, and Mayo Clinic — validated the accuracy of Xsolis AI models. Precision and clinical trust aren’t achieved overnight — they are built slowly through scale, validated continuously, and proven in real-world practice.
Moving Forward
The Stanford analysis provides an evidence-based framework for evaluating AI in utilization management. As healthcare organizations navigate this rapidly evolving landscape, the research makes clear that technology choices have consequences — not just for operational efficiency, but for the fundamental dynamics between payers and providers.
The question isn’t whether AI will transform utilization management. It’s whether that transformation will entrench existing tensions or create new pathways for collaboration.
Learn more about how Dragonfly’s collaborative AI platform serves both payers and providers:
For Health Plans: Xsolis Align
For Providers: Xsolis Utilize