
What is Your Documentation Gap Costing You?



Data is vital to clinical documentation integrity (CDI), yet access to it remains limited to a select few. Our newsletter strives to share knowledge and uplift our CDI community. Many modern CDI systems include gap analysis tools designed to uncover missed comorbidities, risk adjustments, and severity indicators. These tools promise enhanced queries, increased provider engagement, improved revenue capture, and better visibility into patient outcomes. Behind the scenes, however, their algorithms and benchmarks often remain frustratingly proprietary, accessible only to organizations that can afford high subscription fees or integrate with rigid enterprise systems. Despite these considerable costs, there is no guarantee of exceptional performance: many hospitals acquire systems they hardly use, while others fully leverage their capabilities and grow. We aim to help you navigate the selection process.


The Issue: When Data Becomes Trapped in the AI Black Box

AI-powered gap analysis systems have become increasingly prevalent. These systems evaluate discrepancies between expected and actual capture of Major Complications or Comorbidities (MCCs/CCs), examine how documentation aligns with risk-adjusted outcomes, and measure the prevalence of specific diagnoses within Diagnosis-Related Groups (DRGs). Theoretically, this should enable hospitals to identify clinical documentation gaps, reduce denials, and enhance coding accuracy. However, the promise of transparency and improvement often comes with an unseen cost: hospitals rely on systems that conceal their data logic behind proprietary curtains.
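To make the underlying mechanics concrete, here is a minimal sketch of the expected-versus-actual comparison such tools perform. The DRG labels, benchmark rates, and function names are illustrative placeholders for this sketch, not figures from any vendor's model or a published dataset.

```python
from collections import Counter

# Placeholder benchmark capture rates per DRG family; these numbers are
# assumptions for illustration, not published or vendor figures.
EXPECTED_MCC_CC_RATE = {
    "pneumonia DRG family": 0.62,
    "sepsis DRG family": 0.78,
}

def capture_gaps(cases):
    """cases: iterable of dicts like {"drg": "pneumonia DRG family", "mcc_cc_coded": True}."""
    totals, captured = Counter(), Counter()
    for case in cases:
        totals[case["drg"]] += 1
        if case["mcc_cc_coded"]:
            captured[case["drg"]] += 1
    gaps = {}
    for drg, expected in EXPECTED_MCC_CC_RATE.items():
        if totals[drg] == 0:
            continue  # no discharges in this DRG family for the period
        observed = captured[drg] / totals[drg]
        gaps[drg] = {
            "observed": round(observed, 3),
            "expected": expected,
            "variance": round(observed - expected, 3),  # negative = possible documentation gap
        }
    return gaps

# Example: 100 pneumonia-family cases, only 50 coded with an MCC/CC.
sample = [{"drg": "pneumonia DRG family", "mcc_cc_coded": i < 50} for i in range(100)]
print(capture_gaps(sample))  # observed 0.50 vs expected 0.62, variance -0.12
```

The point of the sketch is that the comparison itself is simple arithmetic; the contested part is where the expected rates come from and whether users can inspect them.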


Many of these tools use benchmarks sourced from closed, non-public datasets or limited institutional partnerships, which end users cannot access. Even when a hospital inputs its own patient-level data, the resulting insights are filtered through opaque models, making it difficult, if not impossible, to understand how conclusions are drawn. These tools may recommend querying for specific conditions or flagging documentation variances, but the logic behind these alerts is rarely disclosed. Without access to the benchmark population characteristics, inclusion/exclusion criteria, or adjustment factors, clinicians and CDI professionals cannot trust outputs they cannot verify or reproduce.


This lack of transparency creates several downstream issues. Hospitals cannot validate the accuracy or fairness of the AI/ML recommendations, replicate findings for quality improvement studies, or adapt the system for diverse patient populations. Worse still, because the logic and benchmarks are vendor-dependent, institutions become locked into proprietary ecosystems that hinder their ability to innovate independently or contribute to scientific progress. The result is a fragmented landscape in which powerful insights are retained by those who own the algorithms, while the healthcare systems expected to act on them remain uninformed.


Example: Acute Respiratory Failure and Proprietary AI Benchmarking

A health system implements an AI-driven CDI gap analysis tool designed to flag missing acute respiratory failure (J96.0x) diagnoses in patients admitted with pneumonia, COPD exacerbation, or sepsis. The tool assesses oxygen saturation trends, ABG results, and documentation patterns to suggest when a query should be issued.


The platform alerts the CDI team on a high percentage of cases, suggesting that acute respiratory failure is not reported. However, the alerts don’t specify what criteria the AI uses to define ARF, nor do they show the benchmark population or clinical thresholds (1).


Here’s where it becomes a problem:

  • The physicians are documenting “hypoxia” or “dyspnea,” but not explicitly stating “acute respiratory failure.”

  • The AI model appears to flag SpO₂ < 92% or oxygen supplementation as a trigger, even in patients without ABG confirmation or evidence of severe respiratory distress.

  • The hospital’s coding team uses CMS and AHIMA guidelines, which emphasize:

    • A documented provider diagnosis

    • Objective signs such as PO₂ < 60 mmHg, PCO₂ > 50 mmHg with pH imbalance, or need for ventilatory support

    • Alignment with the clinical picture of ARF (e.g., respiratory rate, mental status changes)


However, the AI tool suggests queries based on criteria that do not align with CMS coding requirements or clinical best practices (a sketch contrasting the two rule sets follows the list below). This results in:

  • A flood of non-actionable queries that physicians push back on

  • An increase in coding denials, especially in RAC and commercial payor audits

  • Frustration from the CDI team, who can’t access or validate the AI’s inclusion criteria

  • CDI metrics that look artificially inflated, while actual revenue capture declines
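To make the mismatch concrete, here is a minimal sketch contrasting a saturation-only screening trigger with a documentation-plus-objective-evidence check of the kind the coding team applies. The field names, and the exact thresholds inside the vendor's model, are assumptions drawn from the example above, not a disclosed specification.

```python
def screening_style_trigger(case):
    # Trigger inferred from the example: low SpO2 or any supplemental oxygen
    # flags the chart, with no ABG values or provider diagnosis required.
    return case.get("spo2", 100) < 92 or case.get("on_supplemental_o2", False)

def guideline_style_support(case):
    # Criteria of the kind the coding team applies: a documented provider
    # diagnosis plus objective evidence (hypoxemia, hypercapnia with pH
    # derangement, or ventilatory support) consistent with the clinical picture.
    objective = (
        (case.get("pao2") is not None and case["pao2"] < 60)
        or (case.get("pco2") is not None and case["pco2"] > 50
            and case.get("ph_abnormal", False))
        or case.get("ventilatory_support", False)
    )
    return case.get("provider_documented_arf", False) and objective

# A chart can satisfy the screening trigger yet fail the guideline check,
# which is exactly the pattern that generates non-actionable queries.
example = {"spo2": 90, "on_supplemental_o2": True, "provider_documented_arf": False}
print(screening_style_trigger(example), guideline_style_support(example))  # True False
```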


A Scientific Solution: DextroSync’s Open-Source Vision Supported by U.S. Utility Patents


At DextroMedical, we believe that closed algorithms should never limit clinical insights. The future of Clinical Documentation Integrity (CDI) requires transparency, reproducibility, and trust. Consequently, in addition to our utility patents, we are developing an open-source quality gap scoring system that seamlessly integrates into our cloud-based platform, DextroSync. Rather than reinforcing the status quo of vendor-controlled data logic, our mission is to eliminate barriers to clinical benchmarking knowledge and continually evolve documentation benchmarks, empowering every hospital to improve, regardless of size or budget.


1. Data-Driven, Not Vendor-Driven

DextroSync’s gap analysis engine is based on a foundation of published, peer-reviewed, and publicly accessible data. Rather than relying on black-box algorithms with hidden proprietary datasets, we utilize open, validated sources such as HCUP discharge data, Medicare SAF files, and literature-based prevalence rates for MCC/CCs by DRG. Early adopter hospitals have the option to contribute anonymized case data, which will assist in shaping the benchmarking logic for the entire network. This ensures that the tool reflects not just one vendor’s data interpretation but a collective scientific understanding, progressively updated to align with clinical realities.
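As an illustration of what an open benchmark can mean in practice, a prevalence table could be distributed as a plain, citable file that any hospital can inspect. The file name, column layout, and loader below are assumptions for this sketch, not the actual DextroSync data format.

```python
import csv

# Hypothetical loader for an openly published prevalence table; every rate
# carries a citation so users can trace where the expectation came from.
def load_prevalence_benchmarks(path="open_benchmarks.csv"):
    """Expects rows like: drg,diagnosis,expected_prevalence,source_citation"""
    benchmarks = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["drg"], row["diagnosis"])
            benchmarks[key] = {
                "expected_prevalence": float(row["expected_prevalence"]),
                "source": row["source_citation"],  # keeps each rate traceable
            }
    return benchmarks
```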


2. Transparent Methodology

Each documentation gap flagged within DextroSync will be presented with complete transparency. Users won’t just see a vague prompt to “query for diabetes”; instead, they’ll see the expected capture rate based on national benchmarks, the variance range relative to similar hospitals, and a citation of the data or literature used for that comparison. We call this feature the Evidence Link™—a clickable, always-visible chain back to the scientific basis of the recommendation. This ensures that queries and interventions are founded on documented best practices and can be reviewed, challenged, or defended by medical directors and CDI teams.
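The shape of such a transparent flag might look something like the structure below. The field names are illustrative assumptions, not the published Evidence Link™ schema, and the values are placeholders.

```python
from dataclasses import dataclass

# Illustrative shape of a transparent gap flag; fields and values are
# placeholders for this sketch, not the actual Evidence Link(TM) schema.
@dataclass
class DocumentationGapFlag:
    diagnosis: str
    drg: str
    expected_capture_rate: float   # from the cited national benchmark
    observed_capture_rate: float   # this hospital's rate for the period
    variance_vs_peers: float       # relative to similar hospitals
    evidence_citation: str         # dataset or study behind the comparison

flag = DocumentationGapFlag(
    diagnosis="Type 2 diabetes with complication",
    drg="placeholder DRG",
    expected_capture_rate=0.41,    # placeholder
    observed_capture_rate=0.28,    # placeholder
    variance_vs_peers=-0.13,
    evidence_citation="placeholder citation to the benchmark source",
)
print(f"{flag.diagnosis}: observed {flag.observed_capture_rate:.0%} "
      f"vs expected {flag.expected_capture_rate:.0%} ({flag.evidence_citation})")
```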


3. Community-Contributed Prevalence Benchmarks

One of the most exciting aspects of our model is its open architecture for community input. DextroSync will enable clinical informatics professionals, physician advisors, and medical educators to contribute new rulesets, validate logic through peer review, and test prevalence patterns across shared datasets. By decentralizing control of benchmark logic, we open the door to diverse insights, regional variations, and emerging evidence that can directly influence CDI practice. Hospitals will no longer be at the mercy of a single algorithm; they’ll be part of a living scientific network.
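A community-contributed rule could be as simple as a declarative, versioned entry that reviewers can read, test against shared datasets, and approve. The schema and clinical content below are hypothetical placeholders, not a published DextroSync rule format.

```python
# Hypothetical sketch of a community-contributed rule: declarative, versioned,
# and reviewable. All identifiers and criteria here are placeholders.
contributed_rule = {
    "rule_id": "community/malnutrition-screen/0.1",
    "author": "example-contributor",
    "status": "pending peer review",
    "applies_to_drg_families": ["sepsis", "pneumonia"],
    "trigger": {
        "all_of": ["dietitian_note_present", "low_bmi_documented"],
        "none_of": ["malnutrition_already_coded"],
    },
    "suggested_query": "Clarify presence and severity of malnutrition if clinically supported",
    "evidence": ["placeholder citation to consensus criteria"],
}
```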


4. Leveling the Playing Field

This open-source design is compelling for under-resourced facilities: rural hospitals, safety-net institutions, and academic centers often lack the budget to license high-end analytics tools, but they possess the clinical acumen to improve. DextroSync’s platform levels the playing field by making access to clinical quality analytics affordable—and more importantly, customizable. A small public hospital can benefit from the same scientific rigor and peer benchmarking as a large academic medical center, and vice versa. This bridges the gap between “have” and “have-not” systems in data-driven CDI.


5. A Platform That Evolves with the Science

Most importantly, DextroSync is designed to evolve. As new clinical research emerges, DRG definitions change, and healthcare quality measures shift, our benchmarking engine automatically recalibrates, updating its logic and expectations based on the latest science. Users won’t need to wait for a software update or a vendor meeting to access new insights. Instead, the platform grows in real time with the community that supports it. This adaptability is critical for long-term impact, positioning DextroSync not merely as a CDI tool but as a scientific ecosystem for clinical documentation integrity. Through this open-source approach, DextroMedical establishes a new standard for integrity, transparency, and collaboration in healthcare data analytics, because the future of documentation, and ultimately patient care, should belong to everyone.
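As a toy illustration of what recalibration could mean, a benchmark rate might be blended with newly observed, anonymized network data on a rolling basis. The weighting scheme and numbers below are assumptions for the sketch, not DextroSync's actual update logic.

```python
# Toy recalibration: nudge the expected rate toward the newly observed rate.
# The weight and the input numbers are placeholders for illustration only.
def recalibrate(expected_rate, new_cases_with_dx, new_cases_total, weight=0.1):
    if new_cases_total == 0:
        return expected_rate  # nothing new to learn from this cycle
    observed_rate = new_cases_with_dx / new_cases_total
    return (1 - weight) * expected_rate + weight * observed_rate

# Example: a 0.35 benchmark drifts toward an observed 0.50 in the latest batch.
print(round(recalibrate(0.35, 50, 100), 3))  # 0.365
```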


Why This Matters for the Future of CDI

Documentation is no longer just about compliance; it now involves defending clinical insights and ensuring equitable reimbursement. But here’s the problem: you can’t fix what you can’t measure. For years, proprietary CDI systems have shown us what’s possible, but those insights remain behind closed doors. The next leap forward? Making documentation intelligence transparent, scientific, and universally accessible.


That’s where DextroSync is going. We are creating a future where:

  • CDI teams can justify every query with clinical evidence

  • Hospitals can run independent audits without being tethered to a vendor

  • Researchers can contribute and validate prevalence benchmarks for real-world applications


We’re not just closing gaps in documentation; we’re narrowing the gap in access to the science behind it. The future of CDI isn’t concealed in algorithms; it’s shared, open, and evidence-based. Interested in joining? Please submit your inquiry here.


Reference:

Wunsch, H., et al. (2010). The epidemiology of mechanical ventilation use in the United States. Critical Care Medicine, 38(10), 1947–1953.


