

Large US Health Insurer

Healthcare Payer Private Information Cards

Facilitator Note

Print this document and separate at page breaks. Distribute one card per round, face-down, at the start of each round's decision preparation phase. Cards are confidential to the Healthcare Payer participant. Cards accumulate — the participant keeps all cards and may refer to them in later rounds.


Card 1 — Round 1

Title: Prior Authorization False Negative Liability + Clinical Diagnostic AI Friction

Card Type: Operational Intelligence

Classification: Operational Intelligence

Source: Claims Audit & Quality Team, Clinical Review, Risk Management

Reveal Timing: Round 1 Decision Preparation


The Intelligence:

Your prior authorization AI achieved strong operational results — $22M in cost savings and a meaningful reduction in processing time. However, internal audit has identified cases where the algorithm recommended denial of clinically appropriate care. The cases are rare but material. Clinical review confirms these are genuine false negatives: the AI's cost-optimization logic overrode clinical appropriateness signals in the training data.

The clinical risk is real. Patient harm from an inappropriate denial creates liability exposure for you as the coverage decision-maker (coverage appropriateness liability, regulatory enforcement risk) and for the treating provider (physician practice liability). Implementing mandatory human review for all denials would reduce net savings by 20-30% but nearly eliminate false negatives. A tiered approach (human review only for high-cost or high-risk denial categories) could preserve 80-85% of savings while catching the most dangerous false negatives.
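
The savings trade-off above is quick arithmetic on the card's own figures ($22M baseline, 20-30% reduction under mandatory review, 80-85% preserved under tiering); Python is used here only as a calculator:

```python
# Pure arithmetic on the card's stated figures; no other assumptions.
BASELINE_SAVINGS = 22_000_000  # current annual prior-auth AI savings ($)

# Mandatory human review of all denials: savings drop 20-30%.
mandatory_low = BASELINE_SAVINGS * (1 - 0.30)
mandatory_high = BASELINE_SAVINGS * (1 - 0.20)

# Tiered review (high-cost / high-risk categories only): 80-85% preserved.
tiered_low = BASELINE_SAVINGS * 0.80
tiered_high = BASELINE_SAVINGS * 0.85

print(f"mandatory review: ${mandatory_low / 1e6:.1f}M - ${mandatory_high / 1e6:.1f}M")
print(f"tiered review:    ${tiered_low / 1e6:.1f}M - ${tiered_high / 1e6:.1f}M")
```

Under the card's percentages, mandatory review yields roughly $15.4-17.6M and tiering $17.6-18.7M, so tiering recovers most of the gap while still catching the riskiest denials.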

Separately, intelligence from your provider network indicates that clinical diagnostic AI pilots are showing accuracy improvements but creating physician workflow friction. This matters to you: if providers' clinical AI deployments stall due to physician resistance, the downstream clinical data quality improvements you need for your own risk adjustment and member health management models will be delayed.


Decision Tension:

Do you accept the $22M savings and manage the litigation and regulatory risk of occasional inappropriate denials? Do you implement mandatory human review across all denials, reducing savings to roughly $15-18M? Or do you design a tiered review system — human review for high-risk denial categories, algorithmic decisioning for low-risk approvals — to balance savings and safety?

The provider-side diagnostic AI friction is outside your direct control, but it affects your data quality roadmap. Do you invest in incentivizing provider AI adoption (data sharing agreements, joint governance) or plan around continued data quality limitations?


Questions to Consider:

  • What is your actual liability exposure per inappropriate denial? Does your corporate liability insurance cover AI-driven coverage denials that lead to patient harm?
  • Can you implement tiered human review (high-confidence AI decisions pass through; medium-confidence sampled; low-confidence mandatory review) to reduce costs while managing liability?
  • How do you operationalize continuous monitoring of prior auth accuracy? What false negative rate triggers escalation or algorithm redesign?
  • If regulators begin auditing your prior auth algorithms for inappropriate denials, is your current documentation and governance sufficient to withstand scrutiny?
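
The confidence-tiered review raised in the questions above can be pictured as a simple routing policy. Everything below — the thresholds, the sample rate, and the `route_denial` function — is a hypothetical sketch, not drawn from any real system:

```python
import random

# Hypothetical confidence thresholds; illustrative assumptions only.
HIGH_CONF = 0.90    # auto-decision at or above this model confidence
LOW_CONF = 0.60     # mandatory human review below this
SAMPLE_RATE = 0.10  # audit sample of medium-confidence decisions

def route_denial(confidence: float, rng: random.Random) -> str:
    """Return the review path for a proposed AI denial."""
    if confidence < LOW_CONF:
        return "mandatory_review"
    if confidence >= HIGH_CONF:
        return "auto_decision"
    # Medium confidence: sampled human audit, otherwise pass through.
    return "sampled_review" if rng.random() < SAMPLE_RATE else "auto_decision"

rng = random.Random(0)
print(route_denial(0.95, rng))  # auto_decision
print(route_denial(0.40, rng))  # mandatory_review
```

The design point: only the low-confidence band pays the full cost of human review, while the sampled middle band supplies the accuracy-monitoring data the continuous-monitoring question asks for.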


Card 2 — Round 2

Title: Synthetic Fraud Detection Arms Race

Card Type: Risk Event Intelligence

Classification: Risk Event Intelligence

Source: Fraud Detection Team, Special Investigations Unit, Cybersecurity

Reveal Timing: Round 2 Decision Preparation


The Intelligence:

Your transaction monitoring system has flagged a meaningful increase in synthetic claim submissions over the past quarter. AI-generated provider credentials, member data, and claim narratives are becoming increasingly difficult to distinguish from legitimate claims. Your fraud detection model performance has degraded relative to prior periods — the false negative rate (fraudulent claims passing undetected) has increased, while the false positive rate (legitimate claims flagged as fraud) has also risen, creating provider relations friction.

You are losing the AI arms race. Fraud rings are using generative AI to forge claims faster and more convincingly than your detection models can adapt. Competitor intelligence suggests that at least two rival insurers have deployed more advanced synthetic claim detection capabilities, putting you at a competitive disadvantage — sophisticated fraud rings will preferentially target insurers with weaker detection.

Developing next-generation synthetic fraud detection requires material investment and a 12-18 month development timeline. Options include: (1) build internally with your existing data science team (slower, retains IP), (2) acquire a specialized fraud-detection startup (faster, expensive, integration risk), or (3) partner with a cybersecurity vendor (moderate speed, shared IP, ongoing cost).


Decision Tension:

Do you accelerate investment in next-generation fraud detection now, accepting the material cost and 12-18 month timeline? Do you acquire a specialized startup for speed at the cost of integration risk and premium pricing? Or do you accept higher fraud costs as an ongoing cost of business and reallocate capital to other priorities?


Questions to Consider:

  • What is the ROI threshold for next-generation fraud detection investment? At what fraud loss rate does the arms race become existential rather than incremental?
  • Your fraud detection model degradation is accelerating. What is your exposure if you delay investment by 6-12 months? Can you quantify the cost of inaction?
  • Build vs. buy vs. partner: what are the trade-offs for each approach? Which gives you the fastest path to competitive parity?
  • How does synthetic fraud interact with your claims data quality constraints? If fraudulent claims are contaminating your training data, does your model degradation compound over time?
  • Can you coordinate with industry consortia or law enforcement to share synthetic fraud intelligence, or do competitive dynamics prevent it?
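
The "cost of inaction" question above can be framed with a back-of-envelope model. Every figure below is an invented assumption for illustration — the card gives no loss numbers:

```python
# Back-of-envelope cost-of-inaction model; all inputs are assumed.
MONTHLY_CLAIMS_PAID = 500_000_000  # assumed monthly paid claims ($)
FRAUD_RATE_NOW = 0.010             # assumed current undetected fraud rate
MONTHLY_DEGRADATION = 0.0005       # assumed monthly worsening of that rate

def cumulative_fraud_loss(months: int) -> float:
    """Undetected fraud paid out over `months`, with linear degradation."""
    return sum(
        MONTHLY_CLAIMS_PAID * (FRAUD_RATE_NOW + MONTHLY_DEGRADATION * m)
        for m in range(months)
    )

# Extra fraud paid in months 7-12 if next-gen detection slips 6 months.
delay_cost = cumulative_fraud_loss(12) - cumulative_fraud_loss(6)
print(f"Incremental loss from a 6-month delay: ~${delay_cost / 1e6:.1f}M")
```

Even this linear sketch shows why delay is not free: because the detection gap widens each month, the second half-year costs more than the first, and a compounding model (contaminated training data) would be worse.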


Card 3 — Round 3

Title: Prior Authorization Enforcement Cascade + Disparate Impact Exposure

Card Type: Regulatory Development

Classification: Regulatory Intelligence

Source: Regulatory Affairs, State Insurance Board Contacts, CMS Policy Intelligence

Reveal Timing: Round 3 Decision Preparation


The Intelligence:

CMS and state insurance boards are preparing to announce enforcement actions against major insurers for inappropriate prior authorization denials. The target companies' prior auth algorithms exhibited statistically significant disparate impact — higher denial rates for certain member demographics (age, race, geographic location, socioeconomic proxies). The enforcement actions will include material financial penalties, mandatory algorithm audits, and required member remediation programs.

This enforcement action will trigger industry-wide prior auth audits within 12-18 months. Every major insurer will face regulatory examination of their prior authorization algorithms for fairness, accuracy, and disparate impact. Your internal exposure analysis suggests vulnerabilities in your current prior auth algorithm — specifically, denial rate disparities correlated with member zip code (a known socioeconomic proxy) and age cohort. These disparities may not be intentional, but the regulatory standard is disparate impact, not intent.
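
A first-pass screen for denial-rate disparities of this kind can be sketched with the four-fifths rule, a heuristic borrowed from US employment law that regulators sometimes use as a starting point. The cohort names and denial counts below are invented for illustration:

```python
# Illustrative disparate-impact screen using the four-fifths rule.
# Cohort names and denial counts are invented for this sketch.
groups = {
    # cohort: (denials, total prior-auth requests)
    "zip_cohort_A": (1_200, 10_000),  # favored cohort
    "zip_cohort_B": (3_000, 10_000),  # suspected disadvantaged cohort
}

denial_rate = {g: d / n for g, (d, n) in groups.items()}
approval_rate = {g: 1 - r for g, r in denial_rate.items()}

# Four-fifths rule: flag if the disadvantaged cohort's approval rate
# falls below 80% of the favored cohort's approval rate.
ratio = approval_rate["zip_cohort_B"] / approval_rate["zip_cohort_A"]
flagged = ratio < 0.80
print(f"approval-rate ratio = {ratio:.3f}, flagged = {flagged}")
```

Note the screen tests outcomes, not intent — mirroring the card's point that the regulatory standard is disparate impact, so an unintentional zip-code correlation can still flag.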

Proactive remediation requires material investment: comprehensive bias auditing of all coverage determination algorithms, algorithm redesign where disparities are found, member remediation for historically affected populations, and enhanced governance and reporting infrastructure. Reactive posture (waiting for your audit) risks larger penalties, mandatory corrective action under regulatory supervision, and reputational damage.


Decision Tension:

Do you immediately launch proactive bias auditing and algorithm remediation across all coverage determination AI — accepting the material cost and operational disruption — to position ahead of the regulatory wave? Or do you monitor the enforcement actions against competitors and react when your audit arrives, preserving capital but accepting higher regulatory and reputational risk?


Questions to Consider:

  • Is early bias remediation a strategic advantage (early mover credibility with regulators, reduced penalty exposure) or just an expense with no competitive benefit?
  • What is your actual disparate impact exposure? Can you quantify the scope of affected member populations and potential remediation costs?
  • How do you communicate proactive remediation to regulators, members, and investors? Does disclosure create legal exposure, or does it demonstrate good faith?
  • If your prior auth AI requires fundamental redesign (not just parameter tuning), what is the impact on your $22M savings and operational efficiency gains?
  • How do you balance prior auth compliance costs against fraud detection investment needs? Both require material capital in the same timeframe. What gets funded first?