
Healthcare Provider — Private Cards

Large US Health System

Healthcare Provider Private Information Cards

Facilitator Note

Print this document and separate at page breaks. Distribute one card per round, face-down, at the start of each round's decision preparation phase. Cards are confidential to the Healthcare Provider participant. Cards accumulate — the participant keeps all cards and may refer to them in later rounds.


Card 1 — Round 1

Title: Clinical Diagnostic AI Workflow Friction + Prior Authorization Liability Exposure

Card Type: Operational Intelligence

Classification: Operational Intelligence

Source: Clinical Operations, Radiology Department, Risk Management

Reveal Timing: Round 1 Decision Preparation


The Intelligence:

Your diagnostic radiology AI pilot shows meaningful improvement in diagnostic accuracy over the radiologist baseline. However, documentation analysis reveals that radiologists spend significantly more time validating AI-generated reports than reviewing standard imaging studies. Radiologists report high cognitive load despite the high accuracy: the system improves diagnostic performance but worsens radiologist workflow. Early warning: accuracy does not equal clinical adoption.

Separately, your prior authorization prediction tool (designed to help physicians anticipate insurer denials) has surfaced an internal finding: the tool's underlying model identified cases in which the affiliated health plan's prior auth algorithm recommended denial of clinically appropriate care. The cases are rare but material. The clinical risk is real: patient harm from an inappropriate denial creates liability exposure for you as the treating provider (physician practice liability, malpractice risk) and for the payer (coverage appropriateness liability). Implementing mandatory human review for high-risk denial predictions would add physician burden but nearly eliminate false negatives.


Decision Tension:

Your diagnostic AI improves accuracy but increases physician validation burden. Do you pause the radiology pilot and redesign to minimize radiologist workflow friction? Push through adoption friction and accept higher physician burden during transition? Or pivot to different clinical domains (pathology, EKG interpretation) with different workflow profiles?

On prior auth, your prediction tool has revealed liability risk in the coverage determination process. Do you escalate to the payer side and demand algorithm changes? Implement your own human review overlay for flagged cases (adding physician burden)? Or accept the risk and focus your limited capital on clinical AI instead?


Questions to Consider:

  • What is the minimum workflow burden reduction required for physician adoption of diagnostic AI? What design changes reduce validation overhead without sacrificing accuracy?
  • For the prior auth finding, what is your actual liability exposure if a patient is harmed by an inappropriate denial you identified but did not escalate?
  • Can you design tiered review for prior auth predictions (high-confidence predictions require no action; low-confidence predictions trigger physician review) to manage burden?
  • How do you operationalize continuous monitoring of diagnostic AI performance and physician workflow impact? What metrics trigger escalation or redesign?


Card 2 — Round 2

Title: Physician Advisory Board Resistance to Clinical AI Deployment

Card Type: Risk Event Intelligence

Classification: Risk Event Intelligence

Source: Physician Advisory Board, Medical Staff Leadership, Chief Medical Officer

Reveal Timing: Round 2 Decision Preparation


The Intelligence:

Your physician advisory board is expressing significant and organized resistance to clinical AI deployment. The resistance is not fringe — it represents senior physicians and academic faculty who carry substantial influence over clinical practice standards at your facilities. Key concerns:

  1. Autonomy and clinical judgment: Diagnostic AI is perceived as reducing physician autonomy and substituting algorithmic recommendations for clinical expertise. Senior physicians view this as fundamentally threatening to the physician-patient relationship and professional identity.

  2. Liability exposure: Physicians are personally liable for clinical decisions. If an AI-recommended diagnosis contributes to patient harm, liability cascades to the treating physician and their malpractice insurer. Physicians are asking: who bears liability when they follow AI recommendations that turn out to be wrong? Their malpractice insurers are asking the same question.

  3. Administrative burden perception: Prior authorization prediction tools and documentation AI are perceived by some physicians as "more technology to manage" rather than burden reduction. Physician leaders report that early-career physicians are more receptive, but senior physicians and department heads are actively discouraging adoption.

The advisory board has formally requested a 6-month moratorium on new clinical AI deployments pending development of a physician-led governance framework, clarification of liability, and a workflow burden assessment.


Decision Tension:

Do you accept the moratorium and invest 6 months in physician-led governance development — potentially losing competitive ground but building sustainable adoption? Do you reject the moratorium and accelerate deployment, risking physician attrition and organized resistance? Or do you negotiate a middle path — continue existing pilots under enhanced physician oversight while developing the governance framework in parallel?


Questions to Consider:

  • What would credibly address physician concerns about autonomy and liability? Transparent algorithm design? Explicit liability frameworks? Limited scope of deployment to advisory/support roles only?
  • How much physician consensus is required for sustainable clinical AI adoption? Can you deploy with partial buy-in, or does resistance from senior physicians poison adoption across the organization?
  • What role should physician leadership play in AI governance? Advisory only, or veto authority over clinical AI deployments?
  • How do you sequence physician engagement — do you invest in early advisory input now, or push through and address concerns later? What are the costs of each approach?


Card 3 — Round 3

Title: FDA Clinical AI Validation Requirements + CMS Conditions of Participation

Card Type: Regulatory Development

Classification: Regulatory Intelligence

Source: Regulatory Affairs, FDA Pre-Submission Contacts, CMS Policy Team

Reveal Timing: Round 3 Decision Preparation


The Intelligence:

Through FDA and CMS contacts, you have received advance notice of upcoming regulatory guidance that will materially affect your clinical AI strategy:

FDA (Q2 2026): New clinical validation requirements for diagnostic AI tools are imminent. External validation will be required (internal validation insufficient). Bias audits across demographic groups will be mandatory. Real-time performance monitoring with reporting obligations will be required post-deployment. Implementation estimate: 12-18 months of pre-market work per diagnostic AI tool. Estimated compliance cost: $5-10M per tool.

CMS (Q2 2026 draft, finalizing 2027): New Conditions of Participation for AI-assisted diagnostics will require demonstrated clinical benefit (not just accuracy), governance protocols with physician oversight, and ongoing algorithm performance monitoring with CMS reporting. Hospitals that deploy clinical AI without meeting these conditions risk losing CMS certification — which would be catastrophic for Medicare/Medicaid reimbursement.

The regulatory clarity you have been waiting for is arriving, but it comes with a substantial compliance burden. Early movers who held FDA pre-submission meetings and began validation work 12-18 months ago will have a significant head start. Those who waited for final guidance face delays of 18-24 months before any clinical AI can reach deployment.


Decision Tension:

Do you accelerate clinical AI investment now to position ahead of regulatory requirements — accepting the $50M+ multi-year compliance cost for your clinical AI portfolio? Or do you slow clinical AI and redirect capital to administrative AI (coding, documentation, prior auth) where regulatory burden is lower and ROI is faster? Do you file with FDA immediately (accepting risk that guidance may shift) or wait for final guidance (slower but more certain)?


Questions to Consider:

  • If you invested in early FDA pre-submission engagement and academic validation partnerships in earlier rounds, you are ahead. If not, you face 18-24 month delays. How does your prior positioning affect this decision?
  • What is the competitive value of being an early clinical AI deployer vs. a fast follower? In healthcare, is first-mover advantage real or illusory?
  • How do you manage investor and board expectations for AI ROI given longer regulatory timelines for clinical AI?
  • Do you prioritize low-risk administrative AI (medical coding, documentation) for near-term ROI, or invest in clinical diagnostic AI for long-term differentiation?
  • How do you sequence regulatory filings across your clinical AI portfolio? Which tools file first?