Finance — Private Cards
Large Universal Bank
Finance Private Information Cards
FACILITATOR NOTE: Print this document and separate it at the page breaks. Distribute one card per round, face-down, at the start of each round's Decision Preparation phase. Cards are confidential to the Finance participant. Cards accumulate: the participant keeps all cards and may refer to them in later rounds.
Card 1 — Round 1
Title: Underwriting Explainability Audit — Proxy Variable Risk & Regulatory Exposure
Card Type: Regulatory Development
Classification: Regulatory Intelligence
Source: Internal Model Risk Management Team; Regulatory Affairs
Reveal Timing: Round 1, start of Decision Preparation phase
The Intelligence:
Your AI-driven insurance underwriting model outperforms traditional actuarial methods and is your strongest near-term business case for AI deployment. However, an internal explainability audit has surfaced material concerns. The model relies on complex ensemble methods that regulators may challenge under current examination standards. More critically, the model uses proxy variables — combinations of geographic, behavioral, and demographic features — that correlate with protected characteristics. Your state insurance regulatory contacts have informally flagged that models using similar proxy structures at competitor institutions are drawing scrutiny.
Your regulatory affairs team assesses a material risk of enforcement action if you deploy this model in its current form without further refinement. The accuracy advantage of the ensemble model over an interpretable alternative is 6-8 percentage points — meaningful in underwriting economics. An interpretable model would be regulator-friendly but would sacrifice competitive edge.
Your Model Risk Management team has proposed a dual-architecture approach: deploy the interpretable model for regulatory-facing underwriting decisions, and use the high-accuracy ensemble model internally for risk management and portfolio analytics where explainability requirements are lower. This is technically feasible but operationally complex — maintaining two parallel model stacks increases cost, governance burden, and the risk of model drift between the two.
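For participants who want to ground the proxy-variable discussion, here is a minimal sketch of how a model-risk team might screen a candidate feature for proxy behavior. The association measure (point-biserial correlation) is one common choice, and all data below are illustrative, not the bank's actual features or method:

```python
# Illustrative proxy screening (all data hypothetical): flag features whose
# values are strongly associated with a protected attribute, even when the
# attribute itself is excluded from the model.
from statistics import mean, pstdev

def point_biserial(feature, protected):
    """Point-biserial correlation between a numeric feature and a 0/1 attribute."""
    grp1 = [f for f, p in zip(feature, protected) if p == 1]
    grp0 = [f for f, p in zip(feature, protected) if p == 0]
    n1, n0, n = len(grp1), len(grp0), len(feature)
    s = pstdev(feature)  # population standard deviation of the feature
    return (mean(grp1) - mean(grp0)) / s * ((n1 * n0) / n**2) ** 0.5

# Hypothetical feature: a geographic risk score that differs sharply by group.
feature   = [0.9, 0.8, 0.85, 0.3, 0.2, 0.25]
protected = [1,   1,   1,    0,   0,   0]

r = point_biserial(feature, protected)
print(f"proxy association: {r:.2f}")  # prints: proxy association: 0.99
```

A feature this strongly associated with a protected attribute is a candidate proxy even though the attribute never enters the model, which is the structural concern the audit raised.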
Decision Tension:
Regulatory compliance versus competitive accuracy. Interpretable models sacrifice meaningful accuracy. Do you deploy the high-accuracy ensemble for internal risk management only and present the interpretable model to regulators, taking on the dual-model complexity, cost, and governance risk? Or do you accept the accuracy penalty across the board and deploy a single interpretable model, simplifying governance at the cost of competitive disadvantage?
Questions to Consider:
- What is your regulatory tolerance? Can you defend a dual-tier underwriting architecture (high-accuracy internal, compliant-but-less-accurate external) if regulators challenge why you use different models for different purposes?
- What is the business cost of 6-8 points of accuracy loss in underwriting? How does that translate to loss ratios, approval rates, and ROE impact?
- How do you sequence the regulatory engagement? Do you proactively brief the OCC and state insurance regulators on your model governance framework, or wait for examiners to come to you?
- If a competitor deploys a high-accuracy model and faces no immediate regulatory action, does that change your calculus?
SHARED INTELLIGENCE: Consulting and Law industry participants have received related intelligence about AI governance and regulatory compliance challenges across professional services. Their perspectives on regulatory strategy and governance frameworks may surface during cross-industry discussion.
Card 2 — Round 2
Title: Synthetic Fraud Detection Arms Race — Model Degradation & Competitive Intelligence
Card Type: Risk Event Intelligence
Classification: Risk Event Intelligence / Competitive Intelligence
Source: Fraud Detection Team; Cybersecurity Operations; Competitive Intelligence Unit
Reveal Timing: Round 2, start of Decision Preparation phase
The Intelligence:
Your transaction monitoring system has flagged a meaningful increase in synthetic identity applications over the past quarter. AI-generated work histories, income documentation, and identity papers are becoming increasingly difficult to distinguish from legitimate documents. Your fraud detection model — which was best-in-class 18 months ago — has experienced measurable performance degradation. False negative rates have increased as adversarial AI generates more sophisticated forgeries.
You are losing the AI arms race. Fraudsters are using generative AI to create synthetic identities and forge documentation faster than your detection models can adapt. The estimated annual fraud loss exposure if current trends continue is material to your P&L. Your fraud team estimates that developing next-generation synthetic identity detection capability requires significant investment and a 12-18 month development timeline.
Competitive intelligence adds urgency. Two of your top-5 banking peers have deployed more advanced synthetic identity detection systems, reportedly achieving meaningfully better detection rates. One acquired a specialized fraud-detection startup; the other partnered with a cybersecurity vendor. Your current detection rate is falling behind competitive benchmarks, and the gap is widening each quarter as adversarial AI improves.
Decision Tension:
Investment urgency versus capital discipline. Developing next-generation fraud detection requires material capital commitment on a 12-18 month timeline with uncertain ROI. Do you accelerate investment now to close the competitive gap, or accept elevated fraud losses as an ongoing cost of business while deploying capital elsewhere? Build versus buy: do you invest in internal R&D, acquire a specialized startup, or partner with a cybersecurity vendor? Each path has different cost, timeline, and capability trade-offs.
Questions to Consider:
- What is the ROI threshold for next-generation fraud detection investment? At what false negative rate does synthetic fraud become an existential cost rather than a manageable loss?
- Build, buy, or partner? Internal R&D preserves IP control but is slowest. Acquisition is fastest but most expensive and creates integration risk. Partnership is flexible but creates vendor dependency. What fits your risk profile?
- How do you prioritize fraud detection investment against other AI initiatives (underwriting, wealth management, compliance infrastructure) competing for the same capital and talent?
- If competitors have already deployed superior detection, what is the cost of waiting another 6-12 months? Does the competitive gap create customer attrition risk or regulatory exposure?
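One way to frame the ROI-threshold question is a back-of-envelope expected-loss estimate. Every figure below is hypothetical; participants should substitute their own assumptions about volume, synthetic-identity prevalence, and miss rate:

```python
# Back-of-envelope annual synthetic-fraud loss estimate (all figures hypothetical).
applications = 2_000_000        # annual application volume
synthetic_rate = 0.01           # assumed share of applications that are synthetic
false_negative_rate = 0.20      # share of synthetic identities the model misses
avg_loss_per_account = 15_000   # assumed average charge-off per missed account

expected_loss = (applications * synthetic_rate
                 * false_negative_rate * avg_loss_per_account)
print(f"Expected annual loss: ${expected_loss:,.0f}")  # about $60 million
```

Because the loss scales linearly with the false negative rate, halving the miss rate halves the exposure, which is the quantity to weigh against the cost of the build, buy, or partner options above.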
Card 3 — Round 3
Title: CFPB Fair Lending Enforcement Action — Audit Cascade & Model Vulnerability Exposure
Card Type: Regulatory Development
Classification: Regulatory Intelligence
Source: Regulatory Affairs; Board Contacts (OCC, Federal Reserve); General Counsel
Reveal Timing: Round 3, start of Decision Preparation phase
The Intelligence:
The CFPB is preparing to announce an enforcement action against a major fintech competitor for AI-driven lending discrimination. The company's underwriting model exhibited statistically significant disparate impact against protected classes in mortgage and auto lending. The enforcement action will include substantial penalties, mandatory model remediation, and a multi-year compliance monitoring agreement.
This enforcement action will trigger fair lending audits across all major banks within 12-18 months. Your regulatory contacts confirm that OCC and Federal Reserve examiners are already developing enhanced AI model examination procedures. Every institution deploying AI in lending or underwriting decisions should expect heightened scrutiny.
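Participants unfamiliar with how disparate impact is commonly screened may find a worked example of the four-fifths (80%) rule useful. The approval counts below are hypothetical:

```python
# Minimal sketch of the four-fifths (80%) rule, a common screening test for
# disparate impact in lending decisions. All figures are hypothetical.

def approval_rate(approved, applied):
    return approved / applied

def adverse_impact_ratio(protected_rate, reference_rate):
    """Ratios below 0.8 commonly trigger regulatory scrutiny."""
    return protected_rate / reference_rate

# Hypothetical figures: 300 of 500 protected-class applicants approved,
# versus 450 of 600 reference-group applicants.
protected = approval_rate(300, 500)   # 0.60
reference = approval_rate(450, 600)   # 0.75
ratio = adverse_impact_ratio(protected, reference)
print(f"Adverse impact ratio: {ratio:.2f}")  # prints: Adverse impact ratio: 0.80
```

A ratio at or just below the 0.8 threshold is exactly the borderline situation where the choice between proactive remediation and watchful waiting becomes contested.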
Your internal exposure analysis — conducted by your Model Risk Management team in response to Card 1 intelligence — has identified vulnerabilities in your underwriting systems. The proxy variable concerns flagged in Round 1 remain partially unresolved. Your remediation options and their estimated costs:
- Full model refactoring (replace ensemble with fully interpretable architecture): Significant investment; 12-18 month timeline; resolves regulatory concern but sacrifices accuracy.
- Proxy variable remediation (remove or constrain problematic features): Moderate investment; 6-9 month timeline; reduces but does not eliminate disparate impact risk.
- Customer remediation program (proactive outreach to potentially affected borrowers): Material cost; demonstrates good faith; reduces penalty severity if enforcement action occurs.
- Enhanced compliance infrastructure (independent model audit, ongoing bias monitoring, regulatory reporting): Substantial annual cost; necessary regardless of which remediation path you choose.
Decision Tension:
Proactive remediation versus reactive monitoring. Do you immediately launch comprehensive model remediation and customer outreach, which is expensive but demonstrates regulatory good faith and may reduce enforcement severity? Or do you monitor the enforcement action against your competitor, learn from their experience, and remediate only what examiners specifically require, which is cheaper in the near term but risks harsher treatment if regulators view your response as inadequate? The first-mover question cuts both ways: early remediation is expensive and may fix problems examiners would never have flagged, while late remediation after an examination finding carries consent-order risk and reputational damage.
Questions to Consider:
- What is your proactive versus reactive stance on fair lending compliance? Is early, comprehensive remediation a strategic advantage (demonstrating industry leadership and regulatory good faith) or just premature expense?
- How do you sequence remediation? Full model refactoring takes 12-18 months. Proxy variable remediation is faster but incomplete. Do you do both? In what order?
- What is the cost of a consent order versus the cost of proactive remediation? Consider not just financial penalties but reputational damage, customer attrition, and impact on deposit growth and wealth management retention.
- How does this enforcement landscape affect your AI deployment pace for other initiatives? Does regulatory uncertainty cause you to slow down across the board, or do you accelerate governance investment to enable continued deployment?
- How do you communicate this regulatory risk to your board, investors, and employees without triggering panic or premature market reaction?