
Facilitator Overview & Runbook#

Exercise Overview#

Project Threshold is an 8-hour tabletop exercise designed to stress-test the near-term (2026-2030) economic implications of AI across eleven US industries. Participants, each representing one or more industries, navigate four sequential rounds under different AI deployment scenarios, make strategic decisions individually, and evaluate industry-level implications through cross-industry discussion.

Key Parameters#

| Parameter | Value |
| --- | --- |
| Participants | 5-11 industry representatives + 1-2 facilitators |
| Industry Representatives | 5-11 individual participants. Each selects 1 or more industries (recommend 2). One decision worksheet per industry per round. Ideal: 8+ industries assigned; fewer is possible, with the facilitator playing unassigned industries. |
| The 11 Industries | Retail, CPG, Healthcare Provider, Healthcare Payer, Finance, Consulting, Law, Manufacturing, Logistics, Big Tech, B2B/B2C SaaS |
| Rounds | 4 sequential periods (0-6 mo, 6-18 mo, 18-30 mo, 30-48 mo) |
| Format | Single 8-hour event |
| AI Scenario | Baseline Type A (configurable; see separate technical specs) |
| Decision Archetypes | 15 common strategic patterns with plausibility gates |

The 11 Industries#

| Industry | Key Concerns |
| --- | --- |
| Retail | Demand forecasting, supply chain, labor displacement, competitive AI dynamics |
| CPG | Demand forecasting, supply chain, labor displacement, competitive AI dynamics |
| Healthcare Provider (hospital systems) | Clinical AI workflows, FDA approval, patient safety |
| Healthcare Payer (insurers) | Claims automation, coverage decisions, patient safety |
| Finance (banking + insurance) | Underwriting/fraud AI, regulatory approval, labor transition |
| Consulting (Big Four/MBB) | Copilot adoption, service pricing, labor transition |
| Law (AmLaw 50) | Billable hour disruption, bar rule uncertainty, malpractice liability |
| Manufacturing | Production optimization, labor transitions, capex allocation |
| Logistics | Autonomous vehicles, labor transitions, capex allocation |
| Big Tech (cloud, ads, devices, enterprise software; excludes AI lab/model development) | AI feature integration, pricing pressure, margin management |
| B2B/B2C SaaS (Workday/Salesforce-class) | AI feature integration, pricing pressure, competitive threats from AI-native startups |

Note on Big Tech scope: Big Tech covers cloud infrastructure, advertising, devices, and enterprise software. AI lab and foundation model development decisions are excluded from participant scope and introduced via facilitator injects only.

Facilitator Role & Responsibilities#

| Phase | Responsibilities |
| --- | --- |
| Pre-Exercise | Assign industry roles (ideal: 8+ industries assigned; fewer possible; facilitator plays unassigned industries), distribute decision packets and the AI Adoption Arc Foundation phase (pre-read), brief scenarios |
| During Exercise | Manage injects, distribute Private Cards (from Private Cards) and AI Adoption Arc phase handouts (from AI Adoption Arcs) at the start of each round, track decision quality, score per adjudication rules, challenge implausible proposals |
| Market Shock & Collective Bonus | Announce Facilitator Market Shock constraints (R2 only); facilitate the optional Collective Bonus during cross-industry discussion (all rounds) |
| Adjudication | Apply banded scoring ({-2, 0, +2}) plus red-flag triggers (+/-3); track individual decisions per industry |
| Cross-Industry Discussion | Facilitate the 27-min discussion period in each round for cross-industry dynamics and spillover analysis; use discussion prompts to surface synergies and tensions across industries |
| Debrief | Lead the 60-min final debrief: individual reflections (15 min) + cross-industry discussion (25 min) + no-regrets actions (20 min) |

Private Cards & AI Adoption Arc Distribution#

Distribute the appropriate materials to each participant at the start of each round, before the scenario read.

| Round | Private Cards (from Private Cards) | AI Adoption Arc (from AI Adoption Arcs) |
| --- | --- | --- |
| Pre-Exercise | None | Foundation phase included in pre-read packets |
| Round 1 | Card 1 -- shared within industry clusters (e.g., Retail and CPG both receive Consumer Card 1) | Confirm all participants have Foundation phase |
| Round 2 | Card 2 -- unique per industry | Acceleration phase handout |
| Round 3 | Card 3 -- unique per industry | Reckoning phase handout |
| Round 4 | No Private Cards | Normalization phase handout |

Exercise Timeline#

The master timeline for the 8-hour exercise is in 01_Timeline.md. Refer to that document for the complete minute-by-minute schedule, round structure, sub-block durations, debrief structure, and facilitator checklist. All clock times, break placements, and duration allocations are defined there.

Key reference values: Rounds 1–4 at 65 min each (5 + 3 + 15 + 3 + 12 + 27). Debrief at 60 min (15 + 25 + 20). Exercise runs 8:30 AM – 4:30 PM.


Practice Micro-Round (Before Round 1)#

Purpose: Teach participants the individual decision format, scoring model, and specificity checklist in 5 minutes flat.

Inject: "A major consultancy just announced that AI copilots have reduced junior hiring by 40%. Entry-level salaries are down 15%. What do you do?"

Decision Format Example: "The Consulting participant proposes: Deploy GitHub Copilot to 25% of consulting staff (pilot phase, major US offices only) over 90 days. Pair with mandatory training. Commit to zero involuntary headcount reduction in pilot phase. Budget: $2M for licensing and training. Success metric: Track analyst productivity and time-to-billable-hours."

Facilitator Scoring (Live):

| Dimension | Score | Rationale |
| --- | --- | --- |
| Strategic Fit | 0 | Defensive move; doesn't accelerate growth, but acknowledges competitive pressure |
| Execution Risk | +1 | Copilot adoption is proven; 90-day timeline is tight but feasible with modern infrastructure |
| Tail Risk | +1 | Pilot phase limits downside; if adoption fails, can pivot; if it succeeds, can scale |
| Total | +2/6 | Acceptable decision. Participant sees how scoring works and why specificity matters |

Facilitator debrief: "Notice how the participant specified WHO (25% of staff), WHAT (GitHub Copilot), WHERE (major US offices), WHEN (90 days), HOW MUCH ($2M), HOW (training plan), RISK (pilot-only, no layoffs). That specificity made scoring easy. Vague proposals (e.g., 'accelerate AI adoption') get challenged. This is individual decision-making -- you're deciding alone for your assigned industries."


Running with Minimal Staff#

Project Threshold V7.4 can run with 1 facilitator + 1 helper or 1 facilitator solo:

| Configuration | Role | Responsibilities |
| --- | --- | --- |
| 1 Facilitator + 1 Helper (Recommended) | Facilitator | Reads scenarios, manages injects, distributes Private Cards and AI Adoption Arc handouts, scores decisions, announces Facilitator Market Shock (R2), facilitates the optional Collective Bonus and cross-industry discussion, leads debrief |
| | Helper | Tracks time, maintains scoreboard, tallies Collective Bonus nominations, handles logistics (breaks, AV, materials) |
| 1 Facilitator Solo (Possible) | Facilitator (all duties) | Pre-record scenario reads (or use text slides). Use a timer app with hard alarms for round transitions. Fast-track scoring via the specificity checklist and industry baselines; don't debate edge cases. Distribute scorecards post-event rather than live. |

Facilitator Discussion Prompts: Cross-Industry Synergies & Tensions#

Use these prompts during the 27-minute cross-industry discussion period to surface dynamics across industries. Scan the table below for the relevant cluster and pick 1-2 prompts per round.

| Industry Cluster | Industries | Discussion Prompts |
| --- | --- | --- |
| Consumer | Retail, CPG | (1) How does Retail's AI-driven inventory optimization affect CPG's demand forecasting and production planning? (2) If Retail cuts shelf space for AI-optimized assortment, how does CPG respond? |
| Healthcare | Provider, Payer | (1) How do Provider AI diagnostic investments interact with Payer claims automation? Who bears the risk of AI errors? (2) If Payer automates prior authorization, does that help or hinder Provider clinical AI adoption? |
| Finance & PS | Finance, Consulting, Law | (1) How does Finance's AI underwriting capability affect demand for Consulting advisory services? (2) If Law firms adopt AI-generated legal research, how does that change Consulting's competitive landscape for regulatory advisory? (3) How does the shift from billable hours (Law) interact with outcome-based pricing (Consulting)? |
| Supply Chains | Manufacturing, Logistics | (1) How does Manufacturing's predictive maintenance success affect Logistics demand and routing? (2) If autonomous vehicles disrupt Logistics, how does Manufacturing's supply chain strategy adapt? |
| Software & Tech | Big Tech, B2B/B2C SaaS | (1) How does Big Tech's AI infrastructure investment create or destroy opportunity for SaaS incumbents? (2) If Big Tech bundles AI features into cloud platforms, what happens to standalone SaaS pricing? |
| Cross-Industry | Big Tech, Consulting, Law | How does Big Tech's enterprise AI platform strategy affect Consulting and Law firms' competitive positioning? |
| Cross-Industry | Manufacturing, Logistics, Retail, CPG | If Manufacturing and Logistics automate aggressively, how does that flow through to Retail and CPG supply chain costs? |
| Cross-Industry | Finance, Healthcare Provider, SaaS | How does Finance's risk appetite for AI-driven industries affect capital availability for Healthcare Provider and SaaS? |

Tips for Facilitating#

Decision Specificity Checklist (Apply Before Every Explicit Score)#

Before scoring an explicit decision, verify the participant specified:

| Field | What to Check |
| --- | --- |
| INDUSTRY | Which industry? (Retail? CPG? Finance? Healthcare Provider? Consulting? Law? etc.) |
| WHO | Which team owns it? Is there a committed sponsor? |
| WHAT | Specific capability/action? (e.g., "GitHub Copilot," not "deploy AI") |
| WHERE | Scope: pilot, function, geography, enterprise? |
| WHEN | Timeline realistic for scope? Regulatory approval needed? |
| HOW MUCH | Headcount, capex, revenue impact clear? |
| HOW | Talent plan? Integration detail? Rollback plan? |
| RISK | Does the participant acknowledge execution and tail risk? |
| BANDS | Spend/Commitment? Time-to-Impact? Execution Complexity? Dependency? Scale? |

If >2 items missing: "I need more specificity before I score. Can you clarify [issue]? And how would you classify this in terms of Spend, Complexity, and Dependency? That changes the execution risk profile."
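The "more than 2 items missing" trigger is mechanical enough to sketch as code. This is a hypothetical helper for illustration only (not part of the exercise materials); field names follow the checklist above.

```python
# Specificity check: a decision is scoreable without a challenge only
# if at most 2 checklist fields are missing.
CHECKLIST = ["INDUSTRY", "WHO", "WHAT", "WHERE", "WHEN",
             "HOW MUCH", "HOW", "RISK", "BANDS"]

def missing_fields(decision: dict) -> list:
    """Return the checklist fields the participant left blank."""
    return [f for f in CHECKLIST if not decision.get(f)]

def needs_clarification(decision: dict) -> bool:
    """True if more than 2 checklist items are missing."""
    return len(missing_fields(decision)) > 2
```

A vague proposal like "accelerate AI adoption" fills only WHAT, leaves 8 fields blank, and triggers the clarification request.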

After Round Scoring: Apply Base Case Fallbacks#

After scoring all explicit industry decisions:

  1. Identify any industry that received no explicit action from any participant
  2. Reference the fallback bank (Base Case Fallback Bank)
  3. Apply the pre-defined fallback score (deterministic, small deltas: +/-1 per dimension)
  4. Post fallback score alongside explicit scores for transparency
  5. Fallback industries receive standard base case scores

Example facilitator announcement: "Participant covering Retail and CPG submitted one explicit decision on Retail (+2/6). CPG received no explicit action. Applying base case fallback: CPG = +1/6 (defensive cost-reduction move). Both scores now posted."
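The five-step fallback procedure above reduces to a simple lookup: explicit scores win, and everything else comes from the bank. A minimal sketch, using illustrative placeholder values rather than the real Base Case Fallback Bank:

```python
INDUSTRIES = ["Retail", "CPG", "Healthcare Provider", "Healthcare Payer",
              "Finance", "Consulting", "Law", "Manufacturing", "Logistics",
              "Big Tech", "B2B/B2C SaaS"]

def round_scores(explicit: dict, fallback_bank: dict) -> dict:
    """Explicit scores take precedence; any industry without an explicit
    decision receives its pre-defined fallback score from the bank."""
    return {ind: explicit.get(ind, fallback_bank.get(ind, 0))
            for ind in INDUSTRIES}

# Placeholder bank values -- consult the actual Base Case Fallback Bank.
bank = {ind: 1 for ind in INDUSTRIES}
scores = round_scores({"Retail": 2}, bank)  # Retail explicit, rest fallback
```

Posting the full `scores` dict alongside explicit decisions matches step 4 (transparency): every industry has a number every round.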

Challenging Implausible Proposals#

Use this template:

"I appreciate the ambition. Let me ask for clarity on [specific issue]. Your proposal as stated is [current scope]. That changes the execution risk significantly. Here's what I can score: [narrower scope with realistic timeline]. That gives you [Strategic Fit] + [Execution Risk] + [Tail Risk] = [Total]. Alternatively, if you want [different tradeoff], we can score [different proposal]. Which direction?"

Red-Flag Triggers (When to Challenge)#

| Red-Flag Category | Examples (Challenge If Present) |
| --- | --- |
| Timeline misalignment | Enterprise AI in <6 months without a pilot; FDA approval in <12 months; major M&A close in <3 months |
| No execution risk discussion | Missing talent plan, integration detail, or regulatory pathway |
| Industry constraint violations | Healthcare AI without an FDA plan; trading AI without circuit breakers; Retail with no labor transition plan; legal AI without a bar rule compliance plan; consulting AI without client confidentiality safeguards |
| Unhedged tail risk | Autonomous systems with no rollback; >30% headcount cut without a retention plan; full deployment with no pilot |
| Implausible synergies | M&A with >100% synergies; supply chain cuts >20% of COGS; revenue growth >50% without market expansion |

Industry Health Signals (End of Round Only)#

  • Score first (3 dimensions, banded {-2, 0, +2})
  • Apply fallbacks to any industries without explicit participant decisions
  • At end of round, update cumulative scores for all 11 industries
  • Look up Industry Condition from the condition band table (see Industry Health Signal Tables)
  • Announce conditions at start of next round (~2 minutes)
  • Apply constraints for any industry in Headwind or Crisis

Time Management#

Hard Stops (Non-Negotiable)#

| Activity | Duration | Rule |
| --- | --- | --- |
| Individual decision prep | 15 min per round | Individuals decide faster than teams |
| Collective Bonus (Optional) | ~5 min per round (within discussion period) | Participants nominate industries for +2 (strong) or -2 (risky); takes effect if 3+ agree |
| Facilitator Market Shock | 3 min (R2 only) | Facilitator selects 2-3 industries and imposes one constraint each; no negotiation |
| Cross-industry discussion | 27 min per round (uniform) | Primary learning period; includes the optional Collective Bonus |
| Debrief | 60 min total | Non-negotiable. If time is short, cut earlier rounds or optional injects, never the debrief |

If Falling Behind#

| Priority | Action | Detail |
| --- | --- | --- |
| 1 | Skip optional injects | Keep core injects #1-#8; defer others |
| 2 | Fast-track decisions | Use industry baseline ranges directly; don't debate edge cases |
| 3 | Reduce cross-industry discussion | Hit key questions quickly; defer nuance to the debrief |
| 4 | Abbreviate individual reflections in debrief | Each participant: 2 min (vs. 3 min); keep cross-industry discussion and no-regrets actions |

If Ahead of Schedule#

| Priority | Action | Detail |
| --- | --- | --- |
| 1 | Add optional injects | Pull from the packet |
| 2 | Extend cross-industry discussion | Deep-dive on spillovers and industry interdependencies |
| 3 | Extend debrief | Explore tail risks or policy implications |
| 4 | Run a war game | Ask participants to role-play responses to one more 2028 scenario |

Scoring Mechanics (Quick Reference)#

| Parameter | Value / Rule |
| --- | --- |
| Dimensions | Strategic Fit, Execution Risk, Tail Risk (each -3 to +3, but typically {-2, 0, +2}) |
| Banded framework inputs | Spend/Commitment, Time-to-Impact, Execution Complexity, Dependency, Scale |
| Banded scoring during play | {-2, 0, +2} for each dimension (default) |
| Red-flag exception | If a band red-flag combination fires, unlock +/-3 exception scoring |
| Total per explicit decision | Sum of the three dimensions (range: -6 to +6; typical: -2 to +6) |
| Fallback industry scoring | Pre-defined small deltas (+/-1 per dimension) applied directly from the fallback bank |
| Aggregate industry score | Sum of explicit decisions + fallback decisions (per industry) |
| Industry Health | Cumulative aggregate scores determine Industry Condition bands: Surge (+15 or better), Tailwind (+6 to +14), Steady (-5 to +5), Headwind (-14 to -6), Crisis (-15 or worse), with mechanical consequences (see Industry Health Signal Tables) |
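The Industry Health row is a threshold lookup over cumulative scores. A minimal sketch of that mapping, using the band boundaries from the table above:

```python
def industry_condition(cumulative: int) -> str:
    """Map a cumulative aggregate score to its Industry Condition band."""
    if cumulative >= 15:
        return "Surge"
    if cumulative >= 6:
        return "Tailwind"
    if cumulative >= -5:
        return "Steady"
    if cumulative >= -14:
        return "Headwind"
    return "Crisis"
```

This is the lookup the facilitator performs at end of round before announcing conditions; e.g. a cumulative score of +7 lands in Tailwind, while -6 tips an industry into Headwind and its constraints.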

Materials Checklist#

  • Scenario briefs (one per participant + facilitator)
  • Decision packets / worksheets (forms for recording WHO/WHAT/WHERE/WHEN/HOW/HOW MUCH/RISK; one per industry per round)
  • Private Cards sorted by round (from Private Cards)
  • AI Adoption Arc handouts for Rounds 2-4 (from AI Adoption Arcs; Foundation phase in pre-read)
  • Scoring sheet template (one per round; track all industry decisions)
  • Adjudication rules card (laminated for reference)
  • Quick reference card (facilitator)
  • Plausibility decision trees (for red-flag checks)
  • Industry baselines (for calibration)
  • Industry Health Signal tables (for end-of-round condition announcements)
  • Base case fallback bank (for industries without explicit decisions)
  • Timer (or phone timer) for Collective Bonus, Market Shock, and round transitions
  • Flip chart + markers (for posting scores and Collective Bonus results after each round)

Next: See Adjudication Rules for detailed scoring guidance for 11 industries.