Scoring Baselines (Industry-Level)#
Overview#
This section provides scoring baselines for the eleven industries in V7.4, showing expected Strategic Fit, Execution Risk, and Tail Risk ranges for typical AI decisions. Use these as calibration anchors when scoring industry-level participant decisions.
Key Changes (V7 -> V7.4):
- 11 industries (up from 10 personas): Split Professional Services into separate Consulting and Law industries. Each is now an independent industry with its own packet, baselines, and scoring context.
- Industry terminology: "Persona" is now "industry" throughout. Each industry IS the packet.
- Big Tech scope narrowed: Excludes AI lab/model development. Scope is cloud, ads, devices, enterprise software.
- Industry-specific baselines: Each of 11 industries has distinct cost structures, competitive threats, and regulatory environments; they receive separate baselines.
- Band-to-score translation: Baselines reference banded inputs (Spend/Commitment, Time-to-Impact, Execution Complexity, Dependency, Scale) rather than granular operational details.
- Base case fallback scoring: Industries without explicit actions receive pre-defined small fallback scores from the fallback bank (see Base Case Fallback Bank). Fallback industries are not scored here; this section covers explicit decisions only.
Band-to-Score Translation Reference#
Use this table to quickly map banded inputs to expected score ranges:
| Spend | Time | Complexity | Dependency | Scale | Typical Strategic Fit | Typical Exec Risk | Typical Tail Risk | Typical Total |
|---|---|---|---|---|---|---|---|---|
| Absorbable | 0-3mo | Low | Internal | Pilot | +1 | +2 | +1 | +4 |
| Absorbable | 0-3mo | Medium | Internal | Regional | 0 | +1 | 0 | +1 |
| Material | 3-12mo | Low | Internal | Regional | +1 | +1 | 0 | +2 |
| Material | 3-12mo | Medium | Vendor | National | 0 | 0 | -1 | -1 |
| Transformational | 1-2yr | Medium | Vendor | National | +1 | -1 | -1 | -1 |
| Transformational | 1-2yr | High | Regulator | National | +1 | -2 | -2 | -3 |
| Transformational | 2+yr | High | Ecosystem | Global | 0 | -2 | -2 | -4 |
| Existential | 2+yr | Very High | Ecosystem | Global | -1 | -3 | -3 | -7 |
How to use: If a participant proposes a decision with bands matching one of these rows, the typical score range provides an anchor. Decisions better-executed than baseline get higher scores; worse-executed get lower scores.
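The translation table amounts to a direct lookup from a band tuple to typical per-dimension scores. A minimal sketch for reference during scoring; the scores are copied from the table above, but the dictionary layout and function name are illustrative, not part of the exercise materials:

```python
# Band-to-score translation table as a lookup (illustrative structure).
# Keys: (spend, time, complexity, dependency, scale) bands from the table.
# Values: (strategic_fit, execution_risk, tail_risk) typical scores.
BAND_BASELINES = {
    ("Absorbable", "0-3mo", "Low", "Internal", "Pilot"): (1, 2, 1),
    ("Absorbable", "0-3mo", "Medium", "Internal", "Regional"): (0, 1, 0),
    ("Material", "3-12mo", "Low", "Internal", "Regional"): (1, 1, 0),
    ("Material", "3-12mo", "Medium", "Vendor", "National"): (0, 0, -1),
    ("Transformational", "1-2yr", "Medium", "Vendor", "National"): (1, -1, -1),
    ("Transformational", "1-2yr", "High", "Regulator", "National"): (1, -2, -2),
    ("Transformational", "2+yr", "High", "Ecosystem", "Global"): (0, -2, -2),
    ("Existential", "2+yr", "Very High", "Ecosystem", "Global"): (-1, -3, -3),
}

def typical_scores(spend, time, complexity, dependency, scale):
    """Return (strat_fit, exec_risk, tail_risk, total) for a banded decision,
    or None when the band combination has no anchor row in the table."""
    row = BAND_BASELINES.get((spend, time, complexity, dependency, scale))
    if row is None:
        return None  # no anchor row: interpolate judgment from nearby rows
    return (*row, sum(row))
```

Band combinations not listed in the table return no anchor; interpolate between the nearest rows by judgment rather than inventing a value.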
Industry 1: RETAIL (Omnichannel Retailers, ~500 stores + e-commerce)#
Strategic Priorities (2026-2030):
- Deploying AI demand forecasting and inventory optimization (margin defense)
- Personalizing customer experience (conversion improvement, brand trust risk)
- Automating supply chain and logistics (cost reduction)
- Competing with Amazon and tech-native direct-to-consumer
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -2 to +2 | 0 | High if catching up on operational AI (inventory, forecasting); neutral if consumer-facing personalization |
| Execution Risk | -1 to +2 | 0 | Demand forecasting is proven; personalization at scale requires brand safety discipline |
| Tail Risk | -1 to +2 | 0 | Labor backlash if aggressive automation; brand backlash if personalization feels invasive |
Example Decisions (Retail)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Deploy AI demand forecasting + inventory optimization (500 stores, pilot) | +2 | +1 | +1 | +4 | Proven tech; phased; catches up to Amazon; low brand risk |
| Launch omnichannel personalization with transparency/opt-in | +1 | 0 | +1 | +2 | Customer trust managed; execution feasible; brand safe |
| Aggressive dynamic pricing + personalization; brand as efficiency leader | +2 | 0 | -2 | 0 | High margin upside; high brand backlash risk if customers perceive discrimination |
| Cut 30% retail labor; deploy autonomous checkout + fulfillment | 0 | -2 | -2 | -4 | Labor relations damaged; union negotiation required; ROI uncertain |
Industry 2: CPG (Consumer Goods Manufacturer, 35 brands)#
Strategic Priorities (2026-2030):
- R&D acceleration: reduce product development cycles from 18-24 months to 12-15 months (time-to-market advantage)
- Marketing automation + AI-generated content: reduce marketing spend from 8.2% to 7.0% of revenue (margin gain ~60 bps)
- Demand sensing + supply chain optimization: improve forecast accuracy, reduce inventory, optimize production
- Direct-to-consumer (DTC) expansion: bypass retailers, own customer relationship, premium pricing
- Brand safety: manage AI-generated content backlash; maintain brand equity
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -1 to +2 | 0 | High if R&D acceleration or brand-safe marketing efficiency; neutral if aggressive DTC (retailer relations risk) |
| Execution Risk | -1 to +1 | 0 | R&D AI is proven; DTC is harder (retailer retaliation, supply chain complexity) |
| Tail Risk | -1 to 0 | -0.5 | Brand safety risk if AI-generated content backfires; retailer retaliation if DTC too aggressive |
Example Decisions (CPG)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Accelerate R&D with AI ideation + formulation support (pilot: 10 product lines) | +2 | +1 | 0 | +3 | Proven ROI (6-month cycle reduction); manageable execution; low brand risk |
| Deploy AI marketing automation + AI-generated copy (with human review); target -60 bps marketing spend | +1 | 0 | +1 | +2 | Cost efficiency proven; human review manages brand safety risk |
| Launch aggressive DTC with AI personalization + dynamic pricing; bypass retailer distribution | +2 | -1 | -2 | -1 | Revenue opportunity; but retailer relationships damaged; private-label retaliation likely |
| Build "no AI" positioning: emphasize human-crafted products, authentic storytelling | -1 | +1 | +1 | +1 | Defensive; low execution risk; but misses R&D acceleration opportunity |
Industry 3: HEALTHCARE PROVIDER (Hospital System, Clinical AI & Operations)#
Strategic Priorities (2026-2030):
- Clinical diagnostic AI (radiology, pathology, cardiology) with FDA approval
- Clinical decision support (reduce medical errors, improve outcomes, physician workflows)
- Operational AI (scheduling, resource allocation, supply chain within hospital)
- Care coordination + risk stratification (population health, EHR integration)
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -1 to +2 | 0 | High if improving patient outcomes + reducing costs; moderate if defensive/cautious on unproven tech |
| Execution Risk | -2 to 0 | -1 | FDA regulatory approval adds 12-24 months; physician adoption uncertain; EHR integration complex |
| Tail Risk | -3 to 0 | -1 | Patient safety liability if AI error leads to harm; physician autonomy risk; malpractice exposure |
Example Decisions (Healthcare Provider)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Deploy clinical decision support (diagnosis suggestions with physician override); staff training | +2 | -1 | +1 | +2 | Clinical value; physician autonomy preserved; adoption uncertain without workflow redesign |
| Initiate FDA pre-submission for diagnostic radiology AI; plan external validation; 12-month timeline | +1 | -1 | 0 | 0 | Strategic value; long regulatory timeline; patient safety covered; early FDA engagement reduces risk |
| Deploy operational scheduling + resource AI; pilot in 3 departments; monitor physician/staff adoption | +1 | 0 | +1 | +2 | Operational efficiency; lower clinical risk; physician workflow disruption manageable |
| Build clinical validation and governance infrastructure upfront (before major diagnostic AI) | 0 | +1 | +1 | +2 | Defensive; enables faster future diagnostic AI deployment; reduces liability risk |
Industry 4: HEALTHCARE PAYER (Health Insurer, Claims & Coverage AI)#
Strategic Priorities (2026-2030):
- Prior authorization automation (claims cost reduction, liability management)
- Fraud detection + waste reduction (claims review AI, payment integrity)
- Coverage determination + medical policy AI (actuarial models, policy logic)
- Risk stratification + population health (predictive analytics, preventive care targeting)
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | 0 to +2 | +1 | High if improving medical loss ratio (MLR) + reducing administrative costs; moderate if defensive |
| Execution Risk | -1 to +1 | 0 | Prior auth automation proven; fraud detection is arms race; regulatory compliance adds complexity |
| Tail Risk | -2 to +2 | 0 | Denial rate increase + litigation risk; member backlash if coverage perception erodes; regulatory penalty risk (medical loss ratio limits) |
Example Decisions (Healthcare Payer)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Deploy prior authorization AI; mandatory human review for high-cost/rare procedures. Phased rollout | +1 | 0 | +1 | +2 | Proven ROI (15-20% claim speed improvement); human safeguards manage denial litigation; clear pathway |
| Deploy fraud detection + waste reduction AI; automated payment integrity for high-variance claims | +2 | +1 | 0 | +3 | High strategic fit (cost reduction); proven technology; manageable execution; low tail risk |
| Deploy risk stratification AI + predictive models for preventive care targeting; partner with providers | +1 | 0 | +1 | +2 | Population health upside; execution depends on provider cooperation; reduces adverse selection risk |
| Aggressive claim denial automation (>70% auto-denials); minimize human review | 0 | +1 | -2 | -1 | Cost reduction upside; high denial litigation + regulatory backlash risk; member satisfaction damage |
Industry 5: FINANCE (Bank + Insurance, Underwriting & Trading)#
Strategic Priorities (2026-2030):
- AI underwriting + fraud detection (core profitability levers)
- Synthetic fraud detection (arms race; ongoing investment required)
- Fair lending compliance (regulatory scrutiny; accuracy/explainability trade-off)
- Back-office optimization (claims, KYC/AML, document review)
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -1 to +3 | +1 | High if improving underwriting ROI with compliance safeguards; moderate if defensive/cautious |
| Execution Risk | -2 to +2 | -1 | Regulatory approval adds timeline; fair lending audits add complexity; talent availability varies |
| Tail Risk | -3 to +2 | -1 | High tail risk (systemic, discrimination litigation, regulatory backlash) |
Example Decisions (Finance)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Deploy AI underwriting (with explainability requirements); human review for denials >threshold | +2 | 0 | +1 | +3 | Core profitability; human safeguards; regulatory pathway clear |
| Deploy next-gen synthetic identity fraud detection (partnership with specialized vendor) | +1 | 0 | 0 | +1 | Defensive (staying in arms race); proven technology; manageable cost |
| Deploy AI trading; autonomous for 80%+ trades; minimal oversight | +2 | +1 | -3 | 0 | High profit upside; systemic risk if model fails; regulatory backlash certain |
| Invest in bias auditing + fair lending compliance infrastructure upfront | +1 | +1 | +1 | +3 | Defensive; reduces regulatory penalty risk; enables faster future deployment |
Industry 6: CONSULTING (Big Four / MBB Archetype, $20-25B Revenue, 40K Employees)#
Strategic Priorities (2026-2030):
- Copilot deployment in delivery (research, analysis, proposal development, client presentations)
- Vertical-specific AI expertise practices (Financial Services AI, Healthcare AI, Manufacturing AI)
- Premium AI governance + compliance offerings (regulatory uncertainty, high-margin)
- Talent repositioning (junior analysts -> complex, judgment-intensive work; AI handles routine analysis)
- Pricing model transition (value-based, outcome-based; away from time-and-materials)
- Competing against AI-enabled boutiques and client in-house AI capabilities
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -1 to +3 | +1 | High if building vertical expertise + deploying copilots + pricing innovation; moderate if generic services without differentiation |
| Execution Risk | -1 to +1 | 0 | Copilot adoption straightforward; vertical expertise requires hiring + client relationships; pricing models hard to change |
| Tail Risk | -1 to +2 | 0 | Junior talent commoditization risk; pricing pressure from AI boutiques; competitive disintermediation as clients build in-house AI capabilities |
Example Decisions (Consulting)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Deploy copilots firm-wide (research, analysis, proposals); maintain junior hiring; invest in vertical AI expertise (50 specialists) | +2 | 0 | +1 | +3 | Productivity gains proven; vertical expertise defensible; junior transition managed |
| Build premium AI governance + compliance offerings (regulatory focus); differentiate from AI-native boutiques | +3 | +1 | +1 | +5 | High strategic fit (regulatory uncertainty real); premium pricing justified; execution feasible with talent |
| Pilot outcome-based pricing with 5 trusted clients (lower-risk engagements); document value creation | +1 | -1 | +1 | +1 | Addresses pricing pressure; pilot generates learning; adoption risk high (new model for clients + firm) |
| Hold on copilot deployment; maintain traditional leverage model; wait for AI disruption to stabilize | -1 | +2 | +1 | +2 | Low execution risk; but strategic lag; competitors moving faster; talent attrition likely |
| Aggressive junior headcount reduction (>40%); rely on AI for analyst-level work | +1 | -1 | -2 | -2 | Short-term cost savings; destroys talent pipeline; client delivery quality at risk; future partner pipeline broken |
Industry 7: LAW (AmLaw 50 Firm, Billable Hour Economics, Partner/Associate Leverage)#
Strategic Priorities (2026-2030):
- AI-assisted legal research, due diligence, and contract review (efficiency gains within billable model)
- Bar rule compliance for AI-generated work product (jurisdiction-specific; rapidly evolving)
- Malpractice liability management (attorney review protocols for AI output)
- Associate leverage model adaptation (AI handles routine tasks; associates redeployed to complex work)
- Pricing model defense or transition (billable hour under pressure from client demands for efficiency)
- Competition from legal AI platforms (Harvey.ai, CoCounsel) and alternative legal service providers
Key Economic Context:
- Revenue driven by billable hours x realization rate x partner/associate leverage ratio
- AI efficiency gains create a paradox: better for clients, potentially destructive to revenue model
- Partner economics depend on associate leverage (billing associates at 3-4x cost); if AI replaces associate work, leverage model erodes
- Bar associations in multiple jurisdictions actively developing rules on AI-generated work product
- Malpractice insurance implications for AI-assisted work not yet settled
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -1 to +2 | +1 | High if piloting AI tools while managing bar compliance; moderate if fully committing to pricing model change; low if ignoring AI entirely |
| Execution Risk | -2 to +1 | -1 | Bar rule uncertainty adds complexity; malpractice review requirements slow adoption; partner resistance to pricing change |
| Tail Risk | -2 to +2 | -1 | Malpractice exposure if AI-generated work has errors; bar discipline risk if rules violated; revenue erosion if billable model disrupted without replacement |
Example Decisions (Law)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Pilot AI research tools (Harvey.ai/CoCounsel) in 3 practice groups; mandatory attorney review; bar rule compliance assessment per jurisdiction | +2 | 0 | +1 | +3 | Proven tools; phased; attorney review manages malpractice; bar compliance addressed |
| Deploy AI-assisted contract review firm-wide; invest in quality control infrastructure; maintain billable rates | +1 | -1 | 0 | 0 | Strategic value; execution harder at scale; bar rule compliance across jurisdictions complex; revenue model preserved near-term |
| Transition to alternative fee arrangements with 5 major clients; maintain billable for rest | +1 | -1 | +1 | +1 | Addresses client demand; pilot generates learning; risk managed via limited scope; partner buy-in uncertain |
| Reduce associate class by 30% over 24 months; redeploy remaining to complex work; AI handles routine | 0 | -1 | -1 | -2 | Cost savings real; but leverage model damaged; staffing large matters becomes harder; lateral partners may leave |
| Build proprietary legal AI platform for contract analysis | +1 | -2 | -1 | -2 | Differentiation potential; but massive capex; competing against well-funded legal AI startups; no AI talent in-house |
| Hold: defer AI adoption; maintain traditional model; monitor bar rule developments | -1 | +2 | 0 | +1 | Low execution risk; but strategic lag; competitors gaining efficiency; clients demanding AI-enabled delivery |
Industry 8: MANUFACTURING (Heavy Manufacturing, 28 plants)#
Strategic Priorities (2026-2030):
- Predictive maintenance (downtime reduction, equipment life extension)
- Production optimization (throughput, quality, energy efficiency)
- Quality inspection AI (defect detection, process control)
- Labor transition (retraining, "no-layoff" agreements, union cooperation)
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -1 to +2 | 0 | High if prioritizing high-ROI plants (8-12 vs. all 28); moderate if spreading capex thin |
| Execution Risk | -1 to +2 | 0 | Predictive maintenance is proven; OT/IT integration is complex; equipment retrofitting labor-intensive |
| Tail Risk | -2 to 0 | -1 | Union relations if labor displacement not managed; supply chain dependencies if suppliers not ready |
Example Decisions (Manufacturing)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Deploy predictive maintenance in 8 highest-ROI plants (pilot); phased OT/IT integration | +2 | +1 | +1 | +4 | Proven tech; 3.2-year payback; high ROI on priority plants; phased approach reduces risk |
| Announce "no-layoff" retraining agreement with unions; commit $35-40M over 2 years | +1 | +1 | +2 | +4 | Preserves labor relations, safety culture; union cooperation enables faster automation; long-term competitive advantage |
| Deploy warehouse automation + logistics integration across all plants simultaneously | +1 | -2 | -1 | -2 | High labor displacement without transition plan; execution risk high (OT/IT complex); union friction likely |
| Hold on manufacturing AI; focus on operational efficiency (cost controls, headcount management) | -1 | +2 | +1 | +2 | Defensive; low execution risk; but strategic lag if competitors gain efficiency advantage |
Industry 9: LOGISTICS (Freight/3PL/Warehouse, 5,000+ vehicles)#
Strategic Priorities (2026-2030):
- Route optimization + fuel efficiency (cost reduction, proven ROI: $180-200M potential)
- Predictive vehicle maintenance (downtime reduction, equipment life)
- Driver assistance + safety systems (accident reduction, driver acceptance)
- Autonomous vehicle pilots (partnerships with Waymo/Aurora; regulatory uncertainty; long timeline)
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | -1 to +2 | +1 | High if prioritizing route optimization + driver acceptance; moderate if overcommitting to unproven AV |
| Execution Risk | -1 to +1 | 0 | Route optimization is proven; driver adoption varies by age cohort; AV regulatory timeline uncertain |
| Tail Risk | -2 to +2 | -1 | Driver resistance; union concerns; last-mile profitability limits; autonomous vehicle regulatory risk |
Example Decisions (Logistics)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Deploy route optimization to 2,000 trucks (pilot: 40% of fleet); driver engagement program | +2 | +1 | +1 | +4 | Proven tech; $45-50M annual savings; phased approach allows learning; driver buy-in investment |
| Partner with Waymo/Aurora for long-haul autonomous pilots (5-10% of fleet); manage expectations | +1 | -1 | 0 | 0 | Strategic value; regulatory timeline uncertain (2028-2030); partnership de-risks vs. internal development |
| Deploy autonomous vehicles internally (full long-haul fleet); eliminate driver roles | 0 | -3 | -3 | -6 | Regulatory approval uncertain; driver/union resistance certain; execution technically and politically infeasible |
| Accept last-mile profitability limits; optimize only high-density urban routes | +1 | +2 | 0 | +3 | Realistic; lower total savings; but avoids overselling optimization to unprofitable segments |
Industry 10: BIG TECH (Google/Meta/Microsoft/Amazon-Class — Cloud, Ads, Devices, Enterprise Software)#
Scope Note: Big Tech in this exercise covers cloud infrastructure, advertising, devices, and enterprise software. AI lab and foundation model development decisions are excluded from participant scope; those dynamics are introduced via facilitator injects only.
Strategic Priorities (2026-2030):
- AI-powered product features (search, advertising, recommendation, cloud services)
- Enterprise AI services (APIs, platform tools, developer ecosystems)
- Cloud infrastructure scaling (GPUs, data centers, inference capacity for enterprise customers)
- AI cost management + margin defense (compute cost inflation, infrastructure investment)
- Regulatory scrutiny navigation (antitrust, data privacy, content moderation)
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | 0 to +3 | +2 | High if investing in AI product features + enterprise services; very high if staying ahead of competition on cloud/platform |
| Execution Risk | -1 to +2 | +1 | Massive CapEx required for infrastructure; talent competition fierce; regulatory approval uncertain for some products |
| Tail Risk | -2 to +2 | -1 | Antitrust scrutiny + regulatory backlash; margin compression from AI compute cost inflation; competitive disruption from open-source models |
Example Decisions (Big Tech)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Invest in cloud/inference infrastructure ($15B+ capex); build custom chips for enterprise AI workloads | +3 | +1 | 0 | +4 | Core strategic value; execution capability exists; margin pressure manageable with scale |
| Launch enterprise AI APIs + platform tools; compete on cost/performance for enterprise adoption | +2 | 0 | 0 | +2 | Platform leverage; proven go-to-market; execution straightforward; tail risk low |
| Integrate AI deeply into core products (search, ads, cloud); expect 5-10% productivity gain | +2 | +1 | +1 | +4 | Strategic fit high; user experience upside; execution proven; regulatory risk limited (private integration) |
| Defend market against open-source model commoditization; cut enterprise AI margins to 10-15% | +1 | 0 | -1 | 0 | Defensive pricing; maintains enterprise stickiness; profitability erodes; shareholder backlash risk |
Industry 11: B2B/B2C SaaS (Workday/Salesforce/SAP-Class, ~100K+ employees)#
Strategic Priorities (2026-2030):
- AI feature integration into products (copilots, predictive analytics, automation)
- Pricing model evolution (AI features bundled vs. premium tier)
- Competitive threat from AI-native startups (simpler, cheaper, specialized products)
- Margin pressure from AI infrastructure costs (training, inference, compute)
- Customer retention + upsell via AI (lock-in effect)
Scoring Ranges#
| Dimension | Range | Typical | Notes |
|---|---|---|---|
| Strategic Fit | 0 to +3 | +1 | High if bundling AI features + defending margins; moderate if uncertain on pricing/competition |
| Execution Risk | -1 to +1 | 0 | AI feature integration proven; pricing model change is hard; competitive response unpredictable |
| Tail Risk | -2 to +2 | -1 | Pricing pressure + margin compression; customer churn if AI features disappoint; startup disruption from specialized competitors |
Example Decisions (B2B/B2C SaaS)#
| Decision | Strat Fit | Exec Risk | Tail Risk | Total | Rationale |
|---|---|---|---|---|---|
| Bundle AI copilots into core product; include in standard SKU; no premium pricing upside | +1 | +1 | 0 | +2 | Maintains customer lock-in; execution straightforward; margin pressure from cost (no revenue offset) |
| Integrate AI prediction + automation deeply; sell as premium tier at 30-50% price increase | +2 | 0 | +1 | +3 | Strategic fit high (differentiation); proven pricing model; execution proven; churn risk manageable |
| Build AI-native product for vertical (e.g., HR-specific AI recruiting); compete on cost + specialization | +2 | -1 | 0 | +1 | High strategic value (new market); execution complexity (new product); adoption uncertain |
| Maintain traditional product; defer heavy AI integration until market stabilizes | 0 | +2 | -1 | +1 | Low execution risk; but competitive lag; startup disruption risk rises; customer churn risk |
Using Baselines During Industry-Level Scoring#
- Participant proposes decision. Identify the industry (Retail, CPG, Healthcare Provider, Healthcare Payer, Finance, Consulting, Law, Manufacturing, Logistics, Big Tech, B2B/B2C SaaS).
- Reference the industry baseline. Check typical Strategic Fit, Execution Risk, Tail Risk ranges for this specific industry.
- Calibrate your score. If participant decision is better than baseline, score higher; if worse, score lower.
- Score independently. Remember: all 11 industries are scored separately. A participant covering both Consulting and Law has each decision scored on its own industry-specific merits.
- Post and move on. Explain score concisely; don't debate. Remember you may be scoring multiple decisions per round (variable based on participant count and industry assignments).
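The steps above reduce to: look up the industry's typical baseline, sum the proposed decision's dimension scores, and compare. A hedged sketch, seeding only the Retail and Consulting typical values from their baseline tables; the structure and names are illustrative, not exercise artifacts:

```python
# Typical per-dimension baselines (strat_fit, exec_risk, tail_risk),
# copied from the industry baseline tables; only two industries shown here.
TYPICAL_BASELINE = {
    "RETAIL": (0, 0, 0),
    "CONSULTING": (1, 0, 0),
}

def calibrate(industry, strat_fit, exec_risk, tail_risk):
    """Compare a proposed decision's scores against the industry's
    typical baseline. Returns (total, delta_vs_baseline)."""
    base = TYPICAL_BASELINE[industry]
    total = strat_fit + exec_risk + tail_risk
    delta = total - sum(base)
    return total, delta  # positive delta: better-executed than baseline
```

A positive delta means the decision outperforms the industry's typical anchor; score higher accordingly, and lower for a negative delta.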
Example#
Participant: Consulting participant proposes to "deploy copilots to 50% of consultant base over 90 days; hire 50 AI specialists to build vertical AI practices; maintain junior hiring (zero layoffs in pilot phase)."
Baseline (Consulting): Deploy copilots + maintain junior hiring + build vertical AI = +2 to +3 typical score.
Your calibration:
- Specificity is excellent (50%, 90 days, zero layoffs, vertical hires quantified)
- Timeline is tight but feasible (copilots are proven; vertical hiring is hard but not impossible)
- Commitment to zero layoffs + vertical hiring reduces tail risk and addresses competitive disintermediation
Score: +2 (Strategic Fit) + 0 (Execution Risk) + 1 (Tail Risk) = +3/6 (in line with baseline; executed well).
Note: If the same participant also covers Law, the Law decision will be scored separately based on Law baselines + Law-specific context (billable hour impact, bar compliance, malpractice liability).
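The calibration arithmetic in this example is a plain sum of the three dimension scores. A minimal sketch, assuming a +/-3 per-dimension cap (the exception ceiling noted in the summary below); the function name is illustrative:

```python
DIMENSION_MIN, DIMENSION_MAX = -3, 3  # assumed cap once +/-3 exception unlocks

def total_score(strategic_fit, execution_risk, tail_risk):
    """Sum the three dimension scores into a decision total,
    rejecting out-of-range inputs."""
    for s in (strategic_fit, execution_risk, tail_risk):
        if not DIMENSION_MIN <= s <= DIMENSION_MAX:
            raise ValueError(f"dimension score {s} outside +/-3 range")
    return strategic_fit + execution_risk + tail_risk

# The Consulting example above: +2 + 0 + 1 = +3
consulting_total = total_score(2, 0, 1)
```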
Base Case Fallback Scoring (NOT Covered Here)#
Industries without explicit participant actions receive base case fallback scores from the fallback bank (see Base Case Fallback Bank). Fallback scores are:
- Deterministic: Not varied per participant or round; same fallback applies across all instances
- Small deltas: Typically +/-1 per dimension (not -2 to +2)
- Plausible: Represent defensive but reasonable moves (e.g., cost control, operational efficiency)
- Automatic: Applied by facilitator without participant input
Fallback industries are NOT scored using the baselines in this document. Baselines apply only to explicit decisions submitted by participants.
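The fallback mechanics can be sketched as follows. The fallback deltas shown are placeholders only; the real values live in the Base Case Fallback Bank, and the function and structure here are illustrative:

```python
# Placeholder fallback deltas (strat_fit, exec_risk, tail_risk) -- small,
# deterministic, per-dimension. NOT the actual Fallback Bank values.
FALLBACK_BANK = {
    "RETAIL": (0, 1, 0),         # e.g., defensive cost control
    "MANUFACTURING": (0, 1, 1),  # e.g., operational efficiency focus
}

def score_round(industry, explicit_decision_scores=None):
    """Use baseline-calibrated scores when the participant submitted an
    explicit decision; otherwise apply the fixed fallback automatically."""
    if explicit_decision_scores is not None:
        return explicit_decision_scores        # scored against baselines
    # Deterministic: same fallback every round, no participant input.
    return FALLBACK_BANK.get(industry, (0, 0, 0))
```

Note the asymmetry this encodes: explicit decisions flow through the baseline calibration in this document, while silent industries always receive the same small, pre-defined deltas.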
Summary: When to Score High, Medium, Low (Explicit Decisions Only)#
High Score (+2 to +3 range)#
- Participant is responding to clear competitive threat (e.g., competitor AI advantage)
- Technology is proven and participant's organization has execution capability
- Scope is realistic (phased, limited, with clear milestones)
- Participant acknowledges and mitigates tail risk (pilot, human oversight, severance plan)
Medium Score (0 to +1 range)#
- Strategic alignment is moderate (neither captures opportunity nor avoids disaster)
- Execution is feasible but carries standard risk (integration, talent, regulatory timeline)
- Tail risk is managed but not eliminated
Low Score (-1 to -2 range)#
- Decision is defensive or reactive (holding pattern, waiting)
- Execution faces material barriers (talent shortage, long regulatory approval, capital constraints)
- Tail risk is not addressed or is material
Very Low Score (-3 or exception +/-3)#
- Decision triggers a red-flag (see Plausibility Decision Trees)
- If red-flag fires, unlock +/-3 exception scoring to reflect severity
Use these baselines to calibrate, not to lock scores. Participants proposing above-baseline decisions earn higher scores. Participants proposing below-baseline decisions earn lower scores. Move briskly; facilitate, don't debate.
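If helpful, the summary bands can be encoded as threshold checks on a decision's total score. Boundary handling here is an assumption: the bands above are described as ranges, and some example totals in this document exceed them (up to +5 and down to -7), so the top and bottom bands are treated as open-ended:

```python
def score_band(total):
    """Map a decision's total score to a summary band (assumed thresholds)."""
    if total >= 2:
        return "High"       # +2 to +3 range, and above
    if total >= 0:
        return "Medium"     # 0 to +1 range
    if total >= -2:
        return "Low"        # -1 to -2 range
    return "Very Low"       # -3 and below (red-flag/exception territory)
```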