
Big Tech Private Information Cards

Cloud / Ads / Devices / Enterprise Software

Facilitator Note

Print this document and separate it at the page breaks. Distribute one card per round, face-down, at the start of each round's decision preparation phase. Cards are confidential to the Big Tech participant. Cards accumulate: the participant keeps all cards and may refer to them in later rounds.


Card 1 — Round 1

Title: AI Infrastructure Cost Overruns & Capex Escalation

Card Type: Operational Intelligence

Reveal Timing: Round 1 — distribute face-down at start of decision preparation phase

Classification: Restricted

Source: Internal capex tracking, cloud infrastructure utilization dashboards, and competitive intelligence

Shared Intelligence: the SaaS participant also receives a version of this card, framed differently.

The Intelligence:

Your internal capex models for AI infrastructure are tracking significantly above projections. Data center buildout, custom silicon procurement, and inference infrastructure costs are escalating faster than anticipated. Your GPU/TPU utilization rates are lower than expected: inference demand from enterprise cloud customers and internal AI product workloads has not scaled as quickly as projected, and many clusters sit idle between peak usage windows. You are on track to spend $18B on AI-related infrastructure capex in 2026, up 50% from the $12B projected six months ago. The payback period on this infrastructure investment may extend beyond 3-5 years.

Competitive intelligence suggests all major cloud providers are experiencing similar cost pressures and utilization shortfalls. A market bifurcation is emerging: only well-capitalized players can sustain this level of infrastructure investment. Mid-market cloud providers and smaller AI companies are being forced to rely on open-source models and shared infrastructure, accelerating the commoditization of AI compute.

The critical question for you is not whether to build AI infrastructure — that ship has sailed. The question is how to drive utilization rates up, how to convert enterprise AI pilots into production workloads that generate sustained cloud revenue, and how to manage the gap between committed capex and realized demand. Your cloud pricing strategy, managed services offering, and enterprise sales execution all directly affect whether this infrastructure investment pays off or becomes stranded capital.

Decision Tension:

Do you accelerate infrastructure buildout further to capture enterprise AI workload migration (betting that demand will catch up to supply)? Or do you moderate capex growth and focus on driving utilization of existing infrastructure through pricing incentives, managed services, and customer success investment?

Questions to Consider:

  • What is the utilization rate threshold at which your current infrastructure investment becomes profitable? How far are you from that threshold?
  • Can you accelerate enterprise cloud customer conversion from AI pilot to production workload? What are the bottlenecks?
  • How does your cloud pricing strategy compare to competitors? Is aggressive pricing required to capture workload share, or does it destroy margin?
  • What is the competitive consequence if you moderate capex growth while competitors continue to build? How quickly does market share shift?
  • How long can you sustain 30%+ capex growth before investor pressure forces a reset?
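The utilization-threshold question above can be made concrete with a back-of-envelope model. This is an illustrative sketch only: the $18B capex figure comes from the card, but the revenue-per-utilization-point and gross-margin figures are assumptions chosen for discussion, not exercise data.

```python
# Back-of-envelope payback model for Card 1. Only the $18B capex is from
# the card; REV_PER_PCT_UTIL and GROSS_MARGIN are illustrative assumptions.

CAPEX = 18e9              # 2026 AI infrastructure capex (from the card)
REV_PER_PCT_UTIL = 120e6  # assumed annual revenue per point of utilization
GROSS_MARGIN = 0.60       # assumed gross margin on AI cloud revenue

def payback_years(utilization_pct: float) -> float:
    """Years to recoup capex at a given average utilization rate."""
    annual_contribution = utilization_pct * REV_PER_PCT_UTIL * GROSS_MARGIN
    return CAPEX / annual_contribution

for util in (40, 55, 70):
    print(f"{util}% utilization -> {payback_years(util):.1f}-year payback")
```

Under these assumed numbers, payback at low utilization sits well past the 3-5 year range the card flags, while driving utilization into the 70% range pulls it back inside — which is the arithmetic behind the "drive utilization up vs. build more" tension.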

Card 2 — Round 2

Title: Enterprise Cloud Pricing Collapse & Open-Source Commoditization Pressure

Card Type: Market Intelligence

Reveal Timing: Round 2 — distribute face-down at start of decision preparation phase

Classification: Confidential

Source: Enterprise cloud sales team intelligence, customer contract negotiations, and competitive pricing analysis

The Intelligence:

Your enterprise cloud customers are demanding access to AI services at commodity pricing. The pricing floor is being set by open-source models: customers are benchmarking your managed AI services against the cost of running LLaMA, Mistral, or Phi on their own infrastructure or through discount cloud providers. Your premium pricing for proprietary AI services is under direct pressure.

Specific signals from the field:

  • Your top 50 enterprise cloud accounts (representing 25% of cloud revenue) are renegotiating contracts. They want 20-30% volume discounts on AI compute and managed services, citing competitive offers from other cloud providers and internal open-source alternatives.
  • Three Fortune 100 customers have announced internal "open-source first" AI strategies, explicitly reducing dependence on proprietary cloud AI services. They are building internal capabilities to run open-source models on bare-metal infrastructure, bypassing your managed services entirely.
  • Your cloud sales team reports that new AI workload deals are taking 40% longer to close than 12 months ago. Customers are conducting extended proof-of-concept evaluations, comparing your platform against competitors on price, performance, and flexibility.
  • Open-source model quality continues to improve. For 70%+ of enterprise use cases (classification, summarization, basic generation, structured data extraction), open-source models deliver acceptable performance at a fraction of proprietary costs. Your premium is justified only for complex, high-stakes applications (advanced reasoning, multimodal, domain-specific fine-tuning).

Your cloud margin is compressing. The gap between infrastructure cost (which you bear) and the price customers are willing to pay (which is falling) is narrowing. If you match competitive pricing to retain volume, margins erode. If you hold premium pricing, you lose workload share to competitors and open-source alternatives.

Decision Tension:

Do you defend premium pricing for proprietary AI cloud services (protecting margin but risking workload share loss)? Or do you match commodity pricing to retain volume and market position (preserving share but compressing margins and potentially triggering a race to the bottom)?

Questions to Consider:

  • What percentage of your cloud AI revenue comes from use cases where open-source models are viable substitutes? How fast is that percentage growing?
  • Can you differentiate managed services (security, compliance, fine-tuning, SLA guarantees) enough to justify a premium over open-source alternatives?
  • What is the revenue impact of losing your top 50 enterprise accounts to competitors or open-source alternatives? What retention investment is justified?
  • If you cut pricing aggressively, what is the margin impact? Can you offset it with volume growth?
  • What does a "two-tier" pricing strategy look like — commodity pricing for basic AI compute, premium pricing for advanced managed services?
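The discount-vs-churn tradeoff in the decision tension can be framed in contribution-margin terms. A minimal sketch, assuming a 60% gross margin at premium pricing (the 20-30% discount demand comes from the card; the margin figure is an assumption for discussion): because infrastructure cost per workload does not fall when you cut price, a price cut comes straight out of margin, so discounting everyone can be equivalent to losing a much larger fraction of accounts outright.

```python
# Contribution-margin view of Card 2's pricing tension. The 20-30% discount
# range is from the card; the 60% gross margin is an illustrative assumption.

GROSS_MARGIN = 0.60   # assumed gross margin at current (premium) pricing

def contribution_if_discounted(discount: float) -> float:
    """Contribution per $1 of list-price revenue after a price cut.
    Infrastructure cost per workload is unchanged, so the cut comes
    straight out of margin."""
    cost = 1 - GROSS_MARGIN
    return (1 - discount) - cost

def breakeven_churn(discount: float) -> float:
    """Fraction of accounts you could lose at premium pricing before
    you are worse off (in contribution) than discounting everyone."""
    return 1 - contribution_if_discounted(discount) / GROSS_MARGIN

for d in (0.20, 0.25, 0.30):
    print(f"{d:.0%} discount == losing {breakeven_churn(d):.0%} of accounts")
```

Under these assumptions a 25% across-the-board discount destroys as much contribution as losing roughly 40% of the affected accounts at full price — a useful anchor when debating how much retention spend on the top 50 accounts is justified.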

Card 3 — Round 3

Title: Antitrust Enforcement Targets Platform Practices & Data Governance

Card Type: Regulatory Development

Reveal Timing: Round 3 — distribute face-down at start of decision preparation phase

Classification: Restricted

Source: Regulatory intelligence, legal counsel briefing, and government affairs team analysis

The Intelligence:

Federal antitrust enforcement is moving from investigation to action. The enforcement focus is on your platform practices, data governance, and the degree to which your market position creates barriers to competition in AI services. An enforcement action is being drafted that would likely include:

  • Data separation mandates: You cannot use enterprise cloud customer data for improving your own AI products or services without explicit, granular, per-use-case consent. Cross-product data flows (e.g., using search data to improve cloud AI services, or using cloud customer usage patterns to optimize advertising) would require separate opt-in agreements. Compliance cost is estimated at $500M-$1B over 18 months.

  • API access mandates: You must provide non-discriminatory API access to your cloud AI infrastructure and managed services. Competitors and third-party developers must be able to access the same services, at the same pricing, as your own internal product teams. This reduces the integration advantage your internal product teams currently enjoy.

  • Platform conduct restrictions: Bundling restrictions on how you package AI features with existing products (e.g., cannot require cloud subscription to access AI features; cannot preferentially surface your own AI services over competitors in your marketplace or app store). This directly affects your ecosystem lock-in strategy.

Parallel intelligence: European regulators are advancing similar enforcement actions under the Digital Markets Act, with potentially stricter requirements including forced interoperability and data portability mandates.

Compliance will require significant organizational restructuring of data governance, pricing, and product architecture. Legal and engineering teams will be consumed by compliance work for 12-18 months. The distraction and resource diversion are as costly as the direct financial impact.

Decision Tension:

Do you adopt a proactive compliance posture — voluntarily implementing data separation, open API access, and fair pricing before enforcement arrives (reducing regulatory risk but also reducing competitive advantage)? Or do you contest enforcement through litigation and lobbying while maintaining current practices (preserving competitive advantage near-term but risking more severe enforcement outcomes and reputational damage)?

Questions to Consider:

  • What is the competitive impact of data separation mandates? How much does your AI product quality degrade if cross-product data flows are restricted?
  • Can you turn API access mandates into a revenue opportunity (charging competitors for access to your infrastructure)?
  • What is the reputational cost of aggressive litigation vs. proactive compliance? How do enterprise customers react to each approach?
  • How do you prioritize engineering resources between compliance implementation and AI product roadmap execution? What slips?
  • If European enforcement is stricter, does it make sense to implement a global compliance standard (one set of practices) or region-specific approaches?