Build on the Bubble: Why foundation model instability is the best thing that ever happened to enterprise AI

Alex Salazar
DECEMBER 18, 2025
8 MIN READ
THOUGHT LEADERSHIP

Right now, somewhere in San Francisco, a foundation model company is losing money serving your API call.

OpenAI spent $8.67 billion on inference in the first nine months of 2025—nearly double their revenue for the same period. Sam Altman publicly admitted they lose money on $200-per-month ChatGPT Pro subscriptions. Anthropic burns 70% of every dollar they bring in. These companies are pricing their products below cost, subsidized by the largest concentration of venture capital in technology history, in a desperate race to capture market share before the music stops.

At the December DealBook Summit, Anthropic CEO Dario Amodei offered a blunt assessment: "There are some players who are YOLO, and I'm very concerned." He wasn't talking about AI skepticism. He was warning about timing risk—companies betting correctly on AI's eventual impact but incorrectly on when the economics will work.

Here's what most coverage of Dario's warning misses: the foundation model layer's instability is the application layer's opportunity. If you're building AI agents for enterprise workflows, you're not exposed to that timing risk. You're benefiting from it. Every agent you deploy today runs on subsidized compute, paid for by investors who may never see a return.

The strategic question isn't whether to build. It's how to build—fast enough to capture value during the subsidy window, resilient enough to survive the consolidation that's coming.

The Subsidy Window

Let's look at the numbers that define the current moment.

OpenAI generated approximately $4.3 billion in the first half of 2025 and projects roughly $13 billion for the full year. Against that revenue, they're spending approximately $22 billion—a gap of $9 billion that investors are funding. Cumulative losses from 2023 through 2028 are projected to reach $44 billion. The company doesn't expect profitability until 2029 or 2030.

Anthropic's trajectory is similar in shape if smaller in scale: roughly $3 billion in annual burn against revenue that reached a $5 billion annualized run rate by August 2025. They're targeting breakeven in 2028, a year or two ahead of OpenAI but still three years out.

Sequoia's David Cahn has quantified the industry-wide gap: $600 billion between what AI companies need to earn to justify their infrastructure investments and what they actually generate in revenue. That gap has grown, not shrunk, as spending has accelerated.

What does this mean for you? It means inference is mispriced. When OpenAI charges you for an API call, they're not charging what it costs them to serve that call—they're charging what they think will maximize adoption while they race to scale. The difference between price and cost is your subsidy, funded by SoftBank, Microsoft, Sequoia, and a constellation of investors betting that market share today converts to pricing power tomorrow.

This is not a stable equilibrium. Subsidies end. They end when investors lose patience, when companies get acquired, when the capital markets tighten, or when survivors gain enough market share to raise prices. The only question is when.

The Consolidation Reality

The math doesn't support five or more well-capitalized foundation model companies. This isn't speculation—it's arithmetic. When an industry has $600 billion more in costs than revenue, the outcome is consolidation. Companies merge, get acquired, pivot, or fail. The infrastructure persists; some of the players don't.

Dario's "YOLO" warning is, implicitly, a prediction about this consolidation. Some companies are taking timing risks that won't pay off. They'll run out of capital before the economics work. That's not a criticism of AI—it's a recognition that financial reality eventually asserts itself, regardless of how transformative the underlying technology might be.

We've seen this pattern before. The fiber optic buildout of the late 1990s saw over $100 billion invested in capacity that vastly exceeded near-term demand. Companies went bankrupt. Investors lost fortunes. And yet that infrastructure became the backbone of the modern internet. The money was largely wasted for those who spent it, but the capacity enabled everything that followed.

AI infrastructure is following a similar trajectory. The question for enterprise builders isn't whether the shakeout is coming—it's how to position for it.

The Circular Financing Problem

One dynamic deserves particular attention because it explains why the current spending levels may be even less sustainable than they appear.

Consider the Nvidia-OpenAI relationship. In September 2025, Nvidia announced a $100 billion investment in OpenAI. OpenAI will use significant portions of that capital to purchase Nvidia GPUs. The money flows from Nvidia to OpenAI and back to Nvidia. Meanwhile, OpenAI's other investors—SoftBank, Microsoft, and others—are also funding GPU purchases that flow to Nvidia, whose market cap growth then justifies further AI investment.

Goldman Sachs has flagged this circularity as a risk factor. The apparent size of the AI market may be inflated by capital recycling through the ecosystem. When that cycle slows—when any major node reduces spending—the effects will propagate through the entire system.

This isn't a reason to avoid AI investment. It's a reason to understand where you sit in the value chain. If you're a foundation model company, you're dependent on the circular flow continuing. If you're building applications on top of foundation models, you're capturing value created by that flow without being exposed to its interruption.

Enterprises as Kingmakers

Here's what the foundation model companies understand that most enterprises don't: your adoption decisions will determine who survives.

Consumer revenue—ChatGPT subscriptions, Claude Pro accounts—won't close the $600 billion gap. The numbers are too small. Enterprise contracts are the only path to the revenue scale that justifies current infrastructure spending. OpenAI, Anthropic, Google, and every other foundation model company know this. That's why they're all building enterprise sales teams, pursuing SOC 2 compliance, and talking about data privacy.

Anthropic's 2028 breakeven target assumes enterprise revenue ramps that haven't materialized yet. OpenAI's 2029-2030 profitability projection makes similar assumptions. These aren't just forecasts—they're existential bets on your purchasing decisions.

This gives you leverage. Foundation model companies desperate for enterprise revenue will compete aggressively for your business. Multi-year commitments, volume discounts, custom fine-tuning, dedicated support—everything is negotiable for a customer who represents proof that enterprise adoption is happening. Use that leverage. But use it now, while the desperation is acute.

The Strategic Framework: Speed Plus Resilience

Putting this together, the strategic imperative for enterprise AI builders is a synthesis: move aggressively while the economics are favorable, but architect for a world where your foundation model provider might not exist in three years—or might cost three times more.

This isn't a contradiction. It's a recognition that the current moment offers both an opportunity (subsidized compute) and a risk (vendor instability). The winning strategy captures the opportunity while hedging the risk.

For CIOs and technology executives: The ROI calculations you're running today are artificially favorable. Inference costs will not stay this low forever. Projects that look marginal at 3x current API costs should be deprioritized. Projects that work even at higher price points should be accelerated. And every negotiation with a foundation model provider should recognize your leverage as an enterprise customer whose adoption they desperately need.
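The "marginal at 3x" test above is simple enough to run as a spreadsheet or a few lines of code. The sketch below stress-tests a project's ROI against inference repricing; all dollar figures are hypothetical placeholders, not benchmarks from any real deployment:

```python
# Stress-test an AI project's annual ROI against inference repricing.
# All figures are illustrative placeholders; substitute your own.

def annual_roi(value_created: float, inference_cost: float,
               other_costs: float) -> float:
    """Simple annual ROI: net benefit divided by total cost."""
    total_cost = inference_cost + other_costs
    return (value_created - total_cost) / total_cost

VALUE = 1_200_000     # annual value from the automated workflow
INFERENCE = 150_000   # annual inference spend at today's subsidized prices
OTHER = 450_000       # engineering, integration, maintenance

# If the project only clears your hurdle rate at 1x, it's riding the subsidy.
for multiplier in (1, 2, 3, 5):
    roi = annual_roi(VALUE, INFERENCE * multiplier, OTHER)
    print(f"{multiplier}x inference cost -> ROI {roi:+.0%}")
```

A project whose ROI collapses between the 1x and 3x rows is the kind of marginal bet this section argues should be deprioritized; one that stays positive at 3x or 5x is a candidate for acceleration.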

For AI/ML leaders and architects: Model-agnosticism isn't just good engineering; it's existential risk management. Build abstraction layers. Use open protocols like MCP and orchestration frameworks like LangChain so you can swap foundation models without rebuilding. Your agent architectures should treat the foundation model as a replaceable component, not a load-bearing wall. The switching cost you avoid today is the vendor lock-in that could cripple you when consolidation reprices your entire stack.
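The abstraction-layer idea can be sketched in a few lines. The adapter functions and the `complete` interface below are hypothetical stand-ins for vendor SDK calls, not any provider's real API:

```python
# Minimal sketch of a model-agnostic routing layer. The adapters are
# stubs standing in for real vendor SDK calls; in production each one
# would wrap that vendor's client behind the same signature.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    text: str
    provider: str

def openai_adapter(prompt: str) -> Completion:
    # Stand-in for a real OpenAI SDK call.
    return Completion(text=f"[openai] {prompt}", provider="openai")

def anthropic_adapter(prompt: str) -> Completion:
    # Stand-in for a real Anthropic SDK call.
    return Completion(text=f"[anthropic] {prompt}", provider="anthropic")

PROVIDERS: Dict[str, Callable[[str], Completion]] = {
    "openai": openai_adapter,
    "anthropic": anthropic_adapter,
}

def complete(prompt: str, provider: str = "openai") -> Completion:
    """Route a request to whichever provider is configured.
    Swapping vendors becomes a config change, not a rewrite."""
    return PROVIDERS[provider](prompt)
```

The point of the pattern is that application code calls `complete` and never imports a vendor SDK directly, so repricing or consolidation at the foundation layer is absorbed by one routing table rather than every call site.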

For engineers building agents: The moat you're building isn't "we use GPT-4" or "we use Claude." It's "we've automated this workflow and we know how to iterate on it." The institutional knowledge, the prompt engineering, the integration with enterprise systems, the feedback loops that improve performance over time—that's what survives foundation model churn. Build systems that capture workflow value. Don't build systems that only work with one model's idiosyncrasies.

Predictions

Based on the data and dynamics outlined above, here's what we expect over the next three years:

Inference pricing will stabilize or increase within 18-36 months. The current pricing reflects market-share acquisition, not sustainable unit economics. As capital constraints tighten or survivors gain market power, prices will rise toward actual cost. The 280x cost reduction documented by Stanford over the past two years will slow as efficiency gains get harder and foundation companies need to show paths to profitability.

At least one major foundation model company will face a liquidity crisis, acquisition, or significant pivot by 2027. The math requires it. Not everyone burning billions annually will find a path to sustainability. Dario's "YOLO" warning will prove prescient for at least one specific company. We won't predict which one—too many variables—but the category outcome is near-certain.

Model-agnostic architectures will become the enterprise standard within 18-24 months. Early adopters are already building this way. As consolidation risk becomes more apparent, procurement and architecture review processes will require multi-model capability. Vendors offering orchestration layers and abstraction frameworks will see accelerating demand.

The agent and orchestration layer will capture more value than the foundation layer by 2028. This mirrors the cloud computing pattern, where infrastructure providers operate on thin margins while application providers closer to business problems capture the majority of value. Foundation models are commoditizing faster than most observers expected—Claude, GPT-4, Gemini, and Llama are increasingly interchangeable for many enterprise use cases. The differentiation is moving up the stack, toward workflow automation, tool integration, and enterprise-specific fine-tuning.

Enterprise adoption velocity will determine which foundation model companies survive. Consumer revenue isn't enough. The next 18 months will sort winners from losers based primarily on enterprise sales execution, not model capability. Fortune 2000 deployment decisions are, quite literally, existential for multiple foundation model companies.

The Opportunity in Instability

Every major technology transition creates a period of instability where the economics don't quite work. The railroad bubble. The automotive shakeout. The dot-com crash. The fiber glut. In each case, the infrastructure persisted and enabled massive value creation—just not always for the people who built the infrastructure.

We are in that period for AI. The foundation model companies are building infrastructure at an unprecedented pace, subsidizing it with investor capital, and pricing it below cost to capture market share. Some of them will fail or get acquired. The survivors will eventually have pricing power.

But right now—in this specific window—the economics favor application builders. You get the benefit of billions in infrastructure investment without bearing the cost. You get inference priced for market share acquisition, not profitability. You get foundation model companies competing desperately for your enterprise contracts.

Dario is right to be concerned about timing risk at the foundation layer. That's his problem to solve. For enterprises building agents and AI-powered workflows, the timing risk is inverted: the risk isn't moving too fast, it's moving too slow and missing the subsidy window.

Build aggressively. Build model-agnostic. Capture workflow value while the compute is cheap. And architect systems that will thrive regardless of which foundation model companies are still standing in 2028.

The bubble is real. Build on it anyway.


Sources & Methodology

A note on data quality: Neither OpenAI nor Anthropic publishes audited financial statements. The figures in this analysis derive from leaked documents reported by The Information, The Wall Street Journal, and TechCrunch; executive statements in public forums; and analyst estimates from firms including Sacra and Sequoia Capital. They should be understood as directionally indicative rather than precise. Where figures are interpolated or contested, this is noted in the text.

Key sources:

OpenAI financials: Reuters (H1 2025 revenue), The Wall Street Journal (spending projections), The Information (cumulative loss projections, leaked documents), TechCrunch/The Register (inference cost analysis via Ed Zitron)

Anthropic financials: Reuters (May 2025 ARR), Goldman Sachs/PM Insights (August 2025 ARR), The Wall Street Journal (breakeven projections, burn rate)

Infrastructure commitments: Official announcements and subsequent reporting from CNBC, Bloomberg, NYT, The Wall Street Journal (Stargate, Nvidia investment, Oracle deal, Microsoft Azure commitment)

Industry analysis: Sequoia Capital (David Cahn revenue gap analysis), Goldman Sachs Research (AI infrastructure ROI warnings), Stanford HAI (AI Index 2025, inference cost trends)

Executive statements: Sam Altman (ChatGPT Pro economics, revenue projections), Dario Amodei (DealBook Summit December 2025)

Disclosure: This analysis was prepared by Arcade, a runtime platform for building and operating multi-user AI agents across enterprise systems. Our products include infrastructure for building model-agnostic agent architectures. We have commercial relationships with multiple foundation model providers and enterprise customers. Our perspective is informed by, and aligned with, the enterprise application layer thesis presented in this piece.
