Signal/Noise

2025-12-05

While everyone argues about AI regulation and bubbles, the real story unfolding is infrastructure capture—a quiet reshuffling of economic power where whoever controls the data pipes, compute clusters, and human-AI interfaces will determine who wins the next phase of digital capitalism. Today’s news reveals three fronts in this war: the scramble for training data, the race to own enterprise workflow integration, and the desperate attempt to find sustainable business models before the music stops.

Data Is the New Oil, and Everyone’s Drilling in Someone Else’s Backyard

Meta’s licensing deals with CNN, Fox News, and USA Today aren’t just content partnerships—they’re strategic positioning for the post-search era. While OpenAI fights The New York Times in court over copyright infringement, Meta is quietly paying publishers to avoid the legal headaches that could cripple scaling efforts. This reveals a crucial split in AI company strategy: lawsuit-happy scrapers versus relationship-building licensors.

The deeper game isn’t about today’s chatbot responses—it’s about tomorrow’s AI agents that will need real-time, verified information to function in enterprise environments. Meta understands that reliable data partnerships will become moats when AI agents start making consequential decisions on behalf of users.

Meanwhile, Perplexity’s legal troubles with multiple news outlets signal that the ‘move fast and break things’ approach to training data may be hitting regulatory walls. The companies writing checks now are buying legitimacy and avoiding the compliance quicksand that could slow their deployment velocity later. This isn’t just about feeding LLMs—it’s about building the trusted information infrastructure that enterprise customers will demand when AI starts handling their customer communications, research, and decision-making processes.

The Enterprise Integration Gold Rush: Where Margins Go to Die

Anthropic’s internal research on how AI transforms work reveals the dirty secret every enterprise software vendor is racing to solve: AI doesn’t just automate tasks, it fundamentally reshapes workflows in unpredictable ways. Their finding that 27% of AI-assisted work consists of entirely new tasks nobody would have done manually isn’t a productivity win—it’s an integration nightmare for enterprise buyers.

When engineers become ‘full-stack’ overnight and workflows morph constantly, traditional software architectures built around fixed user roles and predictable processes start breaking down. This explains the explosion in ‘vertical AI agents’ and specialized workflow tools—everyone’s scrambling to own specific niches before horizontal platforms can absorb their functionality. The real competition isn’t between AI models anymore; it’s between integration approaches.

Will enterprises prefer platform plays like Microsoft’s Copilot ecosystem that promise seamless workflow integration, or will they choose best-of-breed AI tools for specific functions? The early signal from Anthropic’s research is troubling for platform players: workers are already managing ‘multiple Claude instances’ like a swarm of specialized agents rather than relying on one superintelligent assistant. This points toward a fragmented AI tooling landscape where enterprises cobble together dozens of specialized AI services—exactly the opposite of the platform consolidation story that’s driving current valuations. Companies building horizontal AI platforms may be solving yesterday’s problem while specialized vertical solutions capture tomorrow’s workflows.

The Business Model Reckoning Nobody Wants to Discuss

HPE’s disappointing AI revenues and IBM’s skepticism about competitors’ AI spending reveal what happens when the demo theater meets actual P&L scrutiny. While everyone focuses on the spectacular capabilities of frontier models, the mundane economics of AI deployment are turning brutal. The infrastructure companies closest to actual enterprise deployments are seeing a disconnect between AI hype and AI purchasing decisions. HPE’s struggles suggest that even companies positioned to benefit from the AI infrastructure buildout are finding customers reluctant to commit serious budgets.

This isn’t because the technology doesn’t work—it’s because most enterprises are still figuring out how to measure AI ROI beyond pilot projects. The Anthropic research hints at why: when AI changes workflows unpredictably and creates new categories of work, traditional cost-benefit analysis breaks down. How do you price productivity gains when you can’t define what the new job actually is?

This measurement problem is creating a valley of death between successful AI pilots and scaled enterprise deployments. Companies are happy to spend six figures on proof-of-concepts but balk at the seven-figure production deployments needed to justify the current AI infrastructure buildout. The result is a growing gap between the capital deployed in AI infrastructure and the revenue actually flowing through those systems. Wall Street’s growing concern about an AI bubble isn’t just about technical overpromise—it’s about a fundamental mismatch between infrastructure scale and demand maturity.

Questions

  • If AI creates fundamentally new categories of work rather than just automating existing tasks, how do enterprises measure ROI using financial frameworks built for cost reduction?
  • What happens to the current AI infrastructure buildout if enterprises settle on fragmented, specialized AI tools rather than consolidated platform solutions?
  • Are we witnessing the emergence of a new economic model where data licensing becomes more valuable than data ownership, and what does that mean for companies that built their moats around proprietary datasets?
