
Signal/Noise

2025-11-06

Three seismic shifts are converging to reshape how value is created in the AI economy. While everyone’s watching the flashy partnerships and IPO drama, the real story is infrastructure capture—from physical data centers to attention networks—determining who controls the chokepoints of the next economic era.

The Great Infrastructure Land Grab

Google’s announcement that it’s planning orbital data centers by 2027 isn’t just engineering ambition—it’s infrastructure warfare disguised as innovation. While Chinese autonomous vehicle companies struggle with 12% share drops in Hong Kong debuts and need $860 million just to stay competitive, tech giants are racing to lock up the fundamental building blocks of AI computation.

The pattern is everywhere: Duke Energy backing satellite-based vegetation monitoring, the UK approving AI factories of “national importance” in Derbyshire, utilities scrambling to partner with AI startups just to keep the lights on. This isn’t about meeting demand—it’s about controlling supply chains that don’t exist yet.

Space-based data centers solve problems that terrestrial facilities can't: near-continuous solar power and no competition for land, water, or grid capacity. But more critically, they create a moat that's literally astronomical. Once you own orbital infrastructure, competitors face launch costs, regulatory approval, and physics itself. Google isn't just building data centers; it's claiming the high ground for the next century of computation.

Meanwhile, Palantir trades at 624 times earnings not because investors are "batshit crazy," but because they understand infrastructure capture. The company isn't selling software; it's positioning itself as the nervous system for how governments and enterprises will think. When your platform becomes the interface between human decision-making and AI computation, conventional valuation math stops applying.

The infrastructure winners aren’t just providing compute—they’re defining the geography of intelligence itself.

The Attention Economy’s New Landlords

Snap’s $400 million deal with Perplexity reveals the next battlefield: who controls the interface between humans and AI-generated answers. This isn’t partnership—it’s platform colonization.

Google’s AI Overviews already cost News Corp referral traffic while the media giant scrambles to license content to multiple LLMs just to survive. The brutal math is simple: when AI answers questions directly, publishers become invisible unless they’re cited. News Corp’s “woo and sue” strategy acknowledges this reality—you either get paid for training data or you disappear.

But Snap’s move is more sophisticated than content licensing. By embedding Perplexity directly into Snapchat’s chat interface, they’re capturing attention at the moment of query formation. Users won’t search Google then get AI answers—they’ll ask questions inside Snapchat and never leave. The $400 million isn’t buying search technology; it’s buying the right to intermediate between users and all human knowledge.

The parallels to historical media consolidation are striking, but the stakes are higher. When newspapers controlled distribution, alternatives existed. When AI systems control answers, there’s nowhere else to go. Each platform that successfully embeds AI search becomes a sovereign territory in the attention economy.

SAP’s embrace of “agentic AI” and enterprise integration shows how this plays out in B2B markets. Once AI agents are embedded in business workflows, switching costs become prohibitive. You’re not just changing software—you’re rewiring institutional memory and decision-making processes.

The companies winning these integration races aren’t just capturing market share—they’re positioning themselves as the middlemen for human-AI interaction.

The Labor Replacement Ultimatum

Geoffrey Hinton’s warning cuts through Silicon Valley optimism with uncomfortable precision: “To make money you’re going to have to replace human labor.” The godfather of AI isn’t predicting the future—he’s describing the business model.

OpenAI's Sam Altman claiming he doesn't want government bailouts while projecting $1.4 trillion in infrastructure commitments reveals the cognitive dissonance. In the same week that employees using AI were reported to earn 40% more than those who don't, AI executives openly acknowledged that their technology only works economically by eliminating jobs. The math is simple but brutal: AI's productivity gains must exceed human labor costs, or the investments don't pencil out.
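That break-even logic can be sketched as a one-line calculation. All figures below are hypothetical placeholders, not numbers from this briefing:

```python
# Back-of-envelope: how much labor cost must an AI deployment
# displace (or equivalently create) per year to cover itself?
# Every number here is an illustrative assumption.

def breakeven_headcount(ai_annual_cost: float, fully_loaded_salary: float) -> float:
    """Fully-loaded roles whose cost an AI system must offset
    per year before the investment pencils out."""
    return ai_annual_cost / fully_loaded_salary

# Hypothetical: a $2M/year AI deployment vs. $100k fully-loaded roles.
print(breakeven_headcount(2_000_000, 100_000))  # 20.0
```

The point of the sketch is the asymmetry it exposes: the denominator (labor cost) is fixed and public, so the only lever left to justify trillion-dollar commitments is pushing the numerator's displaced headcount up.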

The evidence is already visible in educational policy panic. When students face false accusations of AI use—from detection tools with massive error rates—and universities respond with punitive measures rather than adaptation, they’re revealing their own obsolescence anxiety. If AI can produce work indistinguishable from human output, what exactly are schools selling?

Tabnine’s “org-native” AI agents represent the endgame: systems that understand company repositories, tools, and policies well enough to complete entire workflows autonomously. This isn’t augmentation—it’s replacement with better integration.

The geopolitical implications are staggering. AUKUS partnerships between SubSea Craft and Greenroom Robotics show how military contractors are already building AI-powered systems designed to “keep people out of harm’s way.” The same efficiency logic that drives corporate adoption becomes national security doctrine.

Hinton's call to rethink how "we organize society" sounds almost quaint against this backdrop. The organizing is already happening, done by companies building systems that require fewer humans to operate. The social consequences will follow the technical capabilities, not precede them.

Questions

  • If orbital data centers become critical infrastructure, what happens when geopolitical tensions extend into space?
  • Are we building AI systems that make human expertise irrelevant, or just redistributing it to whoever controls the interfaces?
  • When the entire economy runs through AI intermediaries, who has the power to turn it off?
