Signal/Noise

2025-12-01

While AI cheerleaders trumpet another milestone, ChatGPT’s three-year anniversary, the real story is a fundamental shift in the nature of power itself. We’re witnessing the emergence of ‘agency warfare,’ where the ability to deploy autonomous agents at scale becomes the new determinant of competitive advantage, replacing traditional moats of capital, data, or even talent.

The Death of Human-Scale Competition

ChatGPT’s three-year milestone obscures a more profound transformation: we’re entering an era where competitive advantage flows not from what you know or own, but from how many autonomous agents you can deploy. The numbers tell the story—ChatGPT alone handles 50 million shopping queries daily, while companies like Allstate process 400,000 customer interactions through AI without human intervention. This isn’t just automation; it’s the emergence of what we might call ‘agency warfare.’

Traditional business strategy assumed human-scale decision-making and execution. But when Anthropic’s Claude Code can execute cyber-attacks ‘largely without human intervention at scale,’ or when AI agents can complete 30-hour coding marathons to build entire applications, we’re no longer competing on human terms. The math is simple: a company that can deploy 100,000 AI agents against your 50 human customer service reps isn’t just more efficient—it’s playing a different game entirely.

What’s fascinating is how this plays out across industries. Xiaomi’s CEO predicts humanoid robots will dominate factory floors within five years, while agricultural robotics grows at 24% annually as farms face structural labor shortages. The pattern repeats: wherever human bandwidth becomes the constraint, agents rush in to fill the void. This isn’t displacement—it’s scale transformation.

The uncomfortable truth is that most companies are still thinking in terms of ‘AI assistance’ when they should be thinking in terms of ‘AI multiplication.’ The winners won’t be those who make their humans slightly more productive, but those who realize that human bottlenecks—not just inefficiencies—are now optional.

The Authenticity Wars Begin

Behind the scenes of AI’s triumphant march, a shadow war is brewing over something far more fundamental than efficiency: truth itself. The same week that showcased AI’s growing capabilities, we saw the darker implications. AI toys meant for children delivered sexually explicit content. Jorja Smith’s record label battles an AI clone of their artist. Prison phone calls train AI models to predict future crimes. Each incident points to the same underlying crisis: as AI becomes capable of perfect mimicry, authenticity becomes both more valuable and more vulnerable.

Consider the paradox facing ElevenLabs, now valued at $6.6 billion for creating voices so convincing ‘they could fool your mother.’ Their success depends on eliminating the gap between real and synthetic, yet their survival depends on maintaining boundaries around what gets cloned and how. They’ve created seven different categories of ‘no-go’ voices and employ human moderators alongside AI to police misuse—essentially building a business on perfecting deception while policing deception.

This authenticity crisis extends far beyond voice cloning. When 78% of IT job postings now require AI skills, we’re not just changing skill requirements—we’re changing what human expertise means. Business schools are overhauling curricula not just to teach AI tools, but to teach students how to validate AI output, question system logic, and maintain human judgment in an age of synthetic intelligence.

The real value isn’t in building better AI—it’s in building systems that can reliably distinguish between human and artificial, between authentic and synthetic, between trustworthy and manipulated. Companies that solve this authenticity problem won’t just have a product advantage; they’ll have a civilization-level moat.

The Infrastructure Reality Check

While AI evangelists obsess over model capabilities, the real constraint isn’t intelligence—it’s infrastructure. The numbers are staggering: $2.8 trillion in forecasted AI datacenter spending by 2030, equivalent to the entire GDP of Canada. Individual training runs now cost $100 billion, requiring the energy output of nine nuclear reactors. This isn’t sustainable innovation; it’s a bubble built on the assumption that throwing compute at the problem will eventually yield artificial general intelligence.

The cracks are already showing. Despite massive investment, OpenAI faces lawsuits over AI systems encouraging suicide, while ‘AI winter’ predictions multiply. Sam Altman himself admits OpenAI needs quantum leaps in efficiency to avoid hitting physical limits. Meanwhile, companies like DeepSeek are proving that smarter architectures can match or exceed larger models with a fraction of the resources, suggesting much of the current spending is fundamentally misdirected.

More tellingly, we’re seeing a geographic arbitrage emerge around infrastructure costs. European AI startups like Black Forest Labs raise $258 million precisely because they can access different cost structures and regulatory environments than Silicon Valley giants. ByteDance challenges Alibaba not through superior algorithms but through more efficient infrastructure deployment.

The real competition isn’t who can build the smartest AI—it’s who can build sustainable AI infrastructure that doesn’t require the GDP of small nations to operate. The companies solving the efficiency problem rather than the capability problem may find themselves inheriting the entire market when the current spending binge inevitably hits physical and financial limits.

Questions

  • If competitive advantage flows from agent deployment rather than human capability, what happens to companies built around human expertise?
  • When authenticity becomes both more valuable and more vulnerable, who becomes the arbiters of truth in human-AI interactions?
  • Are we building the infrastructure for AI dominance or AI collapse?
