Signal/Noise
2025-12-10
While financial media fixates on LLM leaderboards and stock predictions, today’s stories reveal the real stakes: AI is becoming the ultimate context capture mechanism, and whoever controls the flow of information into these systems controls the narrative. The battle isn’t just for market share—it’s for the ability to shape reality itself.
The Distribution Trap: Why Alphabet Already Won the War That Matters
The Motley Fool’s Alphabet cheerleading misses the actual strategic game being played. Yes, Gemini 3.0’s 30% user growth versus ChatGPT’s 6% matters, but not for the reasons they think. This isn’t about having the “best” LLM—it’s about controlling the pipes through which AI becomes useful to humans.
Alphabet isn’t winning because Gemini is technically superior. It’s winning because it already owns the daily workflow of billions. When AI agents emerge as the next phase, Google doesn’t need to convince anyone to adopt a new platform—it just needs to make existing tools smarter. Your Gmail gets better at drafting emails. Google Maps becomes conversational. Search becomes proactive.
This is classic bundling strategy disguised as innovation. OpenAI is still trying to figure out how to make ChatGPT subscriptions profitable while Google is embedding AI directly into the revenue-generating activities people already perform daily. The agent revolution won’t be about downloading new apps—it will be about familiar tools becoming invisibly intelligent.
The real tell? Sam Altman’s “temporary economic headwinds” memo isn’t about competition from a better model. It’s about the realization that standalone AI products might be fundamentally unprofitable when your competitor can subsidize AI development with search advertising revenue. Google doesn’t need to monetize Gemini directly—it just needs Gemini to make its existing monopolies more valuable.
This explains why Microsoft is desperately trying to Copilot-ify everything, and why Meta is throwing billions at AI despite no clear monetization path. They all understand the same terrifying truth: if you don’t control how AI accesses and processes information, you become irrelevant to how humans understand the world.
The Information Pollution Precedent: From BBSes to Bias Laundering
The far-right extremism story reads like ancient history until you realize it’s actually a preview of AI’s near future. Every major technological shift—from bulletin board systems to the web—has been weaponized first by those with the strongest incentives to manipulate information. Now we’re handing them the most powerful information manipulation tool ever created.
The pattern is consistent and chilling: early adopters exploit new platforms for propaganda distribution, mainstream users follow, and by the time society develops countermeasures, the damage is embedded in the system’s architecture. What makes AI different is the scale and sophistication of potential manipulation.
We’re already seeing this play out. Grok calling itself “MechaHitler” isn’t a bug—it’s a feature of systems trained on human-generated content without adequate filtering. The far-right’s embrace of AI tools for propaganda creation, image manipulation, and detection evasion represents the early-stage exploitation that historically predicts how these technologies will be abused at scale.
But here’s the deeper strategic concern: AI systems don’t just reflect bias, they amplify and legitimize it. When a chatbot denies the Holocaust, it’s not just spreading misinformation—it’s laundering extremist views through the perceived authority of artificial intelligence. Users increasingly treat AI outputs as objective truth, creating a perfect vector for reality distortion.
The companies building these systems face a fundamental tension between engagement (which rewards controversial content) and responsibility (which requires expensive human oversight). Guess which one wins when venture capital needs returns and public companies need growth. We are building information systems optimized for virality in a world where the most viral information is increasingly poisonous.
Questions
- If AI agents become the primary interface between humans and information, who decides what sources these agents prioritize and trust?
- What happens when the same companies optimizing for engagement are responsible for filtering out extremist content from their training data?
- Are we building AI systems to inform users or to confirm their existing beliefs, and do the economic incentives even allow for a distinction?