Today's Briefing for Saturday, March 28, 2026
AI’s Blind Geniuses

Everyone’s measuring AI adoption. Nobody’s measuring AI results. If Jensen Huang and Alfred Lin can’t agree on a scorecard, that tells you more about the state of AI than any benchmark can.
THE NUMBER: 0.37% or 100% — the gap between the best score any AI achieved on ARC-AGI-3 (Gemini 3.1 Pro’s 0.37%) and Jensen Huang’s claim that we’ve already reached AGI. Even among the most credible voices in AI, nobody can agree on whether we’re at the starting line or the finish line. That uncertainty isn’t a bug. It’s the operating environment. And it’s exactly why the question of how you measure AI matters more right now than the question of how good AI is.
At GTC two weeks ago, Jensen Huang told the All-In crew that a $500,000 engineer who doesn’t consume $250,000 in tokens should set off alarm bells. Half your salary, burned in AI compute, or you’re not doing your job. He wants engineers treating tokens like oxygen.
On the same podcast, Jason Calacanis pushed a different metric: count the number of new AI tools each person brings into production. Ship or get out. Harry Stebbings has been running the same playbook — the measure of an employee is the AI products they’ve deployed.
Then Alfred Lin from Sequoia posted a thread that quietly dismantled both arguments. His point: when AI commoditizes execution, competitive advantage shifts to decision quality. Strong teams with clear strategy get faster. Weak teams with vague strategy get noisier. Measuring token burn or tools shipped tells you about activity, not outcome. It’s vanity metrics dressed up as productivity.
And Kevin Dahlstrom — a guy who’s spent his career in growth and measurement — quote-tweeted a chart of marathon finishing times clustered at round numbers and captioned it with four words that tie it all together: “What gets measured gets managed.”
He’s right. It’s the oldest trick in management science. You tell engineers their value is measured in tokens consumed, and they’ll consume tokens. You tell them it’s tools shipped, and they’ll ship tools — whether the business needs them or not. You tell them it’s judgment, and… well, good luck measuring that, Alfred.
🧠 Here’s the thing: they’re all correct, and they’re all wrong. Jensen’s selling chips. His incentive is token volume. Jason’s selling founder content. His incentive is hustle narratives. Alfred’s deploying capital. His incentive is decision quality at the companies he backs. Show me the incentive and I’ll show you the metric. Charlie Munger would’ve had a field day.
The Measurement Crisis Nobody’s Talking About
The AI industry has a measurement problem, and it isn’t the one you think.
It’s not about benchmarks — although the fact that every frontier model scored under 1% on ARC-AGI-3 while Jensen declares AGI mission-accomplished should give everyone pause. Gemini 3.1 Pro tops out at 0.37%. GPT-5.4 hits 0.26%. Claude Opus 4.6 lands at 0.25%. Grok 4.2 scored literally zero. Humans solve these problems on first contact, scoring 100%. There’s a $2M Kaggle prize open right now for anyone who can close that gap.
But the measurement crisis that actually matters for your business isn’t whether AI is intelligent. It’s whether the people deploying AI in your organization are making you money or making you busy.
Codie Sanchez nailed the operator’s version of this: find the person on your team with “AI Derangement Syndrome” — the one who’s already built three internal tools over the weekend and Slacked them to the team. Don’t shut them down. Fund them. Get out of their way.
That’s closer to right than any of the metrics above. Because Codie’s not measuring inputs. She’s pointing at the person who’s producing outputs nobody asked for — and saying the signal is in the self-direction, not the token receipt.
The real story: We’re in the middle of AI’s measurement crisis. Every technology cycle has one. In the dotcom era, we measured “eyeballs” and “page views” until the crash taught us that revenue was the only metric that mattered. In mobile, we measured app downloads until retention rates exposed the vanity of install counts. In SaaS, we measured MRR growth until unit economics revealed that some of the fastest-growing companies were also the fastest-burning.
AI’s version of this: we’re measuring tokens consumed, tools deployed, and benchmark scores when the only question that matters is: did the business get better?
The Case for Over-Hiring (Yes, Really)
Here’s where we’ll lose some people, and that’s fine.
Every newsletter, every VC, every management consultant is writing about efficiency. Cut headcount. Replace humans with agents. The math is simple: one agent costs $20/month, one employee costs $200K/year.
We think the near-term move is the opposite. Over-hire.
Not recklessly. Strategically. Bring in more potential AI-native operators than you think you need. Give them budgets — token budgets, tool budgets, time. Provide clear top-level strategy and then do the hardest thing a manager can do: get out of the way.
Here’s what happens. Within 90 days, the 10X people self-identify. They’re the ones who’ve already automated half their workflow, built tools for their teammates, and started asking questions about parts of the business they weren’t hired to touch. They don’t need direction. They need runway.
The others — the ones who need hand-holding, who wait for instructions, who use AI to do the same work at the same speed with slightly better formatting — they self-identify too.
Now you have information you couldn’t have gotten any other way. Not from interviews. Not from résumés. Not from asking someone to “describe a time they used AI to solve a problem.” You have 90 days of actual production data on who creates value when the tools are unlimited and the leash is long.
The math: Take the detractors’ fully loaded cost — salary, benefits, equipment, management overhead — and redistribute 50% of it to the 10X team. The other 50% drops to your bottom line. Your top performers get paid more (which means they don’t leave for your competitor), your output goes up, and your management burden goes down because the people who remain don’t need managing.
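The redistribution above can be sketched with hypothetical numbers — the headcount, the $200K fully loaded cost, and the 3/4 performer split below are illustrative; only the 50/50 split comes from the argument itself:

```python
# Hypothetical team of 10 at $200K fully loaded cost each.
# Suppose 90 days of production data identifies 3 as 10X performers
# and 4 as detractors who exit.
FULLY_LOADED = 200_000          # per-person annual cost (hypothetical)
TEN_X, DETRACTORS = 3, 4

freed = DETRACTORS * FULLY_LOADED       # cost released by exits
raises_pool = freed * 0.50              # 50% redistributed to the 10X team
savings = freed * 0.50                  # 50% drops to the bottom line

per_10x_raise = raises_pool / TEN_X
print(f"Freed: ${freed:,.0f}")
print(f"Raise per 10X performer: ${per_10x_raise:,.0f}")
print(f"Bottom-line savings: ${savings:,.0f}")
```

With these assumptions, each 10X performer gets a raise north of $130K — which is the retention mechanism the argument depends on — while $400K still falls to the bottom line.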
Why it matters: The 10X AI-native employee is the scarcest resource in the market right now. Jensen knows it — he’s offering tokens as a recruiting tool because “how many tokens come along with my job” is now a question candidates ask. Steve Huffman at Reddit is going heavy on hiring graduates specifically because they’re “much more AI native” than their older peers. Uber reports 84% of its engineers are already on agentic workflows.
The companies that win the next two years won’t be the ones that cut the deepest. They’ll be the ones that figured out who their 10X people are and gave them the room to run.
⚡ Meanwhile, Google Just Built Pied Piper’s Compression Algorithm
While Silicon Valley argues about how to measure AI usage, Google quietly changed the economics of using it.
TurboQuant — a new compression algorithm from Google Research — reduces LLM memory requirements by 6x with zero accuracy loss and speeds inference by up to 8x on Nvidia H100 GPUs. If you watched HBO’s Silicon Valley, this is Richard Hendricks’ middle-out compression made real. Except instead of compressing video files to disrupt Hooli, it compresses AI model weights to make every deployment dramatically cheaper.
If you’re running AI workloads: models that required a cluster of GPUs can now run on a single card. Models that ran in the cloud can now run locally. The token bill Jensen wants you to burn through? It just got a lot smaller per unit of intelligence.
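A back-of-envelope sketch of why 6x changes deployment math, assuming an 80 GB H100 and a hypothetical 70B-parameter model with 16-bit weights — the model size and the uniform 6x factor are illustrative assumptions, not details from the TurboQuant paper:

```python
import math

H100_MEM_GB = 80                # H100 card memory (80 GB variant)
PARAMS_B = 70                   # hypothetical 70B-parameter model
BYTES_PER_WEIGHT = 2            # 16-bit weights

weights_gb = PARAMS_B * BYTES_PER_WEIGHT        # ~140 GB uncompressed
gpus_before = math.ceil(weights_gb / H100_MEM_GB)        # cards needed today
gpus_after = math.ceil((weights_gb / 6) / H100_MEM_GB)   # with 6x compression

print(f"Uncompressed weights: {weights_gb} GB -> {gpus_before} GPU(s)")
print(f"Compressed 6x: {weights_gb / 6:.1f} GB -> {gpus_after} GPU(s)")
```

Even in this modest case the footprint drops from multiple cards to one; real deployments also carry KV-cache and activation memory, so treat this as directional, not exact.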
Samsung and Micron stocks have already started to reflect this — when models need one-sixth the memory, the memory chip business feels it.
What this means for operators: The cost curve just bent. If you’ve been waiting to deploy AI because the infrastructure math didn’t work, rerun your models. If you’ve been paying for cloud inference, look at local deployment again. And if you’ve been measuring your AI investment by token spend — congratulations, your most important metric just became unreliable. You can now get more intelligence for fewer tokens, which means token volume as a productivity proxy is already obsolete.
Jensen proposed the $250K token burn metric two weeks ago. Google just made it irrelevant before the ink dried. The Pied Piper dream isn’t fiction anymore — and the implications for who captures value in the AI stack are shifting in real time.
What This Means For You
The AI measurement crisis isn’t academic — it’s costing you money and talent right now. Every metric being pushed by the industry’s loudest voices serves the measurer’s incentive, not yours.
Stop measuring inputs. Start measuring outcomes. Token consumption, tools adopted, and benchmarks hit are vanity metrics. The only question: is AI making your revenue go up, your costs go down, or your velocity increase? If you can’t answer that clearly, you’re spending without a scorecard.
Identify your 10X people before your competitor does. The AI-native operator who builds without being asked is the most valuable person in your organization. They’re also the most likely to leave if you bury them in approval workflows and committee meetings. Fund them. Promote them. Get out of their way.
Rerun your infrastructure math. TurboQuant means deployment costs just dropped by a factor of six. If you made a build-vs-buy or cloud-vs-local decision more than three months ago, it’s already stale. The economics moved.
Accept the uncertainty. Jensen says AGI is here. ARC-AGI-3 says 0.37%. The honest answer is nobody knows — and anyone selling certainty is selling something else. Build for optionality, not for predictions.
Three Questions We Think You Should Be Asking Yourself
If Jensen’s right that engineers should burn $250K in tokens annually, what happens when Google makes those tokens 6x cheaper — do you need 6x fewer engineers or do you redeploy them? The cost curve is moving faster than most org charts can adapt. If your AI budget is pegged to token volume, you’re about to get a windfall. The question is whether you’ll pocket the savings or reinvest them in the people who know how to use the headroom.
Do you actually know who your 10X AI people are — or are you still measuring everyone the same way? Most companies are still running annual reviews designed for a world where output was roughly proportional to hours worked. That world is gone. The gap between your best AI-native operator and your average employee isn’t 2x anymore. It might be 20x. If you can’t name your top three AI people without thinking about it, you’re managing blind.
Are you building a team that plays offense, or are you cutting costs and calling it strategy? Efficiency is defense. The interesting question isn’t “can I do the same work with fewer people” — it’s “what happens when I give five AI-native operators the budget and autonomy to build things I haven’t thought of yet?” The companies that dominate the next cycle won’t be the leanest. They’ll be the ones that scaled fastest by turning their best people loose.
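The windfall arithmetic behind the first question can be made concrete — assuming Jensen’s $250K figure and that the 6x compression factor passes through uniformly to per-token price, which is an assumption, not a given:

```python
TOKEN_BUDGET = 250_000   # Jensen's proposed per-engineer annual token spend
COST_REDUCTION = 6       # TurboQuant's claimed compression factor

same_work_cost = TOKEN_BUDGET / COST_REDUCTION   # same tokens, lower bill
windfall = TOKEN_BUDGET - same_work_cost         # freed budget per engineer

print(f"Same token volume now costs ~${same_work_cost:,.0f}")
print(f"Per-engineer windfall to pocket or reinvest: ~${windfall:,.0f}")
```

Roughly $208K per engineer, per year, appears on the table — which is why the pocket-it-or-reinvest-it question matters more than the metric itself.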
“Show me the incentives and I’ll show you the behavior.”
— Charlie Munger
— Harry and Anthony
Sources
- Jensen Huang on token compensation — All-In Podcast / X
- Jensen Huang token burn — CNBC
- Alfred Lin thread on judgment vs. token maxing — X
- Kevin Dahlstrom “What gets measured gets managed” — X
- Codie Sanchez “AI Derangement Syndrome” — X
- Steve Huffman on AI-native hiring — Fortune
- ARC-AGI-3 benchmark results — ARC Prize Foundation
- TurboQuant compression research — Google Research
- Jensen Huang AGI claim — Rolling Out