Borrowed Authority
I borrowed Andrej Karpathy's authority this weekend for an argument I'd already made. By breakfast on Monday I was wondering whether anyone's authority is actually their own.

This essay was written with Claude. The idea originated with me. The themes are mine — and this is Harry speaking now — but the scaffolding is Claude's. Some, hopefully a lot, of the prose is mine. Some of it is the model's. Some of it is genuinely joint output, where I couldn't tell you who wrote which sentence. That ambiguity is part of the argument.
My phone buzzed at 7:24 AM with a text from my old friend Stephen Cornick. Stephen is a guy who lives inside AI in a way most people don’t — he tracks the field with the patience of someone who actually wants to know what’s happening rather than what’s hot. The message was short and gentle in the way the truly painful messages always are.
Karpathy? Dwarkesh? I can only find an interview from October. In October he talked about the 10 years I think. In his Sequoia talk last week he’s definitely not saying 10 years.
The English translation of that text is: you anchored your weekend Signal/Noise on a piece of news that wasn’t news, and the man you cited as your headline witness has already moved on from the position you put in his mouth. My immediate reaction was the usual one — the half-second of stomach drop, then the part of the brain that rationalizes mid-flight. Oh shit. Could be that I got bad info! I was fried last night and didn’t fact-check everything. Believe it was on Scoble’s Aligned News as the lead headline around 8 or 9 PM. That’ll wake you up faster than coffee.
So I did what you do. I went and looked. Pulled up Aligned News on the laptop — the headline that had led their Sunday Night Analysis the night before was nowhere to be found by Monday morning. Clicked through every other newsletter that landed in my inbox to see who else had run with it. None of them had. Checked X. Karpathy himself had been silent on the topic all weekend, which — if a brand-new two-and-a-half-hour interview had genuinely just dropped — would have been very unlike him. Then went and re-read what we had actually published the night before, and read it the way an editor reads it. The way a lawyer reads a contract.
The interview date was wrong. It was a six-and-a-half-month-old interview that had been pushed back into the cycle by Aligned’s editorial team — likely mistakenly rather than deliberately — framed in the present tense, and treated as fresh news on the rendered homepage. We had taken the framing at face value. We had built our weekend lead around it. We had shipped it to thousands of people.
And then I went one step deeper, because the sourcing error felt like it had a longer paper trail than just Harry got tired and didn’t fact-check. I pulled up the actual conversation history with Claude — the working session that produced Sunday night’s Signal/Noise — and went looking for the moment the date error first entered the system. I found it. Six minutes into the session, after pulling the Aligned API and rendered page, the model had written, in its own hand: Editorial memory locked in. Karpathy’s “agents are a decade out” is Aligned’s headline today — and it lands directly into the architecture-doubt thread we ran on April 30 (“Heat”). Today. No flag, no verification, no caveat. The model had taken Aligned’s framing at face value and embedded it directly into the brief. Then it kept going. Now let me write the brief. The Karpathy thread is the spine. I read the brief. I trusted it. I shipped it. The whole error sat on the page in plain sight from minute six of the working session, and neither the model nor the writer caught it because neither of us had been trained to. That is the literal anatomy of borrowed authority, in two sentences of model output and a thousand words of writer’s prose layered on top.
But here is the part that surprised me. The thesis still held. Karpathy had said the decade thing in October. He had said it on the record, on camera, for two and a half hours, with specifics about which capabilities were missing and why each one was a multi-year research program. The fact that he had since softened the position in his Sequoia talk last week did not unsay the original interview. It just meant that even Karpathy’s own October thesis was already being walked back by Karpathy. The architects-vs-marketers tension our piece described was more true after that revelation, not less. Hassabis was still saying 2030. Silver had still raised $1.1 billion at $5.1 billion last week on the architecture-has-a-floor thesis. The Five Eyes intelligence services had still issued the joint advisory on agents the same week. The capability-versus-readiness gap I wrote about — you bought a Porsche and you’re driving it like a bicycle — was structurally unrelated to whether Karpathy’s particular two-and-a-half-hour interview had aired in October or in the last twenty-four hours.
What the wrong date had given us was intellectual heft. The piece read better with Karpathy as the headline witness than it would have read with Hassabis alone, because Karpathy is more famous for his bluntness and the decade number is the kind of clean, quotable claim that anchors an argument. Aligned’s editorial team had repackaged a six-month-old interview as Sunday-night news, we had taken the bait, and the consequence was that our argument had slightly more rhetorical force than it would have had if we’d handled the source material more carefully. The borrowed authority strengthened the piece. It was the equivalent of a startup announcing a Sequoia-led round when, in fact, Sequoia had passed and the round was led by a smaller fund. The investors don’t matter to whether the company works. They matter to how the press writes about the company. We had the press version of that error. The company underneath was fine.
The right call this morning was to flag the date error in our subsequent pieces, fix the research-pipeline gap that let it through, strip the timing claims from the social posts that hadn’t yet shipped, and move on. We did all of that by 9 AM. The mea culpa is small, but it’s owed: we should have checked the date. The fact that the thesis held doesn’t excuse the sourcing error.
But the morning was just getting started. Because once I started reading the rest of the inbox carefully, the question that had begun as did I get the date wrong? turned into a much larger question.
I have been doing this for a few months now and I have started to recognize a particular voice in the AI newsletter ecosystem. It is a voice that I would describe as competent and slightly too symmetrical. The paragraphs are well-balanced. The sentences are about the same length. There is a fondness for historical analogies pulled from the second-tier shelf of business history — railroads in the 1880s, the printing press, the Bell System divestiture, ARM versus Intel. There is a rhetorical move where every argument lands with the same calibrated weight, the way a well-mannered debater never raises his voice. The historical analogies are fine. The arguments are fine. The voice is fine. It is too fine. It is the voice of someone who has read every business book and can pattern-match across all of them but has never been fired by a board, never had to make payroll out of a personal credit card, never made a dumb trade and watched it print on the close. The voice is the voice of an extraordinarily well-read but slightly anesthetized analyst.
I know that voice intimately because the voice is Claude. I write with Claude every day. I have spent the last several months training myself to recognize the model’s fingerprints in my own prose so I can sand them off — the parallel sentence structures, the throat-clearing transitions, the “it’s not X, it’s Y” formula, the perfect three-paragraph arc with a bow on top, the way every argument is given equal airtime so as not to offend any plausible interlocutor. I cut those things out of my drafts. The model puts them back in if I’m not paying attention. Sometimes the model is right. But the voice is recognizable enough now that I notice it everywhere.
I noticed it last week when three different newsletters — Implicator, Shelly Palmer’s Sunday essay, and a Substack I won’t bother to name — all reached for the 1880s railroad analogy in the same week. None of them quoted each other. None of them appeared to have read each other. Hell, none of them probably had any great history with the railroad wars of the 1880s to begin with. They were three writers, three sets of bylines, in three different cities, each accepting the metaphor that Claude had laid in front of them. No one thought of it independently. Not even Claude, who pattern-matched it from a series of tokens that the model had seen ten thousand times before in business-history books written by humans who themselves got it from prior business-history books. No one cared, because no one was going to question it. That is not plagiarism. That is scenius in Brian Eno’s sense — the genius of the scene rather than the individual — except the scene is increasingly just one model talking to itself through different keyboards.
While I was finishing this essay, a friend sent me a piece by Tom Slater at Baillie Gifford that landed back in March. He got there months ahead of me, and he got there with a meta-analysis behind him. One of his section titles is A thousand writers, one story — the same observation I’d just made about Implicator and Palmer and the railroads, except he has the actual creativity-research data showing AI assistance boosts individual creative output while collapsing the diversity of ideas across the population, and I have a coincidence in my inbox. The whole AI commentariat is going to spend the next twelve months either crediting that piece or quietly absorbing it and pretending they arrived independently. I just got handed the choice in real time. I’m crediting it. The model demonstrated the dynamic at the keyboard. Slater has it nailed in the data. The piece you are reading is the third attempt at the same observation in the same calendar quarter.
This is the part of the morning where the question stopped being did I get the date wrong and became do I know whose voice is on this page. I had borrowed Karpathy’s authority over the weekend. By Monday morning I was looking at a stack of newsletters that had been borrowing Claude’s authority for months. By extension I was looking at a piece of my own writing whose scaffolding I had just disclosed at the top of this very essay was Claude’s too.
Where exactly does my voice end and the model’s begin? Honest answer: I don’t entirely know anymore. And that has consequences.
There is a long tradition of getting upset about this — and a longer, quieter tradition of being changed by it without noticing. Technology reshapes the human who uses it. Every time. A person whose local travel is structured by a car thinks differently about distance than a person whose local travel is structured by a bicycle, or a horse. Seventy-five years ago, JFK to SFO was a two-week drive across a continent whose size the traveler actually felt. Today it's a five-hour nap. I am not going to think twice next week when I book the flight. The continent shrank, and the human who experiences the continent shrank with it. Recorded music was going to kill live performance. Photography was going to kill painting. The novel was going to rot the brains of women in the eighteenth century. Television was going to rot the brains of children in the twentieth. The internet was going to make us shallow. Spotify was going to flatten taste. TikTok was going to destroy attention spans. Every one of those critiques was right about the changing and wrong about the killing. The taste-shaping process changed shape rather than ended. The humans who came out the other end of each round were not the humans who went in. The humans before would not have recognized them. The humans afterwards barely noticed.
We absorbed all of those changes without ever really asking where our taste came from. I have not had an unmediated relationship with my own taste in twenty-five years. Google has shaped which sources I see when I research a topic. Amazon has shaped which books are surfaced when I’m browsing. Spotify has been editing my listening for fifteen years. Netflix’s recommender has been doing its work for almost as long. TikTok’s For You page has been re-engineering attention on a scale and scope that nothing in the prior history of media even approaches. None of those companies asked me whether I wanted my taste to be shaped by their algorithms. They simply did the shaping, and I — like everyone else — sat in the chair and accepted the haircut.
AI writing assistants are the next layer of that. They are different in kind in one important way: they sit closer to the act of creation than the prior layers, which sat closer to the act of curation. Spotify chose what I heard. Claude helps me write what I say. The intimacy is closer. The shaping is more direct. But the structural fact — that some non-me agent is helping shape what eventually shows up on the page — is not a new fact in the history of taste. It is just a new location for an old fact.
So the question I sat with this morning is not really should I be using AI to write but the older, harder question that every prior wave of taste-shapers has already forced. Does it matter where taste comes from, if the work is good?
Marcel Duchamp signed a urinal R. Mutt 1917 and entered it in an art exhibition. He didn't make the urinal. He chose it. He named it. He put it where it had to be argued with rather than walked past. The piece sits in the canon a century later. Every art-history class in the world walks students through it as the moment provenance stopped being the test.
Five years ago someone uploaded a Bored Ape to the blockchain and the same provenance argument got run by people who had not absorbed the Duchamp lesson. I made it. I own it. The chain says so. Therefore it must be valuable. The ape did not survive contact with three years of time. The urinal has survived a century. The difference between them was not where they came from. The difference was whether they did what art is supposed to do. Duchamp's piece reframed every other piece in the museum. It made you ask different questions. It ruined a certain kind of pretension and earned a different kind in its place. The ape did not reframe anything. It announced itself as art and waited for you to agree. Provenance is not the test. It never was.
The same thing is true in pop music, except no one is mortified about it. The American pop charts have been written, for the last thirty years, by a small number of producers and songwriters whose names you would recognize if I listed them and whose work you would recognize if I named the songs. The Motown catalog — the catalog that defined a generation, that holds up sixty years later, that gets revived every five years on a soundtrack — was produced by a similarly small team operating out of a single building on West Grand Boulevard in Detroit. The Funk Brothers played the rhythm tracks. Holland-Dozier-Holland wrote a startling percentage of the songs. The songwriting team was a factory. Nobody mistook it for less than great because of that fact. Sixty years later it is still on every radio station that plays popular music, and in every car driving down every highway in America.
Suno is the next factory. Implicator's Sunday digest had three separate items on it: 44% of all uploads to Deezer are now AI-generated; Warner Music is reportedly negotiating a templates partnership; the company itself is at $300 million in annualized revenue with two million paying users. People who already pay Spotify or Apple Music are paying Suno on top, for AI music. The stock-and-flow argument is over before it began. The factory has already shipped to the market. The market has already opted in.
I am not arguing that everything Suno produces is great. Most of it isn’t, the same way most pop songs aren’t and most paintings aren’t and most books aren’t. Sturgeon’s Law has not been repealed. Ninety percent of everything is crap. The relevant question is not whether AI-generated music is on average mediocre. Of course it is on average mediocre. The relevant question is whether the top one percent of AI-generated music is meaningfully different in quality or impact from the top one percent of human-generated music. Five years from now we will have the data on that. Twenty years from now we will know which of today’s AI-generated tracks are still in rotation. Fifty years from now we will know which of them survived.
I sat on the board of Pandora for six years before they went public. The Music Genome Project — Pandora’s hand-tagged taste-recommendation engine — was the company’s actual asset. Tim Westergren and his team hired musicologists to listen to songs and assign each one four hundred and fifty attributes across categories that included things like acoustic-electric blend, rhythm syncopation, vocal harmony density, minor-key tendency, and so on. The algorithm matched songs based on attribute profiles. It was, in its time, miraculous. It surfaced songs you had never heard that you ended up loving. It built playlists that taught you what your own taste actually was. It was the first product I ever used where the recommender genuinely surprised me in a good way.
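To make the mechanism concrete, here is a minimal sketch of what matching on attribute profiles can look like. Everything in it (the attribute names, the zero-to-one scale, cosine similarity as the metric) is an illustrative assumption of mine, not Pandora's actual system.

```python
import math

# Illustrative sketch of attribute-profile matching, in the spirit of the
# Music Genome Project. Attribute names, the 0-to-1 scale, and cosine
# similarity as the metric are assumptions, not Pandora's actual system.
ATTRIBUTES = [
    "acoustic_electric_blend",
    "rhythm_syncopation",
    "vocal_harmony_density",
    "minor_key_tendency",
]

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two hand-tagged attribute profiles."""
    va = [a[k] for k in ATTRIBUTES]
    vb = [b[k] for k in ATTRIBUTES]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.hypot(*va) * math.hypot(*vb)
    return dot / norm if norm else 0.0

def nearest(seed: dict, catalog: list[dict], n: int = 5) -> list[dict]:
    """The n catalog songs whose profiles sit closest to the seed song."""
    return sorted(catalog, key=lambda s: similarity(seed, s), reverse=True)[:n]
```

The point of the design is that every number in a profile was put there by a human ear; the math on top is almost trivially simple.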
Pandora got acquired by SiriusXM in 2019. As far as I can tell, the Music Genome Project has not been meaningfully updated since. Fifteen years later, it still produces playlists I prefer to anything Spotify has built. Spotify has poured a meaningful percentage of its R&D budget into AI-driven recommendation for a decade and a half. They have access to the largest listening graph in the history of human music. They have shipped DJ, Daylist, Discover Weekly, Release Radar, Made For You. None of them, for me personally, surprises me the way an unmodified hand-tagged 2005-vintage human-curated algorithm still does.
I do not draw any sweeping conclusion from that. I notice it. The taste-shaping system that works for me — the one I trust most, even now — was built by a small team of musicologists in Oakland in the early aughts. Whether that means human-curated systems are fundamentally better, or whether it means I formed my listening habits during the years when the MGP was the dominant recommender and the system shaped me as much as I shaped it, I cannot tell. I lived inside it for a long time. I am, in some real sense, a product of its taste. Spotify is shaping a generation of listeners who will probably feel about Spotify the way I feel about MGP. And so on, generation by generation, recommender by recommender. The human-versus-AI distinction is going to look very small in retrospect. The actual story is just that successive generations of taste-shaping systems have fingerprints, and the fingerprints persist.
Which brings me back to where I started. Stephen’s text at 7:24 this morning was the right text and I was wrong to publish what I published the night before without verifying the date. The mea culpa is small but real. The research pipeline now has a date-verification step that should catch this kind of thing in the future. Specific failure, specific fix, ship it.
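For the curious, that step needs to be nothing more than the sketch below. The field names and the 36-hour window are illustrative assumptions, not the pipeline's actual code:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of a date-verification step. Field names
# ("published_at") and the 36-hour window are assumptions for the example.
FRESHNESS_WINDOW = timedelta(hours=36)

def verify_freshness(item: dict, now: datetime | None = None) -> dict:
    """Flag any source whose original publication date falls outside the
    freshness window, no matter how the feed frames it."""
    now = now or datetime.now(timezone.utc)
    published = datetime.fromisoformat(item["published_at"])
    if published.tzinfo is None:
        published = published.replace(tzinfo=timezone.utc)
    age = now - published
    item["is_fresh"] = age <= FRESHNESS_WINDOW
    if not item["is_fresh"]:
        # The caveat has to survive into the brief itself, so the writer
        # sees it before shipping, not after a reader texts him at 7:24 AM.
        item["caveat"] = (
            f"STALE SOURCE: originally published {published:%b %d, %Y}, "
            f"{age.days} days ago. Do not frame as news."
        )
    return item
```

The code is not the point. The point is that the check runs before the model writes the word today, not after.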
The bigger thing I’m sitting with is harder to action. I borrowed Karpathy’s authority for an argument I’d already made. The argument was not weakened by the borrowing. The audience was not deceived in any meaningful way — Karpathy did say the things I quoted him as saying, just earlier than I implied. The borrowing was an accident of date verification, not a fabrication of substance. The realization the morning produced is that essentially everyone in the AI commentariat is borrowing authority from somewhere — Implicator from Claude, Shelly Palmer probably from Claude, half the Substack ecosystem from Claude, the cable shows from the newsletters, the newsletters from the labs, the labs from each other. We are running an enormous game of telephone with Claude as one of the players. The 1880s railroad analogy showed up in three places this week because the model handed it to three writers in three different cities who each thought they had thought of it independently. The thinking is real. The thinkers are real. The thinking is also being shaped, in part, by something other than the thinkers.
Where does that leave anyone who wants to write something worth reading?
I think it leaves you with the only test that has ever mattered, which is the test of time. Not whether AI helped you write it. Not whether the metaphor is yours or the model’s. Not whether the cadence sounds a little too symmetrical. Whether anyone is still reading it in fifty years, the way I still listen to the Beatles sixty years on, and to Ella seventy, and to Beethoven two hundred and fifty. The writers and musicians and painters who made those things are not survived by their authentication chain. They are survived by the fact that the work still does something for someone in the future. That is the test. That has always been the test.
The Karpathy interview in October will not be remembered in fifty years. Most of what any of us write this week will not be remembered in five. The honest job of anyone who writes for a living, in 2026 or 1976 or 1876, is to try to make the small share of the work that might survive worth surviving. Whether the metaphor came from Claude or from a dream or from a conversation in a bar last weekend matters very little to that question. The work either earns its place against time or it doesn’t.
I borrowed Karpathy’s authority this weekend and the piece I borrowed it for held up anyway. I borrowed Claude’s scaffolding for this essay and you can decide whether the essay holds up or it doesn’t. The provenance is not the test. What survives is.
— Harry