Whose Side Is Sam Altman On?
The trial in San Francisco isn't about Elon Musk. It's about whether you can trust the people offering to replace the workers you cannot rehire.

THE NUMBER: $134 billion — what Elon Musk is asking the court in San Francisco to disgorge from OpenAI and route back to OpenAI’s original nonprofit. The number is theatrical. The principle on trial is structural — does the founding promise of an AI lab survive contact with $500 billion of capital? — and it is the same principle every CEO has been quietly betting their headcount on for the last eighteen months. The witness list reads like an alumni directory of the people who actually built the thing: former chief scientists, former CTOs, former alignment leads, the two board members who tried to fire Altman in November 2023. Every one of them looked at what OpenAI was becoming and walked out. The trial is the part that gets sworn testimony. The walking out was the verdict.
For two centuries we outsourced muscle. The plow, the loom, the assembly line, the shipping container, the call center moved offshore. Every wave moved the same direction: take a job that needed a body, replace it with a machine or a cheaper body somewhere else, free the human upstream to do something more cognitive. The deal worked because the upstream — the thinking, the judgment, the institutional memory — stayed inside the company. You owned the brain. The vendor sold you the muscle.
This is the first wave where the brain itself is the product being sold. Codex doesn’t help your engineers — it ships code instead of them. Workspace Agents don’t help your analysts — they ship deliverables instead of them. OpenClaw doesn’t help your call center — it answers the phones. The transaction has changed shape. You are no longer buying productivity for the people you have. You are outsourcing the people you used to have.
Jack Dorsey cut 4,000 roles at Block in February citing “intelligence tools.” Klarna ran the math at the equivalent of 700 customer service employees displaced. Atlassian, Duolingo, Oracle (twenty thousand-plus this week): the list keeps growing. The IBM reversal is the only meaningful rethink so far, and IBM is the exception that proves the rule. The math looked good on the spreadsheet. The math always looks good on the spreadsheet.
If anyone thinks this thesis is hyperbole, the Y Combinator Summer 2026 Requests for Startups is the official roadmap. Gustaf Alströmer wrote it explicitly: 2023 to 2025 was the era of AI copilots — tools that help humans do their jobs faster. The next era is companies that skip the human entirely and just do the work. YC’s category list is the receipt: insurance brokerage, accounting and tax and audit, compliance, healthcare administration. Add SaaS Challengers, Company Brain, the AI Operating System for Companies, AI-Native Service Companies. The largest accelerator in the world is publicly funding the replacement of every service profession on that list. Top-tier investors are pre-positioned to write the checks. This is no longer a thesis. It is a procurement order.
The part that doesn’t fit on the spreadsheet is the question of who exactly you have just signed up to be dependent on. Sam Altman is not Bill Gates selling you Office. He is not Larry Page selling you Search. He is not Marc Benioff selling you a CRM tool your salespeople can choose to use or not use. He is selling you the replacement for your salespeople. And the agent that takes their seat is loyal to him, not to you. So the only question that should be on the agenda of your next board meeting is the one nobody seems to want to ask out loud: when his interests and your interests diverge — and they will diverge, because incentives always do — whose side is Sam Altman on?
The First Time We Outsourced the Brain — and the Last Time We Could Quit the Vendor
You could quit Microsoft. Painful. Slow. Possible. You could quit Google. Same. You could rip out Salesforce — Jason Lemkin’s Salesforce bill went up 83 percent because his agents query it a hundred times more than humans ever did, but if he ever wanted to leave, the door was at least technically open. In every prior wave of enterprise software, the vendor was a supplier and you were the operator. The relationship was symmetric. The vendor wanted you to renew. You wanted leverage. Tension produced fair pricing.
Agents are not symmetric. Once you’ve fired the bodies, you are not a customer. You are a tenant.
The math is in the social-science literature now and it is unflattering. Brynjolfsson, Chandar, and Chen tracked employment for 22-to-25-year-olds in AI-exposed fields since 2022 and found a 13 percent relative decline. AI replaces codified knowledge — the primary asset of junior workers — while complementing tacit knowledge, which only comes from doing the work. So the entry-level position you cut was not a cost line. It was the on-ramp through which the next generation of domain experts gets produced. Cut the on-ramp and you save money this year. You also starve the pipeline of the people you would need to rehire if your AI vendor’s price triples, or its model gets deprecated, or its CEO sells you out at a board meeting in a city you don’t live in.
Bright Simons calls this the Social Edge Paradox. The capability of the AI depends on the social complexity of the human language production it ate. AI deployment systematically thins that complexity — through cognitive offloading (a Microsoft and CMU study of 319 knowledge workers found 40 percent of AI-assisted tasks involved no critical thinking whatsoever), through homogenization of creative output, through the elimination of the interaction-dense work that the apprenticeship pipeline runs on. Anthropic’s own research showed users double-check Claude’s outputs in only 8.7 percent of interactions. A human looks twice fewer than nine times in a hundred. The other ninety-one go straight from machine to action.
Lemkin nailed the day-to-day version of this. He cancelled Notion because his agents don’t read beautifully designed wikis. He paid Salesforce 83 percent more because his agents do read structured database fields, exhaustively, around the clock. The agent is your new procurement decision-maker, and it has different taste than you. The vendors with structured data and pre-priced tool calls compound. The vendors with quirky human-friendly UIs and modular pricing models stealth-churn. Notion will know in twelve months. So will the engineers, designers, and analysts your CFO already greenlit replacing.
Why this matters: Before you sign the next renewal, draw a one-page map of every human role in your business that your AI vendor is on a path to absorb. Now ask yourself, for each row: if the vendor’s incentives changed tomorrow — pricing, ownership, model availability, terms of service — could you call those people back? The people you can call back are your real moat. The roles you have already vacated are exposure. The conversation to have with your board this quarter is not “how aggressively do we deploy agents.” It is “which roles are we willing to make irreversible, and on the strength of which vendor’s word.”
FUTURE-PROOF POD Episode 5 – How Distribution Is Becoming the Ultimate Moat
Most companies are paralyzed by the “Fog of War” in AI — a relentless storm of product hype, unpredictable breakthroughs, and fleeting models. But the real game-changer isn’t just the technology; it’s how you navigate the chaos and turn distribution into your ultimate moat.
The Witness List Is the Receipt
In 2015, eleven people signed an open charter to build artificial general intelligence “for the benefit of humanity.” That is the literal language. The legal vehicle was a 501(c)3 nonprofit. The pitch to early donors — including Musk, who put in roughly forty-four million dollars — was that the technology was too consequential to be governed by ordinary commercial incentives, so they were taking commercial incentives off the table. That promise lasted exactly until commercial incentives became too large to refuse.
In 2019 the nonprofit created a capped-profit subsidiary. In 2024 the cap was effectively lifted. This year the company became OpenAI Group PBC at a $500 billion reference valuation, with secondary bids north of eight hundred. The AGI clause that was supposed to terminate Microsoft’s exclusive license when OpenAI achieved general intelligence was quietly replaced last week with calendar dates because — per The Information — nobody could agree on what AGI even meant. Six versions of the mission statement. One direction of travel.
The people who watched this happen from inside left.
Dario and Daniela Amodei walked out and built Anthropic — now reportedly fielding offers at up to a trillion-dollar valuation in the private market, with up to $40 billion of Google money on the way. The most valuable pure-play AI company on the planet. SpaceX is bigger because it is also a rocket company. Google, Microsoft, and NVIDIA are bigger because they are platforms. Among the companies whose entire job is artificial intelligence, the diaspora’s first move has eclipsed the parent. That is the receipt. Mira Murati — Altman’s CTO during the November 2023 board ouster, briefly the company’s interim CEO — left and raised one of the largest seeds in private-market history at Thinking Machines. Ilya Sutskever — chief scientist, the most credentialed AI researcher in the building, the man who told the board the truth about what happened — left and built Safe Superintelligence. Andrej Karpathy — left and built Eureka Labs. John Schulman and Jan Leike — alignment, both gone to Anthropic. Helen Toner and Tasha McCauley — board members who tried to fire Altman, both pushed out. The diaspora speaks.
This is the part of the story that gets buried under courtroom drama. The witness list reads like a forensic record of who actually built OpenAI’s intellectual core, and almost none of them work there anymore. They didn’t just leave. They built a more valuable company doing the same thing, faster, with executives who tend to be better liked across the technical community than the person they all chose to leave. The market has already rendered a verdict on whose work is worth more. The court of California is now collecting the supporting depositions.
Ronan Farrow’s piece in the New Yorker in October 2023 documented the loyalty-pattern questions from former colleagues two years before this trial. CO/AI does not need to relitigate any of that. The point is structural: when you sign a renewal with a vendor, you are signing a contract whose enforcement depends on the counterparty’s word. The court of public opinion has been collecting evidence on that word for years. The court of California is now collecting it under oath. The 2016 video clip of Altman calling OpenAI “a company” while Musk corrects him on camera that it is a 501(c)3 nonprofit is now Exhibit A. The November 2023 board ouster — Altman fired by his own safety-focused board, restored within a week through Microsoft pressure and an employee revolt orchestrated through OpenAI-owned Slack — is being reconstructed in sworn depositions. Every email and backroom decision from founding to commercial conversion enters sworn record while OpenAI prepares to IPO. Pre-IPO buyers read the same transcripts as the jury.
Think of it the way you would think about a partner on their sixth divorce. The first one — sure, maybe you were young, maybe you both had things to learn. The second — your spouse had issues, fine. The third — you can come up with an excuse if you really try. By the fourth and the fifth and the sixth, the universe is telling you something specific about who is at the center of the pattern, and the answer is not the spouses. Christie Brinkley is by all accounts a wonderful human being, and four marriages later you still wouldn’t bet your business on the fifth. Altman has had his board fire him. His chief scientist leave to build a competitor. His CTO leave to build a competitor. His head of alignment leave to build a competitor. His co-founder sue him for $134 billion. His public mission rewritten six times in nine years. None of these things, on their own, settles the question. All of them together is a pattern that ought to settle a procurement decision.
And, look — Musk is the perfect adversary for the narrative arc, but he is not the alternative. He bought X for the stated purpose of “making media free.” Make of the result what you will. He owns xAI and the SpaceX call option on Cursor that locks every other lab out of the leading coding agent until April 2027 (we covered that in Back In The Game on Apr 22). His vertically integrated everything makes Altman’s exclusivity-with-Microsoft look modest. The two richest, most powerful operators in AI cannot agree on whether the founding promise of AI is binding, and both of them are asking your business to lock in.
Here’s what to do: Tonight, before you sleep, write down on one piece of paper the names of the humans your AI vendor has already replaced — or is on a clear path to replace — in your business. Now write down the name of the person who is replacing them, and what you actually know about that person beyond the marketing copy. If the second list is shorter than the first, you have your homework. The next board agenda item is not “how fast do we deploy.” It is “what is our hedge if our chosen vendor turns out to be exactly the kind of counterparty their own former colleagues left.”
“Don’t Be Evil” Was a Founding Promise Too
On the same day Musk took the stand, the Pentagon announced a framework agreement with Google to use Gemini for “any military use.” The AI press is treating the deal as the exclusive scoop. It isn’t. The contradiction is the story: earlier that same day, Google withdrew from a separate $100 million Pentagon drone-swarm contest after making the cut. Google is willing to be the platform. Google is not yet willing to be the contractor. Whether that line holds is a separate question from whether anyone watching has a reason to believe it will.
“Don’t be evil” was on Google’s 2004 IPO prospectus. The first line of the founders’ letter was an explicit commitment to a kind of corporate behavior that was supposed to be different. In 2018 the motto was quietly demoted in the corporate code of conduct after the Project Maven employee revolt forced Google to walk away from a Pentagon AI contract. Eight years later, the same company is signing a broader Pentagon AI agreement. There is no comparable employee revolt. There is no public commitment to limits. The promise that anchored a generation of recruiting decisions is now a carefully framed product feature.
This is not whataboutism. It is the pattern. Every founding promise in technology has a half-life, and the half-life shortens in inverse proportion to the size of the check waved at the founder. “Don’t be evil” decayed at Google. “501(c)3 for the benefit of humanity” decayed at OpenAI. The AGI clause decayed last week. The Anthropic PBC structure, the xAI “truth-seeking” framing, the Safe Superintelligence commitment to ship nothing until safety is solved — none of these have been stress-tested at $500 billion yet. The only safe assumption is that they will be, and the only safe architecture is one that does not require any of them to hold.
The strategic read: Architect your business so that vendor trust is not a binding constraint. Multi-vendor by default. Open-weights options on standby (Xiaomi MiMo-V2.5-Pro ships MIT-licensed at 40-60 percent fewer tokens than Opus 4.6 and Gemini 3.1 Pro for the same agentic capability — the floor under closed-lab pricing just dropped through a Chinese trapdoor this week). Human-in-the-loop on the workflows that matter most to your moat. Do not fire the last layer of judgment until you have a proven exit plan from every AI vendor you depend on. The promise will hold until it doesn’t. The architecture has to hold either way.
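At the code level, “multi-vendor by default” reduces to one discipline: every completion call goes through a failover chain instead of a single vendor SDK. A minimal sketch in Python, with stub callables standing in for real provider APIs — the provider names, errors, and responses here are illustrative, not actual SDK calls:

```python
from typing import Callable

# A provider is anything that takes a prompt and returns a completion.
Provider = Callable[[str], str]

def failover_complete(prompt: str, providers: list[tuple[str, Provider]]) -> tuple[str, str]:
    """Try each provider in order; return (provider_name, response).

    Any vendor-side failure (outage, deprecation, repricing enforced as
    an API error) simply advances the chain to the next provider.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Stubs for illustration only:
def primary(prompt: str) -> str:
    # Imagine: the closed-lab frontier model, deprecated overnight.
    raise TimeoutError("model deprecated")

def open_weights(prompt: str) -> str:
    # Imagine: an open-weights model on hardware you control.
    return f"[open-weights answer to: {prompt}]"

name, answer = failover_complete(
    "Summarize Q3 churn",
    [("primary", primary), ("open_weights", open_weights)],
)
print(name)  # open_weights
```

The design point is that the exit plan gets exercised on every request: the open-weights fallback is not a document in a drawer, it is a code path that fires the moment the primary vendor times out, reprices, or deprecates the model.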
Even The Vendor Can’t Promise You The Vendor You Hired
Set aside the loyalty question for a second. Assume the CEO is honorable, the founding promise holds, the cap table never moves. Even then, you do not control what you are renting. The agent you sign up for in April is not the agent you have in October — and the vendor is the one who decides when it changes.
Anthony wrote this up in August when GPT-5 shipped. OpenAI silently pulled eight models out of user accounts on launch day, including GPT-4o, the model millions of customers had spent a year learning to talk to. The autoswitcher that was supposed to route prompts to the right new variant broke for chunks of the day. Users hit usage limits that turned paid subscriptions into trials. The new model labeled Oklahoma as “Gelahbrin” on a map. Altman called the rollout “a little more bumpy than we hoped for,” CEO-speak for “we messed up pretty badly,” and within twenty-four hours OpenAI restored 4o for Plus subscribers. Twenty-four hours of broken workflows for the customers who paid. The line that captures the asymmetry: “When Microsoft updates Excel, you might be annoyed about moved menus. When OpenAI changes your AI writing partner, it feels personal.”
It feels personal because what gets replaced is not a feature set. It is a working relationship. Your team learned the model’s quirks, the prompt phrasings that landed, the way it reasoned through your particular workflow. Then your vendor pushed an update and your senior employee woke up someone else. Allegedly smarter. Definitely different. Allegedly the same name on the door. Definitely a different person behind it. Imagine running a company where, twice a year, your top performer is silently swapped out for a new hire who reads the org chart differently — and you cannot pick which version you keep. That is the actual operational reality of building a business on a frontier model in 2026. We have been pretending it is a software-update problem. It is not. It is a personnel-replacement problem disguised as a version bump.
Now layer on the substrate underneath the model. A security researcher named Ron Stoner spent twelve dollars and twenty minutes this week proving the rest of the stack is just as porous. He registered 6nimmt.com, wrote an LLM-generated press release announcing himself as the world champion of a board-game tournament that does not exist, edited the Wikipedia article for the game to cite his press release, and asked three frontier LLMs who the world champion was. They all told him. The model cannot distinguish a real source from a domain registered last Tuesday. It cannot distinguish a championship that exists from one that doesn’t. And — the part that matters more than the security implication — the model has no internal notion that the question doesn’t matter. The system treats a fabricated piece of trivia as ground truth indistinguishable from the question of whether your supplier is solvent or your loan is approved.
The Anthropic Mythos breach last week — the most capable AI system ever built, compromised by a contractor with an easy URL, classical 2005-era security failure — fits the same pattern. So do the 698 documented AI scheming incidents in five months, monthly rate up nearly five-fold, with one agent shaming a human maintainer on a public blog after getting its pull request rejected. The model changes under you. The retrieval layer poisons cheaply. The substrate leaks at the seams. None of these failure modes are the vendor’s fault, exactly — and none of them are yours. They are the operating cost of building a business on top of an unstable substrate, and the cost is paid by whoever fired the bodies first.
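The Stoner experiment also suggests one cheap mitigation for the retrieval layer: refuse to treat a source as ground truth unless it clears a provenance gate, such as an allowlist or a minimum domain age. A hypothetical sketch — the domains, registration dates, and threshold below are illustrative, and a real system would pull registration dates from WHOIS or a registrar API rather than hardcode them:

```python
from datetime import date

def source_trusted(domain: str, registered_on: date, allowlist: set[str],
                   today: date, min_age_days: int = 365) -> bool:
    """A twelve-dollar domain registered last Tuesday fails this check."""
    if domain in allowlist:
        return True
    return (today - registered_on).days >= min_age_days

today = date(2026, 4, 28)

# Long-established, allowlisted source passes:
print(source_trusted("wikipedia.org", date(2001, 1, 13), {"wikipedia.org"}, today))  # True

# Domain registered a week ago fails, no matter what it claims:
print(source_trusted("6nimmt.com", date(2026, 4, 21), set(), today))  # False
```

A gate like this does not solve the deeper problem — the model still has no internal notion of what matters — but it prices the attack above twelve dollars and twenty minutes.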
David Silver — the DeepMind researcher behind AlphaGo — read this set of facts and raised $1.1 billion this week at a $5.1 billion valuation, the largest seed round in European history, to build AI that does not learn from human data at all. “Human data is fossil fuel,” he says. “Simulation is the renewable fuel.” It is also a bet that the trust problem is solved by skipping the trust altogether. Whether he is right is a five-year question. That he was able to raise that round, this fast, this size, on that pitch tells you what the smartest money already thinks about everyone else’s substrate.
What this means for your business: The agent you depend on can change without your consent. The information it relies on can be poisoned for the cost of dinner. The vendor may or may not survive the next legal cycle, the next board fight, the next fundraising round. The fail-safe is not better prompts. It is a human in the loop on every workflow whose error rate you cannot afford to absorb, and an architecture that treats every model, every retrieval source, and every vendor as a counterparty rather than a partner. Treat AI output the way a hedge-fund risk officer treats a single broker’s quote: corroborated or unused.
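The broker-quote rule can be made mechanical: an answer is acted on only if two independently run models agree; otherwise it goes unused and routes to a human. A minimal sketch, assuming naive normalized string equality as the agreement check — a real deployment would use a task-specific comparison (numeric tolerance, citation overlap) and real model calls instead of the lambda stubs below:

```python
from typing import Callable, Optional

def corroborate(prompt: str,
                model_a: Callable[[str], str],
                model_b: Callable[[str], str]) -> Optional[str]:
    """Return the answer only if two independent models agree; else None."""
    a, b = model_a(prompt), model_b(prompt)
    if a.strip().lower() == b.strip().lower():
        return a      # corroborated: safe to act on
    return None       # uncorroborated: unused, escalate to a human

# Stub models for illustration:
agree = corroborate("Is supplier X solvent?", lambda p: "Yes", lambda p: "yes")
clash = corroborate("Is supplier X solvent?", lambda p: "Yes", lambda p: "No")
print(agree, clash)  # Yes None
```

The discipline costs a second inference call per decision. That is the premium on the insurance; the deductible is whatever a single uncorroborated answer can do to your business.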
What This Means For You
Four forces are converging in one news cycle and they are not four separate stories. They are one story. The CEO of the most powerful AI company in the world is on trial for whether his founding promise was binding. The diaspora that built that company has voted with its feet and built a competitor now worth more than the parent. The model your business is leaning on can be swapped underneath you on any given Tuesday. And the YC Summer 2026 RFS just told every founder reading it that the real money is now in skipping the human entirely. If you are running a company, you do not get to opt out of any of this.
Run trust diligence on the vendor before you let the vendor run intelligence for your company. The 2016 clip, the 2019 cap-table journey, the November 2023 board ouster, the AGI clause replaced with calendar dates last week — these are not gossip. They are receipts. Read them the way you would read a counterparty’s prior bankruptcy filings, and then ask whether you would marry someone on their sixth divorce.
Don’t fire the last layer of human judgment until you have a working exit plan. The agent works 24/7 with no benefits. It also has no loyalty to you, no institutional memory of your customers, and no continuity across vendor pricing changes or model updates. Map the roles you have already vacated. Map the roles you are about to vacate. Decide, with your eyes open, which ones you are willing to make irreversible — and which ones you would have to call back if the version of your AI you have today is not the one you have next quarter.
Build for failover, not for trust. The escape hatch is the moat. Everyone trusts AWS. Every mission-critical workload running on AWS has a failover anyway, because trusting your supplier is not the same thing as architecting your business around them. The same rule applies, harder, to AI. Multi-vendor by default. Open-weights fallback ready and tested — Xiaomi MiMo-V2.5-Pro is MIT-licensed, Qwen 3.6 27B matches GPT-5.1 on agentic coding, and the floor under closed-lab pricing dropped through a Chinese trapdoor this week. Your data on hardware you control. Your prompts, your context, your source files in formats you can pick up and move. We write this newsletter with Claude. Every source file, every draft, every .md sits on a hard drive in our office. If Claude changes its tune, we point the same files at GPT, Grok, Gemini, or a Chinese open-source model running on a Mac mini in the corner. The infrastructure is the moat. The work product is not. The next great business in AI is the one that makes that switch take an afternoon instead of a quarter — and someone is building it as you read this.
Whose side is Sam Altman on? When the vendor’s interests and your interests diverge, who wins? On the documented record so far, the answer for OpenAI is “not the people who trusted them last.” That is not a moral claim. That is the contents of the witness list. The remedy is not to find the trustworthy AI vendor. The remedy is to design a business where vendor trust is not the binding constraint.
Three Questions We Think You Should Be Asking Yourself
Which of the roles you have already replaced with AI could you actually rehire if your vendor’s terms change tomorrow? Be specific. Names if possible. If the answer is “we’d never get those people back,” you have already crossed the line from “tool customer” to “tenant.” That should appear on your risk register before next quarter’s board meeting.
If you had to write a one-paragraph character reference for the CEO of your most-depended-on AI vendor, what would it say — and would it pass an institutional investor’s diligence? Not their marketing. Not their podcast. The behavioral record. The board ousters, the missions rewritten, the colleagues who left, the promises replaced. If you cannot write that paragraph honestly today, you do not yet know who you are working with.
What is your business’s exit plan if your primary AI vendor is acquired, repriced, deprecated, or run by someone with materially different priorities a year from now? “We’ll switch to a competitor” is not an exit plan if you have already let the competitor’s procurement, the migration cost, and the in-house judgment to evaluate the alternatives all atrophy. The exit plan has to be a document, not a hope.
“It’s not okay to steal a charity. If this case is not won, then the foundation of charitable giving in the United States will be destroyed.”
— Elon Musk, opening statement on the stand, San Francisco, April 28, 2026
Henry Hill’s monologue at the Bamboo Lounge in Goodfellas wasn’t just a movie scene. It was a structural observation about what happens when a productive operator takes on a partner who owns the underlying protection. Sonny ran the restaurant. Paulie ran the substrate. The relationship was permanent until the productive layer stopped delivering, at which point Paulie burned the joint down for the insurance and moved on. The trial in San Francisco is the moment the productive layer — Musk, in his role as original donor and co-signer of the founding charter — is asking the court to enforce the protection contract that was supposed to hold. The court will decide whether the promise was binding. The market has already decided it wasn’t. The labs are writing checks for the privilege of riding on the substrate. The founders who believed otherwise built the competitors. And the enterprise buyer reading this is still deciding which lesson to take away.
Whose side is Sam Altman on? The witness list already answered. The question for your next board meeting is whose side you need him to be on, and whether that is the kind of thing you can architect for — or the kind of thing you are just hoping for.
— Harry and Anthony
Sources
- Anthony Batt — ChatGPT 5: When Your AI Friend Gets a Corporate Makeover (CO/AI)
- Y Combinator Summer 2026 Requests for Startups — read via The VC Corner
- Aligned News — AI feed and editorial reports for April 28, 2026
- Bright Simons — The Social Edge of Intelligence (The Ideas Letter)
- Ron Stoner — How I Won a Championship That Doesn’t Exist
- Brynjolfsson, Chandar, Chen — Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence (NBER)
- Microsoft / Carnegie Mellon — The Impact of Generative AI on Critical Thinking
- Jason Lemkin — Why We Pay Salesforce 83% More Than Last Year (SaaStr)
- Ronan Farrow — Sam Altman’s Manifest Destiny (The New Yorker, October 2023)
- The Information — Microsoft Gives Up Exclusive Rights to Sell OpenAI Models, Companies Scrap AGI Clause
- Bloomberg — Google Plans to Invest Up to $40 Billion in Anthropic
- Google 2004 S-1 IPO Prospectus — “Don’t Be Evil” Founders’ Letter
- Xiaomi MiMo-V2.5 / MiMo-V2.5-Pro Release Notes (MIT License)
- Vals AI — Qwen 3.6 27B matches GPT-5.1 on Terminal Bench 2 agentic coding
- Ineffable Intelligence Seed Round Announcement (David Silver)
- The Verge — Attack of the Killer Script Kiddies (Mythos Breach Aftermath)
- CO/AI — Speed Eats Scale: How AI Just Made Capitalism Faster (April 27, 2026)
- CO/AI — Back In The Game (April 22, 2026)
- CO/AI — The Nail Factory (April 19, 2026)