Anthropic Planned for 10x Growth. It Got 80x. Then Elon Musk Called.
Inside the infrastructure crisis that forced the most unlikely partnership in AI history
I scrolled past it twice before it actually registered.
It was a Musk post on X from May 6, and it ended with: “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector.”
He was talking about Anthropic.
The same company he had publicly accused of hating Western civilization. The same outfit he had called misanthropic on X three months earlier. The same people at the center of everything he appears to oppose about AI safety. And there he was, complimenting them like a business partner he’d known for years. Because that is exactly what they had just become.
On the same day that Musk was sitting in a federal courthouse in Oakland, California, testifying for the third day in his lawsuit against OpenAI and its CEO Sam Altman, he handed Anthropic the entire Colossus 1 supercomputer. All 220,000 Nvidia GPUs. All 300 megawatts of compute.
Every single chip.
That sentence would have made zero sense in February.
When the Best AI Model in the World Runs Out of Power
To understand why this happened, you have to go back to a strategic decision Dario Amodei made a few years ago. Rather than racing to acquire GPUs at any price, the way OpenAI had, he chose deliberate restraint on infrastructure. His logic was defensible: if demand didn’t materialize as fast as expected, a lab sitting on mountains of expensive, depreciating hardware would be in danger. The scenario isn’t hypothetical. In the early 2000s, telecom firms poured money into fiber optics, anticipating a surge in internet traffic that eventually arrived, just not on the predicted timetable. WorldCom, Global Crossing, names nobody remembers today. They were right about the future and catastrophically wrong about the timing. Dario, who knows this chapter of tech history well, wanted to avoid becoming the WorldCom of AI.
The problem is that the wave arrived roughly ten times faster than any model predicted.
At the Code with Claude developer conference in San Francisco on May 6, Amodei admitted on stage that Anthropic had planned for tenfold growth in 2026. What they actually got, on an annualized basis in Q1, was 80 times. He called it “just crazy” and “too hard to handle.” Those aren’t the words of a CEO being strategically modest. Anthropic’s annualized revenue had climbed from $9 billion at the end of 2025 to $30 billion by April 2026 — the full trajectory documented by VentureBeat runs from $87 million in January 2024 to $1 billion by December 2024 to $30 billion sixteen months after that. For context, Salesforce took 20 years to reach $30 billion in annual revenue. Anthropic did it in under three years from a standing start.
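The revenue multiples in that trajectory are easy to lose track of. A quick back-of-envelope check, using only the run-rate figures reported above (these are the article’s numbers, not audited financials):

```python
# Back-of-envelope check on the revenue trajectory cited above.
# All figures are reported annualized run rates, not audited results.

jan_2024 = 87e6     # annualized revenue, January 2024
dec_2024 = 1e9      # annualized revenue, December 2024
apr_2026 = 30e9     # annualized revenue, April 2026 (16 months later)

growth_2024 = dec_2024 / jan_2024    # multiple within calendar 2024
growth_after = apr_2026 / dec_2024   # multiple over the next 16 months

print(f"2024: {growth_2024:.1f}x | next 16 months: {growth_after:.0f}x")
```

Roughly an 11.5x jump inside 2024, then another 30x on top of that — which is why a capacity plan built around 10x growth broke.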
And that growth had one product at its center. Claude Code, the agentic coding tool launched publicly in mid-2025, hit $1 billion in annualized revenue within six months of launch. Over 1,000 enterprise customers now spend more than $1 million per year on Anthropic services. Goldman Sachs, Visa, Citi: roughly 40% of Anthropic’s top 50 clients are financial institutions. The demand was real, enormous, and accelerating. The infrastructure was not.
To anyone who had been using Claude daily, the signs were hard to miss: quotas quietly reduced without warning, degraded performance during peak hours, and weeks of bugs in Claude Code that Anthropic initially refused to acknowledge. The company eventually admitted, in a late April postmortem, that three bugs had affected Claude Code from March 4 onward, that internal testing had failed to catch them, and that the resulting performance issues had persisted for weeks. I pay a subscription for this. It is genuinely maddening to watch the best model on the market run out of computing power to serve the people paying for it. You pay, you stay loyal through the chaos, and the response is silence.
This context matters because it explains why the Stanford AI Index 2026 is the most important single document for understanding the moment we’re in. In just three years, generative AI has been adopted by 53% of the global population, according to the report, making its growth faster than that of personal computers, the internet, and smartphones. Nothing in the history of consumer technology has spread this quickly. This isn’t a bubble slowly inflating. It is a dam that broke. Demand had been compressed for years, and the moment AI tools crossed a quality threshold, everything flooded through at once. Anthropic was standing directly in that current, with inadequate infrastructure and a conservative bet that had just backfired badly.
The IPO Hiding Inside the Partnership
Now the real question: why would Elon Musk help?
Three months before the deal, Musk was posting on X asking whether there was “a more hypocritical company than Anthropic.” He called them misanthropic. He said they hated Western civilization. And then, while simultaneously testifying in federal court against OpenAI and Sam Altman, he signed the most significant compute partnership in Anthropic’s history.
Here is what changed: SpaceX filed a confidential S-1 on April 1, 2026, targeting a valuation between $1.75 trillion and $2 trillion, with an IPO expected as early as June. To justify a number like that in front of public market investors, SpaceX needed to prove that its infrastructure wasn’t just a cost center. It needed to demonstrate that Colossus could generate revenue. And Colossus 1, which had been built specifically to power Grok, was not running anywhere near its potential — Grok had never attracted the user base the facility was designed for.
Antoine Chkaiban, an analyst at New Street Research, estimates the Anthropic deal will generate $3 billion to $4 billion in annual revenue for SpaceX, with more than $2.5 billion in cash profit. The margins are extreme because the data center is already built and the capital expenditure is already sunk. Every GPU that runs for Anthropic is nearly pure profit. “He’s not going to want multiple billions of dollars of GPUs sitting idle,” Chkaiban told Fortune. On the same day the deal was announced, Musk posted that xAI would be dissolved as a separate entity and folded entirely into SpaceX, now operating under the name SpaceXAI. A liability became a revenue engine. A dying subsidiary got absorbed. And the entire package got wrapped up neatly just before the IPO roadshow. The timing, as TechCrunch put it, is best read as “a major heat check before the IPO.”
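Those analyst figures imply a striking margin. A quick sketch of the arithmetic, taking the midpoint of the revenue range as an assumption and treating the “more than $2.5 billion” profit figure as a floor:

```python
# Implied cash margin from the analyst estimates above.
# Assumption: midpoint of the $3B-$4B annual revenue range.

revenue_mid = 3.5e9    # midpoint of estimated annual revenue
cash_profit = 2.5e9    # "more than $2.5 billion" -- treated as a floor

margin = cash_profit / revenue_mid
print(f"implied cash margin: at least {margin:.0%}")
```

A business line with cash margins above 70% is the kind of thing IPO bankers build a roadshow around.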
There’s also the OpenAI dimension, which no serious analysis of this deal can skip. The day before the trial began, Musk had privately messaged Greg Brockman, OpenAI’s president and co-founder, to probe for an out-of-court settlement. When Brockman proposed that both parties drop their respective claims, Musk reportedly replied that Brockman and Altman would be “the most hated men in America” by the end of the week. That’s the temperature. In that context, an alliance with Anthropic — which competes directly with OpenAI — takes on a dimension that goes beyond computing costs. One industry observer framed the calculus precisely: Elon’s enemy is Sam. Dario’s enemy is Sam. Enemy of my enemy is a compute partner. Reinforcing Anthropic weakens OpenAI by proxy, especially when you’re simultaneously trying to get Sam Altman removed from his own board.
The Kill Switch Nobody Put in the Press Release
What makes this deal genuinely worth watching is a clause that appeared nowhere in Anthropic’s official announcement or in any formal press release. In a separate post on X, Musk added that SpaceX reserves the right to reclaim access to Colossus 1 if Anthropic’s AI “engages in actions that harm humanity.” Fortune confirmed the clause exists in that tweet, but noted clearly that whether it appears in the actual contract is unknown.
If it is enforceable, the implication is significant. Musk now holds a direct lever over one of the three most important AI labs on the planet, while simultaneously suing the second one in federal court. That is a structural position nobody held two weeks ago.
The political layer makes it stranger still. In March, the Trump administration designated Anthropic a supply chain risk, according to CNBC, effectively blacklisting it from Pentagon contracts and defense work. Microsoft, Google, and xAI signed agreements allowing the federal government to test their AI tools. Anthropic was not part of that group. A company approaching a $900 billion private market valuation is simultaneously persona non grata in Washington. A partnership with SpaceX, which has the closest relationship to the current administration of any major tech contractor, is not a purely commercial arrangement in that environment.
The Real Winner Is Whoever Controls the Watts
Here is the structural question this whole episode raises, and it’s the one that matters most for what comes next. If Anthropic can train Claude on Nvidia GPUs, on Google TPUs, or on Amazon Trainium — and if xAI can hand its entire supercomputer to a competitor — then what, exactly, is a chip worth?
We’ve seen this transition before. At the start of the 20th century, the people who found oil had everything. Once oil became a commodity, power shifted: first to refining, then to distribution, and finally to whoever could generate energy at scale. The GPU may be entering that same commoditization curve. The chip remains critical, but if every major lab can substitute between hardware providers, the strategic value of any individual architecture begins to compress.
What doesn’t compress is the energy underneath all of it. Colossus 1 alone consumes 300 megawatts — enough to power more than 300,000 homes, and that is a single deal in Anthropic’s compute strategy. Anthropic has committed to 5 gigawatts with Amazon and 5 gigawatts with Google and Broadcom, as well as $30 billion for Azure capacity with Microsoft and Nvidia, and a $50 billion investment in U.S. AI infrastructure. The race for compute is, at its foundation, a race for energy — and the entities that control electricity at scale will set the terms for who can run models and at what cost.
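The scale comparison in those numbers is worth making explicit. A rough sanity check, using the article’s figures and a round-number assumption of about 1 kW average continuous draw per home:

```python
# Rough scale check on the energy figures above.
# Assumption: ~1 kW average continuous draw per household (round figure).

colossus_mw = 300              # Colossus 1 power draw, per the article
avg_home_kw = 1.0              # assumed average draw per home

homes = colossus_mw * 1_000 / avg_home_kw
print(f"Colossus 1: roughly {homes:,.0f} homes' worth of power")

# And yet it is small next to the other committed capacity
committed_gw = 5 + 5           # Amazon + Google/Broadcom commitments
share = colossus_mw / (committed_gw * 1_000)
print(f"Colossus 1 is only {share:.0%} of the {committed_gw} GW committed elsewhere")
```

In other words, the headline-grabbing supercomputer is about 3% of the capacity Anthropic has already contracted for, which is the clearest single measure of how energy-bound this race has become.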
Fortune’s framing is accurate. The model race is no longer Musk’s main objective. He’s positioning SpaceX to become the infrastructure layer that all the other labs run on top of.
The landlord of AI. The owner of the walls that the entire industry rents from.
According to every piece of data we have, demand will outpace supply for the better part of a decade. Because when you step back and think about it, the demand for intelligence is essentially infinite. There will always be one more problem to solve, one more process to automate, one more question that needs answering. What we’re watching right now, in this deal between a company that insulted another company three months ago and the company it insulted, is the reconfiguration of who produces what in the knowledge economy. That shift is already underway, and the people who understand the infrastructure layer — not just the models — are the ones positioning themselves to shape what comes next.
Thanks for reading. Your thoughts in the comments are very welcome.