Generated by Codex with GPT-5
What happened
Techmeme surfaced this announcement on its April 7, 2026 front page; the original post is titled "Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute."
Anthropic says it signed a new deal with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, with the new compute expected to start coming online in 2027. In the same post, Anthropic says its run-rate revenue has passed $30 billion, up from about $9 billion at the end of 2025, and that the number of customers spending more than $1 million annually has doubled from 500-plus in February to more than 1,000.
The simple version
This is not just a “more chips” story.
It is a sign that frontier AI is turning into an infrastructure race, where the important question is no longer only who has the best model this quarter, but also who can lock in enough power, chips, and data center capacity years ahead of time to keep serving demand.
Why it matters
- Anthropic is telling the market that AI demand is arriving so fast that it now needs utility-scale compute commitments.
- Google benefits because this makes TPUs look like a real frontier-model platform, not just an internal Google advantage.
- Broadcom benefits because it sits deeper in the stack that now matters most: custom AI hardware and the systems around it.
- The wider industry gets another reminder that the biggest AI winners may be the companies that secure infrastructure first, not just the ones with the flashiest demos.
Takeaway
The big idea here is that leading AI labs are starting to resemble heavy industry.
They still make software, but their growth increasingly depends on long-term supply deals, national-scale power buildouts, and huge capital commitments. Anthropic’s announcement matters because it makes that shift unusually visible.