Every technological boom has a point at which the numbers become unbelievable. This spring, Anthropic quietly crossed that threshold by announcing commitments totaling about $300 billion across Google Cloud and Amazon, securing enough compute capacity to keep its Claude models operating and expanding for the better part of ten years.
Amazon is providing up to five gigawatts of capacity; Google Cloud reportedly received a $200 billion commitment over five years. By one estimate, contracts like these account for a sizable share of the roughly $2 trillion in total cloud backlog across AWS, Azure, and Google Cloud Platform. Stand outside any of those gleaming data center campuses and listen to the cooling systems hum through the night, and you begin to wonder whether anyone is keeping close track of what all this truly costs, financially and physically.
| Category | Details |
|---|---|
| Company | Anthropic |
| Founded | 2021 |
| Headquarters | San Francisco, California, USA |
| CEO & Co-Founder | Dario Amodei |
| Core Product | Claude — AI assistant and frontier language model |
| Run-Rate Revenue (2026) | $30+ billion (up from ~$9B in late 2025) |
| Amazon Deal Value | $100+ billion over 10 years — up to 5GW of compute capacity |
| Google Cloud Commitment | $200 billion over five years |
| Total Amazon Investment | Up to $28 billion (including prior $8B) |
| Alphabet Investment | Up to $40 billion |
| Compute Capacity Target | Nearly 1GW via Amazon chips by end of 2026 |
| Hardware Stack | AWS Trainium, Google TPUs, Nvidia GPUs |
| Cloud Footprint | AWS Bedrock, Google Vertex AI, Microsoft Azure Foundry |
| Customers on AWS | Over 100,000 |
The revenue trajectory alone is astounding. This year, Anthropic’s run-rate exceeded $30 billion, more than tripling from roughly $9 billion at the end of 2025. By its own admission, the company began to feel the strain of that success as enterprise demand climbed and consumer tiers grew; reliability problems during peak hours affected free, Pro, Max, and Team users alike. CEO Dario Amodei put it simply: infrastructure must keep up with demand. But keeping up requires more chips, more cooling, more electricity, and more land. Whether any of those pledges come with a rigorous examination of energy sourcing remains an open question.
Anthropic runs Claude on a genuinely diverse hardware stack of AWS Trainium chips, Google TPUs, and Nvidia GPUs, an arrangement that is operationally shrewd but environmentally complicated: each platform carries its own energy footprint, water consumption, and carbon accounting. Amazon’s Project Rainier, the massive compute cluster unveiled in collaboration with Anthropic, reportedly runs more than a million Trainium2 chips.

More Trainium3 capacity is expected by year’s end. Google has deepened its involvement through an agreement with chip designer Broadcom for several gigawatts’ worth of tensor processing units, set to come online beginning in 2027. The implied power demands are not hypothetical: gigawatts are the unit in which cities, not server rooms, are measured.
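The scale is easier to feel with a back-of-envelope conversion. This is a rough Python sketch, not a figure from the article: it assumes an average US household consumes about 10,500 kWh per year (roughly 1.2 kW of continuous draw) and translates a sustained five-gigawatt load into household equivalents.

```python
# Back-of-envelope: what does "five gigawatts" mean in everyday terms?
# Assumption (not from the article): an average US household uses roughly
# 10,500 kWh per year, i.e. about 1.2 kW of continuous draw.
AVG_HOUSEHOLD_KWH_PER_YEAR = 10_500
HOURS_PER_YEAR = 8_760

avg_household_kw = AVG_HOUSEHOLD_KWH_PER_YEAR / HOURS_PER_YEAR  # ~1.2 kW

def homes_equivalent(gigawatts: float) -> int:
    """Rough count of average US homes a continuous load could power."""
    kilowatts = gigawatts * 1_000_000  # 1 GW = 1,000,000 kW
    return round(kilowatts / avg_household_kw)

print(f"{homes_equivalent(5.0):,} homes")  # on the order of 4 million
```

Under that assumption, five gigawatts of continuous draw corresponds to roughly four million average homes, which is why gigawatt-scale commitments read less like data center procurement and more like city planning.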
There is a sense that the AI sector is grappling with this, or at least wrestling with it in public. Several major cloud providers have made net-zero commitments of varying credibility. But those commitments predate the current training arms race, predate multi-gigawatt deals becoming commonplace, and predate AI companies drawing enough electricity to revive discussions of nuclear power and prompt fresh estimates of grid stress in Virginia, Texas, and Iowa. Google and Amazon may well be absorbing a sizable share of this growth through renewable-energy contracts. But the math may not add up as neatly as the press materials suggest.
What makes Anthropic’s position particularly intriguing, and depending on your vantage point troubling, is the company’s stated identity. This is an AI-safety company, founded by people who left OpenAI partly over concerns about how fast the technology was being deployed. Yet by the sheer logic of demand and competition, it is now among the world’s largest consumers of industrial-scale computing, signing ten-year contracts that lock in a physical infrastructure footprint that will outlive any existing safety framework. The tension between those two realities does not resolve easily.
It’s hard to ignore how far the environmental discussion of AI has lagged behind the financial one. The electricity consumed in data centers to power Alphabet’s cloud backlog does not fluctuate when the stock rises. The cooling water needed to sustain that growth doesn’t make headlines when run-rate revenue triples. The silicon boom is real, accelerating, and measured in gigawatts. Its repercussions will follow eventually.
