Anthropic signs deal with Google, Broadcom for next-generation AI chip capacity.
Run-rate revenue crosses $90 billion, driven by rising demand for Claude.
AI infrastructure demand surges globally as data centre capacity needs accelerate.
Anthropic announced on April 7 that it has signed an agreement with Google and Broadcom to secure multiple gigawatts of next-generation tensor processing unit (TPU) capacity that will come online in 2027.
In addition, the AI company revealed that its run-rate revenue has surpassed $90bn, up from $30bn at the end of 2025, due to the rising global demand for Claude.
Anthropic requires AI chips to train and run the large language models that power its generative AI systems. These chips handle the massive data processing and complex computation involved in scaling models like Claude, enabling faster training, improved performance and the capacity to serve growing user demand.
Anthropic has teamed up with Google to gain access to advanced AI chips and cloud infrastructure. Google helps Anthropic train and deploy AI models at scale by providing access to its Tensor Processing Units (TPUs) and cloud services.
Broadcom supports AI development by designing and supplying custom AI chips and networking components. These chips improve data transfer and computing efficiency, which are critical for training large AI models and running high-performance data centres.
“We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,” Anthropic CFO Krishna Rao stated in a blog post.
The rapid growth of generative AI is driving an unprecedented surge in demand for specialised computing infrastructure, with global data centre capacity demand projected to grow at an average rate of 33% annually between 2023 and 2030, according to an October 2025 report by McKinsey & Company. McKinsey estimates that 70% of total data centre capacity demand by 2030—which could rise to 220 gigawatts (GW) by that year—will be driven by AI workloads.
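The projections above imply two figures worth making explicit. A rough check, using only the numbers cited from the McKinsey report (the 33% growth rate and the 220 GW total are the report's estimates, not exact inputs):

```python
# Sketch of the McKinsey projections cited above; all inputs are the
# article's cited figures, so the outputs are only indicative.
annual_growth = 0.33            # projected annual demand growth, 2023-2030
years = 2030 - 2023             # seven-year horizon
growth_factor = (1 + annual_growth) ** years

total_capacity_gw = 220         # projected total data centre capacity by 2030
ai_share = 0.70                 # share of demand driven by AI workloads
ai_capacity_gw = total_capacity_gw * ai_share

print(f"Implied growth multiple over {years} years: {growth_factor:.1f}x")
print(f"Implied AI-driven capacity by 2030: {ai_capacity_gw:.0f} GW")
```

At 33% a year, demand would grow roughly 7.4-fold over the period, and AI workloads alone would account for about 154 GW of the projected 220 GW.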
Another McKinsey & Company report, published in April 2025, stated that data centres worldwide are projected to require $6.7trn in capital expenditures by 2030 to keep pace with demand for compute power. Of that total, data centres equipped to handle AI processing loads are projected to require $5.2trn, while those powering traditional IT applications are projected to require $1.5trn—nearly $7trn in capital outlays by 2030, a staggering number by any measure.
Industry estimates suggest that training frontier models now costs hundreds of millions of dollars, underlining the need for long-term chip supply agreements. As competition intensifies among AI firms, securing reliable, high-performance computing capacity has become a strategic priority to sustain innovation and meet rising user demand.