
Top cloud service providers reach combined $710bn capex

By Staff Writer, ITWeb
Johannesburg, 26 Feb 2026
Google widens its ASIC lead, with TPUs expected to power nearly 78% of its AI servers this year.

The combined capital expenditure (capex) of the world’s eight biggest cloud service providers (CSPs) − Google, Amazon Web Services (AWS), Meta, Microsoft, Oracle, Tencent, Alibaba and Baidu − is projected to exceed $710 billion in 2026.

This is according to a new TrendForce report focusing on the top eight cloud service providers’ capital expenditure for 2026 and the AI server market.

The report notes that the capex of the world’s eight biggest CSPs will see a year-on-year (YOY) increase of approximately 61%, driven by growing demand for AI products and services.

“CSPs are ramping up investments in AI servers and infrastructure, driven by expanding AI workloads and upgrades,” according to TrendForce.

“The cloud giants continue to procure NVIDIA and AMD GPU platforms, but there is a marked shift toward in-house application-specific integrated circuits (ASICs) to optimise AI workloads and improve data centre cost-efficiency.”

An ASIC is a custom-designed semiconductor chip built for a specific workload or application, rather than for general-purpose computing.

Google maintains dominance

TrendForce forecasts Alphabet, Google’s parent company, will see 2026 capex surpass $178.3 billion, up 95% YOY.

Google’s early adoption of in-house ASICs has given it a substantial lead in AI research and development. Its Tensor Processing Unit (TPU) roadmap is expected to transition to the next-generation v8 platform this year, says TrendForce.

Driven by demand from Google Cloud Platform and Gemini AI applications, TPUs are projected to account for nearly 78% of AI servers shipped to Google in 2026.

“This positions Google as the only major CSP with more ASIC-based servers than GPU-based systems, widening the gap with competitors reliant on GPU deployments.”

Amazon continues to scale GPU deployments with NVIDIA GB300 and V200 rack-scale systems to support AI training and inference, with GPUs expected to represent nearly 60% of its AI server build-out.

On the ASIC front, Amazon’s next-generation Trainium 3 chips are expected to ramp up in the second quarter of 2026, following the rollout of Trainium 2 and 2.5. TrendForce notes shipment momentum may accelerate in the second half of 2026, as software maturity and system validation progress.

Meta’s capex is forecast to exceed $124.5 billion in 2026, up 77% YOY, according to the report.

The company’s AI servers will continue to rely primarily on NVIDIA and AMD GPUs, which will account for over 80% of its AI server build-out.

“Meta is also advancing its in-house MTIA ASIC platform to lower unit compute costs, although software-hardware tuning challenges may limit shipment volumes relative to expectations.”

Microsoft remains committed to procuring NVIDIA rack-scale systems, while introducing its in-house Maia 200 chip for high-efficiency AI inference workloads.

Oracle is similarly expanding GPU rack-scale deployments to support AI data centre projects, including initiatives such as Stargate and collaborations with OpenAI.

Global CSP capital expenditure will exceed $710 billion in 2026, driven by next-generation chip procurement. (Infographic source: TrendForce)

TrendForce estimates that multinational internet giant ByteDance will allocate over half of its 2026 capex to AI chip procurement. NVIDIA H200 is expected to be a key solution for its AI servers, alongside domestic chips from Cambricon, subject to US–China regulatory developments.

Tencent will continue to source NVIDIA GPUs for cloud and generative AI services, while partnering with local developers on ASIC solutions to diversify compute resources, the report forecasts.

Alibaba and Baidu are advancing proprietary ASIC development. Alibaba, through T-Head and Alibaba Cloud, supports public cloud and AI infrastructure, while developing its Qwen large language models and enterprise/consumer software.

Baidu plans to roll out its next-generation Kunlun chips post-2026 and is expanding its Tianchi AI server cluster platform, capable of linking hundreds of AI chips to enhance system-level compute power.

“AI server procurement strategies are increasingly diversifying across CSPs, with ASICs playing a critical role in optimising AI workload efficiency and maintaining competitive advantage,” notes TrendForce.
