
GMI Cloud Deploys 7,000 Nvidia Blackwell Chips in Taiwan Centre

US cloud provider GMI Cloud announces $500 million AI data centre in Taiwan featuring 7,000 Nvidia Blackwell GB300 GPUs, targeting March 2026 launch amid surging AI infrastructure demand.


By Olivia Hall


Image for illustrative purpose.

US-based cloud services provider GMI Cloud announced a $500 million artificial intelligence data centre in Taiwan, powered by Nvidia's latest Blackwell GB300 chips.

The facility will house approximately 7,000 GPUs across 96 high-density racks and is scheduled to come online by March 2026, marking one of the most significant AI infrastructure investments in the Asia-Pacific region this year.

The data centre will draw around 16 megawatts of power and deliver processing capacity of nearly 2 million tokens per second.

GMI Cloud founder and CEO Alex Yeh emphasized that Taiwan needs more data centres as strategic assets to support AI development, noting that the company's GPU utilization rates are running near full capacity amid surging demand.

Why Taiwan for This Strategic AI Investment

Taiwan's positioning as a semiconductor manufacturing powerhouse made it a natural choice for GMI Cloud's expansion strategy. The island nation hosts the world's most advanced chip fabrication facilities and maintains close ties with major technology suppliers, creating an ecosystem conducive to AI infrastructure development.

Yeh stressed that promoting local AI ecosystems requires building data centres and clusters first, establishing the foundation before applications can flourish.

The decision comes despite Taiwan's well-documented power supply challenges, particularly in the northern regions where data centre demand has outpaced generation capacity.

Taiwan Power Company halted new electricity connections exceeding 5 megawatts north of Taoyuan in 2024, forcing many operators to consider alternative locations.

GMI Cloud's confidence in proceeding suggests either strategic arrangements with power authorities or facility placement in regions with a more stable electricity supply.

Did you know?
A single Nvidia GB300 NVL72 rack can process up to 1.1 million tokens per second, which is equivalent to analyzing approximately 825,000 words or roughly 15 full-length novels every single second.
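The figures above can be checked with some back-of-the-envelope arithmetic, assuming roughly 0.75 words per token and about 55,000 words per full-length novel; both are common rules of thumb, not numbers from the announcement:

```python
# Rough check of the "did you know" figures, using assumed
# conversion factors (not from the GMI Cloud announcement).
tokens_per_second = 1_100_000   # per GB300 NVL72 rack (article figure)
words_per_token = 0.75          # assumed average for English text
words_per_novel = 55_000        # assumed length of a full novel

words_per_second = tokens_per_second * words_per_token
novels_per_second = words_per_second / words_per_novel

print(f"{words_per_second:,.0f} words per second")   # ≈ 825,000
print(f"{novels_per_second:.0f} novels per second")  # ≈ 15
```

Under those assumptions the arithmetic lines up with the stated 825,000 words and roughly 15 novels per second.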

What Makes the Blackwell GB300 Architecture Critical

Nvidia's Blackwell GB300 chips represent a significant leap in AI processing capabilities compared to previous-generation hardware. Each GB300 GPU delivers approximately 15,200 tokens per second in optimized configurations, providing a 5x improvement over the earlier H100 architecture.

The 72-GPU rack-scale systems can achieve over 1.1 million tokens per second in aggregate throughput, making them particularly suited for large language model inference and AI reasoning tasks.

The GB300 NVL72 configuration combines 72 GPUs with 36 CPUs in a tightly integrated architecture that maximizes data transfer speeds and minimizes latency.
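The per-GPU, per-rack, and facility-level figures quoted in this article are mutually consistent, as a quick sketch shows (using the article's own numbers; the per-GPU throughput is the "optimized configuration" figure):

```python
# Sanity-check the rack-scale arithmetic behind the article's figures.
gpus_per_rack = 72        # GB300 NVL72 configuration
tokens_per_gpu = 15_200   # tokens/s per GPU in optimized configurations
racks = 96                # planned high-density racks at the Taiwan facility

rack_throughput = gpus_per_rack * tokens_per_gpu
total_gpus = gpus_per_rack * racks

print(f"{rack_throughput:,} tokens/s per rack")      # 1,094,400 ≈ 1.1M
print(f"{total_gpus:,} GPUs across {racks} racks")   # 6,912 ≈ 7,000
```

Seventy-two GPUs per rack across 96 racks gives 6,912 GPUs, matching the "approximately 7,000" figure in the announcement.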

This design delivers a 10x boost in tokens per second per user and a 5x improvement in efficiency per megawatt compared to Hopper generation chips.

For GMI Cloud's GPU-as-a-Service business model, this translates directly into higher utilization rates and improved economics for customers running demanding AI workloads.

How Power Supply Challenges Are Being Addressed

Taiwan's electricity infrastructure has faced mounting pressure from high-tech industries, with northern regions experiencing a supply gap of approximately 20 billion kilowatt-hours between generation and consumption in 2023.

The situation intensified following the shutdown of the Third Nuclear Power Plant's No. 1 reactor in July 2024, reducing available baseload capacity.

AI data centres compound these challenges, as a single 16-megawatt facility like GMI Cloud's planned centre consumes electricity equivalent to roughly 11,000 households.
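The household comparison implies a per-household consumption figure that can be recovered with simple division; this is a rough check of the article's comparison, not an official Taipower statistic:

```python
# Implied per-household draw behind the "11,000 households" comparison.
facility_mw = 16
households = 11_000

kw_per_household = facility_mw * 1_000 / households   # continuous draw, kW
kwh_per_month = kw_per_household * 24 * 30            # assuming a 30-day month

print(f"{kw_per_household:.2f} kW per household")     # ≈ 1.45 kW
print(f"{kwh_per_month:,.0f} kWh per month")          # ≈ 1,047 kWh
```

The comparison therefore assumes households averaging roughly 1.45 kW of continuous draw, or about 1,000 kWh per month.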

Yeh expressed confidence that power supply challenges can be remedied, though specific details of GMI Cloud's arrangements were not disclosed in the announcement.

Taiwan Power Company has been actively encouraging data centre operators to establish facilities in central and southern regions where generation capacity exceeds local demand.

The company is also pursuing a 10-year grid resilience plan aimed at reducing transmission line construction timelines from 10 years to 6 years, though fundamental capacity expansion remains the critical long-term solution.


Which Major Clients Have Already Signed Up

The Taiwan AI factory has secured commitments from several prominent technology firms even before construction completion. Initial customers include Nvidia itself, cybersecurity firm Trend Micro, electronics manufacturer Wistron, and Chunghwa System Integration.

Data infrastructure provider VAST Data and industrial solutions firm TECO have also signed on as early adopters of the facility's GPU-as-a-Service offerings.

This customer roster reflects the broad applicability of high-performance GPU infrastructure across industries, from semiconductor design and simulation to enterprise AI applications and research workloads.

GMI Cloud projects the facility will generate approximately $1 billion in total contract value once fully operational.

The company already operates data centres in the United States, Singapore, Thailand, and Japan, positioning the Taiwan facility as a strategic expansion within its Asia-Pacific network.

What This Means for the Asia AI Infrastructure Race

The GMI Cloud announcement follows a pattern of accelerating AI infrastructure investments across Asia, with major projects also underway in South Korea, Japan, and Singapore.

Nvidia CEO Jensen Huang has been actively promoting the concept of AI factories, describing them as essential production facilities for the intelligence economy.

The company announced deals to supply advanced GPUs to projects in Saudi Arabia and South Korea earlier in 2025.

However, US President Donald Trump expressed a preference for reserving top chips like Blackwell for American companies.

Taiwan hosts several other significant AI infrastructure projects, including a 100-megawatt data centre announced by Foxconn and Nvidia in May 2025.

These investments collectively position Taiwan as a critical node in global AI supply chains, complementing its existing dominance in semiconductor manufacturing.

GMI Cloud also revealed plans for a new 50-megawatt US data centre and confirmed it is targeting an initial public offering within two to three years, signaling strong growth expectations for the GPU cloud services market.

The GPU-as-a-Service market was valued at $6.4 billion in 2023 and is projected to grow at over 30% annually through 2032, driven by enterprises seeking access to high-performance computing without capital expenditure on physical hardware.
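Compounding the quoted base value at the quoted growth rate gives a sense of the implied market size; this is a simple extrapolation under the stated assumptions, not a figure from the article:

```python
# Extrapolate the quoted market figures, assuming a flat 30% CAGR.
base_value_bn = 6.4     # GPU-as-a-Service market size in 2023, USD billions
cagr = 0.30             # "over 30% annually" per the article
years = 2032 - 2023     # 9 years of compounding

projected_bn = base_value_bn * (1 + cagr) ** years
print(f"≈ ${projected_bn:.0f}B by 2032")   # ≈ $68B
```

At exactly 30% a year, the 2023 base compounds to roughly $68 billion by 2032; the "over 30%" wording implies a 2032 market at least that large.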

As AI workloads continue to expand across industries, strategic infrastructure positioning in semiconductor-adjacent locations, such as Taiwan, offers providers both technical advantages and proximity to critical supply chains.

GMI Cloud's $500 million commitment underscores confidence that demand will continue outpacing supply despite mounting competition and infrastructure constraints.



© 2025 Wordwise Media.
All rights reserved.