
How Did Amazon's AI Chips Grow 150% in Just One Quarter?

Amazon's AI chips saw an unprecedented 150% quarter-over-quarter surge in Q3 2025. Explore the drivers behind Trainium2's rapid adoption, its market impact, and why Marvell Technology benefits from the boom.


By Olivia Hall

4 min read

Image Credit: Unsplash

In the third quarter of 2025, Amazon reported a 150% quarter-over-quarter jump in Trainium2 AI chip adoption, drawing industry-wide attention. Behind the surge, Amazon's cloud division AWS accelerated the deployment of its custom AI silicon, capitalizing on rising demand from large-scale AI model developers and enterprise cloud clients.
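For context, a 150% quarter-over-quarter increase means the metric ended the quarter at 2.5 times its prior-quarter level. A minimal sketch of the arithmetic, using hypothetical chip counts rather than Amazon's disclosed figures:

```python
def qoq_growth(previous: float, current: float) -> float:
    """Return quarter-over-quarter growth as a percentage."""
    return (current - previous) / previous * 100.0

# Hypothetical illustration (not Amazon's actual numbers):
# going from 100,000 to 250,000 deployed chips in one quarter
# is a 150% QoQ jump, i.e. 2.5x the prior quarter's level.
print(qoq_growth(100_000, 250_000))  # 150.0
```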

New large clients and an expanding market for cloud-based custom AI processing marked this remarkable quarter.

Major customers like Anthropic, already adopting hundreds of thousands of Trainium2 chips, signaled confidence that AWS could reliably deliver high-performance machine learning at scale.

What fueled Trainium2's explosive growth last quarter?

The main driver for Trainium2’s incredible quarter-over-quarter growth was surging demand from foundation model providers and generative AI startups.

These companies require vast compute power as their artificial intelligence models grow in both number and size. By providing high-throughput, low-cost custom chips, Amazon secured commitments from several industry leaders.

CEO Andy Jassy noted that Trainium2 capacity is fully subscribed, transforming the chip into a multibillion-dollar business virtually overnight.

Another contributing factor was the timing of new AI infrastructure deployments, primarily designed for large customers rolling out new AI products.

As most existing clients consolidated their cloud contracts to leverage Amazon’s newest offerings, Trainium2 quickly moved to center stage within the AWS AI stack.

Did you know?
The AWS Project Rainier cluster, deployed in 2025, is powered by nearly half a million custom chips, making it one of the largest single AI compute clusters ever built.

How critical is Project Rainier to AWS's AI future?

Project Rainier, officially launched in late 2025, marks Amazon’s most ambitious AI compute cluster to date. The cluster extends across multiple U.S. data centers and leverages almost 500,000 Trainium2 chips at its core.

This represents a new level of scale for dedicated AI infrastructure not only for AWS, but globally. Early customers, including prominent AI research laboratories and large language model developers, have already begun using the Project Rainier platform as their training backbone.

Amazon positioned Project Rainier as a foundation for expanding future workloads, promising to double the chip count by the end of the year, thereby maintaining a clear lead in infrastructure capacity.

Why is Marvell Technology benefiting from Amazon's AI chip demand?

Marvell Technology is the exclusive supplier of Amazon’s custom Trainium2 processors, putting it at the heart of AWS’s AI supply chain.

Following Amazon’s Q3 earnings announcement, in which the 150% growth was disclosed, Marvell shares surged more than 5% within hours.

Analysts at J.P. Morgan reaffirmed their bullish stance, projecting further stock gains as Marvell’s custom ASIC segment stands to expand 18–20% in 2026.

This exclusive partnership translates Amazon’s soaring chip needs directly into Marvell’s top-line revenue.

As AWS scales Trainium2 and preps for Trainium3, the Marvell-Amazon pipeline is expected to deliver multibillion-dollar annual revenue, with next-generation 2nm technology on the horizon.


How does Trainium2 adoption impact the global AI market?

With global AI infrastructure spending projected to exceed $150 billion in 2026, Amazon’s rapid Trainium2 scale-out signals intense competitive pressure for incumbent chipmakers like Nvidia.

Trainium2 delivers performance tailored for large-scale machine learning at significantly lower operational cost, enticing major enterprise customers looking to diversify their AI hardware vendors.

Amazon’s deployment also sets a precedent for cloud infrastructure providers to invest in custom silicon rather than standard third-party chips.

This shift could gradually reshape supply chains, reduce reliance on traditional semiconductor giants, and support new entrants in the AI accelerator space.

What is next for Amazon's AI chip strategy?

Amazon plans to preview its next-generation Trainium3 chip to select customers by the end of 2025, aiming for full deployment in early 2026. The company’s capital expenditures are poised to exceed $125 billion this year, with future increases already planned to keep pace with surging demand.

AWS leadership expects the next hardware iterations to unlock broader adoption, particularly among mid-sized enterprises seeking scalable AI solutions.

Partners and clients expect Project Rainier to scale past one million chips, consolidating Amazon’s standing as a critical AI infrastructure provider.

Continuous hardware innovation in custom silicon will be key as cloud AI workloads rapidly evolve in complexity and size. Amazon’s recent achievement underlines the shifting balance of power in global AI technology.

With Trainium2’s unprecedented growth and new clusters like Project Rainier setting records, the industry may witness further disruption as Amazon and its partners invest in deploying the next wave of AI computing blocks.

© 2025 Wordwise Media.
All rights reserved.