AUSTIN, March 23, 2026, 03:57 CDT
Amazon.com’s custom Trainium AI chips are winning over bigger names in the AI world. Anthropic, for one, is running Claude on more than a million Trainium2 chips. OpenAI, meanwhile, has agreed to tap about 2 gigawatts of Trainium capacity as part of its latest AWS arrangement, Amazon said, according to a TechCrunch report published Sunday. 1
Amazon is pouring roughly $200 billion into AI infrastructure this year, a figure that’s drawn scrutiny as the company faces mounting calls to prove its internally developed chips can attract heavyweight customers beyond its own operations. CEO Andy Jassy, speaking last week, put a number on the potential: with AI fueling growth, AWS could see annual revenue soar to $600 billion by 2036—up from $128.7 billion forecast for 2025. 2
Amazon’s drive to chip away at Nvidia’s grip on AI computing just got a little sharper. Still, Nvidia’s GPUs continue to rule the sector: Reuters reported last week that AWS itself signed on to purchase 1 million Nvidia chips by 2027, a sign that Trainium is making progress but hasn’t unseated the heavyweight yet. 3
TechCrunch got a look inside Amazon’s Austin chip lab and put the total number of Trainium chips in the field at 1.4 million, spanning three generations. Most of the Trainium2s are packed into Project Rainier, the massive Anthropic cluster, where 500,000 chips came online in late 2025, according to the report. 1
Inference — the process where a trained model generates a response from a user prompt — is increasingly running on Trainium chips, according to Amazon. Most of that inference load on Bedrock, Amazon’s managed AI platform, is now shouldered by Trainium2, TechCrunch has reported. Demand is rising “as fast as we can get capacity out there,” according to lab director Kristopher King. 1
Back in February, Amazon and OpenAI revealed that AWS would serve as the exclusive third-party cloud distributor for Frontier, OpenAI’s enterprise-grade platform designed for building and running AI agents—those tools that handle complex, multistep business processes. OpenAI also agreed to tap Trainium for Stateful Runtime and other high-end workloads, spanning both Trainium3 and Trainium4, according to Amazon. “This is about putting AI into businesses at real scale,” OpenAI’s Sam Altman said at the time. Amazon CEO Andy Jassy, for his part, said OpenAI had decided to “go big” with Trainium. 4
Anthropic came first. Back in 2024, the startup named AWS its main cloud and training provider, with its own engineers working with Amazon’s Annapurna Labs to fine-tune future Trainium chips, tackling improvements from the hardware all the way up to the software layer. 5
Amazon wants to take a bite out of Nvidia’s software moat. According to Mark Carroll, director of engineering at the lab, shifting PyTorch workloads to Trainium can be as simple as a “one-line change” plus a recompile. Still, the jump isn’t seamless: customers are left with the heavier lifting of porting and testing code outside Nvidia’s environment. 1
Still, the landscape remains tangled. Amazon is snapping up massive amounts of Nvidia hardware. Microsoft, for its part, is weighing a lawsuit over the Amazon-OpenAI partnership, Reuters said, claiming the Frontier deal could violate Azure’s exclusive rights to OpenAI’s stateless APIs—the default toolset developers rely on to access models. 3
Amazon is facing a crucial moment: can its headline cloud deals translate into consistent, real-world production? Anthropic is already deploying at scale. OpenAI is on deck for the next phase. Amazon, meanwhile, keeps scrambling to build out more capacity. The upshot: Trainium is finally drawing a genuine demand signal from outside the company, despite Nvidia’s grip on the sector. 1