SAN JOSE, California, March 16, 2026, 13:17 PDT
Nvidia CEO Jensen Huang on Monday pegged the potential revenue from the company’s high-powered AI chips at $1 trillion or more through 2027, pointing to expected demand for its Blackwell systems and the upcoming Vera Rubin lineup. Huang made the forecast during his keynote at GTC, Nvidia’s annual developer conference in San Jose.
The latest forecast marks a significant jump from the $500 billion market Nvidia pointed to for 2026 on its most recent earnings call. The stakes are high: investors are zeroing in on whether Nvidia can keep its edge as AI budgets shift from training to inference, the point where an already-trained AI model responds to queries or generates predictions.
Huang insists Nvidia’s edge isn’t just about hardware. CUDA, the chipmaker’s proprietary programming platform, remains key, he said. Its massive installed base draws in developers, who then write the software that locks customers into Nvidia’s ecosystem.
The announcements stretched well beyond a single chip. The company confirmed Vera Rubin is now in full production, a lineup of seven new chips that brings together CPUs, GPUs, networking, storage, and Groq inference accelerators. Partner systems are slated for the second half of 2026.
Huang pointed to inference and AI agents—software capable of managing complex, multi-step tasks for users—as the next major drivers of demand. “The inflection point of inference has arrived,” he told the audience, voicing confidence that computing demand will top $1 trillion.
This is also where Nvidia is feeling the heat. OpenAI and Meta, among others, are now designing their own chips. To keep its edge as AI applications move beyond training, Nvidia is leaning on Groq, the startup it partnered with in a $17 billion deal struck last December.
eMarketer analyst Jacob Bourne, speaking before the event, anticipated Nvidia would roll out a “full-stack roadmap update” focused on inference, networking, and what the company calls AI factory infrastructure, the sprawling data-center setups used to train and deploy AI. That’s largely how Monday’s presentation played out.
Nvidia turned to its biggest customers for backup. On Monday, the company circulated endorsements: OpenAI CEO Sam Altman said Vera Rubin would let OpenAI run “more powerful models and agents at massive scale,” and Anthropic CEO Dario Amodei said the platform was a fit for workloads demanding more complex reasoning.
But Huang offered no breakdown of how Nvidia arrived at the $1 trillion figure, giving investors a headline target with little in the way of calculations to back it up. The company also faces the ongoing challenge of demonstrating that its new networking, storage, and optical tools can scale cost-effectively for sprawling AI data centers, especially as inference demand heats up.
Nvidia shares spiked on the keynote news but soon gave up most of the pop, settling at a 1.4% gain. Questions lingered: investors still aren’t sure whether Nvidia’s bet on funneling profits into the broader AI landscape will keep delivering.
The GTC conference, a four-day event in San Jose that wraps up March 19, has grown into a major showcase for AI infrastructure. For this year’s edition, Huang isn’t just pushing a new, speedier processor. He’s rolling out a comprehensive platform—chips, software, networking, storage—positioning Nvidia right in the thick of the next AI wave.