SAN JOSE, California, March 19, 2026, 02:09 PDT
Nvidia has secured a green light from Beijing to restart H200 AI chip sales in China and is now gearing up a localized Groq chip for that market, sources told Reuters. The move restores access to a region that previously made up 13% of Nvidia’s overall revenue, offering the company a fresh path into inference—the phase where trained AI responds to prompts or performs tasks.
Nvidia is looking to push momentum from its GTC developer conference deeper into the data-center business—not just model training, but the whole stack. CEO Jensen Huang this week put a $1 trillion-plus revenue target on Blackwell and Rubin through 2027. That projection, however, leaves out H200 sales to China; any rebound in that region would be additional revenue.
Nvidia’s H200, built on the Hopper architecture, sits just behind its latest Blackwell and Rubin chips in terms of power. “Our supply chain is getting fired up,” Huang said, noting the company had started booking orders after receiving U.S. export clearance.
According to a source cited by Reuters, the chip headed for China from Groq isn’t just a watered-down China-only edition—it’s a variant built to handle integration with a range of systems. Nvidia, which inked a $17 billion licensing deal for Groq tech back in December, is looking to deploy it for inference tasks like coding and Q&A. The same source noted that the China-compatible chip could hit the market as soon as May.
Nvidia, speaking at GTC in San Jose, announced that its Vera Rubin platform is now fully in production. The setup brings together Rubin GPUs, Groq 3 LPX accelerators, Vera CPUs, plus networking components—bundled into rack-scale systems targeting large AI data centers. The company also rolled out Dynamo 1.0, open-source software it’s calling an operating system for inference workloads on chip clusters.
Nvidia turned to its roster of customers as proof points for its latest move. OpenAI’s Sam Altman, in a March 16 company release, described Nvidia infrastructure as “the foundation” for advancing AI. Anthropic’s Dario Amodei, in the same statement, argued that more intricate reasoning demands systems that “can keep pace.”
The competitive landscape in inference is denser than in training. According to Reuters, Google and Amazon have launched their own AI chips, AMD is rolling out software aimed at breaking Nvidia’s dominance, and Baidu manufactures inference chips in China. Analyst Richard Windsor notes Nvidia’s lock-in is “not nearly as strong in inference.”
Still, the window remains tight. Rubin systems are off-limits in China, and Groq’s offering for that region isn’t expected before May. Even Nvidia, in its March 16 update, signaled that rollout dates for certain products and features could shift.
Nvidia expects to get Rubin-based products into the hands of partners like Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure starting in the back half of this year. The company is lining up a wide launch outside China, even as it works to reestablish a presence there using its previous-generation Hopper chip and a Groq variant that’s still in the pipeline.