OpenAI eyes faster ChatGPT chips as it weighs Nvidia alternatives and $100B talks drag


San Francisco, 09:12 PST, February 3, 2026

  • Sources say OpenAI is exploring alternatives to the latest Nvidia AI chips to speed up “inference.”
  • Nvidia, OpenAI, and Oracle all dismissed reports of friction related to Nvidia’s upcoming investment.
  • Shares of Nvidia dropped roughly 3% in early U.S. trading as investors digested the tension.

OpenAI is exploring alternatives to some of Nvidia's newest AI chips as it pushes for faster performance in products like ChatGPT, according to eight sources familiar with the situation. The move could complicate ongoing talks about a significant Nvidia investment in the startup. (Reuters)

The dispute centers on “inference” — the point when a trained AI model delivers an answer to a user — rather than the training phase where the models are created. As AI tools shift from demos to continuous operation, inference is now the key challenge.

This is significant since Nvidia remains the go-to provider for the toughest AI tasks, with OpenAI standing out as one of its largest and most closely followed clients. Should OpenAI move even a portion of its inference operations to competitors’ hardware, it might indicate a weakening of Nvidia’s control over the future of AI computing.

On Tuesday morning in the U.S., Nvidia shares slipped roughly 3%. Oracle also dropped close to 3%, and AMD fell by around 1%.

Sources close to the situation say OpenAI has grown frustrated with the speed at which Nvidia hardware returns results for certain tasks, such as software development. A key concern is "agentic" AI (systems designed to interact with other software and carry out actions), where even small delays add up quickly.
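Why small delays matter so much here: an agent works sequentially, so per-call latency multiplies by the number of steps in the loop. The sketch below uses entirely hypothetical latency figures (none come from the reporting) just to show the arithmetic.

```python
# Illustrative only: how per-call latency compounds in a sequential agent loop.
# All latency numbers are hypothetical, not measured figures for any vendor.

def agent_session_seconds(steps: int, model_latency_s: float, tool_latency_s: float) -> float:
    """Total wall-clock time for an agent loop of `steps` iterations,
    each making one model call followed by one tool call."""
    return steps * (model_latency_s + tool_latency_s)

# A coding agent that takes 40 sequential steps:
slow = agent_session_seconds(steps=40, model_latency_s=2.0, tool_latency_s=0.5)  # 100.0 s
fast = agent_session_seconds(steps=40, model_latency_s=0.5, tool_latency_s=0.5)  # 40.0 s
print(f"slow chip: {slow:.0f}s, fast chip: {fast:.0f}s")
```

Shaving 1.5 seconds off a single chat reply is barely noticeable; across a 40-step agent session it is the difference between 100 seconds and 40 seconds of waiting.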

OpenAI has focused its search on chips packed with embedded SRAM, a faster yet pricier type of on-chip memory. The goal: cut down the delays caused by fetching data from external memory, which often slows inference.

OpenAI has held talks with chip makers like AMD and inference-focused startups including Cerebras and Groq, according to people familiar with the discussions. Afterward, Nvidia secured a deal to license Groq’s technology, which one source said essentially shut down OpenAI’s negotiations with them.

Nvidia claims customers pick its chips for inference due to “best performance” and a lower “total cost of ownership”—which covers purchase price along with power and operating expenses. In a separate statement, OpenAI confirmed Nvidia continues to power most of its inference fleet, delivering the best performance per dollar.

OpenAI CEO Sam Altman took to X to praise Nvidia as the maker of "the best AI chips in the world," adding that OpenAI aims to remain a "gigantic customer." Sachin Katti, who oversees OpenAI's compute infrastructure, described the partnership as "foundational" but noted the company is expanding its supplier roster. (Datacenterdynamics)

Nvidia CEO Jensen Huang brushed off rumors of a split as "nonsense" during comments to reporters in Taipei this weekend, confirming Nvidia plans to invest "a great deal of money" in OpenAI. Oracle chimed in on X, stating the Nvidia-OpenAI deal has "zero impact" on its financial ties with OpenAI. Business Insider reports Oracle's commitment includes a multi-year agreement for OpenAI to purchase $300 billion worth of computing power.

Back in September, Nvidia announced plans to pour up to $100 billion into OpenAI, linked to a wider deal involving Nvidia gear in OpenAI’s data centers. But those talks have been stuck for months. Meanwhile, OpenAI’s product focus has shifted toward inference-heavy tasks, altering its hardware needs, according to sources close to the discussions.

Switching away from Nvidia on a large scale isn’t something that happens overnight. Alternative chips require mature software, dependable supply chains, and demonstrated performance in massive deployments. Meanwhile, Nvidia keeps pushing forward quickly, refining both its hardware and the developer tools needed to run models.

OpenAI’s leaders are calling this move a diversification, not a departure, as they seek faster performance where it counts — code, tools, and real-time replies. Investors, however, will keep pressing: will Nvidia remain the go-to for OpenAI’s inference fleet, or just one vendor among many?

Technology News

  • Google Home adds hardware button support in update 4.8, fixes video-not-available errors
    February 3, 2026, 12:12 PM EST. The Google Home app gains hardware button support with the 4.8 update, as spotted by 9to5Google. The feature lets physical buttons trigger devices, routines, and automations, signaling a return of tactile control within a largely voice- and app-driven system. Compatibility remains unclear, so users are urged to test across supported devices after updating via the Google Play Store. The release also addresses a long-running bug where video playback could display "Video not available" from notifications or recent events; Google says the issue should now be less likely to occur. The changes could broaden interaction methods and improve reliability for Google Home users.