NEW YORK, Feb 16, 2026, 15:23 EST — Market closed
Nvidia (NVDA.O) said in a blog post that new tests of its Blackwell Ultra platform showed up to 50 times higher inference throughput per megawatt than its prior Hopper generation, translating into as much as 35 times lower “cost per token” — the basic unit of text AI models process. The company said cloud providers including Microsoft, CoreWeave and Oracle Cloud Infrastructure are deploying its GB300 NVL72 systems for low-latency, long-context workloads such as coding assistants and other “agentic” tools that can take steps on their own to complete tasks. “As inference moves to the center of AI production, long-context performance and token efficiency become critical,” said Chen Goldberg, senior vice president of engineering at CoreWeave. (NVIDIA Blog)
The timing matters. A Reuters analysis on Monday showed investors have started to tap the brakes on the long-running AI trade as they question whether heavy capital spending will translate into near-term profit, dragging down valuations across the biggest technology names. It said Nvidia’s market value has fallen by about $89.67 billion since the start of 2026 to roughly $4.44 trillion as of Friday. (Reuters)
U.S. stock markets, including the NYSE and Nasdaq, were closed on Monday for the Presidents Day holiday and were set to reopen on Tuesday. (AP News)
Nvidia stock last closed at $182.81 on Friday, down 4.13 points, or 2.21%, with about 161.9 million shares changing hands, according to Investing.com data. (Investing)
The new performance claims go straight at the market’s current obsession: economics. Power has become a constraint in data centers, and “throughput per megawatt” is a blunt way to measure how many AI responses a system can push out for a given amount of electricity. “Cost per token” matters because it can decide whether a chatbot or coding tool makes money, breaks even, or quietly gets throttled.
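The relationship between those two metrics is simple arithmetic: at a fixed power draw and electricity price, the energy cost of each token falls in direct proportion to how many tokens the system produces per megawatt. A minimal sketch, using entirely hypothetical numbers (none of these figures come from Nvidia):

```python
# Illustrative only: how "cost per token" scales with throughput per
# megawatt. All inputs are hypothetical, not vendor-reported figures.

def cost_per_million_tokens(power_mw: float,
                            price_per_mwh_usd: float,
                            tokens_per_sec: float) -> float:
    """Electricity cost, in dollars, to generate one million tokens."""
    hourly_power_cost = power_mw * price_per_mwh_usd   # $/hour
    tokens_per_hour = tokens_per_sec * 3_600
    return hourly_power_cost / tokens_per_hour * 1_000_000

# Baseline system: 1 MW draw, $100/MWh power, 1M tokens/sec.
base = cost_per_million_tokens(1.0, 100.0, 1_000_000)

# Same power draw, 50x the throughput per megawatt.
improved = cost_per_million_tokens(1.0, 100.0, 50_000_000)

print(base / improved)  # → 50.0
```

On electricity alone, a 50x throughput-per-megawatt gain would mean a 50x lower cost per token; that the headline cost figure is a smaller 35x suggests other inputs, such as hardware and capital costs, do not shrink in lockstep.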
Competition is not waiting. Advanced Micro Devices on Monday said it was partnering with Tata Consultancy Services to deploy AMD’s latest AI data center technology in India, with a plan to support up to 200 megawatts of AI infrastructure capacity. (Bloomberg)
For Nvidia, the next session is less about the blog post itself than what it signals: whether customers keep leaning into bigger systems even as Wall Street demands cleaner payback math. Traders will be looking for clues that demand is holding up, and for any change in tone on pricing and supply as more compute moves from training models to running them in production.
But the risks cut the other way, too. The market has been unforgiving when expectations get ahead of what companies can prove in revenue and margins, and “up to” benchmark numbers rarely settle an argument by themselves. If big buyers slow spending, or if competition forces tougher pricing, Nvidia’s stock could stay choppy even with better hardware.
Another near-term focal point sits on Nvidia’s own calendar: the company’s annual GTC event in San Jose on March 16–19, which often serves as a stage for product detail and partner announcements. (NVIDIA)
The more immediate catalyst is earnings. Nvidia is scheduled to report fourth-quarter and fiscal 2026 results on Wednesday, Feb. 25. The company said it expects to post results at about 1:20 p.m. PT and to hold a conference call at 2 p.m. PT (5 p.m. ET), with written CFO commentary posted ahead of the call. (Nvidia)