SAN FRANCISCO, February 10, 2026, 06:20 (PST)
- Cisco introduced the Silicon One G300 switch chip alongside new data center systems designed specifically for AI clusters
- Cisco claims its design reduces AI job completion time by 28% by alleviating network congestion
- The launch enters a crowded arena where Broadcom and Nvidia are already battling it out with their own networking chips
On Tuesday, Cisco Systems unveiled its Silicon One G300 networking chip alongside new data center systems designed to accelerate traffic within major AI facilities. The move positions Cisco against Broadcom and Nvidia amid rising investments in AI infrastructure. (Reuters)
The timing is straightforward. AI clusters have grown so massive that data transfer between chips can choke the entire process, causing expensive compute resources to sit idle.
Networking — the switches and links connecting chips — has become a hot market. Operators demand higher GPU utilization while cutting power costs, and the network is right at the heart of that balance.
Cisco announced the G300 will hit the market in the latter half of 2026. It’s built to link AI systems across hundreds of thousands of connections within and between data centers.
The chip will use Taiwan Semiconductor Manufacturing Co’s 3-nanometer process, Cisco confirmed. Martin Lund, Cisco’s executive vice president, explained the design aims to manage sudden data traffic surges without “bogging down” and can reroute around problem areas in microseconds.
Cisco projects the G300 will deliver 102.4 terabits per second (Tbps) of Ethernet switching capacity, powering the new Nexus 9000 and Cisco 8000 systems. Jeetu Patel, Cisco’s president and chief product officer, emphasized the company’s “innovation across the full stack,” spanning silicon, systems, and software. Lund highlighted data movement as “the key” to efficient AI compute. (Cisco Newsroom)
Cisco also touted the hardware’s energy efficiency. Some of the new systems can be fully liquid-cooled and, paired with updated optics, deliver up to a 70% improvement in energy efficiency over previous models.
Outside analysts agree the bottleneck is real, even in a crowded field. Matt Eastwood of IDC noted that “network architecture is becoming a defining constraint,” and Dylan Patel, founder of SemiAnalysis, added that “networking has been the fundamental constraint” in scaling AI.
Cisco detailed its concept of “Intelligent Collective Networking” in a company blog, highlighting features such as a shared packet buffer and path-based load balancing. The chip carries a 252MB packet buffer and supports 1.6-terabit Ethernet ports built on on-chip 200 Gbps SerDes (serializer-deserializer circuits that speed data transfer between chips). (Cisco Blogs)
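For a sense of scale, the figures Cisco cites can be combined in a back-of-envelope calculation. The port and lane breakdown below is an inference from the stated numbers, not a configuration Cisco has confirmed:

```python
# Back-of-envelope arithmetic from Cisco's stated figures. The port/lane
# split assumes every port runs at the full 1.6T rate, which is an
# illustrative assumption, not a confirmed product configuration.

capacity_gbps = 102_400   # 102.4 Tbps total switching capacity
port_gbps = 1_600         # 1.6-terabit Ethernet port speed
serdes_gbps = 200         # per-lane SerDes signaling rate

ports = capacity_gbps // port_gbps          # full-rate 1.6T ports
lanes_per_port = port_gbps // serdes_gbps   # SerDes lanes behind each port
total_lanes = capacity_gbps // serdes_gbps  # SerDes lanes across the chip

print(ports, lanes_per_port, total_lanes)   # → 64 8 512
```

In other words, at full rate the chip would expose on the order of 64 ports of 1.6T each, with eight 200 Gbps lanes behind every port.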
Rivals aren’t sitting still. Nvidia’s latest AI systems include networking silicon that competes directly with Cisco’s products, while Broadcom aggressively promotes its Tomahawk series in the same space, according to Cisco and industry analysts.
Cisco isn’t just pushing a chip. The G300 is linked to its wider Silicon One lineup, along with a management layer designed to simplify deploying and operating AI networks, whether on-premises or in the cloud.
Cisco’s performance numbers come with caveats. The headline improvements rest on simulations and design assumptions, so real-world results will vary with how customers wire, tune, and mix different generations of gear. The market also tends toward lock-in: some buyers opt for full stacks from a single chipmaker rather than assembling Ethernet networks piece by piece.