Designed for AI clusters, Cisco’s Silicon One G300 can accelerate workloads, improve GPU efficiency, and support liquid-cooled high-density deployments.

Cisco has introduced the Silicon One G300, a 102.4 terabits per second (Tbps) switching silicon designed to power large-scale AI clusters for training, inference and real-time agentic workloads. The new processor underpins the latest Cisco N9000 and Cisco 8000 systems and is engineered to improve GPU utilization, reduce job completion time and increase overall data center efficiency. With Intelligent Collective Networking, the G300 delivers up to 33% higher network utilization and a 28% reduction in job completion time compared with non-optimized path selection, positioning the network as an active component of AI compute infrastructure.
The architecture is built to address rising energy and operational demands as AI deployments expand beyond hyperscalers to enterprises, neoclouds and sovereign environments. The new 102.4 Tbps systems are available in both air-cooled and 100% liquid-cooled designs, with the liquid-cooled configuration delivering nearly 70% improvement in energy efficiency and enabling significantly higher bandwidth density within a single system. Support for 1.6T OSFP optics and 800G Linear Pluggable Optics (LPO) further improves scale-out efficiency, with LPO reducing optical module power consumption by up to 50% and overall switch power by 30%.
At the silicon level, the G300 integrates a fully shared packet buffer, path-based load balancing and proactive telemetry to manage bursty AI traffic and prevent the packet drops that can stall workloads. The chip is highly programmable, allowing post-deployment upgrades for emerging use cases, while security is embedded directly into the hardware. Cisco has also enhanced Nexus One with a unified management plane and AI-driven operational capabilities to simplify fabric deployment and provide network-to-GPU visibility.
Jeetu Patel, President and Chief Product Officer at Cisco, said: “We are spearheading performance, manageability, and security in AI networking by innovating across the full stack, from silicon to systems and software.”