Broadcom has just launched the Tomahawk Ultra, a networking switch built specifically to handle AI workloads. It’s designed to tightly connect hundreds of AI chips, significantly boosting communication speed within server racks. This move puts Broadcom in direct competition with NVIDIA’s NVLink Switch and InfiniBand technology.
Connects four times more chips
Where NVIDIA’s NVLink ties together only a few dozen GPUs, Broadcom says Tomahawk Ultra can connect roughly four times as many devices. That gives data center architects the ability to build much larger and more powerful AI clusters.
Turbocharged Ethernet instead of proprietary tech
Rather than using NVIDIA’s closed NVLink system, Broadcom went with a high-performance version of standard Ethernet. This brings better compatibility, more vendor choices, and less risk of getting locked into a single ecosystem.
Speed and efficiency at its core
Broadcom reworked Ethernet for ultra-low latency: roughly 250 nanoseconds of switching delay at a full 51.2 terabits per second of throughput. That puts it on par with InfiniBand, so the switch can carry the tightly synchronized traffic of large-scale AI training without stalling accelerators or forcing data to be resent.
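To put those figures in perspective, here is a rough back-of-envelope sketch in Python. The 250 ns and 51.2 Tb/s values come from the announcement; the 64-byte packet size and the 64 × 800 Gb/s port split are illustrative assumptions, not vendor specs.

```python
# Rough arithmetic on Tomahawk Ultra's headline numbers (illustrative only).

THROUGHPUT_BPS = 51.2e12   # 51.2 Tb/s aggregate switching throughput (quoted figure)
HOP_LATENCY_S = 250e-9     # ~250 ns port-to-port latency (quoted figure)

# Data the switch as a whole can move during one hop's worth of latency.
bytes_per_hop = THROUGHPUT_BPS / 8 * HOP_LATENCY_S
print(f"Data moved in one 250 ns hop: {bytes_per_hop / 1e6:.1f} MB")   # ~1.6 MB

# Time to serialize a small 64-byte packet onto one port, assuming the
# capacity is split across 64 ports of 800 Gb/s each (an assumption here).
PORT_BPS = 800e9
packet_bits = 64 * 8
print(f"64 B serialization on one port: {packet_bits / PORT_BPS * 1e9:.2f} ns")  # ~0.64 ns

# The fixed hop latency, not wire speed, dominates small-message delay,
# which is why shaving switch latency matters for synchronous training.
```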
Broadcom also added features such as Link Layer Retry (LLR) and Credit-Based Flow Control (CBFC). In plain terms, the link retransmits damaged frames in hardware, and a sender only transmits when the receiver has buffer space free, which keeps data transfer smooth and lossless. That matters for big AI jobs, where even a small amount of packet loss triggers retransmissions higher up the stack and wastes expensive compute time.
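As a purely illustrative sketch (not Broadcom's implementation, which lives in switch hardware), the toy Python class below shows the two ideas: credits cap how much a sender may have in flight so the receiver's buffer never overflows, and a corrupted frame is retried at the link layer instead of being dropped. The class name, buffer size, and corruption rate are all made up for the example.

```python
import random

class LosslessLink:
    """Toy model of Credit-Based Flow Control (CBFC) plus Link Layer Retry (LLR).

    Illustrative only: real switches implement both mechanisms in hardware.
    """

    def __init__(self, buffer_slots: int, corruption_rate: float = 0.05):
        self.credits = buffer_slots   # credits mirror free receiver buffer slots
        self.rx_buffer = []           # frames waiting to be consumed by the receiver
        self.corruption_rate = corruption_rate
        self.retries = 0

    def send(self, frame: str) -> None:
        # CBFC: the sender may only transmit while it holds a credit, so the
        # receiver's buffer can never overflow and no frame is ever dropped.
        if self.credits == 0:
            self.consume()            # in reality the sender would wait for a credit return
        self.credits -= 1

        # LLR: a corrupted frame is NAKed and retransmitted by the link itself,
        # so upper layers (and the training job) never see any loss.
        while random.random() < self.corruption_rate:
            self.retries += 1
        self.rx_buffer.append(frame)

    def consume(self) -> str:
        # Receiver drains one frame from its buffer and returns a credit.
        frame = self.rx_buffer.pop(0)
        self.credits += 1
        return frame


link = LosslessLink(buffer_slots=4)
for i in range(20):
    link.send(f"frame-{i}")
print(f"frames waiting: {len(link.rx_buffer)}, link-layer retries: {link.retries}")
```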
Built for ease of deployment
Tomahawk Ultra is pin-compatible with Broadcom’s earlier Tomahawk 5 switch. Companies don’t have to rebuild everything from scratch; they can upgrade current setups with minimal changes. That has helped attract early backing from major players like HPE, Accton, AMD, Intel, and Delta Electronics.
From HPC to AI
This switch didn’t start off targeting AI; it was originally developed for high-performance computing. Over roughly three years of development, Broadcom realized that AI clusters need similar infrastructure and retooled the design for generative AI and large-model training. It’s now in production, built on TSMC’s 5-nanometer process.
Market impact and investor reaction
Broadcom’s stock jumped around 2% after the announcement and hit new highs. Analysts at firms like Mizuho even raised their price targets, citing strong potential in AI networking. Oppenheimer shared the optimism, pointing to continued momentum in AI-focused infrastructure.
Why this matters beyond chips
This launch signals a broader shift in how AI data centers are built. Instead of closed-off, proprietary systems, Broadcom is pushing for open, Ethernet-based networking. Big cloud providers are already on board, and even Google has shown support for this approach in its own AI accelerators.
If Broadcom gets enough adoption, it could do more than just compete with NVIDIA—it could completely change how future AI systems are designed. More scale, more flexibility, and possibly lower costs.
Bottom line
Tomahawk Ultra isn’t just another networking chip. It’s Broadcom’s high-stakes bet on open standards and Ethernet as the backbone of future AI clusters. The technology is ready. The backing is strong. Now it’s a question of whether the industry follows—and early signs say it just might.