
Broadcom’s 102.4-Terabit Switch Ushers in New Era for AI Networks
Tech giant rolls out first co-packaged optics Ethernet switch as data centers wrestle with power and stability challenges in massive AI training clusters
PALO ALTO — The race to fuel artificial intelligence isn’t just about faster chips or smarter algorithms. Behind the scenes, an even bigger challenge has been lurking: the connections between those chips. Every day, data centers burn through staggering amounts of electricity while struggling to keep thousands of processors talking to each other without tripping over unstable links. That hidden bottleneck now threatens the economics of AI itself.
Broadcom believes it has the answer. On Wednesday, the company revealed it has started shipping the Tomahawk 6 “Davisson,” a 102.4-terabit-per-second Ethernet switch. More importantly, it’s the first of its kind to use co-packaged optics at this scale. The release couldn’t come at a more critical moment, as hyperscale cloud operators scramble to squeeze every drop of performance from limited power and cooling resources.
For years, engineers relied on plugging optical transceivers directly into switch faceplates. That method worked—until AI training workloads exploded to hundreds of thousands of interconnected processors. At that scale, traditional hardware has hit a wall, both physically and economically.
A Marriage of Silicon and Light
So what makes this new switch different? Instead of attaching optics as an add-on, Broadcom has baked them directly onto the same chip substrate. By eliminating long electrical pathways, extra connectors, and signal-conditioning hardware, co-packaged optics (CPO) cut out the middlemen that waste power and introduce instability.
Think of it as moving the engine inside the wheels instead of connecting them with a long, wobbly drive shaft. The result? Cleaner, faster, more reliable motion.
According to Broadcom, this design slashes interconnect power consumption by as much as 70 percent compared with conventional pluggable optics. Multiply that across tens of thousands of network ports and you’re looking at huge savings—not just in dollars, but in the amount of heat data centers have to get rid of.
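To put that 70 percent figure in perspective, here is a back-of-envelope sketch in Python. Only the reduction percentage comes from Broadcom's claim; the per-transceiver wattage, port count, and electricity price are illustrative assumptions.

```python
# Back-of-envelope sketch of the power claim above. The per-transceiver
# wattage, port count, and electricity price are illustrative assumptions,
# not Broadcom-published figures; only the ~70% reduction is from the article.

PLUGGABLE_W = 30.0      # assumed power of one 1.6T pluggable transceiver (W)
CPO_REDUCTION = 0.70    # interconnect power cut claimed for co-packaged optics
PORTS = 50_000          # hypothetical AI-cluster port count
PRICE_PER_KWH = 0.08    # assumed industrial electricity price (USD)
HOURS_PER_YEAR = 8_760

watts_saved = PLUGGABLE_W * CPO_REDUCTION * PORTS
kwh_per_year = watts_saved / 1_000 * HOURS_PER_YEAR
print(f"Power avoided: {watts_saved / 1_000:.0f} kW")
print(f"Annual energy: {kwh_per_year:,.0f} kWh "
      f"(~${kwh_per_year * PRICE_PER_KWH:,.0f}/yr)")
# Note: cooling typically adds ~30-50% on top (PUE), so facility-level
# savings would be larger than the raw interconnect number.
```

Even with these rough inputs, the savings land above a megawatt of continuous draw, which is why operators treat interconnect power as a first-order design constraint.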
Stability also gets a major boost. In large AI training clusters, even a tiny hiccup in the network can idle expensive GPUs and delay training runs that cost hundreds of thousands of dollars a day. By tightening the integration, Broadcom aims to minimize those costly interruptions.
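The cost of those interruptions is easy to underestimate, so here is a rough illustration. All figures below are assumptions for the sketch, not numbers from Broadcom.

```python
# Rough illustration of why link stability matters in a training cluster.
# All figures are assumptions for this sketch; none come from Broadcom.

GPUS = 20_000           # hypothetical training-cluster size
GPU_HOUR_COST = 2.00    # assumed all-in cost per GPU-hour (USD)
STALL_MINUTES = 5       # assumed stall while a flapped link recovers
                        # and the job restarts from a checkpoint

idle_cost = GPUS * GPU_HOUR_COST * (STALL_MINUTES / 60)
print(f"One {STALL_MINUTES}-minute stall idles ~${idle_cost:,.0f} of compute")
print(f"One flap per day costs ~${idle_cost * 365:,.0f}/yr on this cluster")
```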
The $80 Billion Question
Of course, this isn’t just a technical feat—it’s a business story too. Analysts expect spending on Ethernet switches for AI networks to hit somewhere between $80 billion and $100 billion over the next five years. Cloud giants and enterprise AI shops are in an arms race to build networks capable of handling ever more ambitious model training.
Within that enormous market, co-packaged optics occupies a small but critical corner. Adoption has been slow because the technology is complex and the supply chain isn’t fully mature. But as network speeds push past 1.6 terabits per second, conventional pluggables start to buckle under power and thermal stress. That’s where CPO begins to shine.
As one network architect put it, “At these speeds, you’re not just buying performance. You’re buying the ability to fit the bandwidth into your existing power and cooling envelopes.”
With the Davisson platform doubling speeds to 200 gigabits per optical channel—twice what Broadcom’s last-gen CPO switches managed—the company is planting its flag squarely in the middle of this high-stakes transition.
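Those headline numbers are internally consistent, as a quick calculation shows. The eight-lanes-per-port grouping below is the conventional arrangement for a 1.6-terabit port, stated here as an assumption rather than a Davisson spec.

```python
# Sanity-checking the headline figures: 102.4 Tb/s of switch capacity
# decomposed into 200G optical lanes and 1.6T ports. The 8-lanes-per-port
# grouping is the conventional 1.6T arrangement, assumed here.

SWITCH_TBPS = 102.4
LANE_GBPS = 200         # per-lane optical speed on Davisson
LANES_PER_PORT = 8      # assumed lanes bundled into one 1.6T port

lanes = SWITCH_TBPS * 1_000 / LANE_GBPS
ports = lanes / LANES_PER_PORT
print(f"{lanes:.0f} optical lanes -> {ports:.0f} ports "
      f"of {LANE_GBPS * LANES_PER_PORT / 1_000:.1f} Tb/s each")
# 512 lanes -> 64 ports of 1.6 Tb/s
```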
Rivals in Hot Pursuit
Broadcom isn’t alone. Cisco, Marvell, and Nvidia all have their own visions for how to wire up the AI factories of the future. Several have already announced switches capable of matching Broadcom’s raw capacity. But Broadcom has the bragging rights of being first to actually ship at this scale.
That lead may matter. Large cloud providers plan infrastructure years ahead, and once they qualify a vendor’s product, they often stick with it. Being first in line could give Broadcom a crucial edge.
Nvidia, however, poses a unique threat. By bundling its dominant AI accelerators with networking gear and software, it offers customers a one-stop package. That’s tough for pure networking vendors to counter. Expect the competition to shake out differently depending on the use case, with some scenarios favoring Nvidia’s vertically integrated approach and others leaning on Broadcom’s silicon.
System vendors like Arista Networks and niche players such as Micas Networks will also shape adoption. Their willingness to back CPO solutions will serve as an early sign of how quickly the technology spreads.
A Reality Check for Operators
Still, rolling out co-packaged optics isn’t as simple as flipping a switch. The very integration that makes the technology efficient also makes it harder to service. Swapping out a faulty optical module inside a package is a far cry from sliding in a new pluggable transceiver.
Broadcom has tried to ease those worries with replaceable laser modules, but operators will need new skills and procedures to maintain these systems. Many will hedge their bets, deploying CPO in critical layers of the network while relying on more familiar pluggable optics elsewhere.
Supply chain reliability adds another wrinkle. TSMC’s role in manufacturing the photonic engines means production volumes could hit limits just as demand surges. Any hiccup there could delay deployments.
The Road Ahead
For investors and industry watchers, Broadcom’s move highlights key trends. First-mover advantage matters, and shipping real products—not just announcing roadmaps—wins credibility. Analysts expect Broadcom to rack up design wins over the next year or two, especially among customers who tested earlier versions.
The bigger picture, however, is Ethernet’s growing dominance in AI back-end networks. Even if CPO adoption remains modest, the value per rack will rise thanks to richer silicon and optics content. Scenarios range from conservative projections where linear pluggables do most of the heavy lifting, to aggressive cases where CPO adoption climbs to a quarter of AI ports by 2027.
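As a rough illustration of how those scenarios translate into dollars, the sketch below applies a range of CPO attach rates to the roughly $80 billion five-year spend figure cited earlier. Treating port attach as a proxy for spend share is a simplifying assumption, as are the specific rates.

```python
# Scenario sizing sketch. The $80B figure comes from the analyst estimates
# cited above; the attach rates mirror bear/base/bull cases, and using
# port attach as a proxy for spend share is a simplifying assumption.

AI_ETHERNET_SPEND_B = 80.0    # five-year AI Ethernet switch spend ($B)
scenarios = {"bear": 0.05, "base": 0.12, "bull": 0.25}

for name, attach in scenarios.items():
    print(f"{name}: {attach:.0%} CPO attach -> "
          f"~${AI_ETHERNET_SPEND_B * attach:.0f}B over five years")
```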
Broadcom is already looking ahead, with plans for fourth-generation CPO that supports 400-gigabit-per-second lane speeds. That roadmap lines up neatly with hyperscalers’ long-term planning, signaling that this isn’t just a one-off breakthrough but the start of a multi-year shift.
House Investment Thesis
| Category | Summary & Author's View |
| --- | --- |
| Product & Significance | First-to-market 102.4 Tb/s co-packaged optics (CPO) Ethernet switch. A real milestone, not just a spec bump. It directly targets AI fabric pain points: power/thermals and link stability at 1.6T speeds. Built on the proven TH5/TSMC COUPE foundation. |
| Key Advantages | 1. Throughput: 102.4T bandwidth, enabling 1.6T ports for 10k-100k+ GPU clusters. 2. Power/Thermals: CPO cuts interconnect power vs. pluggables, offering double-digit % system-level savings. 3. Stability: fewer components reduce link flaps, minimizing costly GPU idle time. 4. Manufacturing: the TSMC COUPE platform allows volume scaling with good yield. |
| Market Opportunity | ~$80B Ethernet switch spend for AI over 5 years. CPO doesn't need to win everywhere; even 5-15% penetration by 2027 represents a multi-billion-dollar silicon+optics opportunity, concentrated in high-power domains like GPU spines. |
| Competitive Landscape | • Broadcom: leader; first to ship 102.4T CPO, strong merchant ecosystem. • Nvidia: vertical integration (GPU+networking+software) is its edge. • Cisco: pushing LPO as a "good enough" alternative with fewer serviceability trade-offs. • Marvell: fast follower; time-to-volume is key. |
| Investment Thesis (AVGO) | Support: category leadership, structural Ethernet TAM growth, photonics/packaging moat. Risks: CPO adoption rate, LPO competitiveness, operational/serviceability friction, photonics supply chain. |
| Adoption Risks & Mitigations | 1. Serviceability: harder to replace than pluggables; mitigated by field-replaceable laser modules. 2. Thermal density: local heat flux is brutal; requires advanced system design. 3. Supply chain: scaling photonics is non-trivial; mitigated by the standard TSMC COUPE platform. 4. Software: the Ethernet stack must mature to match Nvidia's end-to-end optimizations. |
| Scenarios | Base case (most likely): 8-12% CPO attach by 2027; AVGO captures majority share. Bull case: 15-25% CPO attach; AVGO maintains >60% share, driving revenue outperformance. Bear case: <5% CPO attach; LPO wins; AVGO still benefits from the 102.4T ASIC cycle but misses the CPO upside. |
| Key Metrics to Track | 1. Shipping 102.4T CPO systems from OEMs. 2. Independent power/stability data (watts per 100G, link-flap MTBF). 3. Named hyperscaler deployments and LPO/CPO tier splits. 4. TSMC COUPE capacity/yield updates. 5. Competitor 102.4T/CPO roadmap timing. |
| Final Call | Groundbreaking? Yes, for Ethernet CPO. Solves urgent issues? Yes, materially advances power and stability. Market size? Large, regardless of CPO mix. Leading? Yes, on shipping 102.4T CPO today. |
Bottom Line: Data centers are running out of room to grow the old way. Broadcom’s Tomahawk 6 may not solve every challenge overnight, but it shows how the industry is rethinking the very foundations of AI infrastructure. And in a race where every watt, every dollar, and every second counts, that shift could prove decisive.

*Not investment advice.*