OpenAI partners with Broadcom on a $350B custom AI chip buildout and a shift from Nvidia's InfiniBand to Ethernet networking

By CTOL Editors - Ken

The Power Play: OpenAI’s $350 Billion Bet on Reshaping Silicon’s Old Guard

Custom chip pact with Broadcom signals a shift from raw compute to infrastructure control—redrawing the fight for AI’s economic future

SAN FRANCISCO — OpenAI has launched one of the most ambitious hardware initiatives in tech history, partnering with Broadcom to design and deploy 10 gigawatts of custom AI accelerators. The multiyear effort, valued between $350 billion and $500 billion, moves far beyond chip design. It aims to redefine where power—and profit—sit within the AI ecosystem.

Rather than chasing pure processing speed, the collaboration targets the underlying foundation: energy access, cooling efficiency, and network architecture. These once-overlooked elements now drive competitiveness in large-scale AI.

Beginning in late 2026 and running through 2029, OpenAI will create custom accelerators tailored to its models, while Broadcom handles fabrication and builds complete rack systems. Notably, the entire platform will run on Broadcom’s Ethernet networking instead of Nvidia’s InfiniBand, the long-standing favorite in high-performance computing.

Broadcom has already showcased what this future could look like with its Tomahawk 6 “Davisson,” a 102.4-terabit-per-second Ethernet switch and the first to ship with co-packaged optics at scale. By integrating the optics directly onto the chip substrate, the design sharply reduces power consumption and network instability—two of the biggest pain points in massive AI training clusters. As data centers struggle to keep thousands of GPUs connected without wasting energy or triggering link failures, the design points to a shift in how AI networks will be built and maintained.


When Electricity Becomes the Constraint

The scale is unprecedented. Ten gigawatts of compute capacity running around the clock translates to nearly 88 terawatt-hours annually at the chip level. Once cooling and facility overhead are factored in, the total approaches 105 terawatt-hours a year, roughly double Switzerland’s national electricity use.
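The arithmetic behind these figures can be checked in a few lines. Note that the facility overhead multiplier (PUE) below is an assumption inferred from the article's own totals, not a disclosed number:

```python
# Back-of-the-envelope check of the energy figures above.
# Assumptions: 10 GW of chip-level (IT) load running continuously, and a
# facility overhead multiplier (PUE) of ~1.2 inferred from the ~105 TWh
# total; neither value is officially disclosed.

HOURS_PER_YEAR = 24 * 365  # 8,760

it_load_gw = 10.0
chip_level_twh = it_load_gw * HOURS_PER_YEAR / 1_000  # GWh -> TWh
print(f"Chip-level energy: {chip_level_twh:.1f} TWh/year")  # ~87.6

pue = 1.2  # assumed ratio of total facility power to IT power
total_twh = chip_level_twh * pue
print(f"With cooling and overhead: {total_twh:.0f} TWh/year")  # ~105
```

For reference, Switzerland consumes on the order of 55 to 60 TWh per year, which is how the article arrives at "roughly double."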

“This isn’t just a chip deal, it’s an infrastructure land grab,” said one semiconductor analyst. “Securing predictable power is becoming the real moat.”

Power access has emerged as the biggest bottleneck in AI deployment. Gigawatt-scale data centers remain rare, permitting is slow, and opposition from local communities is rising. OpenAI’s reported work with Oracle on major energy-focused projects signals a broader shift: controlling electricity matters as much as controlling data.

The Networking Insurgency Nobody Expected

OpenAI’s move to Ethernet may disrupt Nvidia’s grip on AI networking. InfiniBand has long been considered essential for training advanced models due to low latency. However, for inference—where most AI workloads now occur—aggregate throughput and operational flexibility carry more weight than microsecond latency gains.

Broadcom’s Tomahawk 6 Ethernet switch, launched recently with 102.4 Tbps throughput and co-packaged optics, appears well-timed. The Ultra Ethernet Consortium’s new standard offers a vendor-neutral path to performance previously available only through proprietary tech.
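For a sense of scale, the switch's aggregate bandwidth maps onto port counts as follows. This is a simple illustration; the actual supported port configurations depend on Broadcom's SerDes layout:

```python
# How 102.4 Tbps of aggregate switching capacity divides into Ethernet
# ports at common AI-fabric speeds. Illustrative only; real port modes
# depend on the ASIC's SerDes configuration.
TOTAL_GBPS = 102_400  # 102.4 Tbps

for port_gbps in (400, 800, 1600):
    print(f"{port_gbps}G ports: {TOTAL_GBPS // port_gbps}")
```

At 800 Gbps per port, a single chip can fan out to 128 endpoints, which is why flat, high-radix Ethernet fabrics are attractive for large inference clusters.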

“Standardized networking is how companies gain leverage over monopoly pricing,” noted a former cloud infrastructure executive. If OpenAI proves Ethernet can scale to real-world AI demand, the networking landscape could shift rapidly.

Reading Between the Silicon

The partnership structure reveals OpenAI’s strategy: design chips internally, outsource manufacturing, and keep the intellectual property. This mirrors the playbook behind Google’s TPUs and Amazon’s Graviton processors—but tailored to AI inference.

Industry insiders expect the accelerators to emphasize memory bandwidth and sparse computation efficiency rather than peak floating-point performance. In an era where billions of tokens are served per day, cost per token matters more than training speed.
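The cost-per-token argument can be made concrete with a toy model. All inputs below (hardware cost, amortization period, power draw, electricity price, serving rate) are hypothetical placeholders, not figures from the deal:

```python
# Toy model of inference economics: amortized hardware cost plus energy
# cost, per million tokens served. All inputs are hypothetical.
SECONDS_PER_YEAR = 365 * 24 * 3600

def usd_per_million_tokens(hw_cost_usd, amort_years, power_kw,
                           usd_per_kwh, tokens_per_sec):
    hw_per_sec = hw_cost_usd / (amort_years * SECONDS_PER_YEAR)
    energy_per_sec = power_kw * usd_per_kwh / 3600  # $/kWh -> $/s
    return (hw_per_sec + energy_per_sec) / tokens_per_sec * 1e6

# A chip with lower peak FLOPS but better throughput per dollar can win
# on serving cost (hypothetical comparison):
flagship = usd_per_million_tokens(30_000, 4, 1.0, 0.08, 2_000)
custom = usd_per_million_tokens(15_000, 4, 0.8, 0.08, 1_500)
print(f"flagship: ${flagship:.3f}/M tok, custom: ${custom:.3f}/M tok")
```

Under these made-up numbers the cheaper, slower chip serves tokens at a lower unit cost, which is the economic logic behind optimizing for memory bandwidth and sustained throughput rather than peak floating-point performance.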

This poses a challenge to Nvidia. The company thrives by offering integrated hardware, software, and networking. If inference economics break away from training hardware, the market may splinter in favor of specialized systems.

The Cascade of Consequences

Nvidia’s near-term outlook remains strong, but pressure is building. Analysts expect pricing adjustments or bundling strategies to protect InfiniBand adoption.

AMD faces a strategic crossroads. Its MI accelerators and ROCm stack position it as an open alternative, but custom silicon reduces the available market. If Ethernet gains traction, AMD could benefit—if it leans into open networking rather than niche use cases.

Broadcom, meanwhile, appears well positioned. Its long-term bet on Ethernet and custom accelerators now aligns with hyperscale trends. Even lower-margin systems integration work becomes more valuable when coupled with its network technologies.

Following the Megawatts

The investment story extends beyond semiconductors. Cloud providers now compete on power access, not just GPU inventory. Oracle’s reported $300 billion “Stargate” initiative centers on energy-first site selection, underlining that the real scarcity is electricity, not chips.

Utilities in data center hubs face massive load growth. Behind-the-meter solutions like fuel cells, modular nuclear reactors, and heat reuse systems are shifting from pilots to priorities. Data centers are beginning to look like heavy industry—requiring political and community buy-in.

Calibrating Expectations and Timelines

Investors should treat OpenAI’s 2026 goal with caution. First-generation custom silicon often encounters delays due to packaging, software readiness, or thermal issues. A short slip would be normal, not alarming.

Ethernet at 10,000-plus node scale still needs to prove it can match InfiniBand performance in complex production environments. Benchmarks in labs rarely mirror real usage.

Power procurement poses similar risk. Securing gigawatt-scale capacity involves regulatory hurdles, transmission constraints, and community negotiation. Building energized infrastructure typically takes longer than designing the chips that will run on it.

Investment Considerations for Market Participants

Several themes emerge. Broadcom may benefit from its networking and integration capabilities, but margin trends warrant monitoring. Nvidia’s training dominance remains a defensive moat, though diversification into Ethernet infrastructure could act as a hedge.

The most overlooked opportunity may be energy infrastructure. Utilities with data center capacity, renewable developers with baseload power, and construction firms specializing in rapid deployment stand to gain the most. Physical limitations—power, cooling, land—are becoming the defining constraints of AI growth.

As always, investment decisions require careful assessment of risk and goals. Market conditions may evolve, and readers should seek professional guidance tailored to their circumstances.

The OpenAI-Broadcom alliance signals an industrial phase of AI—one where success depends less on clever algorithms and more on managing electricity, thermal loads, and efficient infrastructure at unprecedented scale. Those who master the logistics may end up defining the next decade of the industry.


