OpenAI co-designs a custom AI accelerator with Broadcom; first chips ship 2026
Initial deployment internal to OpenAI data centres, not sold externally
Aims to reduce Nvidia GPU dependence, lower compute costs and optimise performance
Broadcom expects multibillion-dollar AI revenue upside; shares rallied on news
OpenAI is preparing to design and produce a custom AI accelerator with Broadcom Inc, with the first chips expected to ship in 2026 for OpenAI’s internal use, the Financial Times reported.
The move would mark the ChatGPT maker’s biggest step toward owning the hardware that runs its models and reducing dependence on Nvidia’s GPUs.
According to people familiar with the arrangement, OpenAI has co-designed the chip with Broadcom and will initially deploy the processors within its own data centres rather than sell them to outside customers. Broadcom CEO Hock Tan signalled on an earnings call that the company has won major production orders from a new, unnamed customer — remarks that market watchers linked to the Financial Times’ reporting on the OpenAI collaboration.
Custom silicon is a fast-growing lever for cloud and AI companies to cut costs and optimise performance for large language models and other inference workloads.
By owning its own accelerators, OpenAI could control procurement timelines, lower per-unit compute costs and tailor chips to the specific memory, bandwidth and interconnect needs of its models, a strategic play that follows precedents set by Google, Meta and Amazon.
Market Reaction & Scale of Deal
Broadcom said an unnamed customer had committed to substantial production orders; analysts covering the company’s earnings pointed to a multibillion-dollar uplift to Broadcom’s AI revenue outlook for fiscal 2026. Broadcom shares rallied on the news, reflecting investor appetite for bespoke AI infrastructure beyond Nvidia’s dominance.
The shift comes amid surging global demand for AI compute as companies train ever-larger models and run them for millions of users. While custom chips can deliver efficiency gains, building competitive accelerators is complex and costly: success depends on tight coordination across chip design, firmware, interconnects and chip-fabrication partners.
OpenAI’s plan therefore signals both its willingness to pour more resources into infrastructure and the rising strategic importance of in-house silicon.
Key near-term indicators will include confirmation from Broadcom or OpenAI of the deal’s scope, any disclosure of fabrication partners or timelines, and whether OpenAI ever offers the chips to third-party customers. Regulators and hyperscale cloud partners may also scrutinise how the hardware rollout affects competition for AI compute going forward.