OpenAI is reportedly in talks with Broadcom and other chip makers in what appears to be an urgent effort to expand its artificial intelligence operations.
AI models such as OpenAI’s ChatGPT and Meta’s Llama are typically trained on massive clusters of graphics processing units (GPUs) or similar computer chips. The most popular such chip is Nvidia’s H100.
The H100 can cost anywhere from $15,000 to $30,000, depending on the quantity purchased and current market conditions. Training a single model can require tens of thousands of these AI chips, with even more needed for larger, more capable systems.
Despite OpenAI’s position as a market leader in the generative AI space, much of the hardware used to train its models belongs to its partner Microsoft.
According to a report from The Information, OpenAI is in talks with Broadcom and other chip makers to develop its own chip. In a statement to The Information, OpenAI neither confirmed nor denied the report, but did say it was investigating increasing access to infrastructure:
“OpenAI is having ongoing conversations with industry and government stakeholders about increasing access to the infrastructure needed to ensure AI’s benefits are widely accessible. This includes working in partnership with the premier chip designers, fabricators and the brick-and-mortar developers of data centres.”
The report also claims that the investment required for OpenAI’s plans, as far as a new chip is concerned, would likely run into the billions of dollars or more.
Back in February, The Wall Street Journal reported that Altman was actively seeking investors in hopes of raising a $5 trillion–$7 trillion war chest to develop more chips.
Part of what may be spurring Altman and company in their scramble for more chips is the growing prominence of direct competitors: Elon Musk’s xAI and Mark Zuckerberg’s Meta.
[Source: Cointelegraph – Image: Shutterstock]