
OpenAI, the maker of ChatGPT, has begun renting Google’s artificial intelligence chips, known as tensor processing units (TPUs), to power its products, Reuters reports, marking a significant pivot away from its heavy reliance on Nvidia’s graphics processing units (GPUs).
OpenAI, a major consumer of Nvidia’s chips for both model training and inference, is now using Google Cloud’s services to meet its escalating demand for computing capacity.
The arrangement marks a notable collaboration between two leading competitors in the AI industry and underscores OpenAI’s intent to diversify its chip suppliers.
Google’s decision to lease TPUs to OpenAI reflects its broader strategy to open its proprietary AI hardware to external clients, a shift from its historical focus on internal use.
This approach has already attracted major players like Apple and startups such as Anthropic and Safe Superintelligence, both founded by former OpenAI leaders.
By adding OpenAI to its roster, Google strengthens its cloud business, capitalizing on its end-to-end AI ecosystem, from hardware to software. However, Google is reportedly withholding its most advanced TPUs from OpenAI to maintain a competitive edge in the AI race.
OpenAI’s adoption of Google’s TPUs represents its first significant use of non-Nvidia chips and signals a departure from its dependence on the data centers of Microsoft, its primary backer.
The shift is driven by the prospect of lower inference costs, with TPUs offering a potentially cheaper alternative to Nvidia’s GPUs.
This development could reshape the AI chip market, positioning Google’s TPUs as a viable competitor to Nvidia’s GPUs.
Neither Google nor OpenAI has officially commented on the arrangement, leaving the industry to speculate on the long-term impact of this unexpected partnership.