SAN FRANCISCO, 21 February 2026 – OpenAI has informed investors that it plans to spend roughly US $600 billion on computing infrastructure by 2030, as the AI pioneer lays the groundwork for rapid expansion and prepares for future monetisation and a potential public listing.
The spending target, which broadly covers the data-centre capacity, hardware, and processing resources needed to train and operate advanced models, reflects the capital intensity of frontier AI development and underscores how central compute remains to competitive advantage in the industry.
Big Numbers, Big Expectations
Under its long-term roadmap:
- Total compute spending through 2030 is projected at around US $600 billion, an enormous investment by any standard, even in tech.
- OpenAI expects significant revenue growth, aiming for more than US $280 billion in total revenue by 2030, split roughly evenly between consumer and enterprise products and services.
- In 2025, the company reported about $13 billion in revenue, exceeding its $10 billion target for the year.
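The revenue figures above imply a steep growth trajectory. A quick sketch of the implied compound annual growth rate, assuming the reported ~$13 billion for 2025 and the >$280 billion 2030 target over a five-year horizon (the exact fiscal-year boundaries are an assumption):

```python
# Implied compound annual growth rate (CAGR) from OpenAI's reported
# 2025 revenue (~US $13B) to its stated 2030 target (>US $280B).
# Figures are from the article; the five-year horizon is assumed.
revenue_2025 = 13e9
revenue_2030 = 280e9
years = 5

cagr = (revenue_2030 / revenue_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 85% per year
```

In other words, hitting the 2030 target would require revenue to nearly double every year, which helps explain the scale of the compute budget behind it.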
The vast compute budget highlights how resource-intensive training and operating generative AI models can be, with advanced models requiring increasingly powerful infrastructure to handle tasks such as large-scale language processing, multimodal reasoning and real-time inference for consumer and business users.
Strategic Backing and Funding Moves
OpenAI’s compute ambitions come alongside major financing developments in the AI space: Nvidia is reported to be nearing a US $30 billion investment in OpenAI as part of a broader funding round that could exceed US $100 billion in total, positioning the company for further infrastructure build-outs.
Such industry deals illustrate how deep partnerships between AI developers and hardware ecosystem players are evolving, with capital commitments tied to both strategic chip supply and long-term compute capacity growth.
Implications for the AI Ecosystem
OpenAI’s compute forecast underscores several trends shaping the broader AI landscape:
- Capital-intensive growth: Large-scale compute deployment remains a core competitive battleground, as training and running state-of-the-art models demand substantial hardware investment.
- Investor focus: Long-term spending commitments help signal seriousness to investors, even as many market watchers debate how rapidly higher compute costs will translate into profits.
- Market dynamics: The US $600 billion plan follows earlier industry talk of even larger infrastructure requirements, at times reaching into the trillions of dollars, highlighting the evolving scale of AI deployment strategies.
As AI adoption continues to accelerate across enterprise and consumer markets, compute infrastructure and related services, from GPU supply chains to data-centre capacity, will likely remain pivotal in shaping the winners and losers of the technology race.