Alphabet’s Google has told its employees that the company must aggressively scale its artificial-intelligence computing power in the coming years to keep pace with rapidly rising global demand. Senior leadership said Google will need to double its AI computing capacity every six months; sustained over four to five years, that pace compounds to well over a 100-fold expansion. The internal message highlights how critical AI infrastructure has become in the competition among the major global technology companies.
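The arithmetic behind that target is simple compounding: each six-month period doubles capacity, so a four-to-five-year horizon spans eight to ten doublings. A minimal sketch in Python, using an arbitrary baseline of 1.0 and an illustrative helper name of our own, makes the scale concrete:

```python
# Back-of-envelope: how capacity compounds when it doubles every six months.
# The baseline of 1.0 and the function name are illustrative, not Google figures.

def capacity_multiple(years: float, doubling_period_years: float = 0.5) -> float:
    """Growth multiple after `years` of repeated doubling."""
    doublings = years / doubling_period_years
    return 2 ** doublings

for years in (3, 4, 5):
    print(f"{years} years -> {capacity_multiple(years):,.0f}x the starting capacity")
```

Eight doublings over four years already works out to roughly 256 times the starting capacity, and ten doublings over five years to roughly 1,024 times, which is why the internal framing of "more than 100-fold" is, if anything, conservative.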
The discussion took place during an all-hands meeting on November 6, where Amin Vahdat, Google Cloud’s vice president in charge of AI infrastructure, addressed employees about the company’s current challenges and future demands. He delivered a clear message: Google cannot remain competitive without exponential scaling. The company’s leadership now views AI infrastructure as the most crucial and financially demanding component of the AI race.
At the same time, Alphabet has raised its financial outlook for 2025. The company now expects capital expenditures for the year to reach between 91 and 93 billion dollars, reflecting the enormous cost of building and upgrading the data centres, servers and custom silicon needed to power advanced AI systems. CEO Sundar Pichai also told employees that 2026 will likely be a difficult and intense year as Google continues expanding at an accelerated pace.
A Rapidly Intensifying Arms Race Across Big Tech
Google’s push comes amid unprecedented spending across the entire technology industry. Major competitors such as Microsoft, Amazon and Meta have all increased their investment plans; together with Alphabet, their combined capital-expenditure forecasts now exceed 380 billion dollars for the current year.
Microsoft, for example, plans to spend nearly 35 billion dollars in a single quarter, a 74 percent increase over the same period a year earlier. Amazon expects to invest around 125 billion dollars in 2025 and anticipates even higher spending in 2026 as it moves to roughly double its global data-centre footprint. Meta has raised its 2025 budget to around 70 to 72 billion dollars and warned that it expects capital spending to grow significantly faster again in the following year.
The global infrastructure race is driven by the explosive adoption of generative AI tools, large language models and cloud-based machine-learning systems that require vast amounts of computational power. As these technologies become central to search, cloud platforms, advertising systems, autonomous tools and enterprise applications, the scale of required infrastructure has reached levels previously unseen in the technology sector.
Internal Concerns About Overheating and Market Risks
During the November meeting, employees expressed concern that the industry could be entering an AI bubble. Google leadership acknowledged that the question is relevant given the pace of investment. Pichai pointed to Google Cloud’s recent annual revenue growth of about 34 percent and a backlog valued at roughly 155 billion dollars as evidence that the company is experiencing real demand rather than speculative hype.
He also said Google Cloud’s revenue would have been even higher if the company had more compute capacity available, highlighting how infrastructure limitations are now constraining growth. At the same time, he admitted that if an industry-wide bubble were to burst, no company—including Google—would be immune from the impact.
These concerns echo a broader market debate. Nvidia, the world’s leading AI chip supplier, recently dismissed fears of an AI bubble, yet the company’s stock still declined after its latest earnings release, dragging down the Nasdaq index. Many global fund managers surveyed by Bank of America believe that an AI-related market correction could significantly affect financial markets.
Google’s Strategy: Efficiency and Custom Silicon Over Pure Spending
Even as industry spending rises at a historic pace, Google told employees that simply outspending competitors is not a sustainable strategy. Instead, the company aims to build infrastructure that is more reliable, more efficient and more powerful than the alternatives offered by rivals.
Amin Vahdat emphasized that Google’s success depends on major advances in efficiency. The company aims to deliver dramatically higher levels of compute power, storage capacity and network performance at essentially the same cost as today’s systems, or lower, while operating within similar power constraints.
A key part of this strategy is Google’s investment in custom silicon. The company recently introduced Ironwood, its seventh-generation Tensor Processing Unit. Google says the new chip generation is nearly 30 times more energy-efficient than its first cloud-based TPU released in 2018. Ironwood systems can scale to more than 9,200 interconnected chips and deliver more than 40 exaflops of computing power, making them among the most capable AI accelerators deployed at scale anywhere in the world.
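Dividing the reported pod-level numbers against each other gives a rough, order-of-magnitude sense of per-chip throughput; this is back-of-envelope arithmetic on the figures above, not an official per-chip specification:

```python
# Rough per-chip estimate derived from the reported Ironwood pod-level figures
# ("more than 9,200 chips", "more than 40 exaflops"); illustrative only.

pod_exaflops = 40       # reported lower bound for one Ironwood pod
chips_per_pod = 9_200   # reported lower bound for chips in that pod

petaflops_per_chip = pod_exaflops * 1_000 / chips_per_pod  # 1 exaflop = 1,000 petaflops
print(f"~{petaflops_per_chip:.1f} petaflops per chip")   # ~4.3 petaflops per chip
```

That figure of a few petaflops per chip is in line with the low-precision peak-throughput numbers typically quoted for modern AI accelerators; the article does not specify which numeric format the exaflops claim refers to.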
Google also emphasized that its internal partnerships between Google Cloud and DeepMind are giving the company a strategic advantage. DeepMind’s forward-looking research capabilities allow Google to anticipate the needs of future AI models and design infrastructure that will be ready for increasingly large, complex and resource-hungry systems in the years ahead.
A Market Betting Big on AI—But With Rising Uncertainty
The industry’s trillion-dollar commitment to AI infrastructure has prompted analysts and economists to question whether the expected returns will justify the massive levels of spending. The demand for compute is rising quickly, but building new data centres involves long approval processes, environmental considerations, access to large amounts of electricity and the need for advanced cooling systems.
Even companies with vast resources face supply constraints. Demand for the most advanced GPUs and AI accelerators continues to exceed supply, with multi-year waiting lists reported by cloud providers and enterprise customers. As AI models grow larger and more complex, the compute required for training and inference also increases, putting additional pressure on infrastructure providers like Google, Amazon and Microsoft.
Despite these constraints, the major technology firms remain convinced that AI will drive the next decade of growth. They believe that failing to scale at the pace required would mean losing competitive ground in cloud services, enterprise AI, consumer applications and advertising technologies.
What Google’s Strategy Means
Google’s internal directive to double its AI computing power every six months is one of the most aggressive scaling strategies the industry has ever seen. Achieving more than a hundred-fold increase within a few years would require unprecedented innovation in chip design, networking, energy efficiency and data-centre engineering.
For businesses using AI services, the message is clear: the companies building the world’s digital infrastructure are preparing for an era in which AI workloads dominate cloud computing. For smaller players and startups, the cost of competing in this environment may continue to rise, potentially widening the gap between large tech firms and the rest of the market.
For investors, the scale of planned spending underscores both the promise and the risk of the AI boom. Returns could be transformative if AI becomes embedded in every major sector. But if demand slows or technology evolves in unexpected ways, the financial consequences could be significant.