Power, not chips, has become the binding constraint on AI deployment
Hyperscale operators are signing nuclear power-purchase agreements, restarting retired plants, and waiting in grid-interconnection queues measured in years.
Through 2023 the conversation about AI infrastructure was about GPUs. Through 2024 it was about HBM memory. Through most of 2025 it was about networking — InfiniBand, NVLink, optical interconnects. In 2026 the conversation is about power, and it is unlikely to leave that subject for the rest of the decade.
The International Energy Agency projects that global data-centre electricity demand by the late 2020s will rival the entire annual consumption of a mid-sized industrialised country. The drivers are not subtle: a single GB200 NVL72 rack draws roughly 120 kilowatts continuously. An AI training campus at the gigawatt scale would have been an oddity five years ago; several are now under active construction.
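A rough back-of-envelope sketch makes the scale concrete. It uses the 120-kilowatt-per-rack figure above together with an assumed facility overhead (a PUE of 1.2, an illustrative value rather than a figure from any specific campus); the rack and energy totals are estimates under those assumptions, not reported numbers.

```python
# Back-of-envelope: what a one-gigawatt AI campus means in racks and energy.
# The 120 kW/rack figure comes from the text; PUE is an assumed illustrative value.

RACK_POWER_KW = 120        # continuous draw of one GB200 NVL72 rack (from the article)
CAMPUS_POWER_GW = 1.0      # size of the campus grid connection
PUE = 1.2                  # assumed power-usage effectiveness (cooling, conversion losses)
HOURS_PER_YEAR = 8_760

# Power left for IT load after facility overhead, in kilowatts.
campus_it_power_kw = CAMPUS_POWER_GW * 1e6 / PUE

# Number of racks that connection can feed.
racks = campus_it_power_kw / RACK_POWER_KW

# Annual energy at full draw, in terawatt-hours.
annual_energy_twh = CAMPUS_POWER_GW * HOURS_PER_YEAR / 1e3

print(f"Racks supported:    {racks:,.0f}")        # roughly 6,900 racks
print(f"Annual consumption: {annual_energy_twh:.1f} TWh/year")  # roughly 8.8 TWh
```

At full draw, a single such campus lands in the same range as the annual electricity consumption of a small European country, which is why the IEA's comparison to national consumption figures is not rhetorical flourish.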
Hyperscalers are now buying power, not just consuming it
The most visible signal of the constraint is the supply-side response. Microsoft's agreement to restart Three Mile Island Unit 1 under a long-dated power-purchase agreement was the headline event, but it is part of a broader pattern: long-tenor PPAs with operating nuclear, accelerated permitting work for advanced reactors, gas-turbine reservations measured in years, and direct investment in grid infrastructure that historically would have been the utility's problem.
The interconnection queue itself has become a competitive instrument. PJM, ERCOT and the major European TSOs all report multi-year backlogs for new large-load connections. A site that already has a substation and a confirmed grid slot is now worth substantially more than one with the same fibre and water but a five-year wait for power. That is reshaping where new capacity gets built — and where it explicitly does not.
The second-order effects are spreading through industries that have nothing obvious to do with AI. Industrial gas-turbine manufacturers are sold out into the next decade. Transformer lead times have stretched from roughly twelve months to several years. Specialised electrical-engineering capacity — the kind that designs medium-voltage switchgear for a one-gigawatt campus — has become a bottleneck in its own right.
For enterprise buyers the implication is concrete. The cloud regions in which the largest training and inference jobs will land over the next several years are no longer determined primarily by latency or data-residency preference. They are determined by which regions have power. Several of the regions that have hosted AI workloads enthusiastically are now politely declining new mega-loads. Several others — northern Sweden, parts of the US Gulf Coast, the Saudi NEOM corridor, certain Brazilian states — are aggressively positioning themselves as the new frontier.
The political consequences will be uncomfortable. AI demand is colliding with electrification, with industrial reshoring, and with climate commitments — all of which compete for the same megawatts. The result is that energy policy, historically a quiet specialism, is becoming one of the most important inputs to AI competitiveness. The labs that recognise this and integrate energy strategy into their roadmap will be the ones still scaling at the end of the decade. The ones treating it as someone else's problem will be the ones whose next training run waits eighteen months for a substation.