AI’s next energy bottleneck may not be a reactor, a solar farm, or a fusion breakthrough. It may be a transformer.
That sounds too boring to matter. That is exactly why it matters.
The AI infrastructure race is usually described as a contest for chips, models, data centers, and power generation. Those are real constraints. But between a power plant and a server rack sits a less glamorous layer of equipment: transformers, switchgear, substations, protection systems, power distribution units, interconnection studies, transmission upgrades, and the engineering work that makes electricity usable at the right voltage, reliability, and location.
This is where the story gets harder. You can announce a data center faster than you can connect it. You can raise capital faster than you can build a substation. You can order GPUs faster than the grid can absorb a new industrial load. AI may be digital at the interface, but its expansion is becoming brutally physical underneath.
The AI Power Story Has Moved Down the Stack
Vastkind has already covered the obvious version of this problem: AI data center power is becoming a race between fusion, geothermal, small modular reactors (SMRs), and other firm power options. That question still matters. If AI demand keeps growing, the industry will need more electricity.
But more generation is not the whole answer. Electricity is not useful to a data center merely because it exists somewhere on the system. It has to be delivered, stepped down, conditioned, protected, and connected through infrastructure that was not built for a sudden wave of gigawatt-scale digital load.
Harvard’s Belfer Center frames the issue bluntly: U.S. data center electricity use could rise from 176 TWh in 2023 to between 325 and 580 TWh by 2028, according to Lawrence Berkeley National Laboratory estimates cited in its policy brief. The same brief notes that in some regions, AI-driven demand is already outpacing available capacity, pushing companies toward project delays, private power contracts, and even on-site natural gas generation.
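As a back-of-envelope check on what that range implies, the compound annual growth rate over the five years from 2023 to 2028 works out to roughly 13 to 27 percent:

\[
\left(\frac{325}{176}\right)^{1/5} \approx 1.13, \qquad \left(\frac{580}{176}\right)^{1/5} \approx 1.27
\]

Very few pieces of grid infrastructure have ever been built out at a sustained pace near the top of that range.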
That changes the meaning of the AI boom. It is no longer only a software scaling story. It is a grid planning story.
Transformers Are Where Abstraction Ends
A transformer is not exciting in the way a model launch is exciting. It does not produce a demo. It does not trend. But it performs one of the basic acts that make modern electricity possible: changing voltage so power can move across the grid and then be used safely by the equipment that needs it.
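For readers who want the one equation involved: in an ideal transformer, the output voltage scales with the ratio of windings on each side. The specific voltages below are illustrative; actual transmission and distribution levels vary by region.

\[
\frac{V_{\text{secondary}}}{V_{\text{primary}}} = \frac{N_{\text{secondary}}}{N_{\text{primary}}}, \qquad \text{e.g.} \quad 138\ \text{kV} \times \frac{1}{10} = 13.8\ \text{kV}
\]

Every step between a transmission line and a server rack depends on hardware built around that relation, sized and wound for a specific load.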
Data centers need this hardware at scale. So do utilities, factories, renewable projects, substations, transmission expansions, and electrification projects. The problem is that all of those buyers are now competing for many of the same slow-to-manufacture components.
Data Center Knowledge, citing Wood Mackenzie, reported that the U.S. data center electrical equipment market could grow from about $20 billion in 2026 to $65 billion by 2030. The same reporting says annual transformer demand tied to data centers could rise from roughly 1,500 units today to more than 9,000 by the end of the decade, while substation transformer lead times have stretched from about 140 weeks in 2023 to more than 160 weeks in 2026.
That is the kind of number that cuts through hype. If a critical component takes roughly three years to arrive, then AI infrastructure is not moving at software speed. It is moving at heavy-industry speed.
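To make the scale concrete, here is the arithmetic behind those two figures, using only the numbers above:

\[
\frac{9{,}000\ \text{units}}{1{,}500\ \text{units}} = 6\times \ \text{demand}, \qquad \frac{160\ \text{weeks}}{52\ \text{weeks/year}} \approx 3.1\ \text{years}
\]

A sixfold increase in orders, flowing into a supply chain whose delivery clock is already measured in years.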
And heavy industry does not care how urgent your roadmap sounds.
The Hidden Competition Is With Everyone Else
The grid equipment squeeze is not just an AI problem. That is what makes it politically important.
Utilities need transformers to replace aging equipment and expand service. Renewable developers need them to connect new wind, solar, and storage projects. Industrial companies need them for factories, mining, manufacturing, and electrification. Cities need them for load growth. Electrified transport needs them. Now hyperscale data centers are arriving as giant, capital-rich buyers that can reserve equipment early and pay for speed.
That can distort the whole market.
If a handful of AI and cloud companies become dominant purchasers of transformers, switchgear, and power distribution equipment, the result is not just faster data center construction. It may also mean longer waits and higher costs for utilities, smaller industrial buyers, clean energy projects, and local grid upgrades.
This is where AI infrastructure becomes a public-cost issue. A data center campus may be privately owned, but the grid around it is not a private toy. When one class of load demands rapid connection, the costs and delays can spill outward into ratepayers, local reliability planning, and competing infrastructure priorities.
That is why the question is no longer simply whether AI companies can get enough electricity. It is who gets moved back in the queue when they do.
Interconnection Is Becoming a Credibility Test
The other bottleneck is not a factory component. It is a process.
Large loads and new generation projects have to move through interconnection studies, grid impact reviews, upgrade requirements, utility negotiations, transmission planning, and regulatory decisions. These processes exist for good reasons: connecting major loads badly can destabilize the system or impose costs on other users. But the timelines are now colliding with AI’s appetite for rapid buildout.
Data Center Knowledge notes that in PJM territory, projects reaching commercial operation in 2025 spent an average of about eight years in the interconnection queue, based on PJM data and filings. PJM has since moved toward reforms, but the deeper point remains: grid connection is now a strategic constraint.
This is why some companies are trying to secure power directly, co-locate generation with data centers, or design campuses around private energy arrangements. Those moves may help individual firms. They do not automatically solve the public grid problem.
In fact, they can create a two-track energy system: one for capital-rich compute infrastructure that can buy its way toward dedicated capacity, and one for everyone else waiting on the slower shared system.
The Grid Was Not Built for Instant Industrial Load
The uncomfortable truth is that data centers are becoming a form of heavy industry.
They do not look like steel mills from the outside. They have cleaner branding, more abstract products, and better software metaphors. But electrically, a large AI campus can behave like a serious industrial load. It needs huge amounts of reliable power, high uptime, cooling, backup systems, land, water in some regions, and grid coordination.
That is a very different public reality from the cultural image of AI as weightless intelligence in the cloud.
The cloud is not weightless. It is buildings, chips, substations, backup generators, transmission lines, transformers, construction crews, permitting fights, and utility planning models. The faster AI grows, the harder it becomes to hide that material base behind software language.
This does not mean the AI buildout should stop. It means the public conversation has to become more adult. If societies want AI infrastructure, they need to decide how much grid capacity it deserves, who pays for upgrades, which projects get priority, how emissions are handled, and how local communities are protected from being treated as passive hosts for global compute demand.
The Energy Transition Could Get Crowded Out
There is a climate risk here that is easy to miss.
If transformers, switchgear, and interconnection capacity are scarce, then every new load competes for the same hardware and the same queue positions. AI data centers are not merely competing with one another. They may be competing with clean energy projects trying to connect to the grid, utility upgrades needed for reliability, and electrification work needed to reduce fossil fuel use across transport, buildings, and industry.
That is a bad trade if it is handled blindly.
A society can decide that some AI infrastructure is worth the electricity and hardware it consumes. But that decision should be explicit. It should not happen by default because the richest buyers can move fastest through procurement and local incentive politics.
The risk is not only that AI uses more power. The risk is that AI absorbs the scarce grid-building capacity needed for everything else.
What a Serious Response Looks Like
The answer is not one magic technology. It is a stack of unglamorous fixes.
Grid equipment manufacturing has to expand. Utilities need better planning tools for large-load requests. Regulators need cost-sharing rules that prevent ordinary ratepayers from quietly underwriting speculative data center growth. Interconnection processes need to become faster without becoming reckless. Data center developers need stronger obligations around flexibility, on-site backup emissions, and upgrade funding.
There is also a demand-side question AI companies prefer not to foreground: not every workload deserves premium grid capacity at all times. Some compute can be scheduled. Some training can move. Some data centers may need to become more flexible grid participants rather than permanent baseload claimants. If AI companies want to be treated like critical infrastructure, they should be expected to behave like disciplined grid citizens.
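As a sketch of what "schedulable" could mean in practice, the toy Python below picks the start hour for a deferrable training job that leaves the most spare grid capacity. The headroom forecast, job length, and power draw are all invented for illustration; real programs would involve utility tariffs, market signals, and far messier constraints.

```python
# A minimal sketch of demand-side flexibility: scheduling a deferrable
# training job into the hours with the most grid headroom. All numbers
# here are hypothetical, chosen only to illustrate the decision shape.

def best_start_hour(headroom_mw, job_hours, job_load_mw):
    """Return the start hour that leaves the most spare grid capacity.

    headroom_mw: forecast spare capacity (MW) for each hour of the day.
    job_hours:   how many consecutive hours the job needs.
    job_load_mw: the job's constant power draw in MW.
    """
    best_start, best_margin = None, float("-inf")
    for start in range(len(headroom_mw) - job_hours + 1):
        window = headroom_mw[start:start + job_hours]
        # The binding constraint is the tightest hour in the window.
        margin = min(window) - job_load_mw
        if margin > best_margin:
            best_start, best_margin = start, margin
    return best_start, best_margin


if __name__ == "__main__":
    # Hypothetical 24-hour headroom forecast (MW): loose overnight,
    # tight through the evening peak.
    headroom = [90, 95, 100, 100, 95, 85, 70, 55, 45, 40, 40, 45,
                50, 50, 45, 35, 25, 20, 25, 40, 60, 75, 85, 90]
    start, margin = best_start_hour(headroom, job_hours=6, job_load_mw=30)
    print(f"Run the 6-hour, 30 MW job starting at hour {start} "
          f"(worst-hour margin: {margin} MW)")
```

The point of the toy is the shape of the decision, not the numbers: a flexible load asks when the grid can absorb it, while a baseload claimant never asks at all.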
That is where the next phase of the story may go. The winners will not simply be the firms with the most GPUs. They will be the firms, utilities, regions, and regulators that can turn compute growth into a physically credible infrastructure plan.
Why This Matters
AI’s grid bottleneck matters because it exposes the physical cost of digital ambition. Transformers, substations, switchgear, and interconnection queues are not background details anymore. They decide where data centers can be built, who pays for grid upgrades, which clean energy projects get delayed, and whether the AI boom becomes a public infrastructure burden. The future of AI will be shaped not only by model capability, but by the slow hardware that delivers electricity to the rack.
Conclusion
The most useful way to read this moment is simple: AI has entered the transformer queue.
That queue may turn out to be more important than another benchmark chart. Chips are scarce. Power is scarce. But the equipment that connects power to demand is becoming scarce too, and it is much harder to accelerate than a product launch cycle.
This is the part of the AI story that feels least glamorous and most real. If the industry cannot secure, finance, and fairly allocate the physical grid hardware behind its ambitions, then the cloud stops looking infinite. It starts looking like what it always was: infrastructure with limits.
Read next: AI Data Center Power: Fusion, Geothermal, and SMRs and AI Chip Sales: The Dataset That Exposes AI’s Power Grab.