Speed to Power, Need for Power

Two weeks ago, we at Halcyon hosted our first large in-person event on time and speed to power: the critical paths for companies building the energy infrastructure required to meet soaring demand for AI compute. It’s the right topic, at the right time, for the right industries: power generation and utilities on one end, and hyperscalers on the other, with throughlines from infrastructure developers, capital allocators, grid hardware manufacturers, and EPCs connecting both sides.

The simplicity of the ‘speed to power’ imperative does not make its implications any less complex, however. Something I have heard since our event, from both industry experts and casual observers of the biggest infrastructure boom in decades, is a profound question: where is the demand for speed?

I see that demand three ways, with increasing levels of abstraction. The least abstract: revenue from computing and cloud services. Microsoft, Google, and Amazon all reported their first quarter results last week, and each showed continued growth in its cloud services business. Amazon Web Services grew 28% year-on-year; Microsoft Azure grew 40%. Then there is Google Cloud, which increased its revenue from cloud services by 63% from a year earlier — its fastest growth rate on record. Growth rates this high in already-mature businesses mean total revenue doubles in three years or less...and that revenue can only be serviced with compute, and that compute can only serve when energized.
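The doubling claim is just compound growth: at a constant annual rate r, revenue doubles after ln(2)/ln(1+r) years. A quick back-of-the-envelope sketch, using the growth rates cited above:

```python
import math

def years_to_double(annual_growth_rate: float) -> float:
    """Years for revenue to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Growth rates cited above: AWS 28%, Azure 40%, Google Cloud 63%
for name, rate in [("AWS", 0.28), ("Azure", 0.40), ("Google Cloud", 0.63)]:
    print(f"{name}: {years_to_double(rate):.1f} years to double")
```

At 28% the doubling time is about 2.8 years; at 40%, about 2.1; at 63%, about 1.4 — hence "three years or less" across all three businesses.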


The second way the demand for speed manifests: capital expenditure expectations. Revenue is attention: it is demand made real, clear, paid for. Capex is intention, and well-telegraphed intention when made by the world’s biggest companies. The five hyperscalers building much of the world’s computing infrastructure invested more than $400 billion last year; Morgan Stanley Research estimates double that amount this year (more than $800 billion), and expects a tidy $1.1 trillion invested in 2027. None of this investment will be useful without power. But another way to view these ambitious intentions is that the inextricable link between power and compute means that power generation capacity will be built, and quickly.


There is a third way to view this demand as well — a glimpse further into the future, from those who plan tomorrow’s compute and from those who plan to use it. Google Cloud’s contracted future revenue now stands at $460 billion: not capex intentions or analyst projections, but signed agreements waiting to be served. Microsoft’s commercial backlog stretched to $625 billion in its most recent quarter, up 110% year-on-year. Together, just two hyperscalers carry roughly $1.1 trillion in contracted future revenue, which rhymes almost exactly with that Morgan Stanley 2027 estimate.

That symmetry is noteworthy. If capex is intention, contracted backlog is obligation. When both reach the trillions, the system is locking in. In a recent interview, Thomas Kurian, the CEO of Google Cloud, said: “I think for the next 10 years, there will always be more demand than supply” for compute (link). Even Google’s Gemini team, in Kurian’s recounting, is compute-constrained and would gladly take more compute if it could be had.

All of this demand has to land somewhere — and somewhere is increasingly constrained. In PJM, the largest U.S. wholesale market, RMI analysis shows the timeline from interconnection application to commercial operation has stretched to more than eight years today. PJM's most recent capacity auction cleared at the FERC-approved price cap (nearly 10x the clearing price two auctions earlier), signaling that supply has not kept pace with demand and cannot, on the current trajectory, catch up. GE Vernova's gas turbine backlog and slot reservations reached 100 GW at the end of Q1, with deliveries stretching into 2030; the company expects its order book to be sold out through 2030 by the end of this year. Siemens Energy and Mitsubishi Heavy Industries face similar dynamics, with delivery timelines for heavy-duty turbines now reaching seven years in some cases.

Speed to power, in other words, is not rhetorical. It is the gap between a hyperscaler signing a contract today and a megawatt arriving at the meter — and it is widening.

Today, we have only the merest acquaintance with what enterprise AI will be tomorrow. Whatever form it takes, it will run through an expanding channel of power-intensive compute. Upstream of that computational might is power, delivered as quickly as possible. Speed to power is paramount for hundreds of billions of dollars of capex that is already in flight, and perhaps a trillion dollars more that is already taxiing, so to speak. Speed to power is a strategic imperative today and tomorrow, because the need for power is readily apparent right now.