Too Cheap to Meter, Too Expensive to Build

Transmutation of the elements, unlimited power, ability to investigate the working of living cells by tracer atoms, the secret of photosynthesis about to be uncovered -- these and a host of other results all in 15 short years. It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter.

- Lewis Strauss, Chairman of the Atomic Energy Commission, 1954

Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.

- Sam Altman, OpenAI, 2025

Seventy-one years ago, Lewis Strauss, Chairman of the US Atomic Energy Commission, gave a lengthy speech to the National Association of Science Writers. The speech is wide-ranging and high-flying too, perhaps because the audience was itself focused on describing the leading edge of the present and considering how it becomes the future. Strauss’ speech is a piece of writing I have returned to in my two decades of energy and technology analysis, because it contains a phrase that has leapt the blood-brain barrier between science/technical/bureaucratic/political writing and popular parlance: “too cheap to meter.” 

In this instance, Strauss spoke of his vision of the future of atomic power, which was only just becoming commercialized. To him, the potential path was clear: increasing returns to scale and learning effects from building dozens, then hundreds, then thousands of nuclear power reactors would lead to a world where the marginal cost of delivered energy would be negligible.  

That is not what happened, of course. After decades of complication in design, planning, permission, and construction, nuclear power has become more expensive, not less. In Brian Potter’s memorable phrasing in 2023, it is an industry that displayed “negative learning.”  Rather than becoming too cheap to meter, it has become too expensive to build without state assistance. 

So, my rhetorical ears perked up while reading Sam Altman’s latest essay, “The Gentle Singularity,” where he says that the “average” ChatGPT query uses 0.34 watt-hours of electricity. His vision is that “As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity.”  

In other words, it should be the financial expression of its energetic margin: not the product of staffing and R&D costs, or the cost of hardware, but rather the cost of energy itself. The result, then, will be intelligence that is “too cheap to meter.” 
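To put a number on that vision (this is my arithmetic, not Altman's): at his 0.34 watt-hour figure and an assumed blended retail rate of $0.15 per kilowatt-hour, the marginal energy cost of a query is vanishingly small. A minimal sketch:

```python
# Back-of-the-envelope: the marginal electricity cost of one query.
# The 0.34 Wh figure is from Altman's essay; the electricity price is
# an illustrative assumption (roughly a US retail-average rate), not
# a number from the essay.

ENERGY_PER_QUERY_WH = 0.34        # watt-hours per "average" query (Altman)
ELECTRICITY_PRICE_PER_KWH = 0.15  # USD/kWh -- assumed; varies widely by market

energy_per_query_kwh = ENERGY_PER_QUERY_WH / 1000
cost_per_query = energy_per_query_kwh * ELECTRICITY_PRICE_PER_KWH

print(f"Marginal energy cost per query: ${cost_per_query:.7f}")
# -> Marginal energy cost per query: $0.0000510
# About five thousandths of a cent: at the margin, genuinely "too cheap
# to meter." The open question is everything that sits above the margin.
```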

I appreciate the vision, but at the same time, as an energy analyst, I also know the factors that could confound it. One is AI's booming energy demand itself, and the fact that hyperscalers are perhaps unique in their ability to absorb increasing energy costs (as long as the market rewards their growth and does not penalize their consumption). The consumption of one query may be marginal, but the sum of all of them requires building a new baseline — and thus, US electricity demand is growing at a faster rate than it has in decades.
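To see why the aggregate matters even when the margin is microscopic, here is a minimal scaling sketch; the query volume and the PUE overhead are hypothetical inputs of mine, not figures from the essay or from OpenAI:

```python
# Scaling the per-query figure up to a fleet-level load.
# Query volume and PUE are hypothetical, chosen only to show the arithmetic.

ENERGY_PER_QUERY_WH = 0.34       # watt-hours per query (Altman's figure)
QUERIES_PER_DAY = 1_000_000_000  # hypothetical volume -- vary this freely
PUE = 1.2                        # assumed power usage effectiveness (cooling, etc.)

daily_energy_mwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY * PUE / 1e6  # Wh -> MWh
average_load_mw = daily_energy_mwh / 24                               # continuous MW

print(f"Daily energy: {daily_energy_mwh:,.0f} MWh")
print(f"Average load: {average_load_mw:,.0f} MW continuous")
# -> Daily energy: 408 MWh
# -> Average load: 17 MW continuous
```

And inference serving at that volume is only one slice of data center demand; training runs, growing per-query compute, and everything else in the building sit on top of it, which is why fleet-level planning, not per-query accounting, drives the demand curve.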

Another confounding factor is the broad base of other drivers of increasing demand. As much as compute drives non-linear changes in electricity demand in diverse state electricity markets, it is not the only new large load on the grid. In other words, it’s not just data centers.

Yet another factor is a generally inflationary environment for the inputs to energy supply, from transformers to gas turbines to copper wiring to the cost of capital (not to mention the cost of hydrocarbons, which will not be exempted from the reversals of Inflation Reduction Act policies). Today’s US electricity market is one of tight supply, soaring demand, rising input costs, and specifically concentrated needs. None of these factors work, today, in favor of making electricity or its artificial intelligence derivative too cheap to meter. 

But it is easy to talk about the reasons it is difficult to lower the cost of electricity today, and with it, the cost of intelligence. It is also descriptive, not prescriptive, and this is an instance where providing ideas and not just observations is worthwhile. Here are mine:

First: better parameters around the shape of AI power demand, which is confoundingly unclear today. Andy Lubershane of Energy Impact Partners recently wrote a thoughtful essay on the four big, interrelated variables that determine AI power demand. Understanding them better, and solving for them individually and collectively, will help everyone move smarter and faster. 

Second: a concerted effort to build more low-cost, reliable, clean power through general system speed, not just specific asset speed. That means addressing planning and permission, interconnection, design, finance, and capital allocation through as many coordinated efforts as possible. The best developers of any kind of fixed asset make their business out of being first and fastest; making everyone faster will be of broad benefit, while still maintaining specific advantage (and reward) for those who arrive first.

Finally: a specific vision for an enormous AI compute landscape driven not by training, but by inference. Inference is the place where the cost of intelligence can track down to the cost of underlying energy. I think that the rewards for training are currently so great that we have not yet thoroughly explored what world-scale abundant inference might mean at an energy systems level. Will it be flexible? Distributed? Dispatchable? Each of those notions contains a multitude of energy possibilities, and each, ideally, travels along paths that, if not too cheap to meter, are not too expensive to build, either.
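As a thought experiment on the "dispatchable" case, here is a toy sketch: a deferrable batch-inference job scheduled into the cheapest hours of a day-ahead price curve. The prices, the job size, and the 1 MW draw are all invented; this illustrates the idea, not how any provider actually operates.

```python
# Toy illustration of "dispatchable" inference: place a deferrable
# batch job into the cheapest hours of a (hypothetical) day-ahead
# price curve. Assume the job draws 1 MW while running, so cost is
# simply the sum of prices over the chosen hours.

hourly_prices = [42, 38, 31, 28, 27, 30, 45, 60, 72, 80, 85, 88,
                 90, 92, 95, 97, 99, 110, 95, 80, 68, 55, 48, 44]  # $/MWh, invented

HOURS_NEEDED = 6  # the batch job needs 6 machine-hours today, any 6 hours

# The "dispatchable" strategy: pick the six cheapest hours.
cheapest = sorted(range(24), key=lambda h: hourly_prices[h])[:HOURS_NEEDED]

flexible_cost = sum(hourly_prices[h] for h in cheapest)
inflexible_cost = sum(hourly_prices[h] for h in range(9, 9 + HOURS_NEEDED))  # midday, regardless of price

print(f"Run during hours {sorted(cheapest)}")   # -> [0, 1, 2, 3, 4, 5]
print(f"Flexible cost:   ${flexible_cost}")     # -> $196
print(f"Inflexible cost: ${inflexible_cost}")   # -> $530
# Flexibility buys a large discount whenever prices are volatile --
# one reason world-scale inference could reshape how grids are planned.
```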

Subscribe for more content like this; reach out with questions: sayhi@halcyon.io; follow us on LinkedIn and Twitter