I had the fortune (neither good nor bad) of being an energy analyst in 2008 as Lehman Brothers collapsed, in part triggering a global financial crisis. I was an energy analyst again in 2020, when a pandemic caused the mechanics and circulatory system of much of the global economy to seize up.
Last week, San Francisco Climate Week gave us the opportunity to host our very own event at Halcyon HQ and enjoy having our distributed team gathered together in the same office. The collaboration was strong, the coffee was hot, the vibes were peak: a real startup experience.
My personal highlight was challenging our data science team to analyze the project work we’ve completed via Halcyon Helpdesk (HHD), our managed service offering that helps energy professionals supercharge their research projects. Since our data scientists also moonlight as our customer service team, they have both the quant skills and the first-hand context required to paint the full picture of what we’ve learned. Their analysis illuminated some interesting trends worth sharing.
For background — the way that HHD works is simple: energy professionals bring us their research projects (e.g., “Help me understand the cost of new gas turbine power plant deployment in the Southeast US”), and our data science team uses the platform and technology we’ve built to answer their question(s) faster and cheaper than they could themselves. Artificial intelligence is a big part of this, but make no mistake: human-in-the-loop is very much a feature here, not a bug.
Since January, our customers have brought HHD about 150 research projects. These projects and the questions within are the lifeblood of our organization because they provide very clear, very strong signals about what we should build. And, just to note, building AI for energy is hard! Terminology is unique, workflows are specialized, ambiguity is high, and the only thing more important than speed is comprehensiveness. This is why we value doing as many projects as possible — it’s how we learn where to start.
What did the data science team learn?
Of those ~150 projects brought to HHD, about 100 of them fit neatly into a single “analysis type.” Within those, the biggest bucket of projects we received focused on “Market Analysis” and “Utility Programs.” One example here would be “Give me details about the demand response programs run by gas utilities in Massachusetts.” Another example could be something like “Compare the utility growth projections, reserve margins, and planned retirements across all IOUs in California, Oregon and Washington.”
The next biggest bucket of projects fell into the “Rate Cases” category, very closely followed by the “Comment Analysis” category. For Rate Cases, we might see something like “What was the delta between what was requested and ultimately approved, and how has that changed over the last 10 years?” For “Comment Analysis,” we usually see something like “Please map out the key stakeholders, whether they are for or against, and their sentiment as it relates to [program].”
Not only does this type of categorization help us understand what questions could be valuable to answer, but it also helps us understand what our technology is good at and what needs to improve.
A very important learning that we knew but had not formalized until this analysis: not all projects are created equal. Many “questions” are really bundles of smaller queries, and our system handles them better individually than all at once. A request like “What is the difference between NV Energy’s 2024 and 2025 IRPs?” is hard for an LLM to answer without a lot of context. So we break it down into a series of questions about each document, e.g. “What are the drivers of demand?”, and then compare the results.
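As a rough illustration only (the function names and document labels below are hypothetical, not Halcyon’s actual pipeline), the decomposition idea can be sketched like this: one compound comparison request is expanded into per-document sub-queries, each answered in isolation, and the answers are then grouped by question for side-by-side comparison.

```python
def plan_comparison(documents, sub_questions):
    """Expand one compound request into (document, sub-question) pairs,
    each small enough to answer with a single well-scoped query."""
    return [(doc, q) for doc in documents for q in sub_questions]

def answer_one(doc, q):
    # Stand-in for a retrieval + LLM call scoped to a single document.
    return f"{doc}: answer to '{q}'"

docs = ["NV Energy 2024 IRP", "NV Energy 2025 IRP"]
questions = ["What are the drivers of demand?"]

# Answer each sub-query in isolation, then group the answers by
# question so the two documents can be compared side by side.
results = {}
for doc, q in plan_comparison(docs, questions):
    results.setdefault(q, []).append(answer_one(doc, q))
```

The payoff is that each sub-query carries only one document’s context, which is exactly the situation where an LLM answers most reliably.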
Beyond these compound queries, there are other reasons that certain analysis types are more challenging than others. Some of them — like a table-centric rate case request — are heavy lifts for optical character recognition (OCR) technology. Not to mention the fact that rate case analysis requires cross-referencing other documents and testimony to understand what was proposed, what was accepted, and why!
Other projects or questions are simply too vague to answer well. Much like in the legal profession, LLMs prefer the specific to the general. Naming an organization and focusing on a specific geography greatly improves a query’s chance of success. While our team has gotten quite good at helping educate our customers on how to phrase queries effectively, there are tooltips and other filters we can (and eventually will) build into our product directly. That sounds theoretical, but practically, think Google’s “Did you mean [this]?”
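To make the “Did you mean?” idea concrete, here is a minimal, purely hypothetical sketch of such a guardrail (the lookup tables and heuristic are invented for illustration): before a query runs, check whether it names an organization and a geography, and suggest a refinement if either is missing.

```python
# Assumed lookup tables for illustration -- a real system would use
# proper entity recognition rather than substring matching.
KNOWN_ORGS = {"NV Energy", "PG&E", "Duke Energy"}
KNOWN_GEOS = {"Nevada", "California", "Massachusetts"}

def refine_hint(query):
    """Return a refinement suggestion if the query lacks an
    organization or a geography, or None if it looks specific enough."""
    has_org = any(org in query for org in KNOWN_ORGS)
    has_geo = any(geo in query for geo in KNOWN_GEOS)
    if has_org and has_geo:
        return None
    missing = []
    if not has_org:
        missing.append("an organization (e.g. a specific utility)")
    if not has_geo:
        missing.append("a geography (e.g. a state or region)")
    return "Consider specifying " + " and ".join(missing) + "."
```

A query like “NV Energy demand response programs in Nevada” would pass through untouched, while a vaguer one would get nudged toward the specifics that make results reliable.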
As we build out our platform, we’re going to build automation that makes these hard tasks easier. Those of you who are Halcyon Alerts subscribers (sign up here) can imagine how we can take what is effectively a data-feed-in-your-inbox and give you additional filters and controls to further discover and explore via a data-feed-in-your-browser. This seemingly small change is actually quite powerful due to the constraints inherent in the email medium that don’t exist in a more traditional application-based UI.
If you’re an energy professional who needs help answering these complicated, important questions (or, if you’re a technology enthusiast who has automated complex workflows and you know the energy industry), Halcyon Helpdesk would love to hear from you: sayhi@halcyon.io
Comments or questions? We’d love to hear from you: sayhi@halcyon.io, or find us on LinkedIn and Twitter.