Ensuring reliability in a world of variable renewables and energy-limited storage
Ask any grid operator their top priority and the answer is simple: reliability. Our society has come to expect, and require, uninterrupted power—even on the hottest days and coldest nights, and through the longest storms. These expectations do not change as the grid transitions to high shares of variable renewable energy; reliability remains paramount.
With increased variability and uncertainty, how can we ensure there are enough resources to serve electricity customers when and where they need it?
The answer lies in resource adequacy analysis—a form of grid planning that ensures that grid operators have the resources available to balance supply and demand—taking into account uncertainties like unexpected generator outages, fluctuating load, and changes in the weather, which are becoming increasingly important. Evaluating these uncertainties statistically, grid planners project resource needs to reach an acceptably low level of risk of capacity shortages.
It’s a big task—it determines how much investment our power grids require, how much new generation is built, and which generators can retire. In regulated utilities, resource adequacy analysis sets the planning reserve margin used to signal the need for new generation and guides procurement decisions. In some restructured markets, the planning reserve margin is often the justification for capacity markets. In other restructured markets, the planning reserve margin is advisory and, along with scarcity price signals and expectations in the spot market, influences the resource decisions of market participants or the requirement on loads to procure long-term contracts.
As the power system’s resource mix changes, resource adequacy becomes more nuanced and more complicated. For example, some utilities and grid planners are using the existing resource adequacy paradigm to justify existing or new resources that may not actually be needed. This further confuses the discussion of how resource adequacy should be performed for modern power systems.
To overcome these challenges, we took a fresh look at resource adequacy and went back to its first principles. We asked ourselves a few simple questions—if we started from scratch, without 100 years of power system planning and conventional approaches, how would we evaluate resource adequacy for modern power systems? Is there a better way to evaluate risk and reliability in a power system with increasing wind, solar, storage, and load flexibility? What are the first principles that would ensure that enough resources are available for modern power systems, regardless of the technologies at play?
The result—five principles of resource adequacy for modern power systems:
Principle 1: Load participation fundamentally changes the resource adequacy construct
The historical notion that a specific amount of generation capacity is required to meet a static load is no longer relevant. The proliferation of energy storage, demand response, electric vehicles, and advanced rate design brings with it new options for load flexibility, and these options should be evaluated in a similar context as generator resources, including their uncertainty and availability.
Real-time markets, with a high degree of participation from price-responsive demand, may shift the resource adequacy planning challenge away from reliability needs toward economic considerations, as customers can determine and differentiate which loads matter most.
Principle 2: Modeling chronological operations across many years is essential
Historically, resource adequacy analysis had a relatively simple task: to ensure there is enough capacity installed to meet load. To simplify things further, the analysis was often limited to times of peak load, on the assumption that if you had enough capacity to cover your highest-load hours, you probably had enough for the rest of the year. Determining whether the portfolio could actually be operated was treated not as a reliability concern but as an economic one, so it was left out of resource adequacy analysis.
Variable renewable energy (VRE) and energy-limited resources, like storage and demand response, are changing this construct. Periods of risk are no longer isolated to peak loads but may instead shift to extreme weather events or to the hours when the sun is setting. There is growing recognition that all intervals matter for resource adequacy analysis. This also requires consideration of chronological operations and scheduling, to ensure that energy storage and demand response will have energy available when needed, and can fully recharge, to carry the system through reliability challenges.
Resource adequacy analysis requires new methods, and the result is increased complexity. Many years of synchronized hourly weather and load data are required to accurately reflect the correlations among wind and solar generation, outages, and load, as well as their inter-annual variability. Chronological Monte Carlo analysis via production cost simulations is the new gold standard for modern resource adequacy analysis.
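The core idea can be sketched in a few lines of code. The toy system below (all profiles, outage rates, and battery parameters are illustrative assumptions, not data from any real study) draws many chronological trials, tracks the battery's state of charge hour by hour, and counts the hours in which supply falls short:

```python
import random

HOURS = 24 * 7    # one illustrative week per trial; real studies simulate full years
N_TRIALS = 1000   # number of Monte Carlo draws

def run_trial(rng):
    """One Monte Carlo draw: chronological hourly dispatch of a toy system."""
    # Assumed system: 100 MW thermal fleet with a 5% forced-outage chance,
    # a daytime solar profile, and a 20 MW / 80 MWh battery.
    thermal = 100.0 if rng.random() > 0.05 else 60.0  # forced-outage draw
    soc = 80.0                                        # battery state of charge (MWh)
    shortfall_hours = 0
    for h in range(HOURS):
        hour = h % 24
        solar = 40.0 if 8 <= hour <= 17 else 0.0
        load = 90.0 + (30.0 if 17 <= hour <= 21 else 0.0) + rng.uniform(-5, 5)
        surplus = thermal + solar - load
        if surplus >= 0:
            soc = min(80.0, soc + min(surplus, 20.0))  # recharge, 20 MW power limit
        else:
            discharge = min(-surplus, 20.0, soc)       # discharge within limits
            soc -= discharge
            if -surplus > discharge:                   # battery could not cover it
                shortfall_hours += 1
    return shortfall_hours

rng = random.Random(42)
lole_hours = sum(run_trial(rng) for _ in range(N_TRIALS)) / N_TRIALS
print(f"Expected shortfall hours per simulated week: {lole_hours:.2f}")
```

The chronology is what matters here: whether the battery gets through the evening peak depends on whether earlier hours left it enough room to recharge, which a peak-hour-only analysis cannot capture.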
Principle 3: Quantifying size, frequency, and duration of capacity shortfalls is critical to finding the right resource solutions
Conventional resource adequacy analysis is also based on largely arbitrary measures of capacity shortfall risk. The conventional metric, loss-of-load expectation (LOLE), quantifies the expected amount of time in a given year when capacity might be insufficient to meet load. A common rule-of-thumb reliability criterion used throughout much of the industry is one day of outage in 10 years, or 0.1 days per year LOLE. If the system has an LOLE greater than 0.1 days per year, capacity is added until the system meets this criterion.
But LOLE is an opaque metric when used in isolation. It only provides a measure of total and average amount of shortfalls over a study period and does not characterize the magnitude, duration, or frequency of specific outage events. For example, a shortfall of 1% of load for 10 hours is measured the same way as a shortfall of 10% of load for 10 hours. These disparate events are not differentiated by conventional resource adequacy metrics even as they represent dramatically different situations in terms of options for meeting demand in today’s power system.
New metrics should quantify the specific characteristics of outage events, including frequency, duration, and magnitude (in MW and MWh). While some existing metrics, like expected unserved energy (EUE), do capture all three dimensions, the cost of mixing them into a single number is that you cannot distinguish between a long shallow event, a short deep event, and many short shallow events. These metrics must move beyond expected values and characterize the distribution of events, placing emphasis on individual, rather than aggregate, event characteristics.
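The problem with a single aggregate number can be shown directly. The sketch below (the two shortfall traces are hypothetical) constructs two systems with identical EUE but very different event-level statistics:

```python
# Two hypothetical hourly shortfall traces (MW short each hour) with the
# same energy not served, but very different event characteristics.
long_shallow = [10] * 10                       # one 10 h event, 10 MW deep -> 100 MWh
short_deep = [0] * 4 + [50, 50] + [0] * 4      # one 2 h event, 50 MW deep -> 100 MWh

def event_stats(shortfall):
    """Split an hourly shortfall trace into contiguous events and summarize them."""
    events, current = [], []
    for mw in shortfall:
        if mw > 0:
            current.append(mw)
        elif current:
            events.append(current)
            current = []
    if current:
        events.append(current)
    return {
        "eue_mwh": sum(shortfall),                            # aggregate metric
        "n_events": len(events),                              # frequency
        "max_duration_h": max((len(e) for e in events), default=0),  # duration
        "max_depth_mw": max((max(e) for e in events), default=0),    # magnitude
    }

print(event_stats(long_shallow))  # same EUE (100 MWh)...
print(event_stats(short_deep))    # ...but different duration and depth
```

Both traces score 100 MWh of EUE, yet the first calls for a long-duration resource and the second for a fast, deep one. Only event-level statistics reveal the difference.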
These new metrics will allow planners to select mitigations and resources that are appropriately sized to fit the identified system needs and to avoid over-procurement of resources.
Principle 4: There is no such thing as perfect capacity
As principle 3 suggests, some capacity shortfalls may be made up of frequent but short-duration events, while others may be infrequent but long-duration events. Mitigations should be specified accordingly.
Different resources bring different capabilities. Battery energy storage may be well suited to solve frequent but short-duration shortages, while demand response may be better suited for large, infrequent events. Additional resources like long-duration storage, hydro, and thermal generation may be required for long-duration capacity shortages spanning days or weeks.
Resource adequacy analyses for modern power systems should create a framework that reflects the fact that there is no perfect resource; thermal capacity is not always necessary or the only option; and all resources have limitations based on weather, outages, flexibility constraints, and common points of failure.
Principle 5: Reliability criteria should not be arbitrary, but transparent and economic
For decades, grid planners in most regions have relied on a one-day-in-10-years reliability criterion to plan their system resource needs. This criterion dates to the 1960s, and even then it was based more on experience and judgment than on the costs ratepayers bear for increased reliability. One day in 10 years is simply a line in the sand that grid planners predetermine as the threshold.
Resource adequacy analysis needs to include the economic or financial aspect of reliability. A single reliability criterion, absent economic considerations, is unjustified. Grid planners and regulators should have a clear understanding of the costs associated with achieving different reliability targets.
Adding an economic consideration will allow for a direct comparison to other forms of reliability mitigations like distribution system upgrades and storm hardening of infrastructure. This consideration can also be used to ensure that the value to the customer is worth the cost of additional investment and that customers are not being asked to pay more for reliability than it is worth.
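One common way to frame the economics is to value avoided unserved energy at a value of lost load (VOLL) and compare it with the cost of the capacity that avoids it. The sketch below uses assumed, illustrative numbers; real VOLL estimates and capacity costs vary widely by system and customer class:

```python
# Illustrative economics: is the next increment of capacity worth its cost?
VOLL = 10_000            # value of lost load, $/MWh (assumed for this sketch)
CAPACITY_COST = 90_000   # annualized cost of new capacity, $/MW-yr (assumed)

def worth_building(avoided_eue_mwh_per_yr, mw_added):
    """Compare reliability benefit (avoided EUE valued at VOLL) to capacity cost."""
    benefit = avoided_eue_mwh_per_yr * VOLL
    cost = mw_added * CAPACITY_COST
    return benefit >= cost

# 100 MW that avoids 2,000 MWh/yr of unserved energy: $20M benefit vs. $9M cost
print(worth_building(2000, 100))   # worth it
# The same 100 MW avoiding only 500 MWh/yr: $5M benefit vs. $9M cost
print(worth_building(500, 100))    # not worth it
```

Framing the criterion this way makes the trade-off explicit: rather than building to an arbitrary LOLE line, planners can show whether each increment of reliability is worth what customers would pay for it.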
What comes next?
While these five principles make sense in theory, applying them in practice is more complicated. That is the next step of the Redefining Resource Adequacy Task Force—to implement these principles in a set of analyses, using the RTS-GMLC test system, to illustrate how refined resource adequacy analysis can better address challenges of reliability in a modern power system, one with increased variable renewable energy, energy storage, and demand-side participation.
We will also be sharing this work at the upcoming ESIG Fall Technical Workshop in the System Planning Working Group, Resource Adequacy Task Force, and hope that you will attend. In the meantime, we want to hear from you! Redefining decades-old methods and processes can be messy, and the more input the better. Leave a comment below, or reach out to email@example.com and let us know what you think.
This article is reposted, with permission, from the original source: https://www.esig.energy/five-principles-of-resource-adequacy-for-modern-power-systems/
The Redefining Resource Adequacy Task Force is collaborating closely with industry experts, including Aaron Bloom, Gord Stephen, Wesley Cole, Armando Figueroa-Acevedo, and Aidan Tuohy. I would like to acknowledge their valuable input and support regarding these first principles and the forthcoming modeling efforts.