
Energy Policy

Volume 49, October 2012, Pages 243-252

Distribution-level electricity reliability: Temporal trends using statistical analysis

https://doi.org/10.1016/j.enpol.2012.06.001

Abstract

This paper helps to address the lack of comprehensive, national-scale information on the reliability of the U.S. electric power system by assessing trends in U.S. electricity reliability based on information reported by electric utilities on power interruptions experienced by their customers. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. We find that the reported annual average duration and annual average frequency of power interruptions have been increasing over time at a rate of approximately 2% annually. We find that, independent of this trend, installation or upgrade of an automated outage management system is correlated with an increase in the reported annual average duration of power interruptions. We also find that reliance on IEEE Standard 1366-2003 is correlated with higher reported reliability than reliability reported without using the IEEE standard. However, we caution that we cannot conclude that reliance on the IEEE standard caused or led to higher reported reliability, because we could not separate the effect of reliance on the standard from other utility-specific factors that may be correlated with it.

Highlights

  • We assess trends in electricity reliability based on information reported by electric utilities.

  • We use rigorous statistical techniques to account for utility-specific differences.

  • We find modest declines in reliability when analyzing the interruption duration and frequency experienced by utility customers.

  • Installation or upgrade of an OMS is correlated with an increase in the reported duration of power interruptions.

  • We find that reliance on IEEE Standard 1366 is correlated with higher reported reliability.

Introduction

Since the 1960s, the U.S. electric power system has experienced a major electricity blackout about once every 10 years. Each has been a vivid reminder of the importance society places on the continuous availability of electricity and has led to calls for changes to enhance reliability. At the root of these calls are judgments about what reliability is worth and how much should be paid to ensure it.

In principle, information on the actual reliability of the electric power system and how proposed changes would affect reliability ought to help to inform these judgments. The use of this type of information in local decision making, for example between an investor-owned utility and its state public utilities commission, is common. Yet, comprehensive, national-scale information on the reliability of the U.S. electric power system is lacking.

This paper helps to address this information gap by assessing trends in U.S. electricity reliability based on information reported by electric utilities on power interruptions experienced by their customers. Prior published investigations of U.S. electric power system reliability have focused primarily on the reliability of the bulk power system. Yet, interruptions originating on the bulk power system represent only a small fraction of the power interruptions experienced by electricity consumers, as indicated in Hines et al. (2009) and Eto and LaCommare (2008). The vast majority of interruptions experienced by electricity consumers are caused by events affecting primarily the electric distribution system. Both Hines et al. (2009) and Eto and LaCommare (2008) report evidence suggesting that interruptions originating within and limited to portions of distribution systems account for more than 90% of the average number of interruptions annually. Eto and LaCommare (2008) further suggest that these interruptions account for roughly half of the average total annual minutes (i.e., the duration) of interruptions. Thus, analysis of power interruptions originating on the bulk power system alone addresses only a portion of the total reliability experienced by the electricity customer.

Utilities routinely collect information on electricity consumers’ total reliability experience. This information almost always includes all power interruptions experienced by electricity consumers, both those originating on the bulk power system and those originating from within and limited to portions of the electricity distribution system. The main metrics that utilities use to report this information focus separately on the annual average frequency and duration of power interruptions experienced by all customers, taken as a whole.
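For reference, the two standard metrics referred to later in this paper, SAIFI (frequency) and SAIDI (duration), take the familiar IEEE Std 1366 form shown below; this restates the standard's definitions rather than the paper's own notation:

\[ \mathrm{SAIFI} = \frac{\sum_i N_i}{N_T}, \qquad \mathrm{SAIDI} = \frac{\sum_i r_i N_i}{N_T} \]

where \(N_i\) is the number of customers interrupted by sustained interruption event \(i\), \(r_i\) is the restoration time (duration) of event \(i\), and \(N_T\) is the total number of customers served.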

Unfortunately, analyzing utility-reported reliability metrics is not straightforward because the metrics are not defined consistently. Previous work examining electric utility practices for reporting reliability information found significant variation (Eto and LaCommare, 2008). Despite the existence of standards – albeit voluntary ones – promulgated by the industry’s professional society, the Institute of Electrical and Electronics Engineers (IEEE), differences in definition and classification of power interruption events make direct comparisons among data from different utilities problematic and potentially misleading. In addition, the methods employed by utilities to collect information on power interruptions are evolving from manual record-keeping to automated record-keeping with the adoption of automated outage management systems (OMS).

In this paper, we analyze up to 10 years of reported electricity reliability information collected from a convenience sample of 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. Using these data, we quantify trends in electricity reliability and examine the relationship between these trends and the characteristics of the utilities, the climates in which their customers reside, utility reporting practices, and the adoption of advanced technologies for recording power interruptions. Our analysis uses statistical techniques that take into account differences in reliability reporting practices and record-keeping methods, which could introduce measurement error or bias, and other factors among electric utilities, so that we can explore the effects of these differences.

The questions we examine and the motivations for examining them are as follows:

  1. Are there trends in reported electricity reliability over time? The focus of prior published investigations of U.S. electric power system reliability has been primarily on the reliability of the bulk power system. Amin (2008) suggests that the reliability of the bulk power system has been declining over time, based on a review of the frequency and size of reported events. The response by Hines et al. (2009) rejects that hypothesis based on a rigorous statistical examination of the same data. Yet, as noted, analyses such as these are based on a partial record of customers’ total reliability experience. Taking explicit account of specific differences in utility reporting practices and methods (and other factors) and using comprehensive information on all power interruptions experienced by consumers, our analysis seeks to determine whether statistically significant temporal trends can be identified.

  2. How are trends in reported electricity reliability affected by the installation or upgrade of an automated outage management system? McGranaghan et al. (2006) speculated that adoption of an OMS led one utility to report lower reliability because customer power interruptions were under-reported prior to adoption of the OMS. This is an example of measurement error. Our analysis explores the effect of installing or upgrading an OMS and how adoption of such advanced reporting systems is correlated with changes in reported reliability over time.

  3. How are trends in reported electricity reliability affected by the use of IEEE Standard 1366-2003 (IEEE Power Engineering Society, 2004)?

Eto and LaCommare (2008) compared reliability metrics reported by a convenience sample of 11 electric utilities for a single year using both historic company practices and IEEE Standard 1366-2003. Based on this small sample, those authors found no evidence of systematic measurement bias resulting from use of the IEEE standard. To further examine whether reliance or non-reliance on the standard introduces measurement bias, the current analysis seeks to update the 2008 findings based on a larger sample spanning both older and more recent years of data.
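A central element of IEEE Standard 1366-2003 is its statistical classification of “major event days” (MEDs), the so-called 2.5-beta method, which is what allows SAIDI and SAIFI to be reported both with and without major events later in this paper. The Python sketch below illustrates the idea under simplifying assumptions (zero-SAIDI days are dropped before taking logarithms, and a five-year daily SAIDI history is passed in directly); the standard itself governs the exact procedure.

import numpy as np

def med_threshold(daily_saidi_history):
    # 2.5-beta method of IEEE Std 1366-2003 (simplified sketch): alpha and
    # beta are the mean and sample standard deviation of the natural logs
    # of daily SAIDI over (nominally) the five preceding years of data.
    values = np.asarray(daily_saidi_history, dtype=float)
    logs = np.log(values[values > 0.0])          # drop zero-SAIDI days
    alpha, beta = logs.mean(), logs.std(ddof=1)
    return float(np.exp(alpha + 2.5 * beta))     # T_MED

def split_major_event_days(daily_saidi, t_med):
    # Days with SAIDI above T_MED are classified as major event days; the
    # remaining days feed the "without major events" versions of the metrics.
    daily_saidi = np.asarray(daily_saidi, dtype=float)
    is_med = daily_saidi > t_med
    return daily_saidi[is_med], daily_saidi[~is_med]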

The remainder of this paper is organized as follows: in Section 2, we review the data collected for our analysis. In Section 3, we describe and present findings from estimating statistical models, which take into account utility-specific differences that might influence time trends in reported reliability. In Section 4, we summarize our main findings from our statistical analysis and discuss next steps.

Section snippets

Data collection and review

The data we collected for this study include the following:

  • utility-reported reliability metrics,

  • temperature-related weather data,

  • retail electricity sales,

  • installation or upgrades of an OMS, and

  • adoption of IEEE Standard 1366-2003 for reporting reliability metrics.

This section describes the sources for these data and reviews selected aspects of the data we collected on reliability metrics.
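As a rough illustration of how these sources might be assembled into the utility-year panel analyzed in Section 3, the Python sketch below merges hypothetical input files; every file and column name here is a placeholder rather than one of the study's actual data sources.

import pandas as pd

# Hypothetical inputs, for illustration only.
reliability = pd.read_csv("utility_reliability.csv")   # utility_id, year, saidi, saifi
weather     = pd.read_csv("weather.csv")                # utility_id, year, cooling_degree_days
sales       = pd.read_csv("retail_sales.csv")           # utility_id, year, mwh_sales
practices   = pd.read_csv("reporting_practices.csv")    # utility_id, year, oms_upgrade, uses_ieee_1366

# Build an unbalanced utility-year panel: not every utility reports every year.
panel = (reliability
         .merge(weather,   on=["utility_id", "year"], how="left")
         .merge(sales,     on=["utility_id", "year"], how="left")
         .merge(practices, on=["utility_id", "year"], how="left"))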

Introduction to the statistical methods used in the analysis

The conventional statistical method used to analyze short, unbalanced panel data is multivariate regression. Cameron and Trivedi (2009) refer to the specific type of panel data we analyzed as “short” because the data structure has many entities (i.e., utilities), but only a few time periods (compared to the number of entities). Our panel data are unbalanced because they do not contain reliability metrics for every year from all utilities (Wooldridge, 2002). Multivariate regression models
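Although the snippet above is truncated, the general approach it describes, multivariate regression on a short, unbalanced utility-year panel, can be sketched in Python with statsmodels. This is a minimal illustrative specification rather than the paper's actual model: a log-linear trend with utility fixed effects (so the coefficient on year approximates an average annual percentage change in SAIDI) and standard errors clustered by utility. Column names continue the hypothetical panel assembled in the Section 2 sketch.

import numpy as np
import statsmodels.formula.api as smf

panel = panel.dropna()  # keep the cluster groups aligned with the estimation sample

# C(utility_id) absorbs time-invariant differences among utilities; the
# coefficient on `year` is roughly the average annual fractional change
# in SAIDI (e.g., 0.02 would correspond to about a 2% increase per year).
fe_model = smf.ols(
    "np.log(saidi) ~ year + cooling_degree_days + oms_upgrade"
    " + uses_ieee_1366 + C(utility_id)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["utility_id"]})

print(fe_model.params["year"])
print(fe_model.summary())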

Are there utility-specific differences in reported electricity reliability?

Table 2 presents the results from the application of the F-test to the reliability metrics. The table indicates that both one-way (utility only) and two-way (utility and year) effects are statistically significant (at the 0.01% level) for all four reliability metrics—SAIDI and SAIFI both with and without major events. That is, there are very strong correlations between the utility and the values of the reliability metrics as well as between the utility and the year when correlated to
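The one-way and two-way F-tests summarized in Table 2 amount to nested-model comparisons. The sketch below (again Python with statsmodels, reusing the hypothetical panel from above) tests whether adding utility effects, and then utility plus year effects, significantly improves on a pooled model of log SAIDI; an analogous comparison applies to each of the four metrics.

import numpy as np
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

pooled  = smf.ols("np.log(saidi) ~ 1", data=panel).fit()
one_way = smf.ols("np.log(saidi) ~ C(utility_id)", data=panel).fit()            # utility only
two_way = smf.ols("np.log(saidi) ~ C(utility_id) + C(year)", data=panel).fit()  # utility and year

# Each row of the ANOVA table is an F-test of the added block of effects.
print(anova_lm(pooled, one_way, two_way))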

Summary and interpretation of findings and next steps

This study finds that there has been a modest, yet statistically significant, secular trend of declining reported reliability over the past 10 years. We applied rigorous statistical methods both to confirm that there were utility-specific differences among electricity reliability reports and to take explicit account of these differences in exploring correlations between reported reliability metrics and other factors. Applying these methods, we find that there are statistically

Acknowledgment

The work described in this report was funded by the Office of Electricity Delivery and Energy Reliability of the U.S. Department of Energy under Contract no. DE-AC02-05CH11231. In addition, we thank two anonymous reviewers for their helpful comments on the initial draft of this manuscript.


Cited by (6)

  • The impact of variable renewable energy resources on power system reliability

    2021, Energy Policy
    Citation Excerpt:

    The two indices are said to provide a consistent approach for utilities interested in measuring the reliability of their electricity distribution system (Eto et al., 2012; Malla 2013). Following Eto et al. (2012) and Malla (2013), we separated disruptions that occurred during major event days (MEDs) from disruptions that did not happen during MEDs. Because we were interested in understanding the impact of using WPV on reliability, our analysis only considered values for SAIDI and SAIFI recorded on non-MEDs.

  • Modeling power loss during blackouts in China using non-stationary generalized extreme value distribution

    2020, Energy
    Citation Excerpt:

    Based on North American blackout reports, Hines et al. [13] evaluated blackouts in North America and showed evidence of seasonal trends. Eto et al. [16,17] and Larsen et al. [15,18] found that reported reliability of American electric utilities was getting worse. In recent decades, China’s power grid has evolved into an unprecedentedly large and complex system [19,20].

  • Recent trends in power system reliability and implications for evaluating future investments in resiliency

    2016, Energy
    Citation Excerpt:

    In other words, reported reliability was getting worse. However, the Eto et al. [13,14] paper was not able to identify statistically significant factors that were correlated with these trends. The authors suggested that “future studies should examine correlations with more disaggregated measures of weather variability (e.g., lightning strikes and severe storms), other utility characteristics (e.g., the number of rural versus urban customers, the extent to which distribution lines are overhead versus underground), and utility spending on transmission and distribution maintenance and upgrades, including advanced (“smart grid”) technologies” [13,14].

  • Watching the grid: Utility-independent measurements of electricity reliability in Accra, Ghana

    2021, Proceedings of the 20th International Conference on Information Processing in Sensor Networks, IPSN 2021 (co-located with CPS-IoT Week 2021)