
Looking back at the August 2003 blackout

On August 14, 2003 at 3:05:41 pm eastern daylight time (EDT), a 345-kV transmission line in northern Ohio, loaded to 44% of its capacity, sagged too close to a tree, faulted, and tripped off-line. With the loss of this transmission line, the Eastern Interconnection was no longer compliant with North American Electric Reliability Corporation (NERC) reliability criteria. Simply put, the system could no longer be assured of withstanding the next credible contingency, violating the so-called “N-1” reliability criterion. Unfortunately, nobody knew the system had crossed this important boundary, and thus no corrective action was taken.
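
To make the “N-1” idea concrete, here is a minimal sketch that screens a tiny, entirely hypothetical three-bus network: for each single-line outage it re-solves a simplified (DC) power flow and reports any surviving line pushed past its limit. The line data, limits, and injections below are invented for illustration and have nothing to do with the actual Ohio system.

```python
import numpy as np

# Hypothetical lines: (from_bus, to_bus, susceptance, limit in MW)
LINES = [
    (0, 1, 10.0, 150.0),
    (1, 2, 10.0, 150.0),
    (0, 2, 10.0, 150.0),
]
# Hypothetical net injections in MW (generation minus load); bus 0 is the slack.
INJECTIONS = np.array([200.0, -50.0, -150.0])


def dc_flows(lines, injections):
    """Solve a DC power flow and return the MW flow on each line."""
    n = len(injections)
    B = np.zeros((n, n))
    for f, t, b, _ in lines:
        B[f, f] += b
        B[t, t] += b
        B[f, t] -= b
        B[t, f] -= b
    theta = np.zeros(n)  # bus 0 is the slack; its angle stays at zero
    theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])
    return [b * (theta[f] - theta[t]) for f, t, b, _ in lines]


def n_minus_1_violations(lines, injections):
    """For every single-line outage, report any post-contingency overload."""
    violations = []
    for out in range(len(lines)):
        remaining = [line for i, line in enumerate(lines) if i != out]
        flows = dc_flows(remaining, injections)
        for (f, t, _, limit), flow in zip(remaining, flows):
            if abs(flow) > limit:
                violations.append((lines[out][:2], (f, t), abs(flow), limit))
    return violations


for lost, overloaded, flow, limit in n_minus_1_violations(LINES, INJECTIONS):
    print(f"Losing line {lost} overloads line {overloaded}: "
          f"{flow:.0f} MW > {limit:.0f} MW limit")
```

On the real Eastern Interconnection this kind of screening runs continuously against thousands of elements, which is why operators depend on automated contingency analysis rather than manual checks.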

There are a couple of reasons why nobody knew the system had entered the danger zone. Earlier that afternoon, the energy management system failed at the control center operated by First Energy, an electric utility company with service territory in northern Ohio. After technicians rebooted the system, an undetected problem with the alarm processor persisted. As a result, First Energy operators were not notified when their transmission line tripped off-line, which was the first of a series of events that led to the eventual wide-scale blackout.

Also that afternoon, the Midwest Independent System Operator (MISO) had a problem with its state estimator. Among other functions, MISO serves as the “reliability coordinator” for First Energy and several other utility companies in the region. Reliability coordinators were established by NERC as a result of lessons learned from the western blackouts in the summer of 1996; their purpose is to enhance real-time information sharing and coordination among the transmission asset owners and operators. The MISO state estimator had failed to converge when it received erroneous topology information associated with an unrelated transmission line fault at another company earlier in the day. Troubleshooting had been completed, but the state estimator had not yet been restored to its normal mode of automatic operation. The significance of having the state estimator off-line is that contingency analysis, which relies on the system model produced by the state estimator, was suspended. In other words, the tools in place to analyze the “what ifs” of the system were not doing their job because they didn’t have an accurate picture of the current state of the grid.
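
The sketch below, continuing the hypothetical three-bus example, hints at why a topology error is so disruptive. A simple linear (DC) weighted-least-squares estimator fitted to measurements that do not match its assumed network model produces large residuals, so the estimate is rejected and the downstream contingency analysis has no trustworthy model to run on. Every number and the rejection threshold are invented for illustration; a real AC state estimator is nonlinear and iterative, so a mismatch like this can prevent convergence altogether rather than merely inflating residuals.

```python
import numpy as np

# Hypothetical 3-bus system; the state is the two non-slack bus voltage
# angles (the slack bus angle is fixed at zero). Measurements are line
# flows, and H maps the state to the flows the assumed model predicts.
b = 10.0  # susceptance of every line (hypothetical)

# Assumed topology: lines 0-1, 1-2, and 0-2 are all in service.
H_assumed = np.array([
    [-b, 0.0],   # flow on 0-1 = b * (theta0 - theta1), with theta0 = 0
    [b, -b],     # flow on 1-2 = b * (theta1 - theta2)
    [0.0, -b],   # flow on 0-2 = b * (theta0 - theta2)
])

# Measurements taken after line 0-1 has actually tripped (hypothetical MW
# values): its flow reads zero and the other flows have shifted.
z_measured = np.array([0.0, -50.0, 200.0])

# Equal-weight least squares: find the angles that best explain the data.
theta_hat, *_ = np.linalg.lstsq(H_assumed, z_measured, rcond=None)
residuals = z_measured - H_assumed @ theta_hat

print("estimated angles:", theta_hat)
print("residual norm:", round(float(np.linalg.norm(residuals)), 1))

# A large residual norm means the assumed model cannot explain the
# measurements; an EMS would flag bad data or a topology error, and the
# estimate -- and the contingency analysis built on it -- could not be used.
if np.linalg.norm(residuals) > 10.0:  # hypothetical rejection threshold
    print("State estimate rejected; contingency analysis suspended.")
```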

Over the next hour, first a few and then progressively more people became aware that something wasn’t right with the grid in northern Ohio, as two more 345-kV lines tripped at 3:32 and 3:41 pm after sagging too close to the trees beneath them (both were loaded below their emergency ratings), and eventually sixteen 138-kV lines in the area began tripping at 3:39 pm. As the system was collapsing, phone calls poured into the First Energy control center: from large industrial customers complaining of extraordinarily low voltage, from power plant personnel describing voltage spikes and swings, and from neighboring utilities and reliability coordinators trying to assess the situation. A First Energy system operator, erroneously believing that the problems existed elsewhere, said at 3:45 pm, “AEP must have lost some major stuff.” Interestingly, a First Energy shift supervisor informed his manager at about the same time that it looked as if they were losing their system. Information was not being shared effectively.

Slowly, more and more people were becoming aware that indeed a big problem was brewing, but nobody in a position of authority or responsibility was able to effectively put the correct pieces of the puzzle together in time to resolve the problem. It wasn’t until the lights went out in the First Energy control room that the First Energy personnel had a solid indication that the problem was indeed in their service territory (and not somewhere else). By then, the sequence of events unfolding that afternoon had progressed to the point where an uncontrolled cascading failure was under way. It had become too late for human intervention to save the grid.

The cascading outages began to accelerate at 4:05:57 pm EDT, when for the first time a 345-kV transmission line tripped for a reason other than a short circuit to ground. With the loss of the aforementioned transmission lines, more northbound power was forced through the remaining transmission assets south of Cleveland. The resulting high current, coupled with declining voltage, appeared to a protective relay as a low apparent impedance, low enough to look like a fault within its “zone 3” reach setting. Shortly thereafter, many more transmission lines and power plants were automatically tripped off-line by protective relays designed to protect them from damage. By the time the cascading failure ended, about 4:13 pm EDT, more than 50 million people in cities including New York City, Detroit, Cleveland, and Toronto had no electricity.
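
A rough sketch of that relay arithmetic follows: a distance relay estimates the apparent impedance it “sees” as voltage divided by current, and when heavy loading drives the current up while voltage sags, that impedance can shrink into the zone 3 reach even though no fault exists. The voltage, current, power factor, and 40-ohm reach used here are hypothetical round numbers, not the settings of any actual relay.

```python
import cmath
import math

ZONE3_REACH_OHMS = 40.0  # hypothetical zone 3 reach setting


def apparent_impedance(voltage_kv_ll, current_a, power_factor=0.95):
    """Apparent impedance (ohms) seen by the relay on one phase."""
    v_phase = voltage_kv_ll * 1e3 / math.sqrt(3)     # line-to-neutral volts
    angle = math.acos(power_factor)                  # current lag angle (rad)
    i_phasor = current_a * cmath.exp(-1j * angle)    # lagging load current
    return v_phase / i_phasor


def zone3_trips(voltage_kv_ll, current_a):
    z = abs(apparent_impedance(voltage_kv_ll, current_a))
    return z < ZONE3_REACH_OHMS, round(z, 1)


# Normal conditions: nominal voltage, moderate current -> impedance well
# outside zone 3, so the relay stays quiet.
print(zone3_trips(voltage_kv_ll=345.0, current_a=1000.0))   # (False, ~199 ohms)
# Stressed conditions: depressed voltage and very heavy current -> the
# apparent impedance collapses into zone 3 and the relay trips the line
# even though there is no fault on it.
print(zone3_trips(voltage_kv_ll=310.0, current_a=4800.0))   # (True, ~37 ohms)
```

Zone 3 elements are deliberately given a long reach so they can serve as remote backup protection, which is exactly why heavy post-contingency flows and sagging voltage can encroach on them even without a fault.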

One of the key lessons learned from the August 14, 2003 blackout is the importance of constantly maintaining situational awareness—knowing “the big picture” at all times. The Pacific Northwest National Laboratory, operated by Battelle for the U.S. Department of Energy, is actively utilizing its Electricity Infrastructure Operations Center (EIOC) and leveraging industry partnerships to research, develop, demonstrate, and deploy advanced tools and training to minimize the likelihood and/or severity of future blackouts. Researchers are focusing on improving situational awareness by collecting more and richer data from a broader area and developing tools that allow timely analyses and therefore quick and appropriate action.

The U.S.-Canada Power System Outage Task Force blackout investigation report chronicles the detailed sequence of events, presents the root causes, and provides extensive recommendations.
