Transforming power grid operations and planning via high-performance computing
The term high-performance computing (HPC) may evoke visions of biological, social, or environmental systems, or images of supercomputers analyzing human genomes, massive data analysis revealing the planet's carbon evolution, "Deep Blue" defeating world chess champion Garry Kasparov, or Watson's victory on Jeopardy. Such achievements are exhilarating, yet little thought is given to the electricity that powers these brilliant computational feats and to the power system that generates and delivers it.
Since the first commercial power plant was built in 1882 in New York City, the power grids in North America and elsewhere have evolved into giant systems comprising hundreds of thousands of components spanning many thousands of miles. These power grids have been called the most complex machines humanity has ever built. The reliable and relatively inexpensive electricity they supply is the foundation of all other engineering advances and of humankind's prosperity. This critical electric infrastructure has served us remarkably well but is likely to see more changes over the next decade than it has seen over the past century. In particular, the widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and other emerging technologies will require fundamental changes in the operational concepts and principal components of the grid. The grid is in an unprecedented transition that poses significant challenges in grid operation, planning, and policy decisions. These challenges stem from more data, larger systems, and more complex models. Central to this transition, power system computation needs to evolve accordingly to provide fast results for power grid operation and planning. Such evolution is possible only by taking advantage of ubiquitous high-performance parallel computers, including multi-core computers and large-scale cluster machines.
Bringing HPC to power grid applications is not simply a matter of adding more computing units to a problem. It requires careful design and coding to match an application with computing hardware. Sometimes, alternative or new algorithms must be adopted to maximize the benefit of HPC. At Pacific Northwest National Laboratory (PNNL), we believe that traditional power grid algorithms can be reformulated for high-performance computing platforms. Parallel computing is an essential ingredient in taking advantage of high-performance multi-processor computing platforms. The improved performance is expected to have a significant impact on how power grids are operated and managed, ultimately leading to better reliability and asset utilization for the power industry.
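Contingency analysis, one of the application areas listed below, illustrates why grid computations parallelize well: each "what-if" outage case can be solved independently. The sketch below shows that task-partitioning pattern only; the solver is a hypothetical placeholder, not PNNL's actual code, and a production screener would run full AC power flows on process pools or MPI ranks rather than threads.

```python
# Minimal sketch of embarrassingly parallel contingency screening.
# solve_contingency is a toy stand-in for a real power flow solve;
# threads suffice here, but a real HPC screener would use processes or MPI.
from concurrent.futures import ThreadPoolExecutor


def solve_contingency(outage_id: int) -> tuple:
    """Placeholder for a full power flow with one element removed.
    For illustration only, flag every 10th outage as a limit violation."""
    violation = (outage_id % 10 == 0)
    return outage_id, violation


def screen_contingencies(n_outages: int, n_workers: int = 4) -> list:
    """Distribute the N-1 contingency list across workers and return
    the outages that cause violations, in order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(solve_contingency, range(1, n_outages + 1))
    return [oid for oid, bad in results if bad]


if __name__ == "__main__":
    print(screen_contingencies(50))  # → [10, 20, 30, 40, 50]
```

Because the cases share no state, speedup scales almost linearly with worker count until the result-gathering step dominates, which is why massive contingency analysis was an early target for HPC in power systems.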
PNNL's primary areas of research in high-performance computing for power systems include:
- Parallel State Estimation
- Massive Contingency Analysis and Visualization Tool
- Look-ahead Dynamic Simulation
- Dynamic State Estimation
- Advanced Method to Explore Voltage Stability Boundary
- Advanced Methods for Energy Market Optimization