by Jennifer Chu, MIT News
Each summer, power grids are pushed to their limits, as homes and offices crank up the air conditioning in response to rising temperatures. A single failure in the system — such as a downed power line or a tripped relay — can cause power outages throughout a neighborhood or across entire towns.
For the most part, though, a failure in one part of the grid won’t bring down the entire network. But in some cases, two or more seemingly small failures that occur simultaneously can ripple through a power system, causing major blackouts over a vast region.
Such was the case on Aug. 14, 2003, when 50 million customers lost power in the northeastern United States and Ontario — the largest blackout in North American history. More recently, in July 2012, India experienced the largest power outage on record, as 700 million people — nearly 10 percent of the world’s population — went without power following an initial tripped line and a relay problem.
Massachusetts Institute of Technology (MIT) researchers have developed an algorithm that identifies the most dangerous pairs of failures among the millions of possible failures in a power grid. The algorithm pares down all of the possible combinations to the sets most likely to cause widespread damage.
The researchers tested their algorithm on data from a mid-sized power grid model consisting of 3,000 components, and within 10 minutes the algorithm had labeled 99 percent of the failures as relatively safe. The remaining 1 percent represented pairs of failures that would likely result in large blackouts if left unmonitored.
The speed of the new algorithm is unmatched by conventional alternatives, says MIT professor Konstantin Turitsyn. “This algorithm can be used to update what are the events, in real time, that are the most dangerous,” he says. Turitsyn says the algorithm identifies spheres of influence around a power failure. If two failures are relatively close, their spheres of influence can overlap, intensifying the response and raising the probability of a catastrophic cascade.
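The article does not publish the algorithm itself, but the “spheres of influence” idea can be illustrated with a small sketch. The sketch below is purely hypothetical: it assumes each failed component has a planar grid location and a fixed influence radius, and it flags only those failure pairs whose spheres overlap, discarding the rest as relatively safe.

```python
# Hypothetical sketch of sphere-of-influence pruning (not the MIT algorithm).
# Assumption: each failure has an (x, y) location and a fixed influence radius;
# a pair is flagged as potentially dangerous only if the two spheres overlap.
from itertools import combinations
from math import hypot

def dangerous_pairs(failures, radius):
    """failures: dict mapping component id -> (x, y) grid location.
    Returns the pairs whose influence spheres (shared radius) overlap."""
    flagged = []
    for (a, pa), (b, pb) in combinations(failures.items(), 2):
        # Two spheres of equal radius overlap when their centers are
        # closer than twice the radius.
        if hypot(pa[0] - pb[0], pa[1] - pb[1]) < 2 * radius:
            flagged.append((a, b))
    return flagged

grid = {"line1": (0.0, 0.0), "line2": (1.0, 0.0), "relay7": (10.0, 10.0)}
print(dangerous_pairs(grid, radius=1.0))  # → [('line1', 'line2')]
```

In this toy setup, only the two nearby line failures are kept for closer analysis; the distant relay pairs are pruned, mirroring how the real algorithm reportedly discards 99 percent of pairs as relatively safe.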
“This algorithm, if massively deployed, could be used to anticipate events like the 2003 blackout by systematically discovering weaknesses in the power grid,” says Columbia University professor Daniel Bienstock.
DCL: CEP technology applies very well to building smart grids. Manufacturers have been using CEP on smart grid projects for some years now. One of the issues has always been the speed of the CEP pattern recognition.

The Northeast U.S. blackout is an example I have written about several times. It started with small events that cascaded into bigger events. But none of the regional grid controllers had a complete picture of what was happening, although they all knew “something was not right.” It is an example of the need for fast automated pattern recognition that “sees problems on the grid early,” before they get out of hand.

Causality between events is another issue. When researchers tell you they are looking for “pairs of problems,” they are probably also telling you that they do not have a causal model for grid events that can tell them when two or more events are causally related. One would think that building a causal model for power grids would not be such a hard task.
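The kind of early-warning pattern recognition described above can be sketched as a minimal CEP-style rule: raise an alert whenever two grid events occur close together in time on components that can directly affect each other. Everything here is an illustrative assumption — the event fields, the time window, and the connectivity map are invented for the sketch, not taken from any real CEP product.

```python
# Minimal CEP-style sketch (illustrative only): alert when two grid events
# occur within a short time window on components assumed to be connected.
from collections import deque

WINDOW_SECONDS = 60.0  # assumed alert window

# Hypothetical adjacency: which components can directly affect each other.
CONNECTED = {("lineA", "relayB"), ("relayB", "lineC")}

def related(c1, c2):
    return (c1, c2) in CONNECTED or (c2, c1) in CONNECTED

def cascade_alerts(events):
    """events: iterable of (timestamp, component) in time order.
    Yields pairs of events that are close in time AND on connected components."""
    recent = deque()
    for t, comp in events:
        # Drop events that have fallen outside the sliding window.
        while recent and t - recent[0][0] > WINDOW_SECONDS:
            recent.popleft()
        for _t0, c0 in recent:
            if related(c0, comp):
                yield (c0, comp)
        recent.append((t, comp))

stream = [(0.0, "lineA"), (12.0, "relayB"), (300.0, "lineC")]
print(list(cascade_alerts(stream)))  # → [('lineA', 'relayB')]
```

The connectivity map is a stand-in for the causal model the comment calls for: the temporal window alone says “two things happened close together,” while the map is what lets the rule claim the events may be causally related rather than coincidental.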