Big data at work – from plant performance to customer interaction.
Ramesh (“Rudy”) Shankar is a research professor at the Energy Production Infrastructure Center at the University of North Carolina at Charlotte. Formerly he was an executive with the Tennessee Valley Authority.
It wasn’t long ago that engineers monitoring power plants to keep operations within a specified range (neither exceeding nor dropping below alarm levels) would analyze only the unit’s “historian” – the component that stores equipment data, typically temperature, pressure, or flow readings captured at set intervals. Violating these limits on certain critical components is cause for concern: it could lead to an unscheduled plant outage or, worse, endanger the safety of plant personnel.
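The kind of limit checking described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s historian software; the tag name, alarm limits, and sample values are all hypothetical.

```python
# Minimal sketch of historian-style limit checking.
# A historian stores tag readings (temperature, pressure, flow) at set
# intervals; operators watch that each reading stays inside its alarm band.
# All names and values below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class AlarmBand:
    low: float   # low alarm limit
    high: float  # high alarm limit

def find_violations(readings, band):
    """Return indices of samples that fall outside the alarm band."""
    return [i for i, value in enumerate(readings)
            if value < band.low or value > band.high]

# Example: steam-temperature samples (deg C) captured at fixed intervals
band = AlarmBand(low=480.0, high=540.0)
samples = [510.2, 522.7, 545.1, 530.0, 475.9]
violations = find_violations(samples, band)
print(violations)  # sample 2 is above the band, sample 4 is below it
```

Note that this check is purely retrospective: it flags a reading only after the limit has already been crossed, which is exactly the shortcoming the next paragraph describes.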
Unfortunately, however, the plant historian was rarely prognostic. More often it was too late, alerting operators only after limits had been exceeded and a breakdown was already under way. And while such failures fortunately are rare, they can cause extensive damage, loss of life, and sometimes a huge setback to the industry. Notable accidents include the TVA Gallatin rotor burst in 1974, which resulted in fatalities and a turbine missile destroying the building; the penstock failure at the 1,200 MW Bieudron hydroelectric plant in Switzerland in 2000, which crippled the generator and caused widespread crop damage; the turbine failure at Eskom’s Duvha plant in 2003, shortly after the unit had been returned to service following a malfunction; and the catastrophic 2009 accident at Russia’s 6,400 MW Sayano-Shushenskaya hydroelectric plant, where rising vibrations in a turbine blew off the entire casing cover and destroyed the turbine hall.