Sustainable Energy and Environment

The importance of paying proper attention to the potential hazards lurking in our complex process plants was underscored, once again, by the BP Deepwater Horizon oil spill disaster. The April 2010 spill resulted from an offshore oil platform explosion that killed 11 people. The government estimates that about 5 million barrels of oil escaped from the well over three months, until it was capped in July 2010. The months-long spill, combined with adverse effects from the cleanup activities, caused extensive damage to marine and wildlife habitats, to the fishing and tourism industries, and to human health; according to an article in Scientific American, the damage could continue for years to come. As of September 2013, civil and criminal settlements, as well as payments to a trust fund, had cost BP about $42 billion, making it the most expensive industrial accident in history.

In the history of chemical plant accidents, a few disasters have served as wake-up calls. The Flixborough accident in the U.K. in 1974, in which a Nypro plant explosion killed 26 people, was one such call. Another was Piper Alpha, an offshore oil platform operated by Occidental Petroleum, which exploded in 1988, killing 167 people and causing about $2 billion in losses. The worst was Union Carbide's Bhopal Gas Tragedy of 1984, in which some 5,000 people were killed and about 100,000 were seriously injured by the accidental release of methyl isocyanate.

More recently, in March 2005, a hydrocarbon vapor cloud explosion at the isomerization unit of BP's Texas City refinery in Texas City, Texas, killed 15 workers, injured more than 170 others, and has resulted in legal settlements of about $1.6 billion to date.

As noted on the main page, accident investigations have shown that systemic failures rarely stem from a single component or personnel failure. Union Carbide, for example, initially claimed that the Bhopal Gas Tragedy was caused by a disgruntled employee who had sabotaged the equipment. But investigations have shown, again and again, that major disasters arise from several layers of failure, ranging from low-level personnel to senior management to regulatory agencies. Such investigations have revealed that safety procedures at the failed facilities had been deteriorating for weeks, if not months, before the accident. Another common failure is that not all serious potential hazards had been identified: a thorough process hazards analysis, which would have exposed the hazards that later led to disaster, was often never conducted. Yet another common cause is inadequate training of plant personnel in handling serious emergencies. On top of all this, the ineffectiveness of regulatory, rating, and auditing agencies must be considered; all of these were significant culprits in recent disasters.

Indeed, the importance of addressing non-technical common causes, such as those described above, as an integral part of systems safety engineering was pointed out as far back as 1968 by Jerome Lederer, the former director of the NASA Manned Flight Safety Program for Apollo, who wrote: “System safety covers the entire spectrum of risk management. It goes beyond the hardware and associated procedures to system safety engineering. It involves: attitudes and motivation of designers and production people, employee/management rapport, the relation of industrial associations among themselves and with government, human factors in supervision and quality control, documentation on the interfaces of industrial and public safety with design and operations, the interest and attitudes of top management, the effects of the legal system on accident investigations and exchange of information, the certification of critical workers, political considerations, resources, public sentiment and many other non-technical but vital influences on the attainment of an acceptable level of risk control. These non-technical aspects of system safety cannot be ignored.”

As noted in the research tab, coping with such complexity requires concepts, methodologies, and automation tools to model, analyze, predict, explain, control, and manage the behavior of such systems and their agents and components in various environments. Our Center works on addressing these challenges along the following directions: (i) Complexity Science, (ii) Multi-Perspective Modeling, and (iii) Hybrid Intelligent Systems for Real-time Decision-Making.

©2013 Columbia University