2020 MKO Process Safety Symposium Proceedings
Table of Contents
Welcome
______________________________________________________________________________________
2020 Symposium Sponsors
aeSolutions, ACS Chemical Health & Safety
______________________________________________________________________________________
2020 Virtual Exhibitors
Baker Risk
Dust Safety Science
Gateway Consulting
The Institution of Chemical Engineers (IChemE)
Operational Sustainability, LLC
______________________________________________________________________________________
In Association with IChemE
______________________________________________________________________________________
Awards and Scholarships
Trevor Kletz Merit Award
The Harry H. West Memorial Service Award
Lamiya Zahin Memorial Safety Scholarships
______________________________________________________________________________________
2020 MKOPSC Consortium Members
______________________________________________________________________________________
Frank P. Lees Memorial Lecture
______________________________________________________________________________________
Keynote Speaker
Mission-Oriented Leadership
Katherine A. Lemos, Chairperson and CEO, U.S. Chemical Safety Board
______________________________________________________________________________________
Plenary Panel
Integrating Pandemic Preparedness and Response Into Business Continuity
and Risk Management Planning
Gerald Parker, Paul Thomas, Malick Diara, Richard Wells and Stewart Behie
Symposium Coordinators
______________________________________________________________________________________
Symposium Track Chairs
______________________________________________________________________________________
Symposium Technical Support Team
______________________________________________________________________________________
Program
______________________________________________________________________________________
Session Summaries
______________________________________________________________________________________
Manuscripts
______________________________________________________________________________________
Speaker Biographies
______________________________________________________________________________________
Symposium Sessions
Day 1 Track 1
Importance of Process Safety Time in Design
Shanmuga Prasad Kolappan
______________________________________________________________________________________
Limitations of Layers of Protection Analysis (LOPA) in Complicated Process
Systems
Abdulaziz Alajlan and Arafat Aloqaily
______________________________________________________________________________________
On the Usage of Ontologies for the Automation of HAZOP Studies
Johannes Single, Jürgen Schmidt, and Jens Denecke
______________________________________________________________________________________
An Efficient and Effective Approach for Performing Cost Benefit Analysis,
with Two Case Studies
Henrique M. Paula, Donald K. Lorenzo, and Marcelo Costa Jr.
______________________________________________________________________________________
Does your facility have the flu? Use Bayes rule to treat the problem instead
of the symptom
Keith Brumbaugh
______________________________________________________________________________________
Integrating the PHA and Facility Siting into a Site Risk Assessment Life
Cycle
Colin Armstrong and Sam Aigen
A Framework for Automatic SIS Verification in Process Industries using
Digital Twin
Nitin Roy
______________________________________________________________________________________
The use of Bayesian Networks in Functional Safety
Paul Gruhn
______________________________________________________________________________________
My Vision of Future Instrumented Protective Systems
J. Gregory Hall
______________________________________________________________________________________
Overlooked Reverse Flow Scenarios
Gabriel Martiniano Ribeiro de Andrade, Christopher Ng, and Derek Wood
______________________________________________________________________________________
Failure Under Pressure: Proper Use of Pressure Relief Device Failure Rate
Data based on Device Type and Service
Todd W. Drennen, Michael D. Moosemiller
______________________________________________________________________________________
Additional Engineering and Documentation to Reduce Pressure Relief
Mitigation Cost
Gabriel Martiniano Ribeiro de Andrade and Kartik Maniar
______________________________________________________________________________________
Day 1 Track 2
Virtual Reality Process Safety in Counterfactual Thinking
Kianna Arthur
______________________________________________________________________________________
Is Attentional Shift the Problem (or something else) with Hazard Statement
Compliance? An Experimental Investigation Using Eye-Tracking Technology
S. Camille Peres, Jonathan Walls, and Joseph W. Hendricks
______________________________________________________________________________________
Risk management entails decision making: Does decision making in
complex situations come down to somebody’s gut feeling?
Hans Pasman, Bill Rogers and Stewart Behie
______________________________________________________________________________________
Decision Making using Human Reliability Analysis
Fabio Kazuo Oshiro
______________________________________________________________________________________
Improving Industry Process Safety Performance through Responsible
Collaboration
Ryan Wong, Shanahan Mondal, Mawusi Bridges, and Alyse Keller
______________________________________________________________________________________
How Much Does Safety Culture Change Over Time?
Stephanie C. Payne, Rhian Murphy, and Stefan V. Dumlao
______________________________________________________________________________________
Administering a Safety Climate Assessment in a Multicultural Organization:
Challenges and Findings
Atif Mohamed Ashraf, Luc Vechot, Stephanie C. Payne and Tomasz Olewski
______________________________________________________________________________________
A Comparison of Procedure Quality Perceptions, Procedure Utility,
Compliance Attitudes, and Deviation Behavior for Digital and Paper Format
Procedures
Joseph W. Hendricks, S. Camille Peres and Trent Parker
______________________________________________________________________________________
Practical Writing Tips to Prevent Human Error When Following Procedures
Mónica Philippart
______________________________________________________________________________________
The Impact of Hazard Statement Design in Procedures on Compliance
Rates: Some Contradictions to Best (or Common) Practices
Joseph W. Hendricks, S. Camille Peres, Timothy J. Neville and Cara A.
Armstrong
______________________________________________________________________________________
Day 1 Track 3
RBI Study using Advanced Consequence Assessment for Top-side
Equipment on Offshore Platforms
Chetan Birajdar and Bahram Fada
______________________________________________________________________________________
Indicators of an Immature Mechanical Integrity Program
Derek Yelinek
______________________________________________________________________________________
Remember the à la Mode: Lessons Learned from Ammonia Release at
Frozen Foods Warehouse
Matthew S. Walters, Sean J. Dee, and Russell A. Ogle
______________________________________________________________________________________
Process Related Incidents with Fatality- Trends and Patterns
Zohra Halim, T. Michael O’Connor, Noor Quddus, and Stewart Behie
______________________________________________________________________________________
Application of Mind Mapping to Organize and Recall Potential Hazards
T. Michael O’Connor and Ingry Ruiz
______________________________________________________________________________________
Would a HAZOP, LOPA or STPA have Prevented Bhopal?
Howard Duhon and Pranav Kalantri
______________________________________________________________________________________
Predictive Process Safety Analytics and IIoT - PSM Plus: The AI+PSM
Analytic Framework
Michael Marshall
______________________________________________________________________________________
Guidance to Improve the Effectiveness of Process Safety Management
Systems in Operating Facilities
Syeda Zohra Halim, Stewart Behie, and Noor Quddus
______________________________________________________________________________________
Unified Wall Panel System (UWPS) - A Value Engineering Solution for
Protective Construction in the Petroleum Industry
Scott Hardesty, A. Mangold, and K. White
______________________________________________________________________________________
Protect Process Plants from Climate Change
Victor H. Edwards
______________________________________________________________________________________
Process Safety Implications in a Changing Environment
Trish Kerin
______________________________________________________________________________________
A Critical Evaluation of Industrial Accidents Involving Domino Effect
Ravi Kumar Sharma and Nirupama Gopalaswami
___________________________________________________________________________________
The objectives for holding this annual symposium are three-fold. First, this annual event provides an independent and unbiased forum for the exchange of ideas and discussion, where industry, academia, government agencies and other stakeholders come together to discuss critical issues of research and advances in the field of process safety. Second, it provides an excellent platform for networking, whereby process safety professionals can build peer-to-peer connections for the future and also learn about the various services others can offer. Finally, we strongly believe that as we navigate the uncertain waters of COVID-19, good, robust research can help solve the complex and intriguing problems the industry faces today. Identifying these problems and exchanging ideas and opinions with the expertise brought together through discussions at the symposium will provide context to help resolve the issues at hand.
In addition, participants in the Symposium can take this opportunity to become acquainted with the cutting-edge research done at the Mary Kay O’Connor Process Safety Center.
These proceedings contain the symposium program, the papers presented at the symposium and submitted before the deadline, and other informative items from the center.
We wish you maximum benefit from this symposium and strongly encourage you to participate in the virtual discussions. Please feel free to contact me or other center personnel with your ideas and input regarding the symposium and other activities of the center. We look forward to welcoming everyone to our face-to-face symposium at Texas A&M University in October 2021. Best wishes for a safe return to normalcy.
THANK YOU!
Your support is integral to the success of this important event.
As an industry, our inability to learn from past incidents and demonstrate that process safety is improving
has led to the project Process Safety in the 21st Century and Beyond. The aim of this project is to envision
better process safety by outlining efforts that each stakeholder can take.
How was the project undertaken?
Gaining a global perspective of the key challenges in process safety is the first important step. The
challenges were considered across four stakeholders: industry, academia, regulators, and society. To
determine the challenges, a series of workshops at international symposia were undertaken, including in
the UK (with input from other European countries), North America, Asia, Australia/New Zealand, and the
Middle East. Various methods of consultation were used, but the key questions remained consistent. In
process safety:
These questions were answered by professionals from various levels in industry, academia, and regulatory
bodies. Once the challenges were identified, a top five list was drawn up for each stakeholder group.
Our goal with this document is to lay out a series of actions to be undertaken at various levels and across
all stakeholders to improve process safety because people have a right to not get hurt. To enable this
vision, this roadmap is a call to action to all stakeholders and not just process safety professionals.
We invite you to look at the opportunities and think about how you can influence them and positively
impact process safety. Every professional is obliged to improve process safety because engineering and
science are essential to us all and it must be sustainable in all senses of the word, including process safety.
If we, as engineers, do not develop new strategies for continuous improvement, the engineering profession
will become irrelevant to society and the need for process safety will become extinct, thus increasing
process safety incidents. A question that needs to be answered is where this roadmap is intended to take
us. The simple answer is that the roadmap and the associated journey are focused towards improvements
in process safety performance, which will ultimately lead us to our vision of zero incidents.
In Association with IChemE
The Institution of Chemical Engineers (IChemE) is the global professional membership organization for
chemical, biological and process engineers and other professionals involved in the chemical, process and
bioprocess industries. With more than 44,000 members in over 120 countries, and offices in Australia, New Zealand, Singapore, Malaysia and the UK, IChemE aims to be the organization of choice for chemical engineers.
We promote competence and a commitment to best practice, advance the discipline for the benefit of society and support the professional development of our members. We are the only organization
licensed to award Chartered Chemical Engineer and Professional Process Safety Engineer status.
OUR MISSION
• To provide support and services to individuals, employers and others who contribute to improving
the practice and application of chemical engineering.
• To enable chemical engineers to communicate effectively with each other and with other disciplines.
To support these aims, we operate as an effective, efficient and responsive organization, providing
leadership and demonstrating good practice as well as complying with our obligations as a charitable
organization.
IChemE is a registered charity in England & Wales (214379) and a charity registered in Scotland
(SC 039661).
Awards and Scholarships
In fond and loving memory of Lamiya, the Department of Chemical Engineering and the Mary Kay O'Connor
Process Safety Center have established the Lamiya Zahin Memorial Safety Scholarship. Graduate students
are encouraged to apply for the $1,000 scholarship by writing a 1000-word essay on “Safety Innovations in
Research Projects”.
Trevor Kletz Merit Award Recipient
Bill is currently a Lecturer at the Department of Chemical Engineering, Texas A&M University and has been
associated with various research activities at MKOPSC for over twenty years. He has published and continues to
publish numerous peer-reviewed articles on process safety and risk management and has been teaching Risk Analysis and Quantitative Risk Analysis for the last 10 years. Bill developed several Safety Engineering courses
at the Center and taught them at different times and has inspired thousands of undergraduate and graduate
students with his enthusiasm and passion for process safety and his unparalleled dedication to teaching. His key
contribution to process safety has been disseminating the importance of quantitative risk assessment in
engineering problems to his students. Each student passing his class enters the industry knowing the importance
of “uncertainty” in risk assessment. There is probably no other educator with a greater number of students in
process safety than Bill. In his silent way, Bill continues to pave the way to safer processes by imparting wisdom
on the fundamentals of QRAs among the large number of his students.
Research Fellow and Volunteer Mentor, Mary Kay O’Connor Process Safety Center
Jeff has been a long-time supporter of the Center and has worked hard in providing direction through the
Steering Committee and Technical Advisory Committee and special projects related to process safety. Jeff
has also been an active participant in the MKOPSC annual symposium serving on the technical program
committee for the past few years as well as serving as track chairs, reviewing presentations, and helping
coordinate activities to make this event successful. He engages with the students and is always keen to offer assistance.
Lamiya Zahin Memorial Safety Scholarship
Recipient
Cassio Brunoro Ahumada is a doctoral candidate in the Chemical Engineering Department. He holds a
Master's in Chemical Engineering from Texas A&M and a Bachelor's in chemical engineering from the
Federal University of Espirito Santo, Brazil.
His research investigates how the congestion pattern variation affects the deflagration-to-detonation
transition (DDT) mechanism on flammable gaseous mixtures. He is also involved in many safety-related
projects, including facility risk assessments, facility siting, and vapor cloud explosion modeling. During his
time as a graduate student at TAMU, he interned at Tesla's car manufacturing site and Wood PLC,
conducting activities related to process safety management and technical consulting.
2020 MKOPSC Consortium Members
We appreciate all of our member companies and their representatives. Their expertise in our
Steering and Technical Advisory Committee is essential to the success of the Center. Email us
at [email protected] if you would like to become a member.
Prior to her confirmation as Chairperson and CEO of the U.S. Chemical Safety Board (CSB) Dr. Lemos
served as Director for Northrop Grumman Corporation’s Aerospace Sector, driving performance
improvements across the product lifecycle with a focus on engagement early in the value stream.
Before joining Northrop Grumman in 2014, Dr. Lemos worked at the Federal Aviation Administration
(FAA) as a technical leader and program manager in Aircraft Certification and Aviation Safety. Prior
to this she worked for the National Transportation Safety Board (NTSB) as a Senior Human
Performance Investigator in Aviation Safety, and then as Special Assistant to Vice Chairman of the
Board.
In academia Dr. Lemos focused her research on decision-making, studying the influence of
information and technology on beliefs and behaviors to more reliably yield safe outcomes during
risky and uncertain conditions. In aviation, Dr. Lemos conducted applied research to balance the
strengths of technology and humans for optimal performance. Dr. Lemos earned a B.B.A. from
Belmont University, an M.S. from California Lutheran University, and a Ph.D. from the University of
Iowa.
Throughout her career, Dr. Lemos has focused on improving safety and efficiency at the level of the
individual and the organization. She has contributed individually as a researcher, professor and
technical expert, and also contributed as a leader in managing programs and initiatives, bringing
consensus and order to efforts that result in tangible safety and efficiency outcomes.
Day 2 Plenary Panelists
Gerald Parker
Paul Thomas
Malick Diara
ExxonMobil
Richard Wells
Interim Director,
MKOPSC
Symposium Coordinators, MKOPSC
The Mary Kay O’Connor Process Safety Center would like to recognize and thank the
Track Chairs who have volunteered their time to assist with the abstract review,
selection process, and session coordination. Their input, expertise, and leadership have
been essential to the Symposium’s success.
Delphine Laboureur, Von Karman Institute ([email protected])
Chris Cloney, Dust Safety Science ([email protected])
Symposium Technical Support Team
The Mary Kay O’Connor Process Safety Center would like to recognize and thank the
Technical Support Team who put in countless hours to make the virtual symposium
successful. They are the ones behind the scenes who took on the challenge of setting
up and handling the virtual sessions. Their virtual session research and
coordination was integral to the Symposium’s success.
Functional safety engineers follow the ISA/IEC 61511 standard and perform calculations based on random hardware failures. These result in very low failure probabilities, which are then combined with similarly low failure probabilities for other safety layers, to show that the overall probability of an accident is extremely low (e.g., 1E-5/yr). Unfortunately, such numbers are based on frequentist assumptions and cannot be proven. Yet accidents are not caused by random hardware failures; they are typically the result of steady and slow normalization of deviation (a.k.a. drift). Bayes’ theorem can be used to update our prior belief (the initial calculated failure probability) based on observing other evidence (e.g., the effectiveness of the facility’s process safety management process). The results can be dramatic.
______________________________________________________
3:15 PM to 3:45 PM
My Vision of Future Instrumented Protective Systems
Speaker: J. Gregory Hall
I will share my vision of what future Instrumented Protective Systems (IPS) will look like and what our current objective is to achieve that future.
______________________________________________________
Speaker: Todd W. Drennen
Component failure rate data is used in a variety of quantitative and semi-quantitative study methods related to process safety and reliability, including Fault Tree Analysis (FTA) and Layers of Protection Analysis (LOPA). In each of these methodologies, failure rate data is used to determine the probability that specific protective components, such as pressure relief devices, will fail to function as designed when called upon to prevent an incident. In the case of pressure relief devices, standardized probabilities of failure on demand are often applied with minimal consideration of the device type or the process service in which the device is employed. This presentation will examine pressure relief device failure rate data from multiple published sources, categorize the data based on device type and service, and then develop guidelines for determining probability of device failure on demand based on the proposed device type and service categories. Additionally, this presentation will provide commentary on the administrative aspects of relief device handling relative to observed relief valve reliability.
______________________________________________________
Category: Relief Systems Session
______________________________________________________
4:00 PM to 4:30 PM
Overlooked Reverse Flow Scenarios
Intentionally opening a line that carries a hazardous substance — a procedure known as a line break — is often necessary for performing maintenance activities on pipes, valves, pumps, compressors, and other process equipment. However, inadequate or improper line break practices may increase the risk for a loss of containment event, complicate troubleshooting efforts if a loss of containment occurs, or inadvertently expose workers to hazardous materials. A case study that examines an incident related to a line break in a frozen foods warehouse will be presented. The loss of containment event described here provides valuable lessons that can aid in developing an effective procedure for safe process operation following a line break, and the impact that improper line break procedures can have on leak identification and system troubleshooting.
_____________________________________________________
Category: Recalling and Learning from Incidents Session
______________________________________________________
10:30 AM to 11:00 AM
Process Related Incidents with Fatality - Trends and Patterns
Speaker: Syeda Z. Halim
A database of the Occupational Safety and Health Administration (OSHA) captures incident data from investigations for fatal incidents and hospitalizations since 1984. OSHA Region 6 includes five states, including Texas and Louisiana, where much of the US chemical manufacturing and petroleum refining industry is located. An analysis of process related investigations by OSHA in Region 6 shows that large-scale multi-fatality incidents have decreased significantly since the implementation of the Process Safety Management (PSM) program in 1995. It is noticeable that currently the majority of fatalities occur in single-fatality incidents. Our preliminary analysis suggests that these individual process related fatalities are a result of operating and maintenance activities that are not well addressed by current process safety practices or by personal safety measures. An analysis of such incidents and their circumstances will be conducted, providing recommendations for improved performance to reduce single-fatality incidents.
______________________________________________________
11:00 AM to 11:30 AM
Application of Mind Mapping to Organize and Recall Potential Hazards
______________________________________________________
This presentation attempts to answer three questions: 1) Would a HAZOP on the Bhopal MIC design, conducted in the 1960s, have prevented the tragedy? 2) Would a LOPA have prevented it? 3) Would an STPA have prevented it?
_____________________________________________________
Category: Improving Process Safety with Technological Advances Session
______________________________________________________
2:15 PM to 2:45 PM
Predictive Process Safety Analytics and IIoT - PSM Plus: The AI+PSM Analytic Framework
Speaker: Michael Marshall
With an IIoT predictive application environment as the backdrop and an asset integrity and process safety analytic framework as the primary enabler, the paper and presentation discuss methods, metrics, performance analyses, and KPI benchmarking techniques for driving Operational Excellence as it relates to the ultimate concern of any PSM program, i.e., the loss of primary containment (LOPC) and associated impacts to production, profitability and process safety.
______________________________________________________
2:45 PM to 3:15 PM
Guidance to Improve the Effectiveness of Process Safety Management Systems in Operating Facilities
Speaker: Syeda Zohra Halim
In this presentation we analyze the recent trend in process safety incidents and identify issues behind current incidents. Based on the identified issues we recommend methods to improve the effectiveness of process safety systems.
______________________________________________________
3:15 PM to 3:45 PM
Unified Wall Panel System (UWPS) - A Value Engineering Solution for Protective Construction in the Petroleum Industry
Speaker: Scott Hardesty
An overview of the novel protection technologies being developed for use in the UWPS, including high-cementitious structural paneling, non-aramid advanced mineral fiber reinforcement and metallic foam energy absorption.
_____________________________________________________
DAY 2: Wednesday Oct 22 Summaries
The approach taken for selection and placement of gas detectors is found to vary widely between different companies. There is a growing interest in not only the confidence but also the effectiveness of these gas detection systems as a key mitigation barrier. The intention of this presentation is to provide a methodology that is both effective and cost efficient while also presenting the main considerations that design engineers and process safety professionals should address for the gas detection system elements of (1) a comprehensive gas detection philosophy, (2) appropriate detector technology selection, and (3) correct detector placement.
________________________________________________________________
1:45 PM to 2:15 PM
Consequence Assessment Considerations for Toxic Natural Gas Dispersion Modeling
Speakers: SreeRaj Nair and Noma Ogbeifun
Consequence modeling provides information on the potential impact zone and is key for process risk management.
________________________________________________________________
Category: Reactive Chemicals Session
______________________________________________________
4:00 PM to 4:30 PM
Modelling and Simulation to Predict Energetic Material Properties
Speaker: Kok Hwa Lim
With the rapid development and advancement in computing power, modelling and simulation (M&S) has demonstrated its vast potential in predicting the properties of energetic material and helping to design energetic material. One such application is predicting crystal packing and crystalline structure from first-principle simulation. Such technique has demonstrated the ability to distinguish different polymorphs of the same energetic molecules and accurately predict the crystal structure and density. In addition to the ability to predict detonation pressures and velocities of more established classes of energetic materials based on their
________________________________________________________________
Commercially available azo-type low temperature radical initiators provide efficient initiation of many chemical reactions. However, the azo group initiators are energetic compounds that also have thermal stability issues at ambient or even sub-ambient temperatures. These initiators can also generate nitrogen gas during slow decomposition under heat and/or light, which could present a safety challenge for shipping, storage and usage. In order to define safe storage and handling conditions, a variety of calorimetry studies were carried out. Exotherm and pressure data were collected from these studies in an effort to gain a better understanding of the decomposition kinetics. Thermal-kinetics and thermal safety model simulations were then used to obtain the self-accelerating decomposition temperature (SADT) and decomposition activation energy for the azo-type initiator. This methodology for thermal decomposition kinetics data and parameter determination, acquired with 5 mg to 1 g scale samples, enables safe storage, handling, and scale-up process preparation.
________________________________________________________________
5:00 PM to 5:30 PM
Analysis of Pressure Behavior during Reaction Runaway and Estimation of Available Depressurization Design
Speaker: Yuto Mizuta
Analysis of pressure behavior during reaction runaway and estimation of available depressurization design, dynamic simulation by Aspen, runaway experiment by ARSST, two-phase flow model of ISO.
______________________________________________________
Track II - Human Factors - People in Action
Category: Human Performance/Decision Making II Session
______________________________________________________
8:30 AM to 9:00 AM
Preventing Cognitive-Attributed Errors in Safety Critical Systems: A Path Forward
Speaker: Tom Shephard
The presentation provides background and example models, methods and tools for assessing and eliminating cognitive attributed errors in active human barriers.
________________________________________________________________
9:00 AM to 9:30 AM
Two Views of Evaluating Procedural Task Performance: A Transition from Safety-I to Safety-II Approach
Speaker: Changwon Son
This presentation provides the development of new procedural task performance measures based on Safety-II perspective, an emerging safety paradigm.
________________________________________________________________
9:30 AM to 10:00 AM
Beyond Human Error: Integration of the Interactive Behavior Triad and Toward a Systems Model
Speaker: Joseph W. Hendricks
In an effort to move beyond the "human error" explanation for safety incidents, we surveyed individuals employed in the process safety industry who were primarily from the Oil & Gas and Chemical industries. Results indicated that perceptions of procedure quality was the focal variable in all of the results, including positive relationships with attitudes toward the procedure change process and negative relationships with procedure deviations, and both safety incidents and near-misses. Additionally, we integrated the three elements of the Interactive Behavior Triad—person, task, and context—into Dekker’s Model 2 of safety. We found support for two-way interactions using moderator regression analyses. We conclude that these elements are important factors to consider when evaluating and developing procedure systems.
_______________________________________________________________
Category: Fatigue & Stress Session
_______________________________________________________________
10:15 AM to 10:45 AM
Operator Performance Under Stress: A Neurocentric Virtual Reality Training Approach
Thus, ensuring operator safety is of utmost importance in this domain, and in particular in stressed contexts. Advances in Virtual Reality (VR) have enabled cost-effective, relatable, and remote trainings that can potentially transform the future of operator training in complex environments.
________________________________________________________________
10:45 AM to 11:15 AM
Towards a Predictive Fatigue Technology for Oil and Gas Drivers
Speaker: John Kang
Towards a Predictive Fatigue Technology for Oil and Gas Drivers.
________________________________________________________________
11:15 AM to 11:45 AM
Validation of the Fatigue Risk Assessment and Management in High-Risk Environments (FRAME) Survey
Speaker: Stefan V. Dumlao
The oil and gas extraction (OGE) industry continues to experience a fatality rate nearly seven times higher than that for all U.S. workers. OGE workers are exposed to intensive shift patterns and long work durations inherent in this environment. This leads to fatigue, thereby increasing risks of accidents and injuries. In the absence of any regulatory guidelines, there is a critical need for the development of comprehensive fatigue assessment practices specific to OGE operations that take into consideration not only the various OGE-specific sources of fatigue, but also the barriers associated with effective and feasible fatigue assessments in OGE work. In response to this need, Shortz, Mehta, Peres, Benden, and Zheng (2019) developed the Fatigue Risk Assessment & Management in high-risk Environments (FRAME) survey. Further, they provided evidence that the FRAME survey content captures fatigue-related information specific to the OGE industry not found in any one other measure of fatigue. The present study expands on these efforts by examining the psychometric properties (i.e., reliability and validity) of the FRAME survey—a critical step before the survey can be recommended for use in practice. A sample of 210 OGE and petrochemical refinery workers were sought to participate in this study. Linkages between the FRAME survey and a number of fatigue-related measures validated for use outside of the OGE industry will be examined. Once data analysis is complete, the FRAME survey will be refined for implementation, and recommendations for implementation will be provided.
_______________________________________________________________
This presentation describes the design and development of the new HBT, the large-scale shock and detonation tube facility for the study of deflagrations, detonations, and the transition processes.
________________________________________________________________
Category: Explosion Phenomena II Session
_______________________________________________________________
This presentation compares flame jetting distances from vented explosion tests to predictions made using NFPA 68, EN 14994 and the FLACS CFD code.
________________________________________________________________
Category: Consequence Analysis: Flammability Session
________________________________________________________________
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
The importance of time is well conveyed in the proverb “Time and tide wait for none”. Essential actions not taken in time can escalate into hazardous events, especially in the process industry. Hence, process plants should consider this vital criterion, called Process Safety Time, as part of their design. Although process hazard analysis techniques are applied to identify the hazards of the plant, the time factor and the dynamic behaviour of the process are often not considered in these assessments because they are complex. Dynamic models have become available in recent times, although their application is still only rarely integrated into design, especially into functional safety, i.e. the Safety Instrumented System (SIS) and LOPA. So, what is this Process Safety Time? This paper demonstrates the importance of Process Safety Time, how it can be assessed, and its role in design to minimize the risk of undesirable events leading to hazards.
In particular, the response time required to take corrective action is critical. In the process industry, this required response time depends on the dynamic behaviour of the process, which is mostly complex to predict as it depends on various factors such as operating conditions, heat and mass transfer, type of design, reaction kinetics, etc. A good understanding of the process is therefore essential to determine the response time, which is based on this unique term, the ‘Process Safety Time (PST)’, dictating the precise response time to be considered. The PST can be described as the time available to take action on the process to bring it back to a safe condition once the process value deviates from the normal level.
The estimation of PST is system dependent, and hence it relates to the dynamic behavior of the process, the equipment design limits and the process control system within the context of an unmitigated specific hazard scenario, which could result from various initiating events. In recent times, dynamic models have been given good importance in determining how the process evolves over time, although the application of these models is very limited due to various reasons such as cost, project schedule, and difficulties in integrating them into the design of plants, particularly in the SIS design.
This work demonstrates the importance of Process Safety Time and its role in the design of plants, particularly the SIF response time needed to achieve the intended risk mitigation. In a nutshell, the SIF response time shall be less than the Process Safety Time, so that the process can be brought to a safe state before the hazardous event occurs.
Safety engineering is also a key component for eliminating hazards that would otherwise be controlled by controls such as the SIS and alarms, or by the use of personal protective equipment as a barrier between a hazard and a worker. These engineered safeguards include machine guards, selection of less hazardous equipment, development of maintenance schedules to ensure equipment safety, audit and inspection procedures, selection of safer tools, safety review of new equipment, employee maintenance training, safe design of the flow of material and people through a facility, and risk analysis for both possible man-made and natural incidents.
Layers of Protection Analysis (LOPA) is one of the risk analysis techniques with many applications; it can be applied in safety design to assess whether sufficient protections are available against the potential hazards. Using LOPA, the protective devices or safety barriers in a process plant can be designed to act as ‘layers of protection’ between the hazard and the receptors. They are designed to act as an armour protecting against hazards which may harm people, the environment and also the commercial interests of the company. These layers of protection can be identified and assessed using LOPA for effective control over process upsets. The layers start with the prevention of the hazard by process control, either automatic or manual, and extend to the mitigation of the consequences, including the emergency responses. The following figure portrays typical layers of protection in the process industry.
The first layer of protection is the prevention layer, in which the process design itself keeps the desired process value from escalating. The next three layers are ‘control layers’ designed to prevent a safety-related event. The first of the control layers (2nd layer) is the basic process control system, which provides safety through proper design of the control of the process. The second of the control layers (3rd layer) is the alarm system, which provides the appropriate information to process operators, supporting them in identifying the cause of the unsafe situation and allowing them to take actions to restore the plant to normal operation. The third of the control layers (4th layer) is the Safety Instrumented System (SIS). A typical SIS comprises several Safety Instrumented Functions (SIFs) with sensors, valves and a logic system to make appropriate decisions and take action on the process to bring it back to a safe state. The remaining layers are termed mitigation layers (active and passive fire protection and the emergency response plans), as they act only if the incident occurs and are designed to mitigate the impact of the hazardous event.
Importance is placed on the Operator Action and SIS layers where the alarm and SIF are designed
and installed for a specific function of the process, and the design information ensures that the
device specified will meet those requirements.
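To make the layer arithmetic concrete, a minimal sketch is shown below. The initiating event frequency and the probabilities of failure on demand (PFDs) are illustrative assumptions chosen only to show how the layers multiply in a LOPA-style calculation; none of the names or numbers come from this paper.

```python
# Minimal LOPA-style sketch: the mitigated event frequency is the initiating
# event frequency multiplied by the PFD of each independent protection layer.
# All names and values below are illustrative assumptions, not data from this paper.

initiating_event_frequency = 0.1  # per year, e.g. a BPCS control loop failure (assumed)

protection_layer_pfds = {
    "operator response to alarm": 0.1,     # assumed probability of failure on demand
    "safety instrumented function": 0.01,  # assumed PFD for the SIF
    "relief device": 0.01,                 # assumed PFD for the mitigation layer
}

mitigated_frequency = initiating_event_frequency
for layer, pfd in protection_layer_pfds.items():
    mitigated_frequency *= pfd

print(f"Unmitigated frequency: {initiating_event_frequency:.1e} per year")
print(f"Mitigated frequency:   {mitigated_frequency:.1e} per year")
```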
The efficiency of the operator-action part of the control layers depends on various factors such as proper training, capability, physical ability, response time, etc., and it is very much arguable since it is manual. For the next control layer, the SIS comprising SIFs, the response time is very critical, as it should act quickly to bring the process back to a safe state in a time much less than the Process Safety Time. Hence, the important task is to estimate this Process Safety Time (PST). Although it is quite difficult to assess the exact conditions under which the hazard scenario develops, since they vary from case to case, it is better to evaluate the lower limit of the time at which the hazard might occur under the worst-case condition.
As said earlier, the PST spans from the deviation of the process value until the incident occurs. The response time of this layer shall therefore be minimal and fit well within the PST for successful protection; if it exceeds the PST, the result could be an incident, which is like taking action after the accident has happened.
3. Hazardous Event Timeline
This section describes a typical event timeline for a hazardous process scenario, as shown in Figure 2. The event timeline explains how a process at a normal operating process value could propagate into a hazardous event and, in particular, how the Safety Instrumented System (SIS) protection layer could bring the process value back to the normal operating level.
The normal operating range of a typical process value floats around the value shown, being controlled by the BPCS, i.e. the Basic Process Control System. Once the process value deviates from the normal operating range for any reason and is not controlled further, it will reach the safe operating limit. Eventually the process value could exceed the design limit of the process/equipment, resulting in a potential accident. If suitable actions are taken, the process value can fall back to the normal operating range. In this case, suitable action is considered to be taken by the SIS. The response time of the SIS is critical, as a delayed action would not keep the process value from exceeding the limits, leading to a potential hazardous event.
The following sections describe the various factors in this event timeline, including the Process Safety Time, process lag time, trip delays, Time to Trip and safety margin. Further, they define the method of evaluation for a better understanding of the response time required for efficient hazard management.
Figure 2: Typical Hazardous Event - Timeline
Process Safety Time (PST)
Process safety time is a function of the behavior of process and process equipment within the
context of a specific unmitigated scenario. The term Process Safety Time (PST) is defined in different codes and standards as follows:
IEC 61511:2003 Part 2 defines PST as “the time between a failure occurring in the process or the
basic process control system (with the potential to give rise to a hazardous event) and the
occurrence of the hazardous event if the safety instrumented function is not performed”.
IEC 61508:2010 Part 4 defines PST as “the period between a failure, that has the potential to give
rise to a hazardous event, occurring in the EUC [equipment under control] or EUC control system
and the time by which action has to be completed in the EUC to prevent the hazardous event
occurring”.
API 556 second edition, 2011 defines PST as “the interval between the initiating event leading to
an unacceptable process deviation and the hazardous event”
The overall concept of the Process Safety Time (PST), in simple words, is the amount of time available to take action on the process to bring it back to a safe condition after the initiation of a hazardous event which may lead to an out-of-control condition that can cause severe consequences.
PST is unique to each cause-consequence pair, even when multiple initiating events may lead to the same consequences. The potential impact on the process caused by the initiating event can differ because of the different processes involved, different process equipment, different operating modes, design conditions and consequences affecting different risk receptors. PST is quite difficult to measure because the exact conditions under which a hazard scenario may occur are unpredictable. PST should not be considered a single specific value at which a hazardous event will immediately occur in all circumstances. Instead, PST can be estimated approximately by identifying the lower boundary at which the hazardous event occurs, so that protection layers are available with sufficient response time for that credible scenario.
An analytical approach can be taken to estimate the PST by considering the theory of the process and by knowing the normal operating and design limits of the equipment. The process variable associated with the occurrence of the hazardous event is to be identified first. The point at which the deviation from normal operating conditions occurs gives the approximate value at which the initiating event occurs, and the point at which the hazardous event occurs can be estimated approximately from the abnormal conditions of the process equipment, i.e. the equipment exceeding its design limit beyond which the event can no longer be prevented. The time taken between the initiating event and the hazardous event depends on the rate of change of process conditions from the initiating event to the hazardous event.
This can be represented by the equation as follows:
PVHE - The value of the process value of interest at the time the Hazardous Event occurs or can
no longer be prevented, may be assumed to be at the design limit of the equipment.
PVSOL - The value of the process value of interest at the time assumed to be at the safe operating
limit of the equipment.
PVIE - The initial value of the process value of interest at the time of the Initiating Event, may be
assumed to be at the extreme of the normal operating range nearest the hazard.
PVROC - The Estimated Rate of Change of the process value of interest under worst-case credible
conditions in the context of the specific hazard scenario.
Time difference between the normal operating range and the design limit = PVHE - PVIE
Rate of change of the PV of interest under worst-case credible conditions: PVROC = dPV/dt
Process Safety Time: PST = (PVHE - PVIE) / PVROC
From the graph, it is noted that the PST can also be evaluated as the sum of the time to trip, the SIF response time, the process response time and the safety margin. This can be represented by the following equation:
Process Safety Time = Time to Trip + SIF Response Time + Process Response Time + Safety Margin Time
PST = TTT + SRT + PRT + SMT
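As a minimal sketch of the two relationships above (assuming consistent units for the process values and times; the function and variable names are ours, not taken from any standard), the PST can be computed from the process values and cross-checked against the sum of the timeline components:

```python
def pst_from_process_values(pv_he: float, pv_ie: float, pv_roc: float) -> float:
    """Process Safety Time from the process values:
    pv_he  - process value at which the hazardous event occurs (design limit)
    pv_ie  - process value at the initiating event (edge of the normal range)
    pv_roc - worst-case credible rate of change of the process value
    """
    return (pv_he - pv_ie) / pv_roc


def pst_from_timeline(ttt: float, srt: float, prt: float, smt: float) -> float:
    """Process Safety Time as the sum of the timeline components:
    Time to Trip + SIF Response Time + Process Response Time + Safety Margin Time."""
    return ttt + srt + prt + smt
```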
SIF trip point
The SIF trip point is the point at which the safety instrumented function detects the deviation and trips the system. It brings the process back to a normal operating condition to avoid the occurrence of an undesirable scenario. The undesirable event may occur when the safety instrumented function does not respond appropriately and the operating parameters of the equipment exceed the design limit. The consequence may be injuries or even fatalities to personnel in the plant, leading to loss of reputation, assets and life.
SIF Response Time (SRT)
The SIF response time is the length of time from successful detection of the onset of an incident until the time at which the final control elements have acted and performed their function. The SIF response time covers the whole SIF loop, including the initiator, the logic solver and the final element, as well as the delays in signal transmission. Evaluating the SIF response time requires good process engineering knowledge and engineering judgement.
Time to Trip (TTT)
The time to trip is the length of time from the initiation of the hazardous event until the time at which the SIF acts upon successful detection.
Process Response Time (PRT)
The process response time is the length of time from when the SIF has completed its action until the process value comes back under the safe operating limit. This time varies from process to process. For example, for a storage tank scenario with a level SIF, the process response time will be very short, as the level stops rising once the SIF acts; however, for situations such as reactors, the process reaction will take a considerable amount of time to normalize.
Safety Margin Time (SMT)
The safety margin time is an additional length of time applied on top of the SIF response and process response times as a safety factor; this time factor usually depends on the company’s decision.
Operating Limits
Operating limits are the values or ranges of values within which the process parameters should normally be maintained when operating. These values are usually associated with preserving product quality or operating the process efficiently. The safe operating limits, in contrast, are established for critical process parameters, such as temperature, pressure, level, flow, or concentration, based on a combination of equipment design limits and the dynamics of the process. [6]
4. Example of Process Safety Time and Response Time Calculation
This section discusses how the SIF response time is to be evaluated and the factors to be considered while performing the calculation, using a simple example.
The example considers a surge tank containing hexane liquid with an overflow line. The level control valve on the line feeding the tank maintains the level by controlling the flow from the upstream process into the tank, while the outlet flow is drawn constantly by a pump to downstream consumers.
Let us consider the maximum allowable level of the tank to be 10 m; at 100%, i.e. 10.0 m (PVHE), the tank overflows, which is the potential hazard scenario we focus on in this illustration. Although a dyke containment is present, it has no value in this calculation, as it is only a mitigative protective layer intended to prevent the escalation of the potential pool fire consequence by containing the liquid and reducing the diameter of the pool formed by the overflow of the tank.
Let us consider that there is a SIF in this tank for evaluation, consisting of level transmitters as the initiator, the DCS as the logic solver, and a shutoff valve at the inlet of the tank, next to the control valve, as the final control element, as shown in the figure.
The initiating event in this example can be considered to be the tank inlet control valve failing open, feeding more liquid to the tank, which can eventually lead to the hazardous scenario of tank overflow with its potential consequences. The normal operating level in this tank can be considered as 50%, i.e. 5.0 m (PVIE), maintained by a Basic Process Control System (BPCS) level loop. At the 80% level, i.e. 8.0 m, considered the ‘High’ level, an alarm alerts the operator to take corrective action. At the 90% level, i.e. 9.0 m, considered the ‘High High’ level, the SIF acts to safely shut down the system by closing the shutoff valve on the inlet line of the tank. The maximum rate of change of level (PVROC) in the tank is considered to be 0.05 m/min at the maximum incoming flowrate.
So, here the PST ranges from the deviation of the normal level at 5 m until 10 m, at which the incident occurs, i.e. the tank overflowing and the hexane entering the dyke. This in turn can lead to hexane accumulating in a pool inside the dyke with the potential risk of a pool fire scenario. As explained earlier, the Process Safety Time in this example is calculated as below:
Process Safety Time (PST) = (PVHE - PVIE) / PVROC
PST = (10.0 - 5.0) / 0.05
PST = 100 minutes
From this we know that the overall Process Safety Time in this case is 100 minutes. Then the time to trip, which is the time taken for the SIF to be activated at the High High level, can be calculated as:
Time to Trip (TTT) = (SIF trip level (HH) - PVIE) / PVROC
TTT = (9.0 - 5.0) / 0.05
TTT = 80 minutes
There is no process response time applicable in this example, as it is only a tank: the level stops rising immediately once the inlet valve is closed, unlike in reactors, columns, etc., where stabilization takes considerable time. However, a Safety Margin Time (SMT) can be applied based on client requirements or engineering judgement; in this case the SMT is taken as 5 minutes.
So, with this information available, we can calculate the required SIF Response Time (SRT) as below:
SIF Response Time (SRT) = Process Safety Time - Time to Trip - (Process Response Time + Safety Margin Time)
SRT = PST - TTT - (PRT + SMT)
SRT = 100 - 80 - (0 + 5)
SRT = 15 minutes
Hence, from these calculations, the SIF response time required to safeguard against the overflow scenario is found to be 15 minutes. The entire time taken for the SIF to respond, from the level transmitter sensing successfully, through the logic solver making the decision as per the designed interlock, to closing the valve, including the time lost in signal transfers, shall be within 15 minutes. This is actually the maximum response time, and the SIF is expected to act within it for successful mitigation of the hazardous scenario. The following figure presents the same calculation results in graphical form.
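The worked example above can be reproduced with a short script; the numbers are the ones assumed in the text (10.0 m overflow level, 5.0 m normal level, 9.0 m High High trip level, 0.05 m/min rate of rise, no process response time and a 5 minute safety margin):

```python
# Surge tank overflow example, using the values assumed in the text above.
PV_HE = 10.0       # m, level at which the tank overflows (hazardous event)
PV_IE = 5.0        # m, normal operating level (initiating event)
TRIP_LEVEL = 9.0   # m, 'High High' level at which the SIF trips
PV_ROC = 0.05      # m/min, maximum credible rate of level rise
PRT = 0.0          # min, process response time (level stops rising when the valve closes)
SMT = 5.0          # min, safety margin time (engineering judgement)

pst = (PV_HE - PV_IE) / PV_ROC        # Process Safety Time
ttt = (TRIP_LEVEL - PV_IE) / PV_ROC   # Time to Trip
srt = pst - ttt - (PRT + SMT)         # maximum allowable SIF Response Time

print(f"PST = {pst:.0f} min, TTT = {ttt:.0f} min, allowable SRT = {srt:.0f} min")
# Expected output: PST = 100 min, TTT = 80 min, allowable SRT = 15 min
```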
5. Conclusion
This paper has highlighted the importance of Process Safety Time and its influence in design for successful safeguarding of personnel, the environment and property from potential hazardous consequences. The estimation of PST is system dependent, which means it varies from process to process. Evaluating a precise PST requires deep knowledge of the process and good engineering judgement to produce a design very close to reality.
The Process Safety Time in the example can be estimated easily because the rate of change is linear and the response time can be readily judged. In reality, many processes have a much shorter Process Safety Time for their shutdown functions, especially for SIF loops protecting rapidly developing scenarios. These processes may evolve at exponential rates, such as hazardous scenarios leading to potential fires, explosions, toxic releases, runaway reactions and other serious consequences, where the time for the incident to occur is very short.
Also, when shutdown protection systems are to be designed, the physical, chemical, kinetic and thermodynamic nature of the process, including signal delays and time lags, shall be considered. There will undoubtedly be uncertainty associated with the prediction of dynamic processes where the conditions can be complex. So taking conservative assumptions and consistent safety margins throughout the evaluation is very important for good protection against potential hazards. The factor of process lag time should also be considered on top of this, as it again varies from process to process.
Typically, the SRT will be about one half of the Process Safety Time, which is not the case for rapid processes. It is very important that the whole response time of the SIF loop be less than the PST, and this shall be achieved by selecting appropriate instruments for the loop. Hence, care shall be taken with the SIS, particularly in the SIL classification, the Safety Requirement Specification for instruments, and SIL verification and validation throughout the lifecycle, for effective protection. Validating the complete response time can be quite difficult, so good engineering judgement is essential and can be supported by focusing on the core attributes of the equipment to be selected and by more detailed analysis where needed.
In conclusion, the PST shall be treated as one of the critical pieces of information in the design of the safety systems of a plant. It requires deep knowledge of the process and shall be addressed through a consistent and timely approach, close to reality, for effective protection of people, the environment and assets against potential hazards.
6. References
[1] IEC. IEC 61511 Functional safety – Safety instrumented systems for the process industry sector,
Parts 1–3, edition 1.0. International Electrotechnical Commission, Geneva, Switzerland, 2003.
[3] CCPS. Guidelines for Enabling Conditions and Conditional Modifiers in Layer of Protection
Analysis. Center for Chemical Process Safety, American Institute of Chemical Engineers, New York,
NY, 2013.
[4] CCPS. Guidelines for Initiating Events and Independent Protection Layers in Layer of Protection
Analysis. Center for Chemical Process Safety, American Institute of Chemical Engineers, New York,
NY, 2015.
[5] API. API Recommended Practice 556 Instrumentation, Control, and Protective Systems for Gas
Fired Heaters, second edition. American Petroleum Institute, Washington, DC, 2011.
Layer of Protection Analysis (LOPA) has been used for a long time as a tool to conduct risk assessments for determining the required level of protection in oil and gas processes. It is easy to use and provides a reasonable quantitative approach to determine the required number of independent protection layers in a given process, based on the perceived risk levels and the acceptable risk. Since its development, LOPA has gained popularity over other techniques such as the risk graph or risk table, mainly because it is more quantitative than those approaches. However, some literature suggests that LOPA has limitations, including subjectivity, as it depends on the judgement of the team conducting the study to determine the potential risk and the hazard to protect against. In addition, LOPA is not suitable for complicated, multi-cause scenarios.
This paper presents a case study showing the limitations of applying LOPA in upstream scenarios to develop protection layer requirements for a complicated network of pipelines and processing units with an essentially unlimited number of causes contributing to the risk. It compares LOPA with more sophisticated, more quantitative techniques such as Fault Tree Analysis (FTA). Based on the case analysis, it is recommended that LOPA be used to assess simple scenarios with a limited number of causes, while more complicated cases are better assessed using FTA. Detailed analysis is presented in the paper to support this recommendation.
Keywords: LOPA, Fault Tree Analysis, FTA, LOPA Limitations, Complex Process Systems, SIL
Assessment, SIL Verification, SIL Assignment
Hazard and Operability (HAZOP) studies are conducted to identify and assess potential hazards
which originate from processes, equipment, and process plants. These studies are human-centered
processes that are time and labor-intensive. Also, extensive expertise and experience in the field
of process safety engineering are required. In the past, there have been several attempts by different
research groups to (semi-)automate HAZOP studies. Within this research approach, a knowledge-
based framework for the automatic generation of HAZOP worksheets was developed. To this end, ontologies are used as a knowledge representation formalism to represent expert knowledge from the process and plant safety (PPS) domain. Based on that, a reasoning strategy is developed using semantic reasoners to identify hazards from the developed ontologies in a HAZOP-like manner. The developed methodology is applied within a case study that involves a storage tank
containing hexane. The automatically generated HAZOP worksheets are compared to the original
worksheets. The results were evaluated and show that an ontology-based reasoning algorithm is
well-suited to identify equipment-based hazardous scenarios. Node-based analyses can also be
carried out by slightly adapting the method. The presented method can help to support HAZOP
study participants and non-experts in conducting HAZOP studies.
Objectives
This paper aims to present an ontology-based method to generate HAZOP worksheets
automatically. Also, the importance of the ontological model and its semantic concepts is
presented. Furthermore, a strategy is developed and described to infer logical conclusions from the
proposed ontology. In the process, extended concepts such as causes (primary and secondary),
chain of consequences, and safeguards are identified. After the ontology-based method is
specified, it is applied within a case study to a hexane storage tank. Within the case study, an
equipment-based automatic HAZOP is conducted using the described method. The automatically
created HAZOP worksheets are compared to the original HAZOP worksheets and assessed.
Methodology
A computer system that utilizes a knowledge base to draw conclusions and infer facts is called a
knowledge-based system. The first step in development is the conceptualization of relevant
knowledge. The conceptualization process is shown in the upper part of Figure 2
(conceptualization process). It includes the design of a structure and the modeling and
formalization of knowledge. Furthermore, the process plant, equipment, and substances must be
adequately represented. The results of the conceptualization process are ontology-based
knowledge representation and an object-oriented process unit model library (see Figure 2, upper
part).
The conclusions that can be drawn based on the ontologies depend on the inference strategy that
is used to evaluate the ontology (see Figure 2, lower part). The starting point is the selection of the
relevant process units, processes, and the involved substances. This is done based on an object-
oriented process unit model library. After the selection of the required input data, an inference
algorithm infers causes, consequences, safeguards, and related concepts based on the (process)
deviations, process unit, and substance information. This is shown in Figure 2.
Figure 2: Conceptualization process and application of the evaluation logic
These fundamental relationships between the core concepts (super cause, cause, effect,
consequence, deviation) are not detailed enough to be used for the automatic identification of
scenarios. Therefore, complementary concepts and relationships are introduced to complete the
ontological model. Also, there are relationships between the core and complementary concepts.
These concepts include:
- substance: involves properties of the substance, such as the state of aggregation or hazardous attributes (e.g., flammability),
- process unit: describes units (e.g., atmospheric storage tank) and operation-related equipment (e.g., circulation pump, drain valve),
- process: describes the interaction between substances and units,
- circumstances: additional requirements that describe conditions such as ignition sources or other environmental conditions.
These considerations result in an ontological model that is shown in Figure 3. The core concepts
are directly connected to the complementary concepts, such as process, intended function, and
substance. Without taking the process unit, substances, or other circumstances into account, no
reliable conclusions can be drawn about the HAZOP relevant concepts. Based on concepts
deviation, unit, substance, and additional circumstances, potential causes, and effects can be
modeled. Plausible causes and effects can only be identified based on an adequate representation
of the process unit. In the case of oversimplification, specific causes and effects cannot be
identified. The “OperationRelatedEquipment” concept is used for this purpose (compare
Subsection 4.1).
Figure 3: Ontological model. Core concepts (SuperCause, Cause, Effect, Consequence) are linked to complementary concepts (Unit, OperationRelatedEquipment, Process, IntendedFunction) through relations such as is_supercause_of_cause, effect_implied_by_cause, is_consequence_of_effect, is_consequence_of_deviation, is_subsequent_consequence, safeguard_prevents_cause, safeguard_of_deviation, safeguard_mitigates_effect, safeguard_mitigates_consequence, cause_involves_unit, effect_involves_unit, consequence_involves_unit, safeguard_involves_unit, cause/effect_involves_operation_related_equipment, consequence_involves_hazardous_property, unit_realizes_process, is_composed_of, and has_intended_function.
Super(ordinate) causes (or primary causes) are used to describe causes further. For instance, the
cause “ExternalLeakage” could have the super cause “DefectiveSeal”. Furthermore, consequences
follow effects and depend on the substance properties, process unit, and additional circumstances.
For instance, the hazardous properties of substances significantly influence potential consequences: the release of a flammable gas could lead to the formation of an explosive atmosphere, while the release of an inert gas could pose the danger of suffocation. Also, the additional circumstance "Confinement" is required for the identification of the consequence "Explosion". Proposals for safeguards can be modeled based on causes, effects, and consequences.
“Safeguard” concepts are also connected to the “Unit” concept since they strongly depend on the
process unit and equipment.
The designed ontology was formalized using the Web Ontology Language (OWL) that was
recommended by the World Wide Web Consortium (W3C) in 2004 [22]. Furthermore, the OWL
DL sublanguage was chosen because of its expressiveness and efficient reasoning. Thus, the
classes (concepts), properties (relationships), individuals (instances), and axioms are formalized
with the OWL ontology language. The Python module Owlready2 was used to create the ontology
in an object-oriented manner programmatically [23].
Within this research, the OWL DL ontology is mainly based on classes, object properties to relate
these classes and axioms to restrict these classes. Annotations are another component of the OWL
ontology language. Annotation properties can be used to model additional information, such as
labels, descriptions, or further resources. They can be added to classes, instances, or properties
(objects and data). Within this work, they were used to provide explanations regarding the
ontological model, explain concepts in detail, and provide sources.
The developed knowledge models are based on formal logic. This context shall be illustrated using an example and with the help of description logic (DL):
OperatorError ≡ SuperCause ⊓ ∃isSupercauseOfCause.ClosedOutletValve
Effects can lead to consequences that are expressed as causal chains consisting of primary, secondary, and tertiary consequences, e.g., loss of primary containment → formation of an explosive atmosphere → explosion.
The hazardous attributes of substances and additional circumstances, e.g., ignition source, have a
direct influence on the inferred consequences. For instance, this can be expressed as:
Fire ≡ Consequence ⊓ ∃isSubsequentConsequence.LossOfPrimaryContainment ⊓ ∃consequenceInvolvesHazardousAttribute.Flammable ⊓ ∃consequenceRequiresCircumstance.IgnitionSource.
Safeguards or required actions can either be derived based on potential effects and consequences
or directly on probable causes. This means, there are preventive and mitigative safeguards, for
instance:
PressureVacuumReliefValve ≡ Safeguard ⊓ ∃safeguardInvolvesUnit.StorageTankUnit ⊓ ∃safeguardMitigatesEffect.CollapseOfEnclosure.
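To illustrate how such DL axioms can be encoded with Owlready2 as described above, a small sketch follows. The class and property names mirror the "Fire" example; the ontology IRI and the overall scaffolding are assumptions made for this sketch and do not represent the authors' actual implementation.

# Hedged sketch: encoding the "Fire" DL axiom with Owlready2 (names mirror the
# example above; the IRI and structure are illustrative only).
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/pps_hazop.owl")  # hypothetical IRI

with onto:
    class Consequence(Thing): pass
    class LossOfPrimaryContainment(Consequence): pass
    class HazardousAttribute(Thing): pass
    class Flammable(HazardousAttribute): pass
    class Circumstance(Thing): pass
    class IgnitionSource(Circumstance): pass

    class isSubsequentConsequence(ObjectProperty):
        domain = [Consequence]; range = [Consequence]
    class consequenceInvolvesHazardousAttribute(ObjectProperty):
        domain = [Consequence]; range = [HazardousAttribute]
    class consequenceRequiresCircumstance(ObjectProperty):
        domain = [Consequence]; range = [Circumstance]

    class Fire(Consequence):
        # Fire ≡ Consequence ⊓ ∃isSubsequentConsequence.LossOfPrimaryContainment
        #        ⊓ ∃consequenceInvolvesHazardousAttribute.Flammable
        #        ⊓ ∃consequenceRequiresCircumstance.IgnitionSource
        equivalent_to = [Consequence
                         & isSubsequentConsequence.some(LossOfPrimaryContainment)
                         & consequenceInvolvesHazardousAttribute.some(Flammable)
                         & consequenceRequiresCircumstance.some(IgnitionSource)]

# A semantic reasoner (e.g., HermiT via owlready2.sync_reasoner) can then
# classify individuals against such defined classes during the inference step.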
Furthermore, substance attributes, such as the state of aggregation and the hazardous properties
of hexane, are also considered:
- Intended state of aggregation: liquid,
- Hazardous properties: flammable, harmful, health and environmental hazard.
Additionally, circumstances such as potential ignition sources, the presence of ambient air, human
intervention, and the introduction of impurities during the filling process are assumed. These
details regarding the substance and the process unit and involved operation related equipment and
additional circumstances are modeled as ontology concepts that serve as an input for the inference
algorithm.
Table 3. The direct comparison of the automatically generated results with the original results is made in the column "Identified".
Pos. 3 (Deviation: Low temperature)
Causes: Low ambient temperature while there is water contamination in the tank (Identified: Indirect)
Consequences: Possible freezing of accumulated water in the heel of the tank or the tank's drain line or instrument lines, resulting in fracture of the drain line and loss of containment (Identified: Indirect)
Safeguards: - (Identified: Other)
In case the automatically created results match the original results, it is indicated with a “yes” in
the “Identified” column. In the case of a similar conclusion within a different scenario, it is
indicated with an “Indirect”. In case another conclusion is drawn, it is indicated with an “Other”.
For instance, in Table 3, another safeguard for the “low pressure” deviation was identified,
compare to Figure A-2. Also, there can be additional scenarios that have been found. For instance,
multiple “high temperature” deviation scenarios have been identified, while no scenarios have
been listed in the original worksheets, see Table 2.
Pos. 5 (Deviation: Low pressure)
Causes: Tank blocked in before cooldown, following steam-out (Identified: Yes)
Consequences: Equipment damage resulting in a collapse of the tank under a vacuum (Identified: Yes)
Safeguards: Standard procedures for steam-out of vessels (Identified: Other)
Pos. 7 (Deviation: Loss of containment (Elsewhere flow))
Causes: Corrosion; Erosion; External fire (Identified: No); External impact; Gasket, packing or seal failure (Identified: Yes); Improper maintenance; Instrument or instrument line failure (Identified: Yes); Material defect; Sample station valve leaking (Identified: Yes); Vent or drain valve leaking (Identified: No); Low temperature; High pressure (if the overpressure cause exceeds the equipment pressure rating)
Consequences: Release of hexane (Identified: Yes); Fire hazard affecting a large area, particularly if the capacity of the dike is exceeded (Identified: Partially)
Safeguards: Operation/maintenance response as required, including isolation if needed; Capability to manually isolate the tank; Periodic non-destructive inspection per API recommended practice and ASME code; Relief valve that discharges to the tank's dike; Dike sized for 1.5 times the capacity of the tank; Emergency response procedures
4.4 Comparison and discussion of the HAZOP worksheets
Within a conventional HAZOP, potential causes and consequences are usually identified based on a deviation and information concerning the node. For instance, within the original HAZOP worksheet in Table 3 (Pos. 5), based on the deviation "low pressure", the cause "tank blocked in before cooldown" and the consequence "equipment damage resulting in a collapse" were identified. Based on
these results, appropriate safeguards can be selected. Within the developed extended worksheet,
there are multiple causes of the deviation “low pressure” (see Table A-2, Scenario 27-32). For
example, the cause “ObstructedVentPath” with the super causes “FaultyInstallation” and
“HumanError” have been automatically identified. Furthermore, the effect “CollapseOfEnclosure”
with two different consequence chains “LossOfContainment” and “Fire” or “Explosion” have been
identified. Based on these findings, multiple safeguards were proposed, e.g., “CollectingBasin”,
“FlameArrester”, “PeriodicalExamination”, “PressureVacuumReliefValve” (compare Table A-2,
Scenario 27-28).
From a qualitative point of view, a large part of the causes and consequences were identified
(compare Table 2 and Table 3). The direct comparison of HAZOP results requires the
interpretation of scenarios. Different chains of causes or consequences are shown separately in a
new row (see Table 1, Table A-1, Table A-2, Table A-3). Different deviations may lead to similar
scenarios. Some scenarios show similarities or were interpreted differently with similar
conclusions. For instance, in the original HAZOP, the consequence of the high level deviation is
“high pressure”. Within the automatically generated worksheets, the consequence is “Rupture”. In
the original worksheet, this consequence is again listed under the high pressure deviation.
References to other scenarios have been avoided, and all scenarios have been described in full to
improve readability. Furthermore, the consequence "freezing and fracture of the drain line" was recognized within this work under the deviation "other than composition". In the original HAZOP, it was recognized under the deviation "low temperature". This consequence is in fact the result of the two deviations "low temperature" and "other than composition".
These examples show that different terms can be used while similar conclusions can be drawn.
This is also a typical issue within conventional HAZOP studies because experts use different
vocabulary, which also depends on company-specific guidelines. Within the automatically created
worksheets, Causes/Supercauses and Effects/Consequences are listed separately. In the original
HAZOP worksheet, all causes and consequences are listed together. For instance, in the original
worksheet, a cause of the deviation “loss of containment” is “Corrosion”. Within the proposed
approach, “Corrosion” is a super cause while the corresponding cause is “ExternalLeakage”. A
separate listing of Causes/Supercauses and Effects/Consequences improves the transparency of
causal relationships of the automatically generated results and helps to identify and resolve
inconsistencies. Each scenario to be identified must first be represented within the ontological
model. Scenarios must also be abstracted and simplified in such a way that they can be represented
within the ontological model. New concepts can only be implemented by taking the other concepts
and the ontological structure into account. Otherwise, existing concepts could be dissolved, or the
wrong conclusions could be drawn and the wrong scenarios identified accordingly. Since the
automatically generated results are based on ontologies, a plausibility check by human experts is
required.
Safeguards depend heavily on the hazard potential, risk assessment, industry, and company-
specific guidelines. Some of the listed safeguards can also correspond to general recommendations
that are not tied to a specific scenario. For example, a flame arrester for a storage tank containing
a flammable liquid could be recommended (see position ten in Table 1). Thus, the automatically
identified safeguards are proposals that still require expert evaluation.
In general, more scenarios were recorded with the proposed method. On the other hand, the original HAZOP was intended to demonstrate different methods, and it is questionable to what extent the original authors claimed completeness for it [26]. Nevertheless, this
HAZOP example is well suited to compare the quality of the results. The numbers of identified
causes and consequences of the HAZOP worksheets are shown in Figure 6 and Figure 7. To
determine the number of concepts identified, chains of causes and consequences are counted. For
instance, the scenario with the super cause “ExternalFire” and the cause “ThermalExpansion”
would count as one in Figure 6. The scenario with the effect “Rupture” and the consequence
“LossOfPrimaryContainment, FormationOfExAtmosphere, Explosion” would count as two
consequences in Figure 7. This means that intermediate events in the consequence chain, such as
“FormationOfExAtmosphere” are not counted separately.
Overall the number of automatically identified causes (own: 34, original: 22) and consequences
(own: 25, original: 13) is higher than in the original HAZOP. Within the proposed approach, more
causes and consequences have been identified, especially regarding the “high temperature”
deviation. In the original worksheet, more causes regarding the “elsewhere flow” deviation have
been identified. In both HAZOP approaches, many scenarios consider a loss of containment. This
can lead to a fire or even an explosion due to the flammability of hexane. Within the original
HAZOP worksheet, the scenario of an explosion was not considered, which explains part of the difference in the number of consequences.
Table A-1: Automatically created extended HAZOP worksheet for the deviations “other than composition” and “elsewhere flow”
Table A-2: Automatically created extended HAZOP worksheet for the deviations “high pressure” and “low pressure”
Table A-3: Automatically created extended HAZOP worksheet for the deviations “high temperature” and “low temperature”
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Henrique M. Paula, Founder & President, Galvani Risk Consulting, LLC, The
Woodlands, TX USA
Donald K. Lorenzo, P.E., Process Safety Instructor, Knoxville, TN USA
Marcelo Costa Jr., Risk Engineer, Orlândia, SP, Brazil
Abstract
Risk management involves the application of one or more of a variety of inter-related techniques
(hazard and operability [HAZOP], hazard identification [HAZID], facility risk review [FRR], etc.).
Most of these applications result in recommendations or suggestions for risk reduction. In fact,
the number of recommendations is often significant (well over 100 in many cases and thousands
in a case study discussed in this paper). A large number of recommendations is beneficial because
each recommendation provides an opportunity for risk reduction and/or other actions for asset
improvement. However, a large number of recommendations can overwhelm the managers
responsible for their implementation, making it difficult to decide what to do and/or when to do it.
Additionally, there may be overlap or similarities of recommendations from the application of
different techniques, sometimes confusing their review and resolution.
Cost benefit analysis is a powerful tool to help managers sort through the recommendations and
effectively/efficiently prioritize them. It consists of evaluating the risk reduction and the estimated
cost associated with each recommendation, including capital expenditures (CAPEX) and
operational expenditures (OPEX). This paper provides a simple, efficient, and effective approach
for performing cost benefit analysis. This method is not intended to replace more detailed
methodologies. Rather, it is a complementary tool particularly useful for applications with many
recommendations.
This paper summarizes a cost benefit analysis approach based on: (a) training course manuals on process
hazard analysis (PHA) and quantitative risk assessment (QRA) [1], (b) research performed for the U.S.
Coast Guard [2], and (c) several studies conducted by the authors for oil production and refining companies
[3] [4].
The basis of the approach is that the priority of a recommendation is (a) directly proportional to the risk
reduction expected from the implementation of the recommendation and (b) inversely proportional to the
cost of implementation:
The expected risk associated with continuing to operate under the current situation (i.e., if
the recommendation is not implemented)
Minus the expected risk associated with continuing to operate after the changes are
implemented (i.e., if the recommendation is implemented)
If we assume that the risk associated with a scenario is the product of (a) the frequency of occurrence and
(b) the consequence(s), then,
ΔRisk = Σ_{n=1..D} F_n,before × C_n,before − Σ_{n=1..D} F_n,after × C_n,after
Where:
D = number of accident scenarios affected by the recommendation
Fn = frequency of accident scenario n
Cn = consequences of accident scenario n
The consequences of interest may include any combination of a variety of concerns, including worker
safety, public safety, environmental, business interruption, reliability, and so forth.
The frequency and consequences before the implementation of a recommendation (current situation) are
evaluated during the hazard analysis [5], and this is typically accomplished using risk matrices. The
frequency and consequences after the implementation of each recommendation are evaluated as follows,
for each recommendation individually:
Identify all risk scenarios that would be affected by the recommendation. That is, the
risk review team identifies the scenarios associated with the risks that the recommendation
is trying to reduce. A recommendation may impact a scenario by reducing the frequency
of the scenario, by mitigating one or more consequence(s) associated with the scenario, or
by doing both.
Assess the expected impact that each recommendation has on (a) the frequency and
(b) the consequences associated with each affected scenario. This involves several
individual evaluations for each recommendation and is accomplished by selecting Impact
Categories from Table 1. Specifically, the team selects the impact category that best
applies to the frequency of the scenario and additionally an impact category for each
consequence of interest.
Evaluate the risk after the implementation of a recommendation. This is accomplished
by multiplying the original assignments of the frequency / consequences for each scenario
by the corresponding Risk Reduction Factor from Table 1.
Table 1 — Example Categories for Assessing the Benefits of Implementing Recommendations [2]
Impact Category | Benefits of Implementing Recommendations | Risk Reduction Factor
1 | No Impact: The recommendation does not help reduce the frequency or a specific consequence of a scenario | 1.00
2 | Small Impact: The recommendation helps reduce the frequency or a specific consequence of a scenario, but this reduction is relatively small (no more than about 10%) | 0.90
3 | Small to Medium Impact: The recommendation definitely helps reduce the frequency or a specific consequence of a scenario (as much as 50%) | 0.50
4 | Medium to Major Impact: The recommendation significantly reduces the frequency or a specific consequence of a scenario (as much as 90%) | 0.10
5 | Major Impact: The recommendation essentially eliminates the frequency or a specific consequence of a scenario (more than about 99%) | 0.01
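To make this bookkeeping concrete, the short Python sketch below applies the Table 1 risk reduction factors to the scenarios affected by a single hypothetical recommendation and computes ΔRisk and a benefit-to-cost index (ΔRisk divided by the implementation cost, consistent with the BCI definition used later in this paper). The scenario frequencies, consequence values, impact categories and cost are invented for illustration, and a single consequence measure per scenario is used for brevity.

# Hedged sketch: Delta-Risk and BCI for one recommendation using the Table 1
# risk reduction factors. All scenario data and the cost figure are illustrative.

RISK_REDUCTION_FACTOR = {1: 1.00, 2: 0.90, 3: 0.50, 4: 0.10, 5: 0.01}

# Scenarios affected by the recommendation: frequency (per year), a single
# consequence measure, and the impact categories selected by the review team.
scenarios = [
    {"freq": 1e-3, "cons": 10.0, "freq_cat": 3, "cons_cat": 2},
    {"freq": 5e-4, "cons": 50.0, "freq_cat": 4, "cons_cat": 1},
]

risk_before = sum(s["freq"] * s["cons"] for s in scenarios)

risk_after = 0.0
for s in scenarios:
    f_after = s["freq"] * RISK_REDUCTION_FACTOR[s["freq_cat"]]   # reduced frequency
    c_after = s["cons"] * RISK_REDUCTION_FACTOR[s["cons_cat"]]   # mitigated consequence
    risk_after += f_after * c_after

delta_risk = risk_before - risk_after
cost = 250_000.0              # assumed total implementation cost (CAPEX + OPEX)
bci = delta_risk / cost       # benefit-to-cost index: risk reduction per unit of cost

print(f"Risk before = {risk_before:.4f}, risk after = {risk_after:.4f}")
print(f"Delta Risk = {delta_risk:.4f}, BCI = {bci:.2e}")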
The methodology presented in the previous section to evaluate ∆Risk does not address some potential
contributors to risk:
The expected risk associated with making the modifications suggested by the
recommendation (or simply the modification risk)
The possibility that the recommendation will increase risk by creating, for example, new
hazards
We added quotation marks to the word “limitations” to recognize that although the methodology does not
consider these contributors, these risks are in fact outside the scope of a cost benefit analysis. This is
discussed in more detail next.
Regarding the modification risk, suppose the implementation of a recommendation requires construction
at the facility. Also, suppose that at least a portion of the process at this facility continues to operate during
construction. It is possible that an accident could occur during construction (e.g., a crane accident that
damages process equipment and causes a release of hydrocarbons). Accidents may also result from other
deficiencies or errors during the implementation of the recommendation, including during the phases of
design, engineering, procurement, manufacturing, training of operations/maintenance staffs, and several
others.
Regarding the possibility that a recommendation may increase the risk of some scenarios or create new
hazards, consider, for example, a recommendation to add a fire sprinkler to reduce the risk of burning a
building down. While the proposed sprinkler system should reduce the risk of fire, it may also increase the risk of water damage (e.g., from inadvertent operation of the sprinkler).
Our methodology does not consider the modification risk or new hazards because of the difficulty in
evaluating them at the time that we perform the cost benefit analysis. To properly evaluate these risks, the
analysts would need detailed information about the change (design documentation, construction plans,
updated P&IDs, revised operating procedures, etc.), and this information is unlikely to be available when
performing the cost benefit analysis.
In general, experienced safety/risk analysis teams try to account for potential detrimental effects of the
recommendations. In addition, operating companies have management systems in place to ensure adequate
controls of modifications (e.g., a Management of Change [MOC] system), including procedures for all
activities associated with the implementation of the recommendations. In the case of adequate controls, the
risk of implementing the recommendations should be small compared to the other risks addressed here. At
any rate, we have not accounted for this issue in our previous applications of the approach presented in this
article.
The expected cost is evaluated using Cost Categories and Cost Ranges, as illustrated in Table 2. In selecting
a cost category for each of the recommendations, the review team considers the total cost associated with
the recommendation, including capital expenditures (CAPEX) and operational expenditures (OPEX)
related to design, engineering, procurement, construction, installation, training (e.g., operational and
maintenance staffs), etc.
Obviously, this method for cost evaluation is only an approximation based on the experience of the review
team. A precise cost can only be estimated after managers review each recommendation and decide the
specific action that should be taken to address it. That is, the cost estimate depends on the details of
implementation of each recommendation, and this information is generally not available when performing
the cost analysis. However, ranges like those in Table 2 are broad enough that it is possible to select a
reasonable cost category even without these details.
2. CASE STUDY 1
Over a period of a little over one year, a major oil company conducted a series of safety, hazard, and risk
studies for nine production facilities in the Middle East. Most of these facilities were gas oil separation
plants (GOSPs) with typical equipment such as manifolds, separators, coalescesr, desalters, gas
compressing, oil pumping, control room, electrical substations, chemical injection systems, water treatment
facilities, oil storage, pipelines and so forth. The motivation for these studies came from internal company
guidelines, insurance requirements and recommendations from incident investigation reports for these are
other company operating facilities. There were five consequences of interest in this case study – people
(worker and public), assets, environmental, production (i.e., business interruption), and company’s
reputation.
Cost Category | Cost Range¹
5 | Up to US $10,000
To help satisfy the company’s purpose and specific objectives, the company retained safety and risk
consultants to conduct a total of nine studies for each of the nine sites (i.e., a total of 81 studies):
1. Hazard and operability analysis (HAZOP) and Facility risk review (FRR)
2. Hazard identification (HAZID)
3. Quantitative risk assessment (QRA), including event/fault tree, vulnerability analysis, and
consequence analyses
4. Safety integrity level (SIL) assessment
5. Hazardous area classification review (HACR) and assessment of electrical/instrumentation
equipment
6. Control of substances hazardous to health (COSHH) assessment
7. Permit to work (PTW) review
8. Design review
9. Asset integrity review (AIR)
1 In using these cost categories in oil & gas applications, the analysts typically include all applicable CAPEX/OPEX
costs (engineering, procurement, manufacturing, installation, operation, maintenance, training, etc.) for a period of
time (e.g., five years).
The FRR [5] was of particular interest in the cost benefit analysis. As part of the FRR, the analysis team
developed a list of risk scenarios for each facility, which included incidents that can lead to the release of
hazardous materials with potential for fires, explosions etc. These, in turn, can generate the consequences
of interest (impacts on people, assets, environment, production, and company’s reputation).
The analysis team then assessed the frequency and consequences of each scenario. Figure 1 illustrates the
results of the FRR for one of the nine facilities. The matrix in Figure 1 considers the impact “production”
(i.e., business interruption), and there were similar matrixes for the other four consequences of interest.
[Risk matrix for impact on production: scenario numbers plotted by frequency category, 1 (A) through 5 (E), versus impact-on-production category.]
The nine safety and risk studies generated an average of 260 recommendations for each of the nine plants
for a total of over 2,300 recommendations. But it was clear that there were overlaps and similarities among
several of the recommendations from each of the nine distinct studies (HAZOP, HAZID, QRA, etc.). Thus,
it was convenient to group or consolidate recommendations that addressed similar or related issues. The
consolidation provided two benefits: (a) reduced the number of recommendations for the cost benefit
analysis and (b) facilitated the work of managers by grouping similar issues for review and resolution.
For example, recommendations 1.43, 1.97, 1.113, 2.18, 2.19, 2.21, 3.3 and 4.9 addressed issues related to
the fire protection system. Note that the recommendation number starts with the number of the study and
finishes with the unique identifier from that study. For example, Recommendation 1.43 means the 43rd
recommendation from the first study, which was a HAZOP study. Recommendation 2.18 is the 18th
recommendation from the HAZID. Because 1.43, 1.97, 1.113, 2.18, 2.19, 2.21, 3.3 and 4.9 all addressed
the same issues, it was convenient to group them for the purpose of the cost benefit analysis. For illustration
purposes, the combined description of this group of recommendations is as follows:
The HAZID, HAZOP/FRR, QRA and SIL studies identified potential deficiencies in the
firefighting capabilities at the facility. The specific recommendations for the fire water system,
pumps, distribution, etc., include:
Performing an engineering review of the entire fire water supply and distribution
system to assess the adequacy of: (1) fire water pumps regarding their efficiency
(i.e., head pressure); (2) pump start-up (i.e., manual vs. automatic); (3) the capacity
of fire water tank; (4) fire pump drive-redundancy (i.e., diesel/electric), (5) the
deluge systems on the tanks, (6) the design criteria (especially materials of
construction) of the rupture disks of the foam pourer systems on the tanks. In this
review, consider whether the fire water pumps should be replaced
Adding emergency cooling systems (e.g., fire curtain, sprinklers) for selected
equipment and providing long-range fixed monitors at critical locations
Developing / improving a testing program for the entire system and components,
including written procedures for testing and test acceptance criteria
Developing / improving a maintenance program for the entire system and
components, including written procedures and schedules for performing the
different maintenance tasks
Generating a complete set of comprehensive P&IDs for the entire system and
components
The consolidation reduced the number of recommendations to a little more than 900, which is about 40%
of the original 2,300 or so recommendations.
Tables 3 and 4 illustrate the results from the cost benefit analysis. Both tables show the top
recommendations only as there were about 100 consolidated recommendations for this facility. The first
column in these tables shows the recommendation numbers. The second column shows the impact category
(using the definitions from Table 1) applicable to the frequency of each scenario. The next columns show
the impact category (again from Table 1) for each of the 5 consequences of interest: people (worker and
public), assets, environmental, production, and company’s reputation.
The appropriate cost category (using the definitions from Table 2) appears next in Tables 3 and 4. Thus,
for each recommendation, it was necessary to make one assessment of impact category on the frequency of
the event, five assessments of impact category for the consequences of interest, and one assessment of cost
category, resulting in 7 assessments per recommendation per risk scenario. Assuming an average of 100
recommendations and 63 risk scenarios per each of the 9 facilities, the total number of assessments was 7
x 100 x 63 x 9 = 396,900 (about 400,000). The number of assessments would have exceeded 1 million
without the consolidation of the recommendations, thereby demonstrating the usefulness of the
consolidation.
The last two columns in Tables 3 and 4 show the risk reduction (∆ Risk) and the BCI, as defined in Section
2. In these tables, the risk reduction and BCI for a group of recommendations represent the combined
impact of all recommendations in the group, not the impact of each one of them individually. The evaluation
of the risk reduction and BCI was accomplished using Excel in this case study.
One final remark about these results is that the ranking by risk reduction and BCI are often different for
each recommendation. For example, Recommendation 1.45 is ranked ninth by risk reduction but first by
BCI. The appropriate ranking for the recommendation depends on management objectives and will be
discussed in more detail in the conclusion section.
This was a reliability study of a manufacturing process using fault tree analysis. Compared to Case Study
1, this was a more focused (narrower scope) and deeper (more detailed) analysis. The manufacturing
process produces a consumer product that reaches millions of customers in the United States of America
daily. The objective of the analysis was to evaluate the frequency that products with a specific type of
defect would reach the consumers. The company had established a quantitative goal for this frequency and
believed to be meeting or exceeding the goal. However, the company decided to conduct the study to further
verify compliance with the goal as well as to identify opportunities for further improvements.
Figure 2 shows the results of the study regarding the potential implementation of four recommendations (1
through 4). The base case is the current situation, and it is arbitrarily set at 100% in the figure. The
implementation of Recommendations 1, 2, 3, and 4 result in the reduction of the frequency of defects to
98%, 98%, 14%, and 34% of the current frequency, respectively. Recommendations 1 and 2 have small
impact in reducing the frequency of defects. However, Recommendations 3 and 4 are effective.
[Bar chart: "Impact of Implementing Recommendations: Reduction in the Frequency of Undesirable Events" – Base Case: 100%; Rec. 1: 98%; Rec. 2: 98%; Rec. 3: 14%; Rec. 4: 34%; Recs. 1, 2, and 3: 11.4%; Recs. 1, 2, 3, and 4: 11.3%; Recs. 1, 2, 3, and reduced redundancy: 16%.]
Figure 2 – Results for Case Study 2
Since there were only 4 recommendations, Case Study 2 went beyond Case Study 1 and evaluated the risk reduction for combinations of recommendations. This would be difficult to do in Case Study 1 because the number of recommendations is much larger, even after the consolidation. As shown in Figure 2, implementing Recommendations 1, 2, and 3 results in a reduction to 11.4% of the current frequency of the undesirable consequence. This is a small improvement over implementing Recommendation 3 by itself, but it was worthwhile because the cost of Recommendations 1 and 2 was small. Adding Recommendation 4 to this combination brings almost no benefit (i.e., the frequency drops to 11.3% instead of 11.4%), even though Recommendation 4 by itself had a significant impact. The reason is that Recommendations 3 and 4 address the same issues in different ways. Thus, Recommendation 4 has almost no beneficial impact if Recommendation 3 is also being implemented.
One final observation regards the last bar in Figure 2. It also consisted of implementing
Recommendations 1, 2, and 3. However, there was redundant equipment in this system that was not
providing meaningful protection. The combination of implementing these 3 recommendations and
removing the redundancy resulted in a frequency reduction to 16% of the current value. Since this was
strictly an issue of defective products (i.e., no safety considerations) and since the system already exceeded
the reliability goal, this alternative was attractive because of the cost reduction. In the end, the company
achieved an overall reduction of the frequency to 16% of the current value, and the savings from the removal
of redundancy more than paid for the cost of the study.
5. CONCLUSIONS
Two key results of interest are the rankings of the recommendations by the expected risk reduction (Table
3) and by the benefit to cost index (BCI) (Table 4). Note that the higher the risk reduction, the higher the
motivation to implement the recommendation because it provides greater potential to reduce the overall
risk to a lower level. That is, risk reduction helps identify the most effective recommendations. In general,
significant reduction in the risk at the facility can only be achieved by implementing at least a few of the
recommendations ranked high by risk reduction.
The BCI is the ratio of the risk reduction to the cost of implementation of the recommendation. Note that
the larger the BCI for a recommendation, the greater the risk reduction per unit of capital investment. That
is, BCI helps identify the most efficient recommendations (i.e., most risk reduction per monetary unit).
Therefore, high BCI often implies “quick wins.” However, high BCI does not necessarily guarantee a
significant reduction in the overall risk.
It is crucial that managers understand the definitions and meanings of the two measures provided in Tables
3 and 4 because, as illustrated in Section 4, the rankings provided by risk reduction and BCI may be
different. That is, a recommendation may receive different priority depending on whether managers want
to focus on efficiency or effectiveness. Furthermore, it is an iterative process, because given the risk
reduction achieved by implementing the highest priority recommendations, the risk reduction and BCI will
likely be different (smaller) for subsequent recommendations.
In summary, the cost benefit methodology presented here offers an approach to sort through the
recommendations from safety, hazard, and risk evaluations and prioritize them effectively and efficiently.
Its simplicity makes it particularly useful for applications with a large number of recommendations.
6. REFERENCES
[2] PAULA, H. et al., Investigation of Fuel Oil/Lube Oil Spray Fires on Board Vessels, Volume 1 (Main
Report and Appendixes H through L) and Volume 2 (Incident Databases [Appendixes A through G]), U.S.
Department of Transportation, U.S. Coast Guard Headquarters, Washington, DC, October 1998.
[3] PAULA, H., “Cost Benefit Analysis”, Internal Publication, ABS Group, originally published in October
2006 and updated in October 2010.
[4] PAULA, H., LORENZO, D., AND COSTA JR., M. “An Efficient and Effective Approach for
Performing Cost Benefit Analysis,” ABRISCO Congress and PSAM Topical Meeting, Rio de Janeiro,
November 23-25, 2015.
[5] CASADA, M., KIRKMAN, J. AND PAULA, H., “Facility Risk Review as an Approach to Prioritizing
Loss Prevention Efforts”, Plant/Operations Progress, Vol.9, No.4, October 1990.
Table 3 — Recommendations Ranked by Risk Reduction²
Recommendation Number(s) | Impact: Frequency | Impact: People | Impact: Assets | Impact: Environmental | Impact: Production | Impact: Reputation | Cost Category | Risk Reduction | BCI
1.111, 2.22, 9.7 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 9% | 8.8E-02
1.115 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 9% | 8.8E-02
1.46, 9.8 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 9% | 8.8E-02
9.3 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 9% | 8.8E-02
1.38 | 5 | 1 | 1 | 1 | 1 | 1 | 4 | 6% | 6.0E-01
2.10, 5.1 | 1 | 2 | 2 | 2 | 2 | 2 | 3 | 5% | 4.6E-02
² This table presents all recommendations with Risk Reduction equal to or greater than 5%.
Table 4 — Recommendations Ranked by BCI³
Recommendation Number(s) | Impact: Frequency | Impact: People | Impact: Assets | Impact: Environmental | Impact: Production | Impact: Reputation | Cost Category | Risk Reduction | BCI
1.45 | 4 | 1 | 1 | 1 | 1 | 1 | 4 | 16% | 1.6E+00
1.6 | 2 | 1 | 1 | 1 | 1 | 1 | 4 | 9% | 9.3E-01
1.41 | 3 | 1 | 1 | 1 | 1 | 1 | 4 | 9% | 9.2E-01
1.42 | 3 | 1 | 1 | 1 | 1 | 1 | 4 | 9% | 9.2E-01
1.49 | 3 | 1 | 1 | 1 | 1 | 1 | 4 | 9% | 9.2E-01
1.38 | 5 | 1 | 1 | 1 | 1 | 1 | 4 | 6% | 6.0E-01
2.1, 3.1 | 1 | 2 | 2 | 2 | 2 | 2 | 4 | 0% | 4.6E-01
1.43, 1.97, 1.113, 2.18, 2.19, 2.21, 3.3, 4.9 | 1 | 2 | 3 | 2 | 3 | 3 | 3 | 35% | 3.5E-01
1.37 | 2 | 1 | 1 | 1 | 1 | 1 | 4 | 3% | 3.0E-01
1.48, 1.112, 2.4, 3.5 | 1 | 2 | 2 | 2 | 2 | 2 | 4 | 3% | 3.0E-01
1.73 | 3 | 1 | 1 | 3 | 1 | 1 | 3 | 27% | 2.7E-01
1.69 | 2 | 1 | 1 | 1 | 1 | 1 | 5 | 0% | 2.3E-01
1.75 | 2 | 1 | 1 | 1 | 1 | 1 | 4 | 2% | 2.0E-01
1.1a, 1.7 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 19% | 1.9E-01
1.1b, 1.7, 1.54, 1.109, 1.116, 9.16 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 19% | 1.9E-01
1.3, 1.22, 1.29, 1.55, 1.99, 2.8, 2.9, 9.4, 9.13 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 19% | 1.9E-01
1.24, 5.3 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 19% | 1.9E-01
1.40 | 2 | 1 | 1 | 1 | 1 | 1 | 4 | 2% | 1.8E-01
1.13, 1.39, 1.47, 2.7, 2.23, 7.C, 9.19 | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 18% | 1.8E-01
1.26, 4.3 | 2 | 1 | 1 | 1 | 1 | 1 | 4 | 2% | 1.8E-01
1.27 | 2 | 1 | 1 | 1 | 1 | 1 | 4 | 2% | 1.8E-01
1.28 | 2 | 1 | 1 | 1 | 1 | 1 | 4 | 2% | 1.8E-01
1.74 | 1 | 3 | 3 | 3 | 3 | 2 | 3 | 18% | 1.8E-01
1.71 | 3 | 1 | 1 | 1 | 1 | 1 | 4 | 2% | 1.7E-01
³ This table presents all recommendations with BCI equal to or greater than 10% of the largest BCI in the table.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Does your facility have the flu? Use Bayes rule to treat the problem
instead of the symptom
Keith Brumbaugh
aeSolutions
Millennium Tower, 10375 Richmond Ave, #800, Houston, TX 77042
[email protected]; [email protected]
Introduction
Is our industry addressing the problems facing it today? We idealize infinitesimally small event
rates for highly catastrophic hazards, yet are we any safer? Have we solved the world’s
problems? Layers of protection analysis (LOPA) drives hazardous event rates to 10^-4 per year or less, yet industry is still experiencing several disastrous events per year.
If one estimates 3,000 operating units worldwide and industry experiences approximately 3 major incidents per year, the true industry accident rate is a staggering 3/3,000 per year (i.e., 10^-3). All the while our LOPA calculations are assuring us we have achieved an event rate of 10^-6.
Something is not adding up! Rather than fussing over an unobtainable numbers game, wouldn't it be wiser to address protection layers which are operating below requirements? We are (hopefully) performing audits and assessments on our protection layers and generating findings. Why are we not focusing our efforts on the results of these findings? Instead, we demand more bandages (protection layers) for amputated limbs (LOPA scenarios) rather than upgrading those bandages to tourniquets. Perhaps the dilemma is that we cannot effectively prioritize our corrective actions based on findings. Likely we have too much information, and the real problems are lost in the chaos. What if there were a way to decipher the information overload and visualize the impact of our shortcomings? Enter Bayes rule to provide a means to visualize findings through a protection layer health meter approach, to prioritize action items and staunch the bleeding.
Keywords:
Bayes, Bayes rule, Bayes theory, LOPA, IPL, SIS, SIF, SIL Calculations, systematic failure,
human factors, human reliability, operations, maintenance, IEC 61511, ANSI/ISA 61511,
hardware reliability, proven in use, confidence interval, credible range, safety lifecycle, functional
safety assessment, FSA stage 4, health meter.
The State of Our Industry
The objectives of this paper are to look at some issues with the contemporary safety lifecycle industry and provide solutions. A major trend across industry has been an overall decrease of the tolerable catastrophic event likelihood (i.e., multiple fatalities) down to one such event every 100,000, or even 1,000,000 years (i.e., targets of 10^-5 and 10^-6, respectively). This lowering of the target has the good intention of making a facility safer. After all, superb targets should make superb facilities, should they not? The downside of extravagant targets is that they are harder to achieve. Realistically, these targets are impossible to achieve once one considers the real uncertainties of physical systems (systematic error). It is hard enough meeting the targets already set; how will making the targets even smaller help matters?
Smaller tolerable risk targets will have the result of producing more Independent Protection Layers (IPLs) with greater integrity requirements. This leads to a "forest-for-the-trees syndrome" where a plant is trying to manage more IPLs than it can handle, missing the bigger picture of plant health and safety (e.g., key performance indicators). If every IPL has its own multifaceted maintenance and management requirements, how can a facility ever manage all of its responsibilities effectively?
The solution in this paper is simple: a facility should focus on managing what it is capable of managing. Minuscule targets have the good intention of making everyone safer by providing more protection, but throw enough IPLs at a problem and one will soon reach the tipping point, the straw that broke the camel's back. When everything is a problem due to too much information,
nothing will be managed effectively. Furthermore, when there are fewer IPLs to distract a plant
management team, the team is free to focus on the real problems. If one can identify problems,
then effective management will occur. One of the best ways to identify problems is with a
periodic health check of the IPLs. This paper will present one such health check called the
“Bayes truth meter.” The concept with the Bayes truth meter is to strip away all guess work and
generic data, instead showing true IPL health reflecting a facility’s own systematic biases.
To back up the claim that targets of 10^-5 and 10^-6 are unobtainable, consider figure 2, which is a list of current investigations from the Chemical Safety Board (CSB) over the course of the last year (circa August 2020).
[Figure 2: "Current Investigations" list from the CSB, circa August 2020.]
There were three major multiple fatality accidents (and one near miss) over the course of one
year. This sort of catastrophe would likely earn the maximum hazard mitigation target from any
client following OSHA. If targets of 10^-5 or 10^-6 were achievable, at least 300,000 to 3,000,000 operating facilities would be needed to average out the three major accidents per year. Unfortunately, the current estimated number of petrochemical facilities in the United States is around 2,300 (keep in mind the CSB only covers US operations).
If the number of facilities is generously rounded up to 3,000 and the three catastrophic accidents are divided by that number, the average industry catastrophic event rate is 10^-3, far short of the 10^-5 or 10^-6 targets! To put this into visual form, consider figure 3; pretend the 100 boxes represent the industry's 10^-5 mitigation target.
Figure 3 - Industry target, 100 boxes represent a hypothetical 10^-5 (10^-6 would be 1,000 boxes!)
Figure 4 represents how far off the industry is from the mitigation target (10^-3):
Figure 4 - Reality, 1 box represents where the industry is, 10^-3! Compare to 10^-5 above
As the graphics show, the industry accident rate is off target by at least 2 orders of magnitude (3 orders if 10^-6 is considered). This should be a wakeup call that the industry is missing something major. Something interesting to note is that a 10^-3 catastrophic event rate is the same as the lower bound of Human Error Probability (i.e., HEP). HEP is systematic error and is not typically considered in any calculations outside of Human Reliability Analysis (HRA).² Maybe this is the factor the industry is missing?
Returning to the ideas touched on previously of increasing risk reduction targets to make everything "safer": there are good intentions behind ultra-low targets; however, meeting these obscene targets would require throwing every feasible protection layer that can be mustered at the problem, hoping that something "sticks" (i.e., is effective). This leads to safety barrier overload. The idea is that when there are too many things to manage, and everything is important,
how can the signal be separated from the noise? If everything is important, what is drastically important? What is behaving fine and doesn't need attention? What is behaving fine right now but might be a problem a few years down the line? What has no chance of working when demanded, putting the facility at risk right now, and is going to drive the system over the edge of a cliff?
² For more information on HRA and Human Factors, consider the following two papers: Conducting a Human Reliability Assessment to support PHA and LOPA, Dave Grattan, https://www.aesolns.com/post/conducting-a-human-reliability-assessment-to-support-pha-and-lopa; and Can we achieve Safety Integrity Level 3 (SIL 3) without analyzing Human Factors?, Keith Brumbaugh and Dave Grattan, https://www.aesolns.com/post/can-we-achieve-safety-integrity-level-3-sil-3-without-analyzing-human-factors
When there are too many protection layers to manage due to astronomical LOPA targets, no one will know which warning signs are important and which are just a nuisance. Protection layer overload leads to a "forest-for-the-trees syndrome." As an example, the most sophisticated protection layers designed by top-dollar consulting companies will be all for naught once they fall out of maintenance. How good is the gold-plated protection layer when the valve has polymerized shut because it was never tested? What might lead to maintenance oversights? Perhaps there were too many protection layers with not enough manpower to manage them.
Another problem with astronomical protection layer targets is that Safety Instrumented Functions (SIFs) will need to be applied with high safety integrity calculations to meet a target. These calculations will "prove" a protection layer is good enough to close a 1-in-10,000-year gap, yet is that number real? Theoretically, a Safety Integrity Level (SIL) calculation is correct if we consider hardware failures alone and the system operates in a vacuum, yet as soon as a human touches the system, good luck maintaining that integrity level without highly sophisticated management practices. And the problem only gets worse as more high integrity protection layers are added to the facility.
All of this naysaying may seem blasphemous coming from a safety system engineer, implying lofty risk reduction targets cannot be met, but has reality been considered? As previously mentioned, the industry catastrophe rate is sitting around the lower bound of Human Error Probability (10^-3). This is key to understanding what has not been addressed in traditional LOPA math and SIL calculations. It is the author's opinion that some very major degradation factors are being missed when modelling protection layer integrity. The elephant in the room is systematic error.
This is all not to say that the industry is in a bad state. There are a lot of good people out there doing important work, trying to make everyone safer. Their contributions should not be discounted, as they are all based on lessons learned in blood. Yet it feels like the industry has gotten as far as it can with its current practices, floundering between moving forwards or backwards depending on who is asked. The next step forward in process safety needs to be in the best direction possible.
The Bayesian approach allows matching optimistic rare event assumptions and IPLs with real-world observations, turning fantasy into reality. This approach allows one to base plant health metrics on observed evidence. Otherwise, the industry is stuck using generic data which is not specific to the facility's own systematic biases. If typical SIL calculation modelling data is based on industry averages, figure 4 shows how good the industry average really is.
A Bayes approach will likely show a facility isn't as good as it hoped it was. The problem is that inductive reasoning has been used to predict catastrophic rare events. This is like the black swan theory: historically, the world used to think all swans were white since a black swan had never been observed in nature, but then, lo and behold, they were eventually discovered. Bayes rule would have allowed factoring in the possibility of a black swan occurring. A black swan would always be in the realm of possibilities, and as more evidence was gathered, such as feathers, third party sighting reports, occurrences in other similar species, etc., the model could have been updated to better predict where reality lay. The traditional model would have said "There have been millions to billions of sightings of white swans, no black swan has ever been seen in nature, therefore there is no black swan."
Back to the process safety industry, one can make a similar comparison between current industry practice and a Bayesian approach. The current industry approach is based on frequentist statistics. This approach requires enormous amounts of data in order to derive a conclusion, such as the millions to billions of white swans and no black swans. If the analogy is given a little thought, all that is known with certainty is that there is a good chance of operating in a safe state, but there is no idea of the dangerous state: how bad it is, how it would unfold, and how likely it is. The only way to know the answer to the dangerous-state questions is to collect data (which is likely a trick that can only be performed once). A frequentist approach requires enormous amounts of data to definitively state a comparative frequency. It should be obvious that a facility will not have, nor will it ever want, enormous amounts of data for a rare catastrophic event.
Contrast the frequentist approach with the Bayesian approach. The Bayesian approach allows the input of subjective data in a logical manner. This method allows all relevant evidence to be factored into the model. To again use the black swan analogy: things like feathers, third-person accounts, and occurrences in similar species correspond directly to near misses, audit results, and similar process accidents. Bayes allows one to factor in systematic biases and errors. Frequentist methods cannot do this.
Bayes rule can answer the question, "Is a catastrophic event rate of 10^-6 obtainable?" Most likely, if Bayes rule were embraced, the results would show that the industry is aiming for something that is unachievable, "biting off more than one can chew."
When Bayes shows that 10^-6 can't be met, a facility will need to step back and ask, "What are we really trying to do here?" The answer that makes the most sense is that the facility is trying to make the most money while not "going boom." Since the facility's resources (time and money) are limited, the "not going boom" part needs to be focused on the systems that need the most help while not spending all the surplus resources. Bayesian methods can provide an outlook on how each individual protection layer is behaving. Advance warning can be given based on evidence, staunching the bleeding of a bad-acting barrier.
Implementing Bayes in process safety can be as simple or as difficult as one cares to make it. The author's previous paper 3 went over a simple approach to implementing Bayesian methods into the management of Safety Instrumented Functions. It is suggested to review the referenced paper for further details as well as a rough "how to" example. The approach does not need to be limited to SIFs; all protection layers are ripe for a Bayesian management approach.
3 What is Truth – Do our SIL calculations reflect reality, Keith Brumbaugh 2019, https://www.aesolns.com/post/what-is-truth
Begin a Bayes model with any protection layer and any theoretical achieved Probability of Failure on Demand (PFD). Convert the achieved PFD point value, such as 0.01, into a probability distribution (a Poisson distribution, for example). If the true distribution is known, that would be preferable, but oftentimes it is not.
The probability distribution should represent all of the possible PFD values the protection layer could hold. The boundary of the distribution should contain all realistic values the protection layer could ever take. For example, it might be expected that a SIF can operate somewhere between a PFD of 0.1 and 0.0001. The probability distribution also assigns the likelihood that the protection layer holds any particular PFD value.
The initial probability distribution is known as the prior. The Bayesian prior distribution is then
updated over time with new evidence to form a posterior. Below is a conceptualized
representation of a SIF which has undergone a Bayesian conversion and updating process.
There are two types of evidential data that can be used. The first is quantitative data. Quantitative data can be proven absolutely true or false (a Bernoulli trial): the protection layer is subjected to a test and the result is recorded. Items that fall under this category are proof test results and actual demands on the system (planned or unplanned).
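To make the mechanics concrete, the sketch below discretizes the candidate PFD values onto a log-spaced grid, seeds a prior around a traditional SIL calculation result, and applies a quantitative update from proof test outcomes through a binomial likelihood. It is a minimal illustration only: the lognormal prior shape stands in for the Poisson seeding mentioned above, and the grid size, prior spread, and example numbers are assumptions rather than part of any referenced method.

```python
import numpy as np
from scipy import stats

# Log-spaced grid of candidate PFD values (RRF from 10 to 10,000)
pfd = np.logspace(-4, -1, 400)

# Prior seeded from a traditional SIL calculation result.  The lognormal
# shape here is a stand-in (an assumption) for the Poisson seeding
# described in the text; s controls how spread out the prior is.
pfd_calc = 1.3e-2                                  # e.g. 77 RRF from the SIL calculation
prior = stats.lognorm.pdf(pfd, s=1.0, scale=pfd_calc)
prior /= prior.sum()                               # normalize over the grid

# Quantitative (Bernoulli) evidence: n proof tests with k observed failures
n_tests, n_failures = 1, 1
likelihood = stats.binom.pmf(n_failures, n_tests, pfd)

# Bayes rule on the grid: posterior is proportional to prior times likelihood
posterior = prior * likelihood
posterior /= posterior.sum()
```

With one test and one failure, the posterior mass shifts toward larger PFD values, which is the behavior described for curve "2" in the worked example later in this paper.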
The other type of evidential data is qualitative. Qualitative data is based upon expert opinion. The application of qualitative data may seem subjective, but the precedent for using qualitative data has already been set in process safety. Today it is accepted industry practice to use subjective judgements in LOPA, and don't forget that LOPA drives everything (most of the time)! If qualitative judgements are documented and justified, the industry has no problem with them. A qualitative Bayesian update can fall under the same scrutiny. So long as the application of a qualitative Bayesian update is made to be repeatable and predictable, there should be no issues. This can be achieved with a repeatable checklist from a common assessment task. Data which fit the qualitative bill are audits, Functional Safety Assessments (FSA), and Human Reliability Analysis. All of this data is aimed at discovering systematic errors through repeatable and well-established practices.
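The paper does not prescribe the arithmetic behind a qualitative update, so the helper below is only one hypothetical way to fold a favorable qualitative finding (for example, a clean MOC functional safety assessment) into the model: the posterior is re-weighted toward lower PFD values and renormalized. The weighting form and its strength parameter are assumptions for illustration, not an established scheme.

```python
import numpy as np

def apply_qualitative_update(pfd_grid, probs, strength):
    """Illustrative only: re-weight a posterior toward lower PFD values to
    represent a favorable qualitative finding (e.g. a clean MOC functional
    safety assessment).  The weighting form and 'strength' are assumptions;
    a real scheme would be derived from a documented, repeatable checklist."""
    weight = (pfd_grid.max() / pfd_grid) ** strength
    updated = probs * weight
    return updated / updated.sum()
```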
The probability distribution from figure 5 can also be represented as a cumulative distribution. The data source of a cumulative distribution is exactly the same as that of the probability distribution; the difference is that the likelihoods are summed from 0% to 100%.
Figure 6 is a cumulative distribution of the probability distribution from figure 5. In figure 6, the X-axis represents an IPL's probability of failure on demand. The X-axis is unitless, representing either PFD on the top or Risk Reduction Factor (RRF) on the bottom (the inverse of PFD). Each vertical line represents one order of magnitude with 4 gradients within (logarithmic PFD). For example, looking at the far-right side of the graph, the four different gradients from right to left represent 2.5, 5, 7.5, up to 10 RRF. The important takeaway is that the further to the left the point of interest, the better (i.e. smaller PFD).
The Y-axis represents the cumulative likelihood from 0% to 100%. The three colored lines at 95%, 75%, and 50% are credibility levels. The point where these credibility level lines intersect the cumulative distribution curve represents the upper credible interval, which says an IPL is a particular PFD value or better. Note that these credibility levels are arbitrary; however, they align with examples from the IEC 61511 standard. 4 As an example, the curve labelled "1" intersects the 95% credibility limit around 25 RRF. With this intersection, a statement can be made that it is 95% credible that the IPL represented by distribution "1" is 25 RRF or better.
It might be apparent that as better RRF numbers are targeted, the credibility decreases. For example, on the same "1" curve there is only 50% credibility that the IPL is 125 RRF or better. This is the key concept when trying to determine how "good" an IPL is in a Bayesian system, enabling one to state the credibility that an IPL is meeting a certain performance target.
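Continuing the earlier sketch, the upper credible limits described here can be read off the cumulative sum of a discretized posterior; the helper below returns the PFD value that the IPL meets "or better" at a given credibility level. The interpolation is an implementation convenience, not part of the paper's method.

```python
import numpy as np

def upper_credible_limit(pfd_grid, probs, level):
    """Return the PFD value such that P(PFD <= value) = level, i.e. the
    value the IPL meets 'or better' with the stated credibility."""
    cdf = np.cumsum(probs)
    return float(np.interp(level, cdf, pfd_grid))

# Example usage with the pfd grid and posterior from the earlier sketch:
# for level in (0.50, 0.75, 0.95):
#     limit = upper_credible_limit(pfd, posterior, level)
#     print(f"{level:.0%} credible the IPL is {1 / limit:.0f} RRF or better")
```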
With the basic concepts of credibility and probability addressed, this example can move on to the concept of a Bayesian update. In figure 7, the curves "1," "2," and "3" represent a theoretical Safety Instrumented Function. The system starts with a SIL calculation result converted to a distribution, as seen in curve "1." The SIL calculation using traditional methods returned a PFD of 1.3x10^-2 (77 RRF); this achieved RRF value seeded a Poisson distribution to intersect 77 RRF at the 75% credibility level.
Next, pretend there is a failure during the first proof test. This is a simple Bayesian update with quantitative data, represented in curve "2": 1 test, 1 failure. Curve "2" has shifted to the right, a worse result. Now the 75% upper credibility limit is around 12 RRF. Once this warning sign is discovered from the Bayesian update, pretend the management team initiates a root cause analysis, identifies the problem, fixes the problem, then performs a Management of Change Functional Safety Assessment with a positive assessment result. This positive assessment is a qualitative update and is applied as curve "3." Observe that curve "3" has shifted back to the left, a better result. As the example ends, the system is better than after the failed test of "2," but not as good as it originally started in curve "1."
The graphs in the previous section were full of details, but unfortunately they are not intuitively obvious. To make the concept of a Bayesian update more intuitive, it would be beneficial to simplify the information down to what is important. After all, management of IPLs and addressing bad actors is the most valuable application of Bayes. This paper introduces a concept called the "Bayes Truth Meter." Imagine that a corporate criterion accepts an IPL whose target is met with at least 50% credibility, prefers 75%, and considers 95% overachievement (see IEC 61511-2018, Part 2, Figure A.7 as the basis for these levels). Stripping away all of the distribution "mumbo jumbo," the IPL example of the prior from Figure 6 (i.e. curve "1") is shown in Figure 7 converted to the Bayes Truth Meter.
4 IEC 61511-2018 - Part 2, Figure A.7 – Typical probabilistic distribution target results […]
Figure 7- Bayes Truth Meter. Prior "1" from Figure 6.
To quickly describe the meter, the red, orange, and yellow pointers show where the 50%, 75%,
and 95% upper credibility limits cross the cumulative distribution curve.
The blue bar shows the PFD target (recall the previous example was a SIF with a target of 1.3x10^-2, i.e. 77 RRF).
There is a “Status” button in the lower right corner to quickly tell how an IPL is operating in
relation to its target. Red is bad, Orange is Ok, Yellow is good, and Green is Great.
Figure 9 - Status lights (key)
The status light changes based on where an IPL's target (the blue line) lies in relation to its upper credibility levels. For the meter in Figure 7, the 75% credibility marker is better than the target, so the status light is yellow (good).
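Assuming the corporate criterion described above (target met at 50% credibility is acceptable, 75% is preferred, and 95% is overachievement), the status-light logic might reduce to a few comparisons, as in this hypothetical sketch:

```python
import numpy as np

def truth_meter_status(pfd_grid, probs, pfd_target):
    """Hypothetical status logic: green/yellow/orange when the target PFD is
    met at the 95/75/50% upper credible limit respectively, otherwise red."""
    cdf = np.cumsum(probs)

    def upper_limit(level):
        # PFD value the IPL meets 'or better' at this credibility level
        return np.interp(level, cdf, pfd_grid)

    if upper_limit(0.95) <= pfd_target:
        return "Green (great)"
    if upper_limit(0.75) <= pfd_target:
        return "Yellow (good)"
    if upper_limit(0.50) <= pfd_target:
        return "Orange (OK)"
    return "Red (bad)"
```

Applied to the worked example, the prior (with the target sitting at the 75% marker) would be expected to return yellow, and the posterior after the failed proof test to return red, matching the meters in Figures 7 and 10.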
Following the previous example, the system encounters its first Bayesian update, a failed test.
Figure 10 - Bayes Truth Meter. Posterior 1 (i.e. curve "2" from Figure 6).
Figure 10 depicts a failure during the first proof test. The meter shows the target is worse than
even the 50% credibility marker. It is not depicted in the meter, but the target is only 20%
credible! This poor result has put the status light in the “Bad” zone. Management would know
that it is time to focus attention on this SIF before a problem develops, and at least try to recover
to the “OK” status.
Figure 11 - Bayes Truth Meter. Posterior 2 (i.e. curve "3" from Figure 6).
The final update shown in figure 11 represents the root cause analysis was performed to
determine the source of the failure, the cause was fixed, and then an MOC functional safety
assessment was run with favorable results. This subjective judgement updates the meter from
Figure 10 and shows the system has recovered to the “OK” Status. It is not back to “Good,” but
there are likely more important issues demanding attention at this point.
Returning to a previous point made in this paper, once a Bayes engine has been implemented, the difficulty in achieving the lofty targets set by current industry practice will become apparent (10^-5 and 10^-6). To prove the point, consider the same trial run previously in figures 7 through 11, but with a SIL 2 SIF instead of a SIL 1.
Figure 12 - SIL 2 cumulative distribution with 1 prior and 2 posterior updates
The example in figure 12 is a SIL 2 SIF with near-identical parameters to the previous SIL 1 SIF, but with a target an order of magnitude more stringent. This example has seeded the prior target of 1.3x10^-3 at the 75% upper credibility limit. The poor results are shown in the figure, but the Bayes Truth Meter makes them easier to understand.
Figure 13 - Bayes Truth Meter. SIL 2 Prior "1" from Figure 12
Figure 13 shows the prior from figure 12 (plot 1) converted to the Bayes truth meter. The status
light indicates the system is on the line between “Good” and “Ok” (i.e. the target PFD is at the
75% credibility marker). Next the system is subjected to the same one test, one failure.
Figure 14 - Bayes Truth Meter. SIL 2 Posterior 1 (i.e. curve "2" from Figure 12).
It can be seen that, just as in the previous SIL 1 example (figure 10), the system has dropped significantly. Notice, however, that this drop is more drastic. Like last time, the target has a very low credibility; in fact, it is only 13% credible that the SIF is operating at the target. Next, a similar root cause analysis with a recovery factor is applied to the SIF.
Figure 15- Bayes Truth Meter. SIL 2 Posterior 2 (i.e. curve "3" from Figure 12).
Unfortunately, even after a recovery factor has been applied, the SIF is still in the "Bad" zone. It may seem like not enough recovery was applied; however, the recovery factor applied was an entire order of magnitude greater than in the SIL 1 example (because the SIL target was an order of magnitude greater). In fact, the recovery may have been granted too much weight (recovery factors are also subjective).
This might seem unfair, but to put everything into context, consider the target again. The target is around one failure every 1000 years; however, in just 5 years there was a failure. How much evidence would it take to convince someone that the function has been fixed and is operating back at the 1-in-1000 level?
Consider Hurricane Harvey. That hurricane was a 1-in-1000-level event, yet Houston, Texas experienced this major hurricane just a few years back. The city of Houston has implemented new safety measures to help combat future flooding events, but would anyone living in Houston today claim that there will never be another Hurricane Harvey in their lifetime? The answer is most likely no, and every hurricane season for the next 20+ years the entire city will be on full alert (until the next generation comes along, thinking they know better than their elders).
Conclusion
It is the authors' belief that if the industry started to implement Bayes into its models, it would quickly be demonstrated that lofty 10^-5 and 10^-6 event frequency targets can never be met. As witnessed in the SIL 2 SIF example, just one failure at any time during a facility's operating history will quickly shatter the illusion that a SIF can reach SIL 2 (not to mention SIL 3). Imagine that a target of 10^-6 would require at least three IPLs of this same magnitude to close the gap. Good luck!
When a facility pretends it can meet 10^-6, it is ignoring the elephant in the room: systematic errors and common cause failures. These failures are real, but their impacts are largely unknown. A Bayesian model can show that they are worse than the industry gives them credit for.
If the industry were to acknowledge that 10^-6 isn't possible, then what is possible? Back in figure 4 it was shown that the industry is operating around a 10^-3 catastrophic event rate on average, but that doesn't seem like a good target. Comparing the process safety industry to the airline industry, the approximate probability of dying in an airplane crash is around 10^-5 (see footnote 5). Keep in mind the airline industry has much simpler systems designed for one purpose only, yet still has multiple-fatality accidents. It might be best to split the difference between where the process safety industry is and where the airline industry is. This admits the difficulties posed by the complexity of process safety systems, recognizing that process safety can never be as simple as an airplane's safety system.
If the industry were able to accept a lower target, it would be much easier to close a LOPA gap. Lower targets would equate to fewer IPLs to manage. Keep in mind also that these IPLs would show a realistic number based on the Bayesian model, updated with real evidence. Fewer IPLs would lead to more effective management of IPLs and an easier maintenance burden. The Bayes Truth Meter approach allows plant management to focus on bad actors. Finally, with fewer "trees" (IPLs) to manage, a facility is free to focus on the "forest" as a whole (overall plant health).
5 In the year 2019 there were 10 major airline crashes with multiple fatalities. There are approximately 18 million flights per year on average. The odds of dying in a plane crash are around 10^-5 to 10^-6. https://www.1001crash.com/. Airline systems are very sophisticated and have one goal in mind; technologies are mostly the same, and failure modes are well understood. Compare this to the process safety industry.
Figure 16 - Fewer trees = easier to manage the forest
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Integrating the PHA and Facility Siting into a Site Risk Assessment
Life-Cycle
Abstract
The PHA process has been implemented in industry for decades, and PHA stakeholders are
already fluent in risk communication. PHAs provide an accepted framework in organizations
which details scenarios to be evaluated, credible safeguards, and the organization’s acceptable
risk criteria. Siting studies may consider risk in the same way as PHAs, but organizations typically
fail to align the two assessments.
PHAs already contain the hazard scenarios, the safeguards, and the organization's risk criteria. Aligning the PHA scenarios and safeguards with the siting study can improve the quality of the siting study; generic release scenarios are generally included in a siting study but could be improved by process-specific hazard scenarios from the PHA. PHA recommendations can create an unnecessary cost to the organization if the consequence and risk rankings are not accurate. Conversely, PHA scenarios that fail to identify major risk potential may result in increased risk exposure for personnel and the business. Aligning the qualitative risk criteria from the PHA with the siting study's quantitative risk criteria can allow PHA scenario consequences and levels of risk to be accurately identified, result in cost-effective and more accurate risk reduction recommendations, and improve the organization's ability to make consistent risk-based decisions.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Nitin Roy
California State University
Sacramento, CA
[email protected]
Abstract
Increasing complexity of distributed control systems (DCS) and control logics has made (safety
instrumented systems) SIS validation complex and time-consuming. IEC and ISA safety standards
recommend comprehensive logic checks of Safety instrumented functions. It can take months to
check logic in delivered product. This work introduces automated testing of logic in process plants
using Digital Twins. This method makes the process efficient and saves considerable amount of
time, manpower and in turn capital. The verification which takes months can be reduced to weeks.
It also ensures the verification is comprehensive and accurate making the system safer. In this
work we also review the current practices in SIS verification and future improvements.
Keywords: automation and control, functional safety, process safety, process simulation
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Functional safety engineers follow the ISA/IEC 61511 standard and perform calculations based on
random hardware failures. These result in very low failure probabilities, which are then combined
with similarly low failure probabilities for other safety layers, to show that the overall probability
of an accident is extremely low (e.g., 1E-5/yr). Unfortunately, such numbers are based on
frequentist assumptions and cannot be proven. Looking at actual accidents caused by control and
safety system failures shows that accidents are not caused by random hardware failures. Accidents
are typically the result of steady and slow normalization of deviation (a.k.a. drift). It’s up to
management to control these factors. However, Bayes’ theorem can be used to update our prior
belief (the initial calculated failure probability) based on observing other evidence (e.g., the
effectiveness of the facility’s process safety management process). The results can be dramatic.
Keywords: PSM, Process Safety Management, Bayes’ Theorem, SIS, Safety Instrumented
System, SIL, Safety Integrity Level, Swiss Cheese Model, Normalization of Deviation, Drift
1. Introduction
Some statistics are easy. For example, what’s the probability of a fair 6-sided die rolling a 3? That
shouldn’t challenge anyone. The answer is based on frequentist principles and can be proven by
testing or sampling.
Some seemingly simple statistical examples aren’t as simple as they might first appear. For
example, imagine there is a one in a thousand chance of having a particular heart disease. There is
a test to detect this disease. The test is 100% accurate for people who have the disease, and 95%
for those that don’t. This means that 5% of people who don’t have the disease will be incorrectly
diagnosed as having it. If a randomly selected person tests positive, what’s the probability that the
person actually has the disease?
Some statistical cases are not simple at all. For example, what’s the probability of your plant having
a catastrophic process safety accident within the next year? You and others might have designed
and calculated it to be as safe as driving a car (i.e., 1/10,000 per year), but how can you prove it?
Frequentist-based statistics cannot be used to confirm or justify very rare events. Do you believe your plant is safer (or worse) than any other facility you may have visited? Might there be
variables, conditions, or precursors that you could observe that might affect your belief? And if so,
might you be able to evaluate and quantify their impact on risk?
(The answer for the heart disease example above is 2%. See the annex at the end of this paper for
the solution if you didn’t get the correct result.)
Past performance is not an indicator of future performance, especially for rare events. Past
performance would not have indicated (at least not to those involved at the time) what would
happen at Bhopal, Texas City, or any other accident you can think of. How many managers have
you heard say, “We’ve been running this way for 15 years without an accident; we are safe!”
What’s the probability of dying in a vehicle accident? In the US there are about 35,000 traffic
deaths every year. Considering our population, that works out to a probability of about 1/10,000
per year. You’re obviously not going to live to be 10,000 years old, so the probability of your
dying in a car crash is relatively low. Yet might there be factors that influence this number, ones
that you might be able to observe and control?
Imagine the following: A salesman you know — but have never met — picks you up at your office
and drives you both out for lunch. What probability would you assign to being in a fatal accident?
On the way to the restaurant you notice him texting while driving, speeding, and being a bit
reckless. You’re a bit distressed, but you know you don’t have far to go, and you keep your mouth
shut. At your one-hour lunch you see him consume three alcoholic beverages. Assuming you’d
even be willing to get back in the car at that point (there’s always Uber), what probability would
you assign to being in a fatal accident now? (Records have shown that alcohol is involved in 40%
of traffic fatalities, speeding 30%, and reckless driving 33%. You are 23 times more likely to crash
while texting. Seatbelts reduce the risk of death by 45%.) This is an example of updating a prior
belief based on new (even subjective) information. That’s Bayes’ theorem.
So one can observe conditions and make even subjective updates to previous predictions. People
do this all the time. Even insurance companies do this when setting premiums (as premiums are
not simply based on past performance).
Bhopal was the worst industrial disaster of all time. The facility was designed and built in the 1970s and the accident took place in 1984. While this was a decade before layer of protection
analysis (LOPA) was introduced, it’s useful to use this technique to evaluate the original design
and compare it to the operation on the day of the event. This is not an attempt to explain why the
event happened, nor should this be considered an example of 20/20 hindsight. This is simply an
attempt to show how Bayes’ theorem might be used in the process industry.
The facility in Bhopal was patterned after a successful and safe plant in the US. There were
inherently safe design principles and multiple independent protection layers to prevent the
escalation of an event caused by the possible introduction of water into a storage tank. These are
listed in Table 1, along with sample probabilities for their failure.
Table 1. Protection layers at Bhopal and sample probabilities of failure

  Description                                   Probability of failure
  Stainless steel construction                  0.01
  Nitrogen purge                                0.1
  Refrigeration system                          0.1
  High temperature alarm                        0.1
  Empty reserve tank                            0.1
  Diluting agent                                0.1
  Vent gas scrubber and flare                   0.1
  Rupture disk and relief valve                 0.1
  All safety layers failing at the same time    1E-9
Considering an initiating event frequency of perhaps 0.1/yr (a common number used in LOPA for
many initiating events), the risk associated with this event would appear to be much lower than
the risk of driving a car. Yet how could this be proven? In reality none of the layers were effective
at Bhopal and the accident happened within the first five years of operation (i.e., within the
assumed time period of practically any single initiating event). All the layers at Bhopal didn’t
magically fail at the same time. Trevor Kletz was well known for saying, “All accidents are due to
bad management.” Ineffective management allowed all the layers to degrade (and there were
common causes between many of them) to the point where none of them were available the day
the event happened. Normalization of deviation — or drift — was not unique to Bhopal. This is a
serious issue that affects many facilities even today. How might we be able to model this?
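For reference, the arithmetic implied by Table 1 and the 0.1/yr initiating event frequency is sketched below. The 1E-9 figure only follows if the layers fail independently, which is exactly the assumption that the common causes and management-driven degradation described above invalidate.

```python
import math

# Sample failure probabilities from Table 1
layer_pfd = [0.01, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
initiating_event_per_year = 0.1

# Only valid if every layer fails independently of the others
p_all_layers_fail = math.prod(layer_pfd)                           # 1e-9
event_frequency = initiating_event_per_year * p_all_layers_fail    # 1e-10 per year
print(p_all_layers_fail, event_frequency)
```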
Functional safety engineers focus on the ISA/IEC 61511 standard. Following the lifecycle of the
standard involves determining a performance requirement for each safety instrumented function
(SIF) and verifying that the intended hardware design meets the performance requirements (and
changing the design if it doesn’t). This entails performing calculations considering the device
configuration, failure rate, failure mode, diagnostic coverage, proof test interval and much more.
Yet the calculations only involve random hardware failures, and the numbers are often so low that
they cannot be proven by frequentist statistics and sampling. The standard does discuss systematic
failures (e.g., human errors of specification, design, operation, etc.), but not in a quantitative
manner.
What really causes accidents involving control and safety systems? Figure 1 is well-known to all
functional safety practitioners. (The results were published by the United Kingdom Health and
Safety Executive more than 20 years ago, and it’s unlikely that any of the values have changed
since.) Few, if any, accidents have been due to a random hardware failure, yet that’s what everyone
is focusing on in their calculations. How might we include the management related issues shown
in Figure 1 in our overall modelling? And if we were to do so, how much might it change our
answer?
Figure 1: Primary causes of control and safety system failures (UK HSE) – incorrect and incomplete specifications 44%, changes after commissioning 20%, operations and maintenance 15%, design and implementation 15%, installation and commissioning 6%.
What’s the definition of a safe plant? Some have responded, “One that hasn’t had an accident.” As
discussed earlier, such thinking is flawed. Similarly, what’s the definition of a safe driver? One
that hasn’t had an accident? If the salesman driver mentioned earlier tried to reassure you by saying
that he drives that way all the time and he's never had an accident, would you be reassured? It should be obvious to everyone that a safe driver is one who follows the rules and laws, doesn't drive
under the influence of alcohol or drugs, is not distracted by texting, wears a safety belt, keeps the
car in good condition, etc. Yet does doing so guarantee there will not be an accident? Obviously
not, but it does lower the probability. The same applies to a safe plant. You don’t define safety by
the absence of a very rare event; you define it by the presence of common and readily observable
behaviors. And we can model this!
6. What the swiss cheese model, process safety management, and Jenga have in common
James Reason came up with the swiss cheese model in the late 1990s, as shown in Figure 2. It's a graphical representation of protection and mitigation layers. The effectiveness of each layer is represented by the size and number of holes in each layer. The holes are controlled by management; the more effective the management, the fewer and smaller the holes. Accidents happen when the holes line up and a single event can proceed through each layer.
Figure 2: The swiss cheese model
A similar concept can be represented graphically by comparing the 14 elements of the OSHA
process safety management (PSM) regulation to a Jenga tower, as shown in Figure 3. Think of the
14 main elements as layers, and the sub-elements as individual pieces within each layer. An
effective implementation of all the clauses in the regulation would be similar to a complete Jenga
tower, or a swiss cheese model with very few holes, and small ones at that.
1. Trade secrets
2. Compliance audits
3. Emergency planning and response
4. Incident investigation
5. Management of change
6. Hot work permit
7. Mechanical integrity
8. Pre-startup safety review
9. Contractors
10. Training
11. Operating procedures
12. Process hazards analysis
13. Process safety information
14. Employee participation
Figure 3: (Effective) Process Safety Management, Jenga, and the swiss cheese model
But how many people working in process plants truly believe their facility has all the pieces in
place, and that they are all 100% effective? Perhaps your facility is more like the tower and swiss
cheese model in Figure 4.
Figure 4: (Ineffective) Process Safety Management, Jenga, and the swiss cheese model
What’s deceptive is that the tower in Figure 4 is still standing. Everyone then naturally assumes
they must be OK. (“We’ve been operating this way for 15 years and haven’t had an accident yet;
we must be safe.”) Yet anyone would realize the tower is not as strong or as resilient as the one in
Figure 3. Langewiesche said “Murphy’s law is wrong. Everything that can go wrong usually goes
right, and then we draw the wrong conclusions.” Might we be able to evaluate the completeness
of the tower, or the number of holes in the swiss cheese model, and determine the impact on safety?
If you knew the various layers were imperfect, might you be able to update your “prior belief”
based on newly acquired information, even if that information were subjective?
7. Bayesian networks
Functional safety practitioners will be familiar with fault trees and event trees. What might be new
to many are Bayesian networks, a simple example of which is shown in Figure 5. Just as with the
other modelling techniques, there is math associated with how the network diagram elements
interact with each other. There are also commercial programs available to solve them
automatically, as diagrams can get large and complex and the math too unwieldy to solve by hand.
One interesting aspect of Bayesian networks is that the math and probability tables may be based
on subjective ranking (e.g., low, medium, high).
Figure 5: Sample Bayesian network
The case of interest here is to model the impact of the PSM program on the performance of a safety
instrumented function (SIF). Imagine a SIF with a target of safety integrity level (SIL) 3. Imagine
a fully fault-tolerant system (sensors, logic solver, and final elements) with a calculated probability
of failure on demand of 0.0002. The reciprocal of this number is the risk reduction factor (RRF =
5,000), which is in the SIL 3 range as shown in Table 2.
As noted earlier the calculations are based on frequentist statistics and the numbers cannot be
proven. But as cited in the examples above, our “prior estimate” could be updated with new
information, even if it were subjective. This example can be represented in the simple Bayesian
network shown in Figure 6.
The event tree using one value of PSM effectiveness (99%) is shown in Figure 7. Table 4 lists the
results for all the possible values.
  Ranked scale   Optimistic value   SIF RRF   Pessimistic value   SIF RRF
  Very high      99.99%             3,300     99%                 98
  High           99.9%              833       90%                 10
  Medium         99%                98        80%                 5
  Low            90%                10        60%                 3
  Very low       < 90%              < 10      < 60%               < 3
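The tabulated values are consistent with a simple two-path combination in which the SIF fails either through its calculated random-hardware PFD or because the PSM program is ineffective. The sketch below reproduces the table on that assumption; it is an interpretation of the published numbers, not a formula stated in the text.

```python
pfd_random = 0.0002   # calculated hardware-only PFD (RRF = 5,000, SIL 3 range)

def effective_rrf(psm_effectiveness):
    """Combine the random-hardware failure path with the chance that an
    ineffective PSM program has allowed the SIF to degrade (assumed additive)."""
    pfd_total = pfd_random + (1.0 - psm_effectiveness)
    return 1.0 / pfd_total

for p in (0.9999, 0.999, 0.99, 0.90, 0.80, 0.60):
    print(f"PSM {p:.2%} effective -> effective RRF ~ {effective_rrf(p):,.0f}")
```

On this reading, even 99% PSM effectiveness reduces nominally SIL 3 hardware (RRF 5,000) to an effective RRF of roughly 98, i.e. barely SIL 1.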
8. Conclusion
Being a safe driver is accomplished by following all the rules that are known to help avoid
accidents. Similarly, operating a safe plant is accomplished by following all the rules and
regulations effectively. Yet it’s easy for functional safety engineers to focus instead on math and
hardware calculations. The frequentist based statistical calculations result in extremely small
numbers that cannot be proven. However, the prior belief probability can be updated with even subjective information. Doing so can change the answer by orders of magnitude. The key takeaway
is that the focus of functional safety should be on effectively following all the steps in the ISA/IEC
61511 safety lifecycle and the requirements of the OSHA PSM regulation, not the math (or
certification of devices). Both documents were essentially written in blood through lessons learned
the hard way by many organizations.
Annex: solution to the heart disease example
Only one person out of a thousand has the disease. Yet if 5% of the people test as false positives,
that would be 50 people out of a thousand that are diagnosed, but do not actually have the disease.
So the probability of actually having the disease based on test results is one out of 51 people (the
50 false positives, plus the one who actually has the disease), which is just under 2%.
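Stated directly in Bayes' theorem terms, the same answer follows from the numbers given in the introduction:

```python
p_disease = 0.001            # prior: 1 person in 1,000 has the disease
p_pos_if_disease = 1.00      # the test always catches a true case
p_pos_if_healthy = 0.05      # 5% false positive rate for healthy people

p_positive = p_disease * p_pos_if_disease + (1 - p_disease) * p_pos_if_healthy
p_disease_if_positive = p_disease * p_pos_if_disease / p_positive
print(f"P(disease | positive test) = {p_disease_if_positive:.1%}")   # about 1.9%
```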
Every medical test results in false positives. Don't be misled by a medical practitioner who may not have a full understanding of the statistics!
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Instrumented Protective Systems (IPS) are essential to reducing the potential risk of possible process
hazards in a process facility. IPS are composed of any combination of sensor(s), logic solver(s), and final
element(s) used to implement protective functions that detect abnormal or unacceptable operating
conditions and take action on the process to achieve or maintain a safe state. IPS are used to reduce the
process risk associated with health and safety effects, environmental impacts, loss of property and
business interruption costs. What is the future of IPS in respect to design, construction, operation, and
maintenance throughout their safety life cycle?
Instrumented Protective Systems (IPS) are essential to reducing the potential risk of possible
process hazards in a process facility. IPS are composed of any combination of sensor(s), logic
solver(s), and final element(s) used to implement protective functions that detect abnormal or
unacceptable operating conditions and act on the process to achieve or maintain a safe state. IPS
are used to reduce the process risk associated with health and safety effects, environmental
impacts, loss of property and business interruption costs. IPS are composed of three categories: SIS (Safety Instrumented Systems), Safety, and Non-Safety interlocks.
“Star Trek” has shown glimpses of what the future of IPS could look like. Of course, my
favorite characters are “Mr. Scott”, “Geordi”, and “Data”. “The Captain” constantly relied upon
his “Engineer” to push the Enterprise past the limits of its design specifications without having
an incident and breaking into a million pieces. Not a good thing to do in space. Does that sound
familiar to your process facility? Management constantly pushing operations to increase
production, reduce downtime, reduce capital cost, and lengthen the time between maintenance
turnarounds.
In 1966, the "Star Trek" show was technically advanced for its time, with personal communicators, a talking computer, flat screen monitors, and tricorders. In 1987, "Star Trek: The Next Generation" brought more advanced computers and a robot named "Data", which presented information, documentation, and real-time data about the operation and health of the Enterprise to Geordi and offered solutions to fix problems in the middle of space, far from any space port. This was done through voice, holographs, and other virtual reality HMI. Geordi's special glasses allowed him to see and communicate with the computer from any location. All these things have become commonplace in today's world and are being applied to IPS, process control, and safety systems.
I see a future where engineering, operations, and maintenance personnel will have instantaneous access to real-time data and information about the operating status and health of IPS, allowing solutions to be conceived and implemented before production is reduced, lost, or shut down. We must improve our engineering documentation, metadata management, process control, and safety systems to ensure data and information flow into and through our computer systems, providing this real-time data and information to whoever needs it at their current location 24/7 so that they can respond accordingly.
1. Engineering Documentation
It all begins with engineering documentation used to design, construct, and implement IPS.
Each interlock is assigned an IPS classification based on the hazard and risk assessment.
The interlock designer must meet specific requirements for each IPS classification.
Standards, practices, and procedures outline the engineering documents that will become
electronic records in the instrumentation database. These records are available as needed
through the computer system for engineering, operations, and maintenance during the IPS
safety lifecycle. The objective is to ensure accurate and correct information is entered into
the record management system. With any database, “garbage in” creates “garbage out”.
Once these documents are entered into the instrumentation database, they must be revised
and modified to reflect any changes done after engineering during operation and
maintenance. If these are not kept current, it will take more time and effort to troubleshoot
problems and keep process facilities operating at maximum up-time.
2. Metadata Management
Metadata is the newest term for data management. There is an overload of engineering,
operating, and maintenance data available and we continue to want more. The key is to
gather and coordinate this data from different computer systems. There are metadata
standards to assist with this effort, which define standard data fields. There are many programming challenges, and IT, process control, and IPS programmers must work together to provide real-time IPS operating and health status in a usable format to whoever needs it at any time and location.
3. Operations / Maintenance
Operations is constantly being pushed to meet production schedules and reduce down-time
which directly conflicts with IPS maintenance (e.g., calibration, verification, proof testing,
and repair). SIS and Safety interlocks must be tested at regular frequencies, which typically correspond with plant turnarounds so that full-stroke valve testing can be performed. SIS interlocks must always maintain the required SIL (safety integrity level), or additional constraints or measures are required to cover any risk gap caused by failures, defeats, or repairs, and the work must be completed within the MTTR (mean time to restore). IPS interlocks can be designed with redundant or additional instrumentation, with the safety logic solver programmed to maintain the SIL when a failure occurs or when defeats are used, allowing time for repairs to be completed, proof testing to be performed, and the interlock to be restored. If the SIL is maintained, the repair does not have to be completed within the MTTR.
There is a lot of potential in developing better on-line and automated proof testing tools and procedures. Imperfect testing has a big effect on SIS proof test intervals and on determining the end of life of instrumentation. We must constantly work on methods and procedures to improve testing and move closer to 100% proof test coverage, with automated safety shutoff valves being the most difficult to address. On-line and automated proof testing will need to advance to make headway on proving valve operation. There is a fine line between more frequent testing and repairing or replacing valves more frequently. This is an interesting subject, because every time a valve is touched in the field, systematic errors can occur, which can be worse than imperfect test coverage.
Robotics, robots, and drones are advancing very quickly and taking over dangerous jobs that were performed by humans. These devices are providing real-time data with video to show us equipment located in areas where humans can't go. They can make repairs and perform other vital functions. This is only the beginning of what can be done.
4. Virtual Reality
There have been many advancements in virtual reality in recent years, and we are beginning to look like "Star Trek". Operators and maintenance personnel are using remote
HMI (Human Machine Interface) in the field to operate, diagnose, maintain, test, and view
real-time data. Tablets and headsets are becoming more common.
Engineers, technicians, or others have remote access to the process control / safety systems
which can email, text, or call to alert specific personnel about problems and provide
possible solutions. If automated on-line testing is available, it can be done within seconds
of the alert.
Virtual reality simulations assist engineering with process designs and eliminate potential problems before equipment is installed in the field. Equipment and processes can be redesigned to be inherently safer and reduce the need for IPS.
CONCLUSION
It is a great time to be involved with IPS and the exciting possibilities of making our systems
safer. “Star Trek” and other science fiction books, shows, and movies help us see what is
possible. We are moving toward interactive systems that will keep us informed of the operating
status and health of IPS and process control systems. There will be advancements in proof test
coverage, on-line testing, and automated testing. Manufacturers will develop standard software
modules to detect failures, allow defeats for repairs, and automate on-line testing to restore to
normal operation while maintaining the integrity level of the SIS or Safety interlock the entire
time.
It all begins with improving our documentation, metadata management, process control, and
safety systems to easily provide real-time data and information about the operation and health of
IPS to engineering, operations, and maintenance to support operating facilities 24/7. This will
allow timely solutions to be conceived and implemented before production is reduced, lost, or shut down.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Reverse flow scenarios due to latent check valve failure are critical in the design of relief systems,
but often overlooked or incompletely evaluated. This type of scenario is often controlling for relief
device sizing, especially for systems involving high differential pressure across pumps or
compressors. This paper reviews current industry best practices to evaluate such scenarios.
Specific application examples are then presented to highlight key aspects for the analysis,
including identification of pressure sources as well as potential paths for reverse flow, location of
and credit for relief devices, initiating events, and limiting basis for system pressure rating. In addition, the potential to relieve both forward and reverse flow simultaneously should be evaluated.
Guidance is also provided to determine if vapor, liquid, or two-phase relief should be expected and
whether liquid displacement or non-obvious backflow from a utility header should also be
considered. Criteria to allow credit for system settle-out pressure, if applicable, and how to
evaluate such credit are also provided. As system complexity increases, tips on how to estimate
relief loads accurately and efficiently are also provided. Lastly, consideration of other safeguards
beyond relief devices for high-risk cases is also discussed.
Keywords: safety, pressure relief, reverse flow, back flow, check valve failure
Introduction
Overpressure protection design is an integral requirement to ensure safe operation of process
plants. While external fire and blocked outlet are some of the more frequently identified scenarios,
less obvious ones such as reverse flow scenarios are sometimes overlooked, but could potentially
result in similar or even more severe consequences for plants if not properly accounted for.
In particular, this paper draws upon the collective experience of the authors in overpressure
protection design and analysis and is focused mainly on reverse flow scenarios. Methods to
establish system boundary and to identify interfaces between relatively high-pressure (HP) and
lower-pressure (LP) systems are discussed, together with methods to identify initiating events,
basis for sustaining reverse flow, and available reverse flow paths. Application examples are
shown to demonstrate key concepts and provide guidance for estimating relief requirement
consistent with current industry best practices such as API STD 521.
Reverse flow scenarios should be considered for an LP system with potential for exposure to an
unintended flow from a HP system, such as due to failure of a pump or compressor or other
initiating event. By itself, such an event could lead to overpressure if there is no check valve or
other safeguard installed to prevent reverse flow. Even with check valve(s) installed, the potential
for overpressure remains valid due to the latent and unrevealed failure of check valve(s), i.e. stuck
wide-open or leaking, which in turn could provide a reverse flow path during such an event.
[Figure: pump with discharge vessel, and compressor with suction vessel, each labelled as the low pressure system]
After identifying the LP/HP interface, the initiating event for the reverse flow scenario needs to be
identified.
While losing the pump or compressor is typically what could allow reverse flow to occur, the
initiating event can be anything from a power failure to an inadvertent closure of the suction valve
that causes the pump to trip. Even without a pump/compressor, misdirected flow could occur due
to a blocked outlet downstream of where multiple feeds with different maximum operating
pressures combine. Identifying all events that may lead to reverse flow is important in
understanding what may occur during the reverse flow and where the protected (and isolable)
system boundaries are.
The next step would be to identify the potential source(s) for reverse flow once the initiating event
happens. Once the pump/compressor at the LP/HP interface is lost, evaluate if the HP system has
another source or feed and if its high pressure and reverse flow to the LP system would be
sustainable based on:
a) Relatively large vapor inventory in HP equipment, pipeline, or utility header.
b) Vapor generation due to continued heat input
c) Other compressors continuing to operate (e.g. not related to initiating event)
d) Other liquid feed pumps continuing to operate (e.g. not related to initiating event)
In this paper, our main example is related to a hydrotreater feed surge drum as shown in Figure 3.
In this system, the initiating event for potential reverse flow to the feed surge drum would be the
loss of the feed pump to the downstream reactor system. During this event, the reverse flow cannot
be driven by the normal liquid once the pump is lost. Therefore, the backflow sources need to be
identified based on the downstream system.
[Figure 3: hydrotreater feed surge drum system – Feed, LV-1, PRV-1 to flare, Feed Pump, BV-1, EV-1, FV-1 to reactor system, hydrogen make-up, PRV-2 to flare]
A hydrotreater would typically have some significant HP vapor inventory consisting mainly of
hydrogen (in the reactor loop) and a hydrogen make-up compressor (which would not be lost
during our initiating event). Thus, the reverse flow might be sustainable for some time, resulting
in overpressure of the feed surge drum upon loss of the feed pump. Upon losing the liquid feed to
the reactor system, the consumption of the make-up hydrogen would decrease and could possibly
sustain, or even increase, the normal reactor system pressure. For a conservative approach, no
credit should be taken for any trip that might shut off all remaining unit feed. The maximum
potential operating pressure in the reactor should then be used as the basis for reverse flow.
Whereas the feed surge drum in a hydrotreater is used in the example above, other similar
applications are possible such as a feed surge drum feeding a column as shown in Figure 4.
[Figure 4: feed surge drum feeding a stripping column – Feed, LV-1, PRV-1 to flare, Feed Pump, BV-1, EV-1, FV-1, FV-2, PRV-2 to flare, stripping column with PRV-3 to flare, condenser, accumulator with PV-1 off gas, reflux/product pump, reboiler with steam (TV-1) and condensate, top and bottoms products, LV-2, LV-3]
The column system could also be adversely impacted by the same initiating event, as the loss of the feed pump might also result in reduced liquid traffic and less cooling in the column; therefore, pressure in the column could increase above its normal operating pressure as reboiler duty might continue.
Another initiating event might involve a power failure impacting the following equipment: feed
pump, condenser (air cooler fans), and reflux pump. The overall impact to the HP system (the
column system) would need to be evaluated to understand what maximum pressure should be used
for estimating the reverse flow rate.
Per Figure 3 and Figure 4, the LP system is assumed to have a relatively small volume compared
to the HP system. If the LP system has a significant volume and the HP system does not have
another sustained HP source, the two systems may equalize to a pressure between them based on
the available total system volume and the total quantity of vapor in the systems. This is referred to
as a settle-out condition. As an example, closed loop compressor systems, such as for refrigeration,
are commonly designed to ensure that the settle-out condition does not result in overpressure of
any equipment in such a system.
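As a screening-level illustration of the settle-out idea, if both sides can be treated as ideal gas at the same temperature with no continuing HP source, the equalized pressure is roughly the volume-weighted average of the two initial pressures. All numbers below are assumptions for the sketch only.

```python
# Every number below is an assumption for illustration only
# (ideal gas, same temperature, no continuing HP source).
p_hp, v_hp = 25.0e5, 40.0     # Pa and m3 of vapor in the HP system
p_lp, v_lp = 3.0e5, 120.0     # Pa and m3 of vapor in the LP system

p_settle_out = (p_hp * v_hp + p_lp * v_lp) / (v_hp + v_lp)
print(f"Approximate settle-out pressure: {p_settle_out / 1e5:.1f} bar(a)")
```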
[Figures 5, 6, and 7: feed surge drum system showing the reverse flow path with, respectively, a single check valve (Check Valve 1) downstream of the feed pump, dual check valves downstream of the pump, and dual check valves located after the minimum flow line (FV-2)]
Identifying the protected system - impact of relief path location and initiating event
Next, the required location of relief devices, the boundaries of the protected system, and the type
of relieving fluid should then be considered as the overpressure protection requirements and
available relief path or device would depend on the initiating event.
In Figure 8, the high-pressure reactor system downstream normally contains a 2-phase inventory.
What encompasses the protected system, as well as where and what size the relief valves need to
be, must take many factors into account.
First, determine the possible initiating events for the scenario and the protected system boundaries.
If the focus is only on failure of the Feed Pump (red boundary in Figure 8), there would be an open
path back to relief valve PRV-1 on the Feed Surge Drum; thus, it might seem that there is no need
for PRV-2 on the pump suction. However, if the initiating event were inadvertent closure of BV-
1 (green boundary), it becomes clear that the pump suction valve and piping will become isolated
from PRV-1 (taking no credit for opening of minimum flow valve FV-2). The initiating event
would also impact what the limiting system pressure would be as the isolated pump suction piping
might be rated higher than the upstream Feed Surge Drum.
[Figure 8: feed surge drum system showing forward flow, reverse flow, and HP vapour, with protected system boundaries for the different initiating events]
If the pump suction piping is designed for the pressure found in the HP system (which is common),
perhaps no relief valve is necessary at this location. However, if the higher pressure rating only
extends back to BV-1 and if the initiating event were failure of EV-1 (blue boundary), then PRV-
1 may still be necessary to protect the piping from EV-1 to BV-1. As can be seen, the boundaries
of the protected system and the requirements for overpressure protection can be impacted by the
initiating event.
Next, consider what relief loads would be required for the upstream relief valve(s). This might be
primarily affected by where the limiting elements are within the process. In Figure 5, with only
one check valve downstream of the pump and assuming latent failure of that check valve (i.e. stuck
wide-open), both PRV-1 and PRV-2 may see relatively large relief loads. In Figure 6, flow to
PRV-2 will be limited by the dual check valves downstream of the pump, whereas flow to PRV-1
will likely be limited by the capacity of FV-2. In this case, PRV-2 may have a relatively small
required relief rate while PRV-1 might have a relatively large required relief rate. In Figure 7, with
the dual check valves optimally placed after the minimum flow line, both relief valves may see
relatively small relief loads. Refer to the next section on how the reverse flow rate through dual
check valves might be estimated. Depending on the exact design of the system in Figure 7, PRV-
2 may be all that is necessary to protect against reverse flow and PRV-1 may be able to be designed
for other scenarios.
Another factor to consider is what phase of fluid will be relieved. If a pump discharge line joins a
header with several other pump discharges, the reverse flow may be all liquid. If the downstream
system contains a large vapor inventory, the reverse flow may be vapor or two-phase. Looking at
the hydrotreater system above, there is potential to consider multiple fluid phases for relief given
the downstream Make-up Gas Compressor (assuming it is unaffected by the initiating event), and
the large 2-phase inventory in the reactor system itself.
Moreover, the relief valve on the pump suction (PRV-2) is located on liquid-full piping. The initial
fluid through the relief valve will be existing liquid inventory that is being displaced by the reverse
flow fluid. In a typical set-up such as in Figure 7, the dual check valves might be located near the
pump discharge and the relief valve might be located near the pump suction. As such, the volume
of liquid between the two locations tends to be small, and it might be justified to not consider the
initial liquid for relief. But in some systems, the check valves may be located a significant distance
downstream of the pumps or the relief valve might not be right near the pump suction. There is
then a significant amount of liquid that must be displaced before the reverse flow fluid reaches the
relief valve. The displacement of the liquid may need to be considered in such cases. In the worst-
case scenario, high-pressure downstream vapor could flow across the check valves and become
low-pressure, low-density vapor that would displace the liquid back to the relief valve. Given the
typical orders of magnitude difference between vapor and liquid densities, this displacement rate
can become prohibitively large for relief valve sizing and may need to be mitigated by other
methods. Based on the specific details of each scenario/system, good engineering judgement
should be applied to determine what fluid phase should be considered for the relief stream.
The guidance in this section on estimating the reverse flow rate is based on collective industry experience as embodied in API STD 521 and is not intended to be
prescriptive in nature. The reader is cautioned to adapt accordingly to account for the design,
operational, and maintenance constraints for each specific application. For example, a check valve
that is designated as safety-critical might be required to demonstrate history of reliable service
under specific process conditions and fluid properties. The check valve might also be required to
follow enhanced inspection and maintenance program.
Single check valve or no resistance for reverse flow
For cases where only a single check valve or no check valve is present to prevent reverse flow,
calculate the reverse flow rate by assuming the following for the check valve (if present):
a) The failed check valve has no resistance to reverse flow; or
b) The failed check valve has the same resistance to reverse flow as in the forward flow direction
(open)
The resistance from all control valves or pipe fittings along the path should be taken into account
to estimate the reverse flow rate in general, regardless of how many check valves are installed.
In cases where multiple check valves are present but not designated as safety critical, the reverse
flow rate should be estimated the same way as if a single or no check valve were present.
Dual check valves designated as safety critical
For cases where two or more check valves designated as safety critical are present and installed in
series, the following can be assumed:
a) The smallest of the check valves completely fails wide-open.
b) The remaining check valve(s) has severe leakage.
API STD 521 presents two acceptable methods for calculating leakage rates:
a) Treat the leaking check valve as an orifice with the bore diameter equal to 10% of the check
valve nominal diameter; or
b) Estimate the leaking check valve as an orifice with a bore diameter that would allow for
10% of the normal forward flow through the check valve. The calculation would typically
consider maximum operating pressure of the HP system and maximum allowable
accumulation of the LP system.
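As a minimal numerical sketch of method (a) above (not part of API STD 521 itself), the equivalent leakage orifice area can be computed from an assumed check valve nominal diameter; the 4-inch (DN100) size below is a placeholder.

    import math

    def leakage_orifice_area_m2(nominal_diameter_mm):
        """Method (a): treat the leaking check valve as an orifice whose bore
        diameter is 10% of the check valve nominal diameter."""
        bore_m = 0.10 * nominal_diameter_mm / 1000.0
        return math.pi * bore_m ** 2 / 4.0

    # Hypothetical 4" (DN100) check valve
    area = leakage_orifice_area_m2(100.0)
    print(f"Equivalent leakage orifice area: {area * 1e6:.1f} mm^2")

The reverse flow rate through this area would then be estimated with a standard orifice or compressible-flow relation at the maximum operating pressure of the HP system and the maximum allowable accumulation of the LP system, as noted above.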
Reverse flow through an alternative path or multiple paths
Where applicable, the distribution of reverse flow across multiple parallel paths should be taken
into account. The resistance through each path should be evaluated, and a network analysis may
need to be considered. It is also important to evaluate whether a common limiting element, such as a
control valve, limits the total reverse flow rate, as this can reduce the complexity of the case.
Fluid considerations and physical limitations
In certain cases, the reverse flow rate might be physically limited to a lower flow rate than
calculated simply based on differential pressure and flow resistance. For example, the capacity of
the pump may be the limiting factor for a HP liquid source. Alternatively, the vapor generation
rate of a reboiler or the capacity of a compressor may be the limiting factor for a HP vapor source.
Vapor reverse flow
In the hydrotreater feed surge drum system example presented in this paper, if the initiating event
is loss of the feed pump (refer to Figure 9), the HP vapor will relieve through the surge drum and
suction piping relief devices.
[Figure 9: hydrotreater feed surge drum system showing Feed via LV-1, hydrogen make-up, EV-1, FV-1 and FV-2 to the reactor system, reverse flow of HP vapour, and relief through PRV-1 and PRV-2 to flare]
Expanding on the hydrotreater feed surge drum case, the reverse flow could also impact the system
upstream of the hydrotreater feed surge drum (refer to example on Figure 10). A common event,
such as a power failure, could impact multiple systems and the initiating event would have to be
evaluated to fully understand the impacts of reverse flow in HP / LP interfaces.
[Figure 10: stripping column system upstream of the hydrotreater feed surge drum, showing the condenser, accumulator, reflux/product pump, reboiler, feed pump, PRV-1 and PRV-3 relieving to flare/off gas, and associated control valves]
Liquid Displacement
In the hydrotreater feed surge drum system example presented in this paper, for reverse flow cases
where the initiating event is isolation of the feed pump suction valve (either automatically or
manually, refer to Figure 11), even though the driving force for reverse flow is HP vapor, the liquid
trapped in the piping would need to be displaced before the relief device can discharge vapor.
In this case, the liquid would be pushed at a rate where the volumetric flow rate of liquid would
be equal to the volumetric flow of vapor. This rate can be obtained by multiplying the mass flow
rate of vapor by the ratio between the liquid density and vapor density.
[Figure 11: hydrotreater feed surge drum system showing liquid displacement ahead of HP vapour reverse flow, relieved through PRV-1 and PRV-2 to flare]
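The displacement arithmetic described above can be sketched as follows; the vapor rate and fluid densities are placeholder values for illustration only.

    # Liquid displacement ahead of reverse-flowing HP vapor: the displaced liquid
    # volumetric rate equals the incoming vapor volumetric rate.
    vapor_mass_flow_kg_h = 5000.0   # reverse-flow HP vapor rate (placeholder)
    rho_vapor_kg_m3 = 8.0           # vapor density at relieving conditions (placeholder)
    rho_liquid_kg_m3 = 700.0        # density of the liquid being displaced (placeholder)

    volumetric_flow_m3_h = vapor_mass_flow_kg_h / rho_vapor_kg_m3
    liquid_mass_flow_kg_h = vapor_mass_flow_kg_h * (rho_liquid_kg_m3 / rho_vapor_kg_m3)

    print(f"Displaced liquid: {volumetric_flow_m3_h:.0f} m3/h "
          f"({liquid_mass_flow_kg_h:.0f} kg/h)")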
Impact of other pressure sources beyond reverse flow (estimating the total relief load)
In the previous section, multiple techniques to estimate the reverse flow rate were discussed.
However, the total relief load for a reverse flow scenario may not necessarily be limited to the
reverse flow itself. The same initiating event that could cause reverse flow might also trigger
other related impacts, such as a blocked outlet.
Reverse flow + forward flow
As discussed, the loss of a feed pump in a hydrotreater may expose the feed surge drum system to
reverse flow. If the feed to the surge drum could continue based on available upstream source
pressure, the total relief load could be a mixture of the normal feed flow rate plus the reverse flow
rate, as shown in Figure 12.
[Figure 12: hydrotreater feed surge drum system with simultaneous forward feed flow and HP vapour reverse flow, relieved through PRV-1 and PRV-2 to flare]
In another example involving a column system, the initiating event of reverse flow may also relate
to other potential overpressure scenarios such as power failure that in turn would result in loss of
cooling or reflux failure. As shown in Figure 13, a power failure might impact the Condenser fans,
Reflux/ Product Pump, and Feed Pump. With respect to the column, loss of overhead cooling and
reflux would then occur while column feed and reboiler duty could continue. An additional load
to consider might be cascading reverse flow from downstream equipment via the Feed Surge
Drum. A broader analysis could determine if the reverse flow might be sustainable, taking into
account the relative design pressures of interconnected equipment and the available pressure
sources downstream.
[Figure 13: stripping column system of Figure 10 under a power failure, with loss of the condenser fans, reflux/product pump, and feed pump, continuing feed and reboiler duty, and potential cascading reverse flow via the feed surge drum]
If possible, the dual check valves should be placed after the branch for any alternate flow paths
back to the protected system (such as minimum flow lines) so that the total reverse flow would be
limited by the common check valves.
Even with the presence of dual check valves, if the pressure ratio between the high-pressure and
low-pressure systems is high enough, it may be required that the check valves have different
designs (dual diverse). This could prevent a potential common-mode failure and thereby lower the
risk that both valves will fail simultaneously. An implied assumption would be that both check
valve designs would be adequate for the specific application. Otherwise, it might be more reliable to
provide redundancy by using a common proven design, rather than implementing a diverse yet
unproven design.
Minimum flow trip
An additional layer of protection that may be applied is a minimum flow trip on the reverse flow
path. In the event that forward flow stops (e.g., the feed pump fails), isolation valve(s) would be
closed to prevent reverse flow.
SIL rated Safety Instrumented Function (SIF)
For higher risk systems, such as those with high pressure ratios between HP and LP systems, a
minimum flow trip may not be adequate. A Safety Instrumented Function (SIF) with appropriate
Safety Integrity Level (SIL) rating may be necessary to mitigate the potential risks of reverse flow.
In cases where it is not practical to install adequate total relief valve capacity, the SIF may be put
in place to stop reverse flow. Alternatively, a SIF may be designed to prevent continued
forward flow from the upstream system during the same event. What SIL rating is required and
the exact design of the SIF should be determined on a case by case basis. IEC 61511 provides
general guidance on SIF design. Note that even when considering a SIL rated SIF, reverse flow
can be a rapid process and valve closure times initiated by the SIF should be quick enough to
prevent overpressure.
Inherently safer design
For new installations, the most practical protection against overpressure due to reverse flow may
be inherently safer design of the system. This can include, but is not limited to, several
measures. One might design as much equipment and piping as possible to withstand the maximum
pressure from the downstream system. In Figures 5-7 above, this could mean designing the piping
for high pressure all the way back to EV-1. In some instances, re-rating existing equipment to
higher design pressure might be feasible.
Installing dual and, if warranted, diverse check valves on particularly high-pressure ratio systems
and making sure those check valves are located downstream of any branches off of the main
process line could limit potential for reverse flow. Paying attention to the possible risks of reverse
flow during initial design can eliminate the need for costly mitigations in the future.
Conclusion
While the evaluation methods discussed in this paper draw upon general guidance from API STD
521, the reader is cautioned that associated risks for individual systems might vary and depend on
the specific design, operation, and maintenance program for each plant. For example, there is
potential for higher leakage rates through check valves that are poorly maintained or normally
operate in dirty/fouling conditions than through ones that are properly maintained and in clean service.
Beyond relief devices used in the examples, additional or alternate safeguards might be warranted,
especially for higher risk applications such as those involving very high differential pressure
between the LP and HP systems. The plant’s risk management program should be applied to
confirm that the installed safeguards are adequate. Results from other plant assessments
such as Process Hazard Analysis (PHA) or Layer of Protection Analysis (LOPA) should also be
considered.
Disclaimer
The information contained in this paper represents the current view of the authors at the time of
publication. Process safety management is complex, and this document cannot embody all possible
scenarios or solutions related to compliance. This document contains examples for illustration and
is for informational purposes only. Siemens makes no warranties, express or implied, in this paper.
References
1. API Standard 521 Pressure-relieving and Depressuring Systems, 6th ed., 2014
2. IEC 61511 Functional safety - Safety instrumented systems for the process industry sector,
Edition 2.0, 2018
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Michael D. Moosemiller
Baker Engineering and Risk Consultants, Inc. (BakerRisk©)
Abstract
Component failure rate data are used in a variety of quantitative and semi-quantitative study
methods related to process safety and reliability, including Fault Tree Analysis (FTA),
Quantitative Risk Assessment (QRA), and Layers of Protection Analysis (LOPA). In each of these
methodologies, failure rate data are used to determine the probability that specific protective
components, such as pressure relief devices, will fail to function as designed when called upon to
prevent an incident. In the case of pressure relief devices, standardized probabilities of failure on
demand are often applied with minimal consideration of the device type or the process service in
which the device is employed. This paper will examine pressure relief device failure rate data
from multiple published sources, categorize the data based on device type and service, and then
develop guidelines for determining probability of device failure on demand based on the proposed
device type and service categories. Additionally, this paper will provide commentary on the
administrative aspects of relief device handling relative to observed relief valve reliability.
Keywords: Pressure Relief Valve, Rupture Disk, Vacuum Breaker, Failure Rate Data, Fault Tree
Analysis (FTA), Quantitative Risk Assessment (QRA), Layers of Protection Analysis (LOPA),
Probability of Failure on Demand
1 Introduction
The majority of industry facilities employ pressure relief systems. These systems are designed to
prevent plant overpressure scenarios, thereby protecting personnel and the public from explosions,
fires, and toxic exposures, preventing environmental releases, and preventing damage to
equipment, piping, and buildings.
In the consideration of process safety risks by quantitative methods such as Fault Tree Analysis
(FTA) and semi-quantitative methods such as Layers of Protection Analysis (LOPA), it is critical
to quantify the probability that a relief device will fail to operate when called upon to prevent an
overpressure scenario. For a relief device, or any other type of safeguard being analyzed
quantitatively, this parameter is commonly referred to as the Probability of Failure on Demand
(PFD).
In the Center for Chemical Process Safety (CCPS) Guidelines for Initiating Events and
Independent Protection Layers (IPLs) in LOPA, typical PFD values for various types of pressure
relief devices are given as shown in Table 1 below [1]. This paper will analyze pressure relief
device failure rate data from a variety of published sources and compare the results of that analysis
to these PFD values.
Table 1: PFD Values for Pressure Relief Devices per CCPS Guidelines [1]
IPL Classification and Description                                                                       PFD*
Spring-operated pressure relief valve                                                                    1.00E-02
Dual spring-operated pressure relief valves, no isolating valves present                                 1.00E-03
Dual spring-operated pressure relief valves, single manual valve can isolate one PRV                     1.00E-02
Dual spring-operated pressure relief valves, single manual valve can isolate both PRVs simultaneously    1.00E-01
Pilot-operated pressure relief valve                                                                     1.00E-02
Buckling pin relief valve                                                                                1.00E-02
Rupture Disk                                                                                             1.00E-02
Spring-operated pressure relief valve with rupture disk (on inlet, assumes non-fragmenting type
  disk and monitoring for disk burst between disk and PRV)                                               1.00E-02
Conservation vacuum and/or pressure relief vent                                                          1.00E-02
Vacuum breaker                                                                                           1.00E-02
*Assumes properly sized device and piping for specific scenario, clean service, and correct metallurgy
For pressure relief devices, as with any mechanical component, there are multiple potential failure
modes; however, not every failure mode necessarily constitutes a failure on demand. Failure of
any component or system over its useful life is typically illustrated by a bathtub-shaped curve [2],
as shown in Figure 1. This curve shows an initial high instance of failure during what is commonly
called the “early failure” or “infant mortality” period in which failure occurs due to manufacturing
defects and/or improper installation. The second period of the curve is relatively flat through the
component useful life and is commonly called the “intrinsic failure period,” as it characterizes the
inherent component failure rate during its useful life. The third period of the curve slopes upward
as the component service time extends beyond the useful life; therefore, it is called the “wear-out”
or “breakdown” failure period.
Figure 1: Example Component Bathtub Curve [2]
It should be noted that there are certain types of failures, administrative in nature, which are outside
the scope of this paper. For example, the authors are aware of instances in which multiple pressure
relief valves were installed at a facility without removing shipping pins designed to protect the
valves from damage during shipping, but which prevented the valves from operating as designed.
Pressure relief valves can also be damaged through improper transport from the warehouse or shop
to the field. However, other types of administrative failures would fall in the intrinsic failure period
and therefore be germane to this paper. An example of this type of failure would be improper
management of relief device manual isolation valves causing the protected equipment to be
isolated from pressure relief protection.
For the purpose of this paper, only the intrinsic failure period will be considered, as relief devices
would be expected to be either repaired to “like-new” condition or replaced prior to being operated
beyond their useful life. Even when the early and late period failures are removed from
consideration, not all intrinsic failure types would be failures on demand.
Mechanical failures of pressure relief devices fall into four primary categories of interest: failure
to open as designed, delayed operation, spurious opening, and leakage. Failure to open as designed
and delayed operation (i.e., the device opening at a higher pressure than intended) would be
considered types of failure on demand, sometimes characterized as “dangerous” failures, as in
either case the relief device would fail to open within the acceptable tolerances of its set pressure.
Spurious opening and leakage, however, would not be considered failure on demand, as neither
would prevent the device from opening when called upon to prevent overpressure. However,
spurious opening and leakage of pressure relief devices could cause releases of hazardous
materials, and their risks should be properly evaluated, but discussion of that evaluation would not
fall within the scope of this paper.
The data constraints applied to the analysis performed for this paper are as follows:
Data was only considered for pressure relief valves, as significantly more data was
available for them than for other types of pressure relief devices, such as rupture disks and
low pressure tank vents.
Failure rate data was only considered if it was documented on a per unit time basis (i.e. per
year or per hour), as opposed to a per demand basis, as pressure relief valve demand rates
can vary significantly by industry, by site, and by service, while a time basis provides a
more defensible comparison across different locations and circumstances. Relief valve
failure mechanisms are also more likely to be time-dependent than demand-dependent.
The assumptions made regarding the data analyzed for this paper are as follows:
All sources of data use the same criteria for defining their failure modes – for example, that
“failure to open on demand” means that the relief valve did not open at the same “X%”
overpressure. This failure mode definition issue is pervasive in industry across all
equipment types, as noted in the CCPS book on data collection [3].
As discussed in Section 2, the only failure types considered to be failure on demand were
failure to open as designed (a.k.a. valve stuck closed, valve seizes closed, etc.) and delayed
operation (a.k.a. failure to open fully at relief pressure, 10% heavy, etc.).
For data sets for which no test interval was specified, the assumed test intervals were as
follows:
o 12 months for valves located on offshore facilities, based on the requirements
established by the U.S. Bureau of Safety and Environmental Enforcement (30CFR
250.880) for pressure relief valves on U.S. offshore facilities [4]
o 5 years for valves located at onshore facilities, as this was the maximum reported
test interval for all data that listed test intervals
For data that was categorized both by calendar time and by service time, the data sets based
on service time were used.
When calculating PFD, the following equation was used to obtain the average PFD
(PFDavg) [5]:

    PFDavg = (λDU × T1) / 2

Where:
λDU = Rate of dangerous undetected failure
T1 = Test interval
Note that in Tables 10, 11 and 12 in Section 5, this equation is simplified to “λt/2” for
readability. Also, note that in the case of a pressure relief valve in continuous service, all
dangerous failures (i.e. failures to operate on demand) are assumed to be undetected.
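A minimal sketch of this calculation, with placeholder values for the failure rate and test interval (not data from the paper), is shown below.

    def pfd_avg(lambda_du_per_year, test_interval_years):
        """PFDavg = (lambda_DU * T1) / 2 for a periodically proof-tested device."""
        return lambda_du_per_year * test_interval_years / 2.0

    # Placeholder values: 1.0E-02 dangerous undetected failures/year, 1-year test interval
    print(f"PFDavg = {pfd_avg(1.0e-2, 1.0):.2e}")  # -> 5.00e-03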
4 Data
This section presents the failure rate data analyzed for this paper. The data from each published
source is given in table format.
Table 2 below presents PRV failure rate data obtained by Exida [6]. Note that the “useful life” of
the PRVs studied (i.e., the period of time in which the failure rate is low and constant in the absence
of proof testing) is reported to be 4 to 5 years.
Table 2: Exida PRV Failure Rate Data [6]
Failure to Open*
Max. Rate 1.00E-07 per hour 8.76E-04 per year
Min. Rate 1.00E-08 per hour 8.76E-05 per year
Table 3 shows PRV “fail dangerous” probabilities from Lees [7]. These probabilities were
calculated using the PFDavg equation, shown in Section 3, based on a test interval of one year.
Table 3: Lees PRV “Fail Dangerous” Probability Data [7]
"Fail Dangerous" Probabilities*
Max. Probability 1.00E-02
Min. Probability 4.00E-03
Tables 4, 5, and 6 show failure rate data from the Offshore Reliability Data (OREDA) handbooks
from 2002 [8], 2009 [9], and 2015 [10], respectively, with the additional summary data shown at
the lower left. Table 7 shows failure rate data from Parry [11]. Note that, for Table 7, failure
categories a and b are considered failures on demand as discussed in Section 2.
Table 4: OREDA 2002 PRV Failure Rate Data [8]
(All rates in failures per hour; Lower / Mean / Upper; "--" indicates no data reported for that failure mode)

Item                           Pop.  Installations  Operational    Fail to Open on Demand              Delayed Operation                   Combined
                                                    Time (hrs.)    Lower     Mean      Upper           Lower     Mean      Upper           Lower     Mean      Upper
Relief Valves                  278   7              7,169,800      1.60E-07  1.68E-06  4.61E-06        1.00E-08  1.98E-06  7.79E-06        1.70E-07  3.66E-06  1.24E-05
PSV - Conventional             170   2              4,884,700      3.30E-07  2.59E-06  6.63E-06        --        --        --              3.30E-07  2.59E-06  6.63E-06
PSV - Conventional 1.1" to 5"  148   2              4,244,500      3.60E-07  2.94E-06  7.61E-06        --        --        --              3.60E-07  2.94E-06  7.61E-06
PSV - Bellows                  32    2              428,400        9.20E-07  4.05E-06  8.97E-06        6.80E-07  4.39E-06  1.08E-05        1.60E-06  8.44E-06  1.98E-05
PSV - Bellows 1.1" to 5"       25    2              339,400        1.30E-07  8.73E-06  2.76E-05        3.60E-07  5.84E-06  1.71E-05        4.90E-07  1.46E-05  4.47E-05
PSV - Pilot Operated           34    3              804,100        0.00E+00  1.13E-06  4.39E-06        --        --        --              0.00E+00  1.13E-06  4.39E-06
Aggregate                      653   15             17,066,800     2.77E-07  2.45E-06  6.50E-06        2.84E-08  1.11E-06  4.09E-06        3.05E-07  3.56E-06  1.06E-05
All Conventional               318   4              9,129,200      3.44E-07  2.75E-06  7.09E-06        --        --        --              3.44E-07  2.75E-06  7.09E-06
All Bellows                    57    4              767,800        5.71E-07  6.12E-06  1.72E-05        5.39E-07  5.03E-06  1.36E-05        1.11E-06  1.11E-05  3.08E-05
All Pilot                      34    3              804,100        0.00E+00  1.13E-06  4.39E-06        --        --        --              0.00E+00  1.13E-06  4.39E-06
Table 9 presents PRV failure rate data obtained by the United Kingdom Atomic Energy Authority
(UKAEA) [13]. Note that, for these data, the effective test interval was calculated by dividing the
total number of valve-years by the total number of tests for each data set.
Table 9: UKAEA PRV Failure Rate Data [13]
By valve type:

Valve Type     # Valves  # Tests  # Valve-Years  Effective Test   # Seize  # 10%  # Dangerous  Dangerous Failure  Dangerous Failure
                                                 Interval (yrs.)  Closed   Heavy  Failures     Rate Per Year      Rate Per Hour
Conventional   3906      7459     12651          1.70             130      340    470          3.72E-02           4.24E-06
Bellows        522       1587     2659           1.68             3        35     38           1.43E-02           1.63E-06
Pilot          77        135      188            1.39             2        0      2            1.07E-02           1.22E-06
All            4505      9181     15498          1.69             135      375    510          3.29E-02           3.76E-06

By valve service:

Valve Service  # Valves  # Tests  # Valve-Years  Effective Test   # Seize  # 10%  # Dangerous  Dangerous Failure  Dangerous Failure
                                                 Interval (yrs.)  Closed   Heavy  Failures     Rate Per Year      Rate Per Hour
Air            52        102      166.6          1.63             3        3      6            3.60E-02           4.11E-06
Ammonia        47        93       142.2          1.53             0        2      2            1.41E-02           1.61E-06
Aromatic       272       795      1256.0         1.58             7        19     26           2.07E-02           2.36E-06
C2/C3          109       201      298.5          1.49             3        4      7            2.34E-02           2.68E-06
Crude Oil      30        59       103.0          1.75             0        6      6            5.82E-02           6.65E-06
Feed           7         23       35.7           1.55             0        1      1            2.80E-02           3.20E-06
Fuel Gas       55        153      239.8          1.57             9        12     21           8.76E-02           1.00E-05
Fuel Oil       52        95       153.8          1.62             1        5      6            3.90E-02           4.45E-06
Hydrogen       40        110      191.5          1.74             0        2      2            1.04E-02           1.19E-06
Light HC       47        106      185.8          1.75             0        1      1            5.38E-03           6.14E-07
LPG            13        43       67.9           1.58             0        0      0            0.00E+00           0.00E+00
Lube Oil       68        171      314.4          1.84             1        8      9            2.86E-02           3.27E-06
Mid. Dist.     63        93       170.8          1.84             2        2      4            2.34E-02           2.67E-06
Nitrogen       30        65       119.5          1.84             0        2      2            1.67E-02           1.91E-06
Organic        81        236      381.0          1.61             7        22     29           7.61E-02           8.69E-06
Process        133       297      569.0          1.92             9        19     28           4.92E-02           5.62E-06
Steam          150       352      610.0          1.73             2        12     14           2.29E-02           2.62E-06
Thermex        23        54       84.2           1.56             1        1      2            2.38E-02           2.71E-06
5 Results
The data presented in Section 4 was analyzed to obtain PFD results from each source based on the
reported failure rate data and test intervals, using the equation shown in Section 3 with the
exception of the Lees data [7], which was already presented as probability. With these
probabilities calculated, they were aggregated into various categories by taking the geometric
mean of probabilities across multiple sources. In this manner, the disparate data could be
categorized by valve type from the available sources, as shown in Table 10, or by fluid service, as
shown in Table 11. However, the same could not be said of valve size or set pressure, as only one
of the available sources categorized the valves based on these factors.
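As a brief sketch of the aggregation step (with PFD values taken from Table 12 purely for illustration), a geometric mean excluding zero probabilities can be computed as shown below.

    import math

    def geometric_mean(pfds):
        """Geometric mean of PFD values; zeros are excluded, as in the tables."""
        vals = [p for p in pfds if p > 0]
        return math.exp(sum(math.log(p) for p in vals) / len(vals))

    # Example PFDs drawn from Table 12 for illustration only
    print(f"{geometric_mean([1.75e-4, 4.0e-3, 1.56e-2, 1.28e-1]):.2e}")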
While the data could be categorized by valve type, as shown in Table 10, it is important to note
that all of the available data was taken from offshore (OREDA) and nuclear (UKAEA) facilities;
each represents a specific niche among processing facilities rather than typical chemical process
industry examples. Additionally, there is no consistent relationship between PFD and valve type
across the OREDA and UKAEA data for any valve type other than pilot-operated. Note that
bellows valves have a higher PFD than conventional valves per OREDA, but a lower PFD per
UKAEA. Furthermore, statistical hypothesis testing of these data, such as a chi-square test, is of
dubious value due to both the small number of valve types evaluated (three) and the small numbers
of data sets available for each valve type (three for conventional, two for bellows, and four for
pilot-operated). As such, these data should be approached with skepticism.
Table 11: PRV PFD Data Categorized by Valve Service
Source                                   Probability of Failure on Demand   Basis
Parry:  Ammonia                          1.65E-01
        Lube/Hydraulic Oil               2.13E-01
        Natural Gas/Fuel Gas             7.29E-02                           3-year inspection interval (λt/2)
        Nitrogen                         2.37E-01
        Steam                            0.00E+00
UKAEA:  Ammonia                          1.08E-02                           Calculated effective test interval = 1.53 years (λt/2)
        Lube/Hydraulic Oil               2.63E-02                           Calculated effective test interval = 1.84 years (λt/2)
        Natural Gas/Fuel Gas             6.86E-02                           Calculated effective test interval = 1.57 years (λt/2)
        Nitrogen                         1.54E-02                           Calculated effective test interval = 1.84 years (λt/2)
        Steam                            1.99E-02                           Calculated effective test interval = 1.73 years (λt/2)
Geometric Mean (All Above Data)*         5.35E-02                           --
Geometric Mean (Ammonia)                 4.22E-02                           Note the limited data set
Geometric Mean (Lube/Hydraulic Oil)      7.48E-02                           Note the limited data set
Geometric Mean (Natural Gas/Fuel Gas)    7.07E-02                           Note the limited data set
Geometric Mean (Nitrogen)                6.04E-02                           Note the limited data set
Geometric Mean (Steam)*                  1.99E-02                           Note the limited data set
*Probabilities of zero removed from geometric mean calculations.
The data could also be categorized by valve service, as shown in Table 11, but it is unclear whether
there was any overlap between the Parry and UKAEA data, either in terms of individual valves
evaluated or in terms of the industry from which the data was collected. The UKAEA data came
from the nuclear industry in the UK, and the Parry data, collected by a UK-based industry
cooperative, may also contain some data obtained from nuclear facilities. While a chi-square test
suggests that the differences in failure rates between valves in different services are statistically
significant, it is noteworthy that the data above only includes those services that were evaluated
by both Parry and the UKAEA. As such, skepticism seems warranted regarding these data, as well
as the data categorized by valve type shown in Table 10.
Regarding the aggregation of data from all sources for pressure relief valves of all types and
services, though, a broader spectrum of data are available. Table 12 illustrates this analysis, and
returns a geometric mean PFD of 6.76E-03. On an order-of-magnitude scale, this value
approximates to 1E-02, consistent with the typical PFD assumed for a single pressure
relief valve (spring or pilot operated) laid out in the CCPS guidelines [1].
Table 12: Overall PRV PFD Data Summary
Source                        Failure Rate   Probability of       Basis
                              (Per Year)     Failure on Demand
Exida:  Min.                  8.76E-05       1.75E-04             Min. Test Interval = 4 years (λt/2)
        Max.                  8.76E-04       2.19E-03             Max. Test Interval = 5 years (λt/2)
Lees:   Min.                  8.00E-03       4.00E-03             Defined by Lees as yearly inspection (λt/2)
        Max.                  2.00E-02       1.00E-02
OREDA 2002:  Lower (All)      2.67E-03       1.34E-03
             Mean (All)       3.12E-02       1.56E-02             Assume annual testing per 30CFR 250.880 (λt/2)
             Upper (All)      9.28E-02       4.64E-02
OREDA 2009:  Lower (All)      3.81E-04       1.91E-04
             Mean (All)       3.33E-02       1.66E-02             Assume annual testing per 30CFR 250.880 (λt/2)
             Upper (All)      1.00E-01       5.00E-02
OREDA 2015:  Lower (All)      1.22E-04       6.08E-05
             Mean (All)       2.97E-02       1.48E-02             Assume annual testing per 30CFR 250.880 (λt/2)
             Upper (All)      1.29E-01       7.40E-03
Parry                         8.54E-02       1.28E-01             3-year inspection interval (λt/2)
SINTEF                        8.76E-03       1.75E-02             4-year inspection interval (λt/2)
UKAEA:  Conventional          3.72E-02       3.16E-02             Calculated effective test interval = 1.70 years (λt/2)
        Bellows               1.43E-02       1.20E-02             Calculated effective test interval = 1.68 years (λt/2)
        Pilot                 1.07E-02       7.40E-03             Calculated effective test interval = 1.39 years (λt/2)
Geometric Mean (All)          9.23E-03       6.76E-03             --
One can argue that the comparisons in Tables 10 and 11 are specious, given that the PFDs that are
calculated are a function of differing test intervals. To clarify, a relief valve is not more likely to
fail after six months in service just because its next test is 3 ½ years away (4-year test interval),
compared to the same valve also in service for six months, whose next test is six months away (1-
year test interval). In that sense, the best comparisons between data sets are on a ‘per unit time’
basis. Nonetheless, the comparisons are useful in defending the commonly held values for relief
valve reliability as used in LOPA and similar studies.
6 Conclusions
On the macro level, the data analyzed in this paper serves to validate the PFD values for spring
operated and pilot operated pressure relief valves presented in the CCPS IPL guidelines [1].
However, no definitive conclusion can be drawn from the PFD data categorized by valve type or
by valve service due to the limited amount of available source data. In the cases of valve size and
set pressure, the amount of available source data was even smaller. Therefore, it is recommended
that more failure data be collected for pressure relief valves and that these data be better
categorized by valve type, service, size, and set pressure. This was a primary goal of the CCPS
Process Equipment Reliability Database (PERD) effort that created the CCPS data collection and
analysis guidelines [3]. To our knowledge, this effort has generated some relief valve data, but
little of it has been published. With more robust and detailed data, further analysis could be
conducted to determine whether a correlation exists between any of these valve parameters and
valve PFD.
References
1. Center for Chemical Process Safety; Guidelines for Initiating Events and Independent
Protection Layers in Layer of Protection Analysis; John Wiley & Sons, Inc., 111 River
Street Hoboken, NJ 07030, 2015.
2. Gross, R.E. and S.P. Harris; “Analysis of Safety Relief Valve Proof Test Data to Optimize
Lifecycle Maintenance Costs”, circa 2007.
3. Center for Chemical Process Safety; Guidelines for Improving Plant Reliability through
Data Collection and Analysis; American Institute of Chemical Engineers, New York, 1998.
5. MTL Instruments; "Availability, Reliability, SIL: What’s the difference?", AN9030, Rev.
3; Cooper Crouse-Hinds, 2010.
6. Bukowski, J.V., and W.M. Goble; “Analysis of Pressure Relief Valve Proof Test Data”,
Process Safety Progress, Vol. 28, No. 1, March 2009.
7. Mannan, S.; Lees’ Loss Prevention in the Process Industries, 3rd Ed.; Elsevier
Butterworth-Heinemann, 2005.
10. 2015 OREDA Participants; Offshore Reliability Data Handbook, 6th Ed.; Norway, 2015
12. SINTEF; Reliability Data for Control and Safety Systems, 1998 Edition; Trondheim,
Norway, 1999.
13. Sutton, J.W.C.; “A Comparative Relief Valve Study”, United Kingdom Atomic Energy
Authority, SRS/DB/10, circa 1976.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Unit revalidation and baseline studies for pressure relief analysis can result in a long list
of potential deficiencies, which may result from an increase in unit throughput, changes
to industry guidance or standards, changes to company internal guidelines for such
studies, conservative assumptions made in the absence of required data or based on a simplified
initial approach, management of change (MOC) at system or unit level, or a
combination of all of these. This paper addresses what kind of additional engineering tools
or processes can be applied on typical systems during revalidation studies, such as reactor
loops, columns, turbines and heat exchangers, to ensure a more accurate representation
of the relief scenarios to validate the deficiencies. In addition, the paper addresses what
improvements in MOC processes can be implemented in order to capture, assess and
reduce the cumulative adverse effect to unit pressure relief analysis due to changes.
Sadia Najneen, Rachel Smallman, Mindy Bergman, Camille Peres, Cassie Lewis, & Joseph W.
Hendricks
Environmental and Occupational Health
Psychological and Brain Sciences
Texas A&M University
College Station, Texas 77843-3122, USA
[email protected]
Abstract
Counterfactual thinking focuses on how the past might have been different and allows us to
mentally manipulate our past behavior and imagine a better (or worse) alternative outcome.
These thoughts are usually prompted by negative events that block one’s goals. Workers in high-
risk jobs often report that they are more likely to attend to potential risks in their work if they
have experienced a work-related negative event. This may be due to their engaging in
counterfactual thought and applying that to future situations. This pilot study is investigating
whether the benefits of counterfactual thinking can be explicitly incorporated into training paradigms
for workers in high-risk industries. Data is being collected using a virtual reality (SecondLife®)
warehouse where participants will complete two performance tasks. Between the two tasks, we
seek to capitalize on a negative incident (an explosion) by prompting participants to engage in
counterfactual thinking with a “good” or “poor” counterfactual training prompt, or by giving them
a control task (not associated with counterfactual thinking; three total conditions).
Participants’ performance between the first and second task will be compared for the two
counterfactual and control conditions. Currently there is limited research investigating the
application of counterfactual training to this domain and the current research will address this
gap. If successful, this training methodology may be able to minimize the risk of future incidents
(and maximize performance/safety); it is thus an important line of research, as it may save
lives and money and reduce injuries and incidents overall.
Keywords: Counterfactual Training, Procedure, Safety, Virtual Reality, Human Performance
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Recent empirical work has demonstrated some counter-intuitive findings regarding hazard
statement design when embedded in procedures. Notably, this recent research suggests warning
icons have the opposite of their intended effect of increasing compliance rates. The current study
utilized eye-tracking technology to determine whether or not participants are attending to hazard
statements based on two different exemplar designs that have yielded the largest gap in hazard
statement compliance. In other words, do we observe significant differences in attention to hazard
statements based on a few predominant design characteristics (i.e., warning icon, yellow
highlighting, numbering, and borders)? Furthermore, do these attentional shifts lead to different
compliance rates? Forty participants were asked to complete four rounds of tasks using the
constituent procedures for those tasks. Ongoing repeated measures analyses (ANOVA, HLM) are
being used to determine a) do the designs matter? and b) if differences exist, do they impact
compliance? Preliminary results suggest there are differences in attention and they reflect what we
expected based on previous research – previous low-compliance designs are associated with less
attention. Future directions for this line of research will be discussed.
Abstract
After a risk assessment has been completed and feasible risk reduction measures have been
reviewed, safeguard selection must be made and/or a decision taken on what residual risk would be
conditionally accepted. The straightforward way is to set up a binary decision tree and compare,
for a certain event, e.g., the risk reduction gain versus the cost of two alternatives. However, often
many contributing factors must be considered, such as: the nature, importance and context of the
risk, availability of the measures, procurement and maintenance costs, presence of personnel and
particularly the public near the hazardous area, vulnerability of the environment, and
determination and weighting of important contributing safety factors.
In such more complex cases the decision problem may take the form of building argumentation
for a preferred solution with a team of experts, or making a choice from a number of options and
selection criteria using independent experts/stakeholders’ opinions. The former is known as the
Toulmin model of argumentation, the latter are Multi-Criteria Decision Making (MCDM)
methods in which one criterion is weighted as more important than another and expert
opinion weighting factors are based on, e.g., education and experience. The outcome of the latter
will be a ranking of the alternatives. Where the Toulmin model will squeeze out explicit rational
arguments sharpened by rebuttal ones, in MCDM methods experts’ gut feeling may dominate,
but due to the weighting and mathematical processing a best-compromise ranking of the options is
the result of a synthesis process. Best known is the simple linear model of the Analytic
Hierarchical Process (AHP), but a number of more sophisticated methods will be briefly
described. In Multi- Attribute Utility Theory (MAUT) utility is a guiding principle, hence
economics dominate. A few methods will be selected for working out an example.
Keywords: Risk management decisions; Structured reasoning; Multi-criteria decision making;
Multi-attribute utility; Risk-based decision management
1. Introduction
As follows from the title, this paper is about decision making support methods, and more
specifically in the context of risks from hazardous material processing in the widest sense. It is,
however, not about the behavioral decision theory of Slovic et al. [1] that deals with people's risk
perception, nor about the psychological mechanisms of human decision making, such as
naturalistic decision making in highly demanding operational situations [2]. Rather, this paper is
about decision making, often in a team, whether safety is sufficiently secured, or whether
additional safeguards/barriers must be added and which ones will be adequate. These decisions
are taken after the hazards have been identified and characterized and the risks in a certain
situation assessed, when the question arises of weighing risk-reducing measures against cost and
effort. Most cases may be relatively straightforward and decided intuitively, but plant design and
operation also present a considerable number of complex cases in which an
optimal solution is not so simple. There may be alternatives, uncertainties, immeasurable
contributing effects, different views, different criteria, and overruling business objectives, while
a decision should be well substantiated and recorded so that the basis for the decision and the
reasoning can be retrieved and perhaps reproduced. So, in those cases rational decision making,
or at least the application of a rational method rather than intuition alone, is important.
Because there is a well-recognized demand, in particular in business and management decision
making under risk, many approaches and methods to prepare and aid in decision making have
been developed, and a wealth of literature is available. Of general character is the paper on
decision analysis by Keeney [3] and the book on smart choices by Hammond, Keeney, and
Raiffa [4]. For engineering purposes Hazelrigg [5] presents a collection of approaches, such as
optimization, probabilistic operations, utility, financial, and economic considerations. Predictive
risk assessment, though, is afflicted with much uncertainty. This is due to incompleteness of
hazard identification and scenario definition, of model deficiency and lack of reliable data. Of
course, the expectation is that large scale digitization will provide a source of “big” data, while
interpreting analytics will distil from those data useful information, on which more accurate
predictions can be based. Although a glut of data will undoubtedly contribute importantly, so-
called expert opinion and judgment will remain necessary in the decision-making process with
regard to risk. As made clear, e.g., by Baybutt [6], engineering judgment and expert opinion will
be relied on despite the heuristics and the many types of cognitive biases that may influence the
resulting judgment. Human judgment is simply subjective and imprecise, although training may
have an improving effect. Also, personality traits play an important role such as strong focus on a
target, risk aversion or appetite, (im-)patience, and impulsivity.
Asking a number of experts to give their opinion, at least initially independently of each other,
will to a certain extent compensate for the limitation of the individual person. Based on expert
judgment, an array of methods has been developed covering variations on the theme of judging
alternatives/options against criteria resulting after mathematical treatment in a ranking of the
alternatives. Most of these methods originated in the 1970s and 1980s, but over the years these
methods have been improved and extended.
All of the above methods have applications in many fields, such as economics, engineering,
management, and business. With regard to risk assessment, applications lie in generating and ordering data
for tools such as Failure Mode and Effect Analysis (FMEA), in combining severity and frequency
data into risk for a variety of operations, or in judging alternative safety-improving solutions against
a set of criteria. Their application for these purposes is rather unknown in the safety community,
and the objective of this paper is to raise awareness about the use of these techniques for
improving risk management and process safety. Although this paper has the character of a
review, it is not a real review of the massive body of literature in the field, which would
“overshoot” the objective. Our intention is to present just an overview of the various possibilities
for aiding decision making, given data and constraints, under uncertainty about how best to curb risk.
Section 2 contains the overview. A listing of methods covered is presented in Table 1, and in
Section 3 brief summaries will be given with, in most cases, a few recent example articles.
Figure 1. Decision tree of what value of information of performing a perfect test can contribute
to the choice between two process designs, A or B, adapted after Ang and Tang [21]. In contrast
to expectation, in the test design B shows best. The gain of choosing the right design without test
would be: −6 − 0.3 × (−100) = 24 k$, where the test cost is 10 k$, so it pays.
Another long existing method is that of the Binary Decision Diagrams (BDDs) [22] with
Boolean functions at the branch nodes: 1 or 0, true or false. BDD trees are directed acyclic
graphs [23]. The approach, originally developed for designing electrical circuits, has been
exploited for assisting predictive decision making under risk [24] in, e.g., a design process. Here
one can take into account the value of acquiring more information e.g., by performing a test to
lower uncertainty, versus the risk of taking the decision now. To that end, both the costs of the
test, and the cost of the risk of choosing the wrong design must be expressed in dollars. Figure 1
shows the tree of a test to determine the efficiency of a process in order to choose design A, if
the test shows a high efficiency, or B otherwise. Prior to the test, the probability of the test result
(high or low) shall be estimated by experts. However, such trees can easily be implemented as
well in a Bayesian network [25, pp. 398-399].
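As a minimal sketch of such a value-of-information check (the payoffs and probabilities below are illustrative placeholders, not the figures of the Ang and Tang example in Figure 1):

    # Expected value of perfect information for a two-design choice.
    p_high = 0.7                 # estimated probability that the test shows high efficiency
    payoff = {                   # outcome values in k$ for each (design, test outcome); placeholders
        ("A", "high"): 20, ("A", "low"): -100,
        ("B", "high"): 5,  ("B", "low"): -6,
    }

    # Expected value when committing to one design now (no test).
    ev_no_test = {d: p_high * payoff[(d, "high")] + (1 - p_high) * payoff[(d, "low")]
                  for d in ("A", "B")}
    best_now = max(ev_no_test.values())

    # Expected value with a perfect test: pick the best design for each outcome.
    ev_perfect = (p_high * max(payoff[("A", "high")], payoff[("B", "high")]) +
                  (1 - p_high) * max(payoff[("A", "low")], payoff[("B", "low")]))

    value_of_information = ev_perfect - best_now
    test_cost = 10  # k$
    print(f"EVPI = {value_of_information:.1f} k$; run the test: {value_of_information > test_cost}")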
Multi-Criteria Decision-Making (MCDM) methods are all about ranking alternatives given
criteria and other constraints. One of the criteria is often cost. A simple cost-effectiveness
approach considers only a single effectiveness criterion and ignores other valued characteristics/qualities of
the alternative to be chosen. Therefore, a trade-off/compromise solution should be sought that
satisfies most of the criteria. This means that a multiplicity of objectives that are not measurable
or directly comparable should be optimized, e.g., not only the objective of the least possible cost,
which depends on a number of factors such as investment cost, usability, maintainability, etc.,
but also the (conflicting) objective of the lowest possible risk. For a given case, the latter may depend
on differently weighted criteria such as consequence and probability. This is not a simple problem and it
turned out to generate a multiplicity of methods. The search for such a compromise started as early
as the 1970s; Zeleny [26] was one of the pioneers and also gave an interesting view
on the developing management science of the time [27]. In MCDM by pairwise comparison,
experts express preferences, often using a form of Likert scale, indicating which alternatives out of a
variety of possibilities would best fit a series of different criteria. Both criteria and experts can be
weighted. The result after a few matrix operations is a ranking of the alternatives from the best to
the worst.
Experts can be asked to produce a relative numerical or linguistically graded term when judging
alternatives with respect to a certain measure, e.g., a property, a key performance indicator, an
index or another value. The most popular method is Saaty’s Analytic Hierarchy Process (AHP)
[28], solving of which is claimed to be improved by the Best-Worst Method (BWM) [29]. AHP,
some simpler methods and BWM are described in Section 4. Other methods in this category and
their fuzzy versions will be discussed more extensively in Section 5.
In the early 1970s Battelle Memorial Institute developed the Decision-Making Trial and
Evaluation Laboratory (DEMATEL) method to analyze cause-effect relationships of the many
interacting factors playing a role in complex problems. The method enabled investigating
problems, such as those of the world [30], resulting in better decisions. Experts are asked to
indicate by pairwise comparison the strength of influence of one factor upon another. After
mathematical treatment the cause-effect relations between the factors can be identified.
DEMATEL results can be input to Bayesian Network (BN) analysis, and for a different
application to the Analytic Network Process (ANP). This will be explained in detail in Section 6.
A separate category of approach is formed by the Multi-Attribute Utility Theory (MAUT) in
which the discriminating measure is the utility of an alternative given criteria and constraints. If
the decision making is about risk, the utility function can be chosen according to human
perception of risk proposed by Nobel prize winners Kahneman and Tversky [31]. This function is
embodied in the TODIM method [32],[33]. All of this will be summarized in Section 7. Pareto
Front optimization method is also a means to enable improved decision making and will be
briefly described in Section 8.
In Section 9 the various categories of methods will be compared, in Section 10 results of
applying several methods to a simple process safety example will be compared, and the paper
conclusions will be drawn in the final section. Given the wide scope of the field, a few more
methods exist that are less interesting for our purpose and are not covered here.
Figure 4 Left: ORESTE example showing ranks of alternatives; Right: Orthogonal projection on the position
matrix of the example, adapted after [62]. For further explanation, see text.
In Figure 4 the linear orthogonal projection is shown. In agreement with the above inequalities,
this projection makes, for example, the 4th position in the 1st column equivalent to the 3rd position in the 2nd
column. Hence, for this type of projection, the substitution rate T between criteria ranks and
action/alternative ranks is T = Δr_c / Δr_c(a) = 1, in which r_c is the rank of the criterion and
r_c(a) the rank of the alternative. It means that loss of a rank position of an alternative is made
up by the increase of importance of a criterion by one rank. For the other types of projection, we
refer to [62]. The 𝑃𝐼𝑅 structure of ORESTE makes it according to Liao et al. [63] more reliable.
These latter authors presented references which subsequently made method improvements and
introduced a sophisticated type of fuzzy set (hesitant fuzzy linguistic set), where earlier a
traditional fuzzy set was applied [64].
TOPSIS
Yoon and Hwang developed TOPSIS (Technique for Order Preference by Similarity to Ideal
Solution) at the Kansas State University in 1980 [65]. They followed a concept already
mentioned by Zeleny [26]. A selected alternative should have the shortest distance to a positive
ideal solution and the longest to a negative-ideal one. It starts with a matrix of M
alternatives (index i) versus N criteria (index j), with weights w_j attributed to the criteria. Preferences x_ij of
each alternative with respect to each criterion are expressed in this matrix. In its simplest
form the evaluation goes as follows: criteria preference figures are normalized by
computing r_ij = x_ij / (Σ_i x_ij²)^1/2. The elements r_ij are multiplied by the
corresponding criteria weights, yielding v_ij = w_j · r_ij. Next, the ideal solution A+ is
composed of the highest elements v_ij of the columns representing benefits and the lowest of
the conflicting cost columns, and the negative ideal A− of the lowest benefit elements v_ij and
the highest cost. Distances to the ideal and negative ideal are derived by calculating, for each
alternative i = 1, 2, …, M, S_i+ = {Σ_j (v_ij − v_j+)²}^1/2 and S_i− = {Σ_j (v_ij − v_j−)²}^1/2.
Finally, the relative closeness of each alternative to the ideal solution follows as S_i− / (S_i+ +
S_i−). The largest is the best alternative and the smallest the worst. The results can be shown
graphically in 2-dimensional objective space, see Figure 5.
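A minimal sketch of these TOPSIS steps (the alternatives, weights, and criterion types in the example call are hypothetical):

    import numpy as np

    def topsis(X, w, benefit):
        """Rank alternatives with basic TOPSIS.
        X: (m, n) matrix of preferences x_ij (rows = alternatives, columns = criteria)
        w: length-n criterion weights
        benefit: length-n booleans, True for benefit criteria, False for cost criteria"""
        X = np.asarray(X, float)
        w = np.asarray(w, float)
        R = X / np.sqrt((X ** 2).sum(axis=0))        # r_ij = x_ij / sqrt(sum_i x_ij^2)
        V = w * R                                    # v_ij = w_j * r_ij
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))    # A+
        nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))    # A-
        s_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        s_minus = np.sqrt(((V - nadir) ** 2).sum(axis=1))
        return s_minus / (s_plus + s_minus)          # closeness: largest is best

    # Hypothetical example: three safeguard options scored on risk reduction (benefit)
    # and cost (cost criterion); weights are placeholders.
    closeness = topsis([[8, 120], [6, 60], [9, 200]], w=[0.6, 0.4], benefit=[True, False])
    print(closeness)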
Figure 7 Utility/value function reproduced according to Kahneman and Tversky [31]: concave
with risk aversion behavior in the positive quadrant; convex and steeper with risk propensity in
the negative one.
†
For this work Daniel Kahneman received the Nobel prize in Economics in 2002, 6 years after the death of Amos
Tversky (the prize is not awarded posthumously)
Wang et al. [101] applied prospect theory and MCDM to failure modes and effects analysis
(FMEA) for the evaluation of risk factors, whose values are aggregated using a fuzzy measure
and the Choquet integral. It all serves to determine FMEA’s RPNs (risk priority numbers) and
to prioritize failure modes. Grabisch [102] reviewed, in the context of MAUT, various
aggregation approaches including the Choquet integral for fuzzy measures μ such as membership
functions. If there exist functions f : X → [0, 1], ordered such that f(x_(1)) ≤ … ≤ f(x_(n)) with respect to μ,
then the Choquet integral is C_μ(f(x_1), …, f(x_n)) := Σ_i (f(x_(i)) − f(x_(i−1))) · μ(A_(i)), where f(x_(0)) = 0 and μ(A_(i))
represents the importance weight of the set of criteria A_(i) = {x_(i), …, x_(n)}.
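A minimal sketch of the discrete Choquet integral defined above; the two-criterion scores and the additive fuzzy measure are hypothetical placeholders (with an additive measure the result reduces to a weighted sum):

    def choquet_integral(f, mu):
        """Discrete Choquet integral of scores f (dict: criterion -> value in [0, 1])
        with respect to a fuzzy measure mu (callable on a frozenset of criteria)."""
        items = sorted(f, key=f.get)                   # order criteria by increasing score
        total, prev = 0.0, 0.0
        for k, x in enumerate(items):
            A_k = frozenset(items[k:])                 # criteria with score >= f(x)
            total += (f[x] - prev) * mu(A_k)
            prev = f[x]
        return total

    # Hypothetical 2-criterion example with a simple additive measure.
    weights = {"severity": 0.7, "frequency": 0.3}
    mu = lambda A: sum(weights[c] for c in A)
    print(choquet_integral({"severity": 0.4, "frequency": 0.9}, mu))  # 0.7*0.4 + 0.3*0.9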
Huang et al. [103] note that decision makers do not need to decide groupwise on preferences but
can make their choices individually, while the results can be aggregated by means of the SAW
method. So, it is even possible to perform a MCDM applying internet communication means.
In 2004, Chavas [104] treated decision making under risk comprehensively, expanded the
expected utility hypothesis with some further assumptions, and described the mathematics for
various cases of risk appetite, neutrality, constant decreasing, and increasing risk aversion.
TODIM
As mentioned by Gomes and Rangel [32], Gomes and Lima developed the MCDM method
TODIM (Interactive and Multicriteria Decision Making in a Portuguese acronym) in 1992. The
method applies the value function of the prospect theory [31] and the additive utility function
[95] requiring verification of independence of attribute options. Further, the method uses
pairwise comparison of criteria, enables elimination of inconsistencies, allows linguistic grades,
fuzzy inputs and interdependencies of options, but no trade-offs. The method has been
characterized as having, on the one hand, American MAUT features and, on the other, French ones
of ELECTRE or PROMETHEE.
Again, there are m alternatives and n criteria‡. Experts weight the criteria c, after which the weights
w_c are normalized, the highest-value criterion is indicated as the reference one r, and all weights are
divided by the reference weight, giving w_rc = w_c / w_r. Next, the dominance δ of alternative A_i with performance P_ic over
alternative A_j with P_jc is determined as δ(A_i, A_j) = Σ_c Φ_c(A_i, A_j) ∀ (i, j). Applying the
value curve of Prospect Theory to the terms: Φ_c(A_i, A_j) = 0, if P_ic − P_jc = 0; Φ_c(A_i, A_j) =
[w_rc (P_ic − P_jc) / Σ_c w_rc]^1/2, if P_ic − P_jc > 0; and Φ_c(A_i, A_j) = (−1/θ)[(Σ_c w_rc)(P_jc − P_ic) / w_rc]^1/2, if
P_ic − P_jc < 0, where θ is an attenuation factor controlling the shape of the loss curve.
After calculating the square dominance matrix for each criterion, the overall matrix is obtained
by summing, and the global value of each alternative is normalized as
ξ_i = [Σ_j δ(A_i, A_j) − min_i Σ_j δ(A_i, A_j)] / [max_i Σ_j δ(A_i, A_j) − min_i Σ_j δ(A_i, A_j)].
The last step is ordering the ξ_i to rank the alternatives. A sensitivity analysis is recommended
with respect to the reference criterion, the weights, the attenuation choice, and the alternative performances.
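A minimal sketch of the classical TODIM dominance calculation described above (the performance matrix, weights, and attenuation factor in the example call are hypothetical):

    import numpy as np

    def todim_ranking(P, w, theta=1.0):
        """Rank alternatives with the classical TODIM dominance measure.
        P: (m, n) performance matrix (rows = alternatives, columns = criteria),
           assumed already normalized so that higher is better.
        w: criterion weights; divided by the largest (reference) weight.
        theta: attenuation factor for losses."""
        P = np.asarray(P, dtype=float)
        w = np.asarray(w, dtype=float)
        wr = w / w.max()          # weights relative to the reference criterion
        W = wr.sum()
        m, n = P.shape
        delta = np.zeros((m, m))  # overall dominance of alternative i over j
        for i in range(m):
            for j in range(m):
                for c in range(n):
                    d = P[i, c] - P[j, c]
                    if d > 0:
                        delta[i, j] += np.sqrt(wr[c] * d / W)              # gain branch
                    elif d < 0:
                        delta[i, j] -= np.sqrt(W * (-d) / wr[c]) / theta   # loss branch
        score = delta.sum(axis=1)
        return (score - score.min()) / (score.max() - score.min())  # global values, 0..1

    # Hypothetical example: three options scored on two criteria (higher is better)
    print(todim_ranking([[0.8, 0.3], [0.5, 0.9], [0.6, 0.6]], w=[0.6, 0.4], theta=1.0))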
TODIM has fuzzy applications too; for examples, see [105], [106].
‡
In contrast to [40] we follow the conventional matrix notation of 𝑚 × 𝑛, and not 𝑛 × 𝑚.
8. Pareto Front Optimization
To complete the array of decision analysis and aiding techniques, the Pareto Frontier method is
mentioned in this section and an overview is given on how it works. In its simplest form the
frontier forms the envelope around optimal choices of alternatives in case of two objective
variables, plotted on Cartesian coordinates, as shown in Figure 8 [25, p. 395]. Hence, it is a
Multi-Objective Optimization, often with Programming added to the name (MOOP or MOP),
because particularly in multi-dimensional cases the optimization becomes rather intricate.
Figure 8 Risk versus costs to reduce it. The envelope is an illustration of a Pareto front, from
[25].
Note that the Optimum risk reduction envelope must be an interval or range that includes an
estimate of the aggregated epistemic uncertainty of the data used to construct the envelope.
Otherwise, the Costs of risk reduction below the Risk acceptance level will be underestimated.
An explanation of the Pareto Front and a number of two-dimensional examples are provided by Ščap et al. [107]; solutions at the front dominate the ones away from it. Many algorithms to perform the optimization have been proposed. Giagkiozis and Fleming [108] focus on multi-objective evolutionary algorithms. This includes metamodeling and the use of surrogate models to speed up the computational process, Pareto estimation via either dominance-based or decomposition algorithms, and the Radial Basis Function Neural Network technique to fit and map results. The latter enables optimization, e.g., in 3-dimensional space with three objective variables. An example of optimizing the design of a bridge has been worked out by Pouraminian and Pourbakhshian [109]. These authors applied ANSYS for structural analysis, determined a Pareto front by optimizing with the particle swarm technique, and subsequently used VIKOR to determine the optimal point on the Pareto front.
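As a minimal illustration of the dominance filtering underlying Figure 8, the sketch below extracts the non-dominated points for two objectives that are both minimized (e.g., residual risk and cost of risk reduction); it is a generic filter under that assumption, not the algorithms of [108] or [109].

    import numpy as np

    def pareto_front(points):
        """Return the non-dominated rows of an array of objective values
        (minimization in every column)."""
        points = np.asarray(points, dtype=float)
        keep = np.ones(len(points), dtype=bool)
        for i, p in enumerate(points):
            # p is dominated if some other point is no worse in all objectives
            # and strictly better in at least one
            dominated = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
            if dominated.any():
                keep[i] = False
        return points[keep]

    # (risk, cost) pairs; the front is the lower-left envelope of the cloud.
    cloud = [(0.9, 1.0), (0.6, 2.0), (0.7, 1.5), (0.4, 4.0), (0.5, 3.5), (0.8, 1.8)]
    print(pareto_front(cloud))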
Figure 9 Hierarchical structures of AHP and MAUT respectively, adapted after [112].
Several papers present results of comparisons; a few have already been mentioned above, such as [68], [72], and they do not show dramatic differences between method outcomes. An interesting comparison between two radically different methods, AHP and MAUT, was made by Belton [112], who showed the fundamental difference between the two approaches. AHP treats importance weights of criteria and scores of the alternatives given a criterion in the same way. In contrast, to achieve a certain goal MAUT first derives utility functions of the attributes based on judgment inputs and then determines the utility values of the alternative solutions (options). Belton made this clear with a schematic drawing showing the influences of three "experts" on AHP criteria and from those to each alternative, while in the case of MAUT the utility values determined by the three "experts" yield a series of outcomes. The difference is schematically presented in Figure 9. Outcomes of comparable inputs to the two methods for an example with rather extreme figures showed relatively large dispersion and a certain bias. De Leeneer and Pastijn [113] compared ORESTE with PROMETHEE and a SAW-like method, finding slight but no serious differences, although of course the additive method is not suited for incomparability.
The possible advantage of the use of fuzzy set theory in decision analysis remains a point of
discussion, Dubois [114].
Sensitivity analysis would help to identify major influencing parameters. However, because of its
complexity with a method such as ELECTRE it is difficult to perform a sensitivity analysis, see
[94].
On the web one can find "decision radar" [115], which offers to solve a problem online for free with TOPSIS, ELECTRE, SAW, Linear Assignment, or AHP. The program uses the term indicator for criterion and choice for alternative/option. In the case of AHP only the consistency is checked and the priority matrix calculated. Linear Assignment is a cost minimization for a number of tasks to be assigned to the same number of agents (balanced) or a different number (unbalanced), where each agent has a different cost for the task assigned. So, it can be applied as a decision-making method, and the system can be solved by linear programming (a small sketch is given after this paragraph). There are also software toolkits with a license for sale for ELECTRE I and III [116] and ELECTRE III and IV [117]. PROMETHEE software can be freely downloaded [118], while VIKOR, TOPSIS and DEMATEL, including their fuzzy versions, can be solved online [119]. The International Society on MCDM [120] offers a much broader variety of software, among others the online MAUT tool Decision Navigation ("Entscheidungsnavi"), an MCDA package for R, a MATLAB solver for MOP, and more. Finally, there is the Creative Decisions Foundation [121], established by Thomas L. Saaty in 1996, which provides for free the educational software Super Decisions that assists step-by-step in applying the AHP and ANP methods.
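As an illustration of the Linear Assignment model mentioned above, a balanced assignment can be solved directly with a standard combinatorial optimization routine; the cost matrix below is hypothetical and serves only to show the mechanics.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical cost matrix: rows are agents, columns are tasks.
    cost = np.array([[4, 1, 3],
                     [2, 0, 5],
                     [3, 2, 2]])
    agents, tasks = linear_sum_assignment(cost)        # minimizes the total cost
    print(list(zip(agents, tasks)), "total cost =", cost[agents, tasks].sum())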
Calculations have been performed according to the AHP process, SAW/WSM and WSP using MS Excel, details of which are provided in the Supplementary material, while for all other MCDM methods use was made of the software mentioned in Section 9. Results are presented in Table 3.
Table 3 Ranking results calculated according to various decision aiding methods
Method AHP SAW/WSM WSP PROMETHEE ELECTRE TOPSIS VIKOR MAUT Rank
PRV 0.19 0.19 0.20 -0.8 D>P 0.13 0.14 0.18 3
Burst.Disk 0.30 0.31 0.35 -0.1 D>B 0.37 0.31 0.31 2
Dump 0.51 0.50 0.44 0.9 0.50 0.54 0.51 1
Note the agreement in the ranking of the alternatives despite the quantitative differences. ELECTRE only provides relative preferences. The quantitative output of TOPSIS (closeness vector) and VIKOR (S group utility) is high when the rank is low; those figures were converted by taking reciprocal values and normalizing. The MAUT output is, through the inflection of the utility curve over the defined range of the attribute, quite sensitive to the assumed willingness to take risk, which was taken here as: inflection up = risk aversion -> availability; straight line = neutral -> risk reduction and clean-up; inflection down = risk appetite -> capital cost. A simple illustration of such utility-curve shapes is sketched below.
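One simple way to generate the three utility-curve shapes referred to in the note above is with exponential value functions normalized on the attribute's defined range; the functional form and the rho parameter below are illustrative assumptions, not the utilities actually elicited for Table 3.

    import numpy as np

    def utility(x, attitude="neutral", rho=0.3):
        """Single-attribute utility on [0, 1] for x scaled to the attribute range:
        'averse' gives a concave curve (inflection up), 'neutral' a straight line,
        'seeking' a convex curve (inflection down)."""
        x = np.asarray(x, dtype=float)
        if attitude == "neutral":
            return x
        if attitude == "averse":
            return (1 - np.exp(-x / rho)) / (1 - np.exp(-1 / rho))
        return (np.exp(x / rho) - 1) / (np.exp(1 / rho) - 1)    # risk seeking

    x = np.linspace(0, 1, 5)
    print(utility(x, "averse"), utility(x, "neutral"), utility(x, "seeking"))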
A recent example of a practical application of AHP and TOPSIS is a Multi-Attribute HAZOP [122] based on a wellhead P&ID. Experts identified 50 hazards and produced fuzzy hazard scenario risk factor inputs of severity, frequency, undetectability, sensitivity to maintenance effectiveness, and sensitivity to failure of safety measures. Risk factor weights were determined with the aid of AHP, while in a second step the hazards were ranked with TOPSIS.
11. Conclusions
There is an abundance of methods available to support decisions under uncertainty, which one
way or the other can be applied in risk management and risk governance. The methods have been
described rather superficially and only a fraction of the literature on these methods has been
referenced. The methods have very different footing and are each suited for different situations.
They vary from qualitative methods focused on rational reasoning, such as Toulmin’s approach,
or recalling past similar cases, to quantitative methods. The latter can be probabilistic, such as
decision tree, or based on MCDM preference ranking. The importance and preference scores are for the greater part based on expert estimates, but where possible measured properties, test outcomes, or calculation results can be a source too. Preference data can be processed in matrix operations varying from simple to complex ones; the latter try to obtain the best compromises, with graphical output that can even be multi-objective. Most methods produce a ranking of options. Some are technologically oriented to obtain an optimum choice; others have an economic/financial objective. Some specialized methods, such as ELECTRE Tri, are dedicated to the categorization/classification of options. DEMATEL exposes influences among factors that play a role in a problem field. MAUT is guided by the value persons attribute to outcomes through their individual utility functions. No method constitutes a panacea [43], [111]. In many cases base methods are combined with other techniques, such as fuzzy inputs, aggregation of output, e.g., with SAW, or, in the case of DEMATEL, further processing of the output with ANP or, even better, a Bayesian Network.
Some of the methods are worked out on a simple example case. Results are presented for a rather simple case of selecting the best protection option for a reactor that may run away. Appendices in the Supplementary material contain the details. For most of the methods described, software is available on the Internet.
References
1. Slovic, P., Fischhoff, B, Lichtenstein, S. 1984. Behavioral Decision Theory Perspectives on Risk
and Safety. Acta Psychologica 56, 182-203.
2. Klein, G., 2008, Naturalistic Decision Making. Human Factors 50 (3), 456-460.
3. Keeney, R.L. 1982. Decision Analysis: An Overview. Operations Research 30 (5), 803-838
4. Hammond, J.S., Keeney, R.L., Raiffa, H. 1999. Smart Choices: A Practical Guide to Making Better
Decisions. Harvard Business School Press, Boston Mass.
5. Hazelrigg, G.A. 2012. Fundamentals of decision making for engineering design and systems
engineering. © Copyright 2010 by George A. Hazelrigg. ISBN:978-0-984- 99760-2, 0984997601.
6. Baybutt, P. 2017. The Validity of Engineering Judgment and Expert Opinion in Hazard and Risk
Analysis: The Influence of Cognitive Biases. Process Safety Progress 37 (2), 205-210.
7. Schank, R.C. and Leake, D.B. 1989. Creativity and Learning in a Case-Based Explainer. Artificial
Intelligence 40, 353-385.
8. Aamodt, A. and Plaza, E. 1994. Case-Based Reasoning: Foundational Issues, Methodological
Variations, and System Approaches. AI Communications 7 (1), 39-59.
9. ElKafrawy, P., Mohamed, R.A. 2014. Comparative Study of Case Based Reasoning Software.
International Journal of Scientific Research and Management Studies 1 (1), 224-233.
10. Linstone, H.A. and Turoff, M. (Eds.), 1975. The Delphi Method, Techniques and Applications;
https://pdfs.semanticscholar.org/8634/72a67f5bdc67e4782306efd883fca23e3a3d.pdf?_ga=2.15499
9160.1796105135.1581231636-920793509.1581231636.
11. Armstrong, C., Grant S., Kinnett, K., Denger B., Martin, A., Coulter I., Booth M., Khodyakov, D.,
2019. Participant experiences with a new online modified-Delphi approach for engaging patients
and caregivers in developing clinical guidelines. Eur. J. for Person Centered Healthcare 7 (3), 476-
489.
12. Toulmin, S.E. 2003. The Uses of Argument, Updated Ed. Cambridge University Press, ISBN-13:
978-0521827485 (first edition 1958).
13. Zhao, J., Cui, L., Zhao, L., Qiu, T., Chen, B., 2009. Learning HAZOPexpert system by case-based
reasoning and ontology. Computers and Chemical Engineering 33 (1), 371–378
14. Paltrinieri, N., Tugnoli, A., Buston, J., Wardman, M., Cozzani, V.,2013. Dynamic Procedure for
Atypical Scenarios Identification (DyPASI): a new systematic HAZID tool. J. Loss Prev. Process
Ind. 26, 683–695.
15. Bellamy, L.J., Ale, B.J.M., Whiston, J.Y., Mud, M.L., Baksteen. H., Hale, A.R., Papazoglou, I.A.,
Bloemhoff, A., Damen, M., Oh, J.I.H. 2008. The software tool storybuilder and the analysis of the
horrible stories of occupational accidents. Safety Science 46, 186–197. See also 2011 Storybuilder
User Manual, https://www.rivm.nl/documenten/storybuilder-23011-user-manual .
16. Su, Y., Yang, Sh., Liu, K., Hua K. and Yao, Q. 2019. Developing A Case-Based Reasoning Model
for Safety Accident Pre-Control and Decision Making in the Construction Industry. International
Journal of Environmental Research and Public Health 16, 1511 20p.
17. Kaplan, R.S. and Norton, D.P. 1992. The Balanced Scorecard – Measures that Drive Performance,
Harvard Business Review, Jan-Feb 1992, 71-79.
18. Kaplan, R.S. and Norton, D.P. 2001. Transforming the Balanced Scorecard from Performance
Measurement to Strategic Management, Part I. Accounting Horizons 15 (1), 87-104, and Part II.
Accounting Horizons 15 (2), 147-160.
19. Nicolás, C., Gil-Lafuente, J., Urrutia Sepúlveda, A., Valenzuela Fernández, L. 2015. Fuzzy Logic
Approach Applied into Balanced Scorecard. International Forum for Interdisciplinary Mathematics
FIM 2015: Applied Mathematics and Computational Intelligence pp. 140-151 (AISC, Vol. 730
Springer).
20. Hong, Y., Pasman, H.J., Quddus, N., Mannan M.S. 2020. Supporting risk management decision
making by converting linguistic graded qualitative risk matrices through interval type-2 fuzzy sets.
Process Safety and Environmental Protection 134, 308–322.
21. Ang, A. H-S. and Tang, W.H. 1990. Probability Concepts in Engineering Planning and Design, Vol
2, Wiley, Decision, Risk, and Reliability.
22. Lee, C. Y. Binary decision programs. 1959 Bell System Technical Journal, 38 (4), 985–999, July.
23. Andersen, H.R. 1999. An Introduction to Binary Decision Diagrams. University of Copenhagen.
2019: https://www.cmi.ac.in/~madhavan/courses/verification-2011/andersen-bdd.pdf.
24. Jordaan, I. 2005. Decisions under Uncertainty, Probabilistic Analysis for Engineering Decision.
Cambridge University Press, Cambridge U.K. ISBN 0 521 78277 5.
25. Pasman, H.J., 2015. Risk Analysis and Control for Industrial Processes – Gas, Oil and Chemicals.
A System Perspective for Assessing and Avoiding Low-probability, High consequence Events, pp.
398-399. Butterworth Heinemann, Copyright © 2015 Elsevier Inc. ISBN: 978-0-12-800057-1.
26. Zeleny, M. 1974. A Concept of Compromise Solutions and the Method of the Displaced Ideal.
Computers and Operations Research 1, 479-496.
27. Zeleny, M. 1975. Notes, Ideas & Techniques. New Vistas of Management Science. Computers and
Operations Research 2, 121-125.
28. Saaty, T.L. 1977. Scaling method for priorities in hierarchical structures. Journal of Mathematical
Psychology 15 (3), 234-281.
29. Chang, D.Y. 1996. Applications of the extent analysis method on fuzzy AHP. European Journal of
Operational Research 95, 649-655.
30. Gabus, A. and Fontela, E. 1972. World Problems, An Invitation to Further Thought within The
Framework of DEMATEL, Battelle Geneva Research Centre, Geneva, Switzerland.
31. Kahneman, D. and Tversky, A. 1979. Prospect theory. Econometrica 47 (2), 263-298.
32. Gomes, L.F.A.M., Rangel, L.A.D. 2009. An application of the TODIM method to the multicriteria
rental evaluation of residential properties. European Journal of Operational Research 193, 204–211.
33. Gomes, L.F.A.M., Lima, M.M.P.P., 1992. TODIM: Basics and application to multicriteria ranking
of projects with environmental impacts. Foundations of Computing and Decision Sciences 16 (4),
113–127. (no digital version available).
34. OREDA, 2015. Offshore and Onshore Reliability Data, sixth ed., SINTEF NTNU.
<https://www.oreda.com/>.
35. Deepwater Horizon, 2011. Macondo - The Gulf Oil Disaster, Chief Counsel’s Report. National
commission on the BP Deepwater Horizon oil spill and offshore drilling, Ch. 4.6.
https://editors.eol.org/eoearth/wiki/Macondo:_The_Gulf_Oil_Disaster .
36. Saaty, T.L. 2013. The Modern Science of Multicriteria Decision Making and Its Practical
Applications: The AHP/ANP Approach. Operations research 61 (5), 1101-1118.
37. Saaty, Th.L. 1990. How to make a decision: The Analytic Hierarchy Process. European Journal of
Operational Research 48, 9-26.
38. Saaty, Th.L. 2008. The Analytic Hierarchy and Analytic Network Measurement Processes:
Applications to Decisions under Risk. European Journal of Pure and Applied Mathematics 1 (1),
122-196.
39. Saaty Th.L., Luis G V. 1984. Comparison of eigenvalue, logarithmic least squares and least
squares methods in estimating ratios. Mathematical modelling 5 (5), 309–324.
40. Abdullah, l., Adawiyah, C.D.R. 2014. Simple Additive Weighting Methods of Multi criteria
Decision Making and Applications: A Decade Review. International Journal of Information
Processing and Management 5 (1), 39-49.
41. Churchman, C. W., Ackoff, R. L., Arnoff, E. L. 1957. Introduction to Operations Research, Wiley.
New York.
42. Afshari, A., Mojahed, M and Yussuf, R.M. 2010. Simple Additive Weighting approach to
Personnel Selection problem. International Journal of Innovation, Management and Technology 1
(5), 511-515; ISSN: 2010-0248.
43. Triantaphyllou, E., Mann, S.H. 1989. An Examination of the Effectiveness of Multi-Dimensional
Decision-Making Methods: A Decision-Making Paradox. International Journal of Decision Support
Systems. 5 (3): 303–312.
44. Belton, V., and Gear, T. 1983. On a Short-coming of Saaty's Method of Analytic Hierarchies,
Omega, 11 (3), 228-230.
45. Fishburn, P.C. 1965. Independence in Utility Theory with Whole Product Sets. Operations
Research 13, 28-45.
46. Fishburn, P.C. 1966. A Note on Recent Developments in Additive Utility Theories for Multiple-
Factor Situations. Operations Research 1143-1148.
47. Fishburn, P.C. 1967. Additive Utilities with Incomplete Product Set: Applications to Priorities and
Assignments, Operations Research 537-542.
48. Rezaei, J. 2015. Best-worst multi-criteria decision-making method. Omega 53, 49–57.
49. Rezaei, J. 2016. Best-worst multi-criteria decision-making method: Some properties and a linear
model. Omega 64, 126–130.
50. Mi, X., Tang, M., Liao, H., Shen, W., Lev, B. 2019. The state-of-the-art survey on integrations and
applications of the best worst method in decision making: Why, what, what for and what’s next?
Omega 87, 205–225.
51. Van Laarhoven, P.J.M., and Pedrycs, W. 1983. A fuzzy extension of Saaty's priority theory. Fuzzy Sets and Systems 11, 229-241.
52. Saaty, Th.L., Tran, L.T. 2007. On the invalidity of fuzzifying numerical judgments in the Analytic
Hierarchy Process. Mathematical and Computer Modelling 46, 962–975
53. Chan, H.K., Sun, X., Chung, S.H. 2019, When should fuzzy analytic hierarchy process be used
instead of analytic hierarchy process? Decision Support Systems 125, 113114.
54. Govindan, K., Brandt Jepsen, M. 2016. ELECTRE: A comprehensive literature review on
methodologies and applications. European Journal of Operational Research 250, 1–29.
55. Benayoun, R., Roy, B., & Sussman, B. (1966). Une méthode pour guider le choix en presence de
points de vue multiples. Note de travail 49. SEMA-METRA. Direction-Scientifique.
56. Figueira, J.R., Greco, S., Roy, B and Słowínski, R. 2013. An Overview of ELECTRE Methods and
their Recent Extensions. Journal of Multi-Criteria Decision Analysis 20, 61–85.
57. Corrente, S., Greco, S., Słowínski, R. 2016. Multiple Criteria Hierarchy Process for ELECTRE Tri
methods. European Journal of Operational Research 252, 191–203.
58. Brans, J.P. and Vincke, PH. 1985. A Preference Ranking Organisation Method (The PROMETHEE
Method for Multiple Criteria Decision-Making). Management Science 31 (6), 647-656.
59. Behzadian, M., Kazemzadeh, R.B., Albadvi, A., Aghdasi, M. 2010. PROMETHEE: A
comprehensive literature review on methodologies and applications. European Journal of
Operational Research 200, 198-215.
60. Wolters, W.T.M., Mareschal, B. 1995. Novel types of sensitivity analysis for additive MCDM
methods. European Journal of Operational Research 81, 281-290.
61. Roubens, M. 1982. Preference relations on actions and criteria in multicriteria decision making.
European Journal of Operational Research 10, 51-55.
62. Pastijn, H. and Leysen, J. 1989. Constructing an Outranking Relation with ORESTE. Mathematical
and Computer Modelling 12 (10/11), 1255-1268.
63. Liao, H., Wu, X., Liang, X., Xu, J., Herrera, F. 2018. A New Hesitant Fuzzy Linguistic ORESTE
Method for Hybrid Multicriteria Decision Making. IEEE Trans on Fuzzy Systems 26 (6), 3793-
3807.
64. Fasanghari, M. and Pour, M.M. 2008. Information and communication technology research center
ranking utilizing a new fuzzy ORESTE method (FORESTE),” 3rd Int’l. Conf. on Convergence and
Hybrid Information Technology, Busan, South Korea, 2, 737–742.
65. Hwang, C.L., & Yoon, K. 1981. Multiple attribute decision making, methods and applications: Vol.
186. New York: Springer-Verlag.
66. Lai, Y.J., Liu, T.Y. and Hwang, C.L. 1994. TOPSIS for MODM. European Journal of Operational
Research 76, 486-500.
67. Balioti, V., Tzimopoulos, Ch. and Evangelides, Ch. 2018. Multi-Criteria Decision Making Using
TOPSIS Method Under Fuzzy Environment. Application in Spillway Selection. Proceedings 2,
637.
68. Opricovic, S., Tzeng, G.H. 2004. Compromise solution by MCDM methods: A comparative
analysis of VIKOR and TOPSIS. European Journal of Operational Research 156, 445–455.
69. Zeleny, M. 1998. Multiple criteria decision making: eight concepts of optimality. Human Systems
Management 17, 97–107.
70. Duckstein, L., Opricovic, S. 1980. Multiobjective Optimization in River Basin Development.
Water Resources Research 16 (1), 14-20.
71. Chatterjee, P. and Chakraborty, Sh. 2016. A comparative analysis of VIKOR method and its
variants. Decision Science Letters 5, 469–486.
72. Opricovic, S., Tzeng, G.H. 2007. Extended VIKOR method in comparison with outranking
methods. European Journal of Operational Research 178, 514–529.
73. Wang, H., Pan, X., He, S.. 2019. A new interval type-2 fuzzy VIKOR method for multi-attribute
decision making. International Journal of Fuzzy Systems, 21(1), 145-156.
74. Yazdi, M., Khan, F., Abbassi, R., Rusli, R. 2020, Improved DEMATEL methodology for effective
safety management decision making. Safety Science 127, 104705.
75. Pearl, J. 2009. Causality: Models, Reasoning and Inference. 2nd Ed. Cambridge University Press.
ISBN 0-521-77362-8.
76. Darwiche, A. 2009. Modeling and Reasoning with Bayesian networks. Cambridge University
Press. ISBN 978-0-521-88438-9.
77. Si, Sh.-L., You, X.-Y., Liu, H.-Ch. and Zhang, P. 2018. DEMATEL Technique: A Systematic
Review of the State-of-the-Art Literature on Methodologies and Applications. Hindawi
Mathematical Problems in Engineering 3696457, doi.org/10.1155/2018/3696457.
78. Jiao, J., Wei, M., Yuan, Y. and Zhao, T. 2020. Risk Quantification and Analysis of Coupled
Factors Based on the DEMATEL Model and a Bayesian Network. Applied Sciences 10, 317.
79. Pasman, H., Rogers, W. 2013. Bayesian networks make LOPA more effective, QRA more
transparent and flexible, and thus safety more definable! Journal of Loss Prevention in the Process
Industries 26, 434-442.
80. Scutari, M., Graafland, C.E., Gutiérrez, J.M. Who learns better Bayesian network structures:
Accuracy and speed of structure learning algorithms. International Journal of Approximate
Reasoning 115, 235–253.
81. Fenton, N., & Neil, M. 2019. Risk assessment and decision analysis with Bayesian networks (2nd
Ed.). Boca Raton, FL: CRC Press, ISBN 13: 978-1-138-3511-9; Section 9.5.
82. BayesFusion, LLC, https://www.bayesfusion.com.
83. Kaya, R, Yet, B. 2019. Building Bayesian networks based on DEMATEL for multiple criteria
decision problems: A supplier selection case study. Expert Systems with Applications 134, 234–
248.
84. Lee, W.Sh., Huang, A.YH., Chang, Y.Y., Cheng, Ch.M. 2011. Analysis of decision-making factors
for equity investment by DEMATEL and Analytic Network Process. Expert Systems with
Applications 38, 8375–8383.
85. Abdullah, L., Zulkifli, N. 2019. A new DEMATEL method based on interval type-2 fuzzy sets for
developing causal relationship of knowledge management criteria. Neural Computing and
Applications 31, 4095–4111.
86. Yazdi, M., Nedjati, A., Zarei, E., Abbassi, R. 2020. A novel extension of DEMATEL approach for
probabilistic safety analysis in process systems. Safety Science 121, 119–136.
87. Yager, R.R., 2014. Pythagorean Membership Grades in Multicriteria Decision Making. IEEE
Transaction on Fuzzy Systems 22 (4), 958-965.
88. Khakzad, N., Reniers, G., Van Gelder, P. 2017. A multi-criteria decision-making approach to
security assessment of hazardous facilities. Journal of Loss Prevention in the Process Industries 48,
234-243.
89. Von Neumann, J. and Morgenstern, O. 1953. Theory of Games and Economic Behavior, 3rd Ed.
(1st Ed. 1944). Princeton, NJ. Princeton University Press; ISBN-13: 978-0691041834.
90. Savage, L.J., 1954. Foundations of Statistics, first published by John Wiley & Sons, N.Y.; enlarged
Dover edition, 1972, ISBN-13: 978-0-486-62349-8.
91. Pratt, J.W., Raiffa, H., Schlaifer, R. 1964. The foundations of decision under uncertainty: an
elementary exposition, Journal of the American Statistical Association 59, 353–375.
92. Shafer, Gl. 2016. Constructive decision theory. Int’l Journal of Approximate Reasoning 79, 45–62.
93. Fishburn, P.C. 1970. Utility Theory for Decision Making. John Wiley & Sons, NY; SBN 471-
26060-6.
94. Keeney, R.L., Wood, E.F. 1977. An Illustrative Example of the Use of Multiattribute Utility
Theory for Water Resource Planning. Water Resources Research 13 (4), 705-712.
95. Keeney, R.L. 1974. Multiplicative Utility Functions. Operations Research 22, 22-34.
96. Bukshs, Z.A., Stipanovic, I., Klanker, G., O’Connor, A., Doree, A.G. 2019. Network level bridges
maintenance planning using Multi-Attribute Utility Theory. Structure and Infrastructure
Engineering, 15:7, 872-885.
97. Bukshs, Z.A., Stipanovic, I., Doree, A.G. 2020. Multi-year maintenance planning framework using
multi-attribute utility theory and genetic algorithms. European Transport Research Review 12 (3),
1-13.
98. Ogle, R.A., Dee, S.J., Cox, B.L. 2015. Resolving inherently safer design conflicts with decision
analysis and multi-attribute utility theory. Process Safety and Environmental Protection 97, 61-69.
99. Bolinger Jr., J.J., Ghose, P., Sosinski, J.H., Esser, W.F. 1978. Decision Analysis Utilizing Multi-
Attribute Utility Theory in Engineering Evaluations. IEEE Transactions on Power Apparatus and
Systems Vol. PAS-97 (4) 1245-1253.
100. Tversky, A., Kahneman, D. 1986. Rational Choice and the Framing of Decisions. The Journal of Business 59 (4), S251-S278.
101. Wang, W., Liu, X., Qin, Y., Fu, Y., 2018. A risk evaluation and prioritization method for FMEA with prospect theory and Choquet integral. Safety Science 110, 152-163.
102. Grabisch, M. 1996. The application of fuzzy integrals in multicriteria decision making. European
Journal of Operational Research. 89 (3), 445–456.
103. Huang, Y.-Sh., Chang, W.-Ch., Li, W.-H, Lin, Z.-L. 2013. Aggregation of utility-based individual
preferences for group decision-making. European Journal of Operational Research 229, 462–469.
104. Chavas, J.-P. 2004. Risk Analysis in Theory and Practice. Elsevier Ac. Press; ISBN 0-12-170621-
4.
105. Wei, G. 2018. TODIM Method for Picture Fuzzy Multiple Attribute Decision Making. Informatica,
29 (3), 555–566.
106. Wang, J., Wang, J-q., Zhang, H-y. 2016. A likelihood-based TODIM approach based on multi-hesitant fuzzy linguistic information for evaluation in logistics outsourcing. Computers & Industrial Engineering 99, 287–299.
107. Ščap, D., Hoić, M, Jokić, A. 2013. Transactions of Famena XXXVII-2, 15-23; ISSN 1333-1124.
108. Giagkiozis, I., Fleming, P.J. 2014. Pareto Front Estimation for Decision Making. Evolutionary
Computation 22(4): 651–678
109. Pouraminian, M., Pourbakhshian, S. 2019. Multi-criteria shape optimization of open-spandrel
concrete arch bridges: Pareto front development and decision-making. World Journal of
Engineering 16 (5), 670–680; ISSN 1708-5284.
110. Velasquez, M. and Hester, P.T. 2013. An Analysis of Multi-Criteria Decision Making Methods.
International Journal of Operations Research 10 (2), 56-66.
111. Saaty, Th.L., Ergu, D. 2015. When is a Decision-Making Method Trustworthy? Criteria for
Evaluating Multi-Criteria Decision-Making Methods, International Journal of Information
Technology & Decision Making 14, 1-17.
112. Belton, V. 1986. A comparison of the analytic hierarchy process and a simple multi-attribute value
function. European Journal of Operational Research 26, 7-21.
113. De Leeneer, I., Pastijn, H. 2002. Selecting land mine detection strategies by means of outranking
MCDM techniques. European Journal of Operational Research 139, 327–338.
114. Dubois, D. 2011. The role of fuzzy sets in decision sciences: Old techniques and new directions.
Fuzzy Sets and Systems 184, 3–28.
115. https://decision-radar.com/
116. https://www.xlstat.com/en/solutions/features/multicriteria-decision-aid-electre-methods
117. https://japarthur.typepad.com/electre_toolkit/
118. https://en.freedownloadmanager.org/Windows-PC/Visual-PROMETHEE-FREE.html
119. http://www.onlineoutput.com/
120. http://www.mcdmsociety.org/content/software-related-mcdm, select Entscheidungsnavi (“decision
navigation”) or go directly to https://entscheidungsnavi.de/en/#/landingpage
121. https://www.superdecisions.com/
122. Cheraghi, M., Baladeh, A.E., Khakzad, N. 2019. A fuzzy multi-attribute HAZOP technique (FMA-
HAZOP): Application to gas wellhead facilities. Safety Science 114, 12–22.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Brazilian regulatory agencies have risk criteria that have been defined based on a quantitative risk approach. The best-known regulations are those for the cities of São Paulo and Rio de Janeiro, which have a strong relationship with IBAMA (the Brazilian Environmental Agency). Their standards establish the activities to be developed, defining the use of human reliability techniques to calculate the human error probability. Human errors are among the main factors in industrial accidents, and their effects are not being assessed systematically with the level of detail that is used in a typical human reliability assessment.
The objective of this paper is to analyse some human reliability methodologies considering the external (observable) and internal (cognitive) human factors. The calculation of the human failure frequency in QRA studies is currently performed by consulting firms in a subjective and conservative manner when compared to the analysis of equipment failures.
In this paper, the human error probability was quantified using standardised methods. The study was based on the evaluation of several human reliability and decision-making methodologies. The method was assessed through a case study of an accident that occurred in 2004 at Formosa Plastics Corp., Illiopolis. Initially, an analytical method was developed using Hierarchical Task Analysis (HTA), followed by Predictive Human Error Analysis (PHEA) and a qualitative analysis using the System for Predicting Human Error and Recovery (SPEAR). To complete the study, a quantitative assessment using Fault Tree Analysis (FTA) and the Human Error Assessment and Reduction Technique (HEART) was developed. The recommendations were assessed in two different categories: first, using the Weighted Score Method based on the management point of view and, second, through the HEART and FTA methods representing the point of view of the operators.
The paper concludes that the results based on an operational focus were more objective and transparent when compared to the results from management techniques. This is because the operational indicators were easier to interpret and less subjective, with fewer financial concerns involved. Also, the methodologies used provided a thorough understanding of the events in each phase of the accident.
Keywords: decision making, human reliability, accident at Formosa-IL, human error probability, performance influencing factors.
1 INTRODUCTION
There are numerous studies related to human behavior and each one possesses specific
characteristics. Basically, they are differentiated in external (observable) and internal (cognitive)
factors and the selection of analytical method depends on the availability of information and the
viability of cognitive analysis. Nowadays, in Brazil, the frequency of human error is evaluated in
a subjective and conservative way when compared to equipment failure and its quantification,
which can be developed using methods that represents the risk closer to reality. This study was
based on the evaluation of human reliability and decision making methodologies, followed by a
practical application of human reliability assessment of an accident which occurred in 2004 at
Formosa Plastics Corp. Illiopolis.
2 METHODOLOGY
The description of the accident which occurred at the Formosa-IL plant was extracted and
summarized from the investigation report (Chemical Safety and Hazard Investigation Board,
2007). The plant layout of Formosa-IL is presented in Figure 2.1.
Figure 2.1: Layout of the plant of Formosa-IL (Chemical Safety and Hazard
Investigation Board, 2007)
The method for human reliability assessment used in this study is presented in Figure 2.2.
Figure 2.2: Method of human reliability assessment (AICHE/CCPS, n.d.)
The first step of human reliability evaluation consists of general analysis and identification of
human interactions. The cleaning of reactors was identified as the critical activity by the Chemical
Safety and Hazard Investigation Board (CSB) at the Formosa-IL plant. Normally, before an
installation, it is necessary to get information about the most critical operational and maintenance
activities directly from the operational team through meetings to stimulate transparent
communication about work activities.
The second step consists of the application of the SPEAR methodology. Initially, the action-oriented technique HTA was used in chart and tabular format to represent the activity, i.e. reactor cleaning. Following the completion of the task analysis, the Performance Influencing Factors (PIF) analysis was developed in accordance with the AIChE/CCPS classification. The last three steps of SPEAR were completed using the Predictive Human Error Analysis (PHEA), in which consequence and error reduction analyses were developed. The results were obtained in tabular form, preserving the logic between the types of human error, their consequences and the measures for risk reduction.
The third step of the human reliability assessment, defined as Representation, was developed using
Fault Tree Analysis (FTA) and Influence Diagram Analysis (IDA) to represent the accident at
Formosa-IL.
In the last step, in which the human error was quantified, the Human Error Assessment and Reduction Technique (HEART) was used to estimate the probability of human error and thereby to quantify the FTA and the IDA developed in the previous step.
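For orientation only, the basic HEART arithmetic multiplies a generic task unreliability by a factor for each relevant error-producing condition (EPC), weighted by the analyst's assessed proportion of affect. The sketch below uses purely illustrative numbers and does not reproduce the EPC values assessed for the Formosa-IL case.

    def heart_hep(generic_task_unreliability, epcs):
        """Human error probability following the generic HEART calculation.
        epcs is a list of (max_effect, assessed_proportion_of_affect) pairs."""
        hep = generic_task_unreliability
        for max_effect, proportion in epcs:
            hep *= (max_effect - 1.0) * proportion + 1.0
        return min(hep, 1.0)   # a probability cannot exceed 1

    # Illustrative values only: one generic task and two EPCs.
    print(heart_hep(0.09, [(5.0, 0.4), (3.0, 0.2)]))   # about 0.33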
The purpose of this exercise was to present some tools to identify where faults were occurring. Ideally, these analyses should be developed for all critical activities existing in an industrial plant.
It is observed in Table 3.1 that improper opening of a reactor during cleaning, whilst it is in operation, is not considered as part of the task. The HTA developed follows the concept of the method: it addresses with precision and detail the activities to be performed, and not the possible deviations, which should be analysed using alternative tools. The development of the HTA allows procedures to be more appropriately defined and training to be more efficient. However, it does not show possible errors that may occur.
3.1.2 Analysis of PIFs
After the task analysis, it is important to evaluate the PIFs of the operators during reactor cleaning.
The scale used for assessment of PIFs is shown in Table 3.2.
Table 3.2: Scales used for assessment of PIFs of reactor cleaning
Worse – 1
  Procedure: There are no written procedures or standards for implementation of activities; not integrated with training.
  Physical work environment: High level of sound; poor lighting; high or low temperatures, high humidity or high winds.
Average – 5
  Procedure: Written procedures available, but not always used; standardized methods to perform the task.
  Physical work environment: Moderate levels of noise; variable temperature and humidity.
Better – 9
  Procedure: Detailed procedures and checklists available; procedures developed using task analysis; integrated with training.
  Physical work environment: Noise levels at optimal levels; lighting based on analysis of the task requirements; temperature and humidity at optimum levels.
The list of standard PIFs was used to identify factors that could influence the reactor cleaning activity. The list is not a formal definition of PIFs and, depending on the activity, it should be developed and reviewed by the plant analysts. This evaluation was based on the descriptions of the accident presented by the CSB. Numerous deficiencies were noted and considered in the evaluation. Factors with a value of 5 were considered relevant to the study although no information on them was found in the accident report (Chemical Safety and Hazard Investigation Board, 2007).
Operating environment:
o Weather: 5
o Illumination: 5
o Working hours and breaks: 5
o Conflicts between safety and production requirements: 2
o Training for emergencies: 1
Work details:
o Place/access: 3
o Identification: 2
o Displays and controls identification: 3
o View of critical information and alarms: 3
o Clarity of the instructions: 1
o Quality of controls and warnings: 1
o Degree of support for diagnosing faults: 1
Characteristics of the operator:
o Skills: 5
o Risk assumption: 5
Social and organization factors:
o Clarity of responsibilities: 3
o Communications: 1
o Authority and leadership: 2
o Commitment of management: 2
o Overconfidence in technical safety methods: 2
o Organizational learning: 1
The assessments were conducted after the investigation of the accident, once the faults had already been analysed. If this assessment had been performed before the accident, the judgement would most likely have been different and higher ratings would have been obtained.
The results demonstrate that there were deficiencies mainly in the task characteristics group and in the organizational and social factors group. Within the task characteristics, specific categories such as clarity of instructions, quality of checks and warnings, and degree of support for fault diagnosis received the worst ratings. These deficiencies could have occurred due to the absence of a supervisor, allowing a hierarchical distance and a lack of communication between operations and management. Training for emergencies was considered the most critical factor, as the effects of the accident would have been very different if the operators had been adequately trained in evacuation procedures. In the organizational and social factors group, the specific categories communications and organizational learning presented the worst results, although authority and leadership, commitment of management, and overconfidence in technical safety methods were also considered critical.
These ratings can be justified mainly because there was already evidence of the criticality of the procedure for bypassing the safety interlock, yet no effective modification was performed. Also, there was no routine means of communication such as radios and intercoms, nor adequate availability of a supervisor, and such evidence was not considered by management.
3.1.3 Predictive Human Error Analysis (PHEA), Consequences and Error Reduction
Table 3.3 presents the PHEA, which analyses human error from a cognitive perspective, developed to assess task step 3.2. The information was extracted from the CSB accident report, but the logic was developed using the methodology. It is observed that the consequence analysis of SPEAR is associated with each type of human error defined by PHEA. In this way, each consequence is associated with a cause that was identified during the study. A strategy to reduce the error should be developed based on the consequence and, depending on the criticality of that consequence, its implementation should be mandatory or not.
Table 3.3: Human Error Analysis (PHEA) of the reactor cleaning activity (step 3.2)
Task step 3.2 – Go to the reactor that is in cleaning progress. For each entry: type of task / type of error, description, consequences, recovery, and strategy to reduce the error¹.
o Action / Action in the wrong direction. Description: the operator moves in the wrong direction relative to the right reactors. Consequences: the operator will be in the wrong group of reactors. Recovery: reactor identification at the bottom of the reactor and on the control panel. Strategy: optimize the layout of the reactors in order to facilitate identification.
o Action / Right action on the wrong object. Description: the operator performs the bypass of the interlock system and drains the reactor in operation. Consequences: large release of vinyl chloride monomer (VCM) followed by explosion and fire. Recovery: none. Strategies: evacuation system; study of protection layers; historical analysis; improve procedures and training.
o Checking / Omission of checks. Description: the operator does not check the identification of the reactor that should be drained. Consequences: impossibility of draining the reactor due to interlock activation. Recovery: indication of interlock activity in the control panel. Strategy: include in the checklist the verification of the reactor to be drained.
o Checking / Right check on the incorrect object. Description: the blaster operator confirms that the reactor is in the cleaning process, but is at the wrong reactor. Consequences: impossibility of draining the reactor due to interlock activation. Recovery: indication of interlock activity in the control panel. Strategy: include in the checklist the verification of the reactor to be drained.
o Checking / Wrong check on the correct object. Description: the blaster operator is at the correct reactor but confirms that another reactor is in the cleaning process. Consequences: the operator goes to another reactor and will not drain it due to interlock activation. Recovery: the operator on the upper level will fix the blaster reactor. Strategy: improve procedures and training.
o Checking / Wrong check on the wrong object. Description: the blaster operator is at the wrong reactor and confirms that another reactor is in the cleaning process. Consequences: the operator goes to another reactor and will not drain it due to interlock activation. Recovery: the operator on the upper level will fix the blaster reactor. Strategy: improve procedures and training.
o Recovery / No information. Description: the blaster operator has no confirmation about which reactor is in the cleaning process. Consequences: the operator will be in the wrong group of reactors. Recovery: the operator will go to the upper level and verify which reactor is in the cleaning process.
¹ Strategies to reduce the error should be related mainly to changes in procedures, training, equipment and design.
The results of PHEA allow the main PIFs contributing to the risk to be analysed. Table 3.4 shows
the PIFs related to types of errors evaluated in PHEA.
Table 3.4: Identification of the most critical PIFs during cleaning reactor activity
Type of error: Performance Influencing Factors (PIFs)
o Action in the wrong direction: distraction, practices with unfamiliar situations, or poor identification
o Right action on the wrong object: distraction, poor identification, poor lighting, identification of displays and controls, or poor communication
o No action: practices with unfamiliar situations, or working hours and breaks
o Omitted action: practices with unfamiliar situations, working hours and breaks, or distraction
o Omission of checks: distraction or poor communication
o Right check on the wrong object: distraction, poor identification, poor lighting, identification of displays and controls, or poor communication
o Wrong check on the right object: distraction, poor identification, poor lighting, identification of displays and controls, or poor communication
o Wrong check on the wrong object: distraction, poor identification, poor lighting, identification of displays and controls, or poor communication
o No information: poor communication, or poor authority and leadership
The pressure interlock system theoretically prevents undue drainage despite the various operator errors, except when the by-pass is activated. It is the last protective barrier of the preventive system. However, its actual efficiency should be further evaluated through LOPA studies.
The list of possible error types with PIFs demonstrates that factors such as distraction and working
hours and breaks contribute directly to errors related to the operator’s physical state. Identification
of displays and controls, poor identification and poor lighting are related to visual factors that
influence the decisions of the operator. Poor authority, poor leadership and poor communication
refer to organizational policy and practices whilst unfamiliar situations refer to operator
experience.
3.2 REPRESENTATION
3.2.1 Fault Tree Analysis
The fault tree analysis representing the development of the accident was developed and is shown
in Figure 3.2.
[Fault tree structure: the top event G0, Fatalities and injuries, combines gate G2, Presence of operators in the reactor building, and gate G1, Explosion. G2 combines basic events E4, Operators fail to evacuate, and E5, Operators executing the reactor cleaning process. G1 combines E1, Ignition source, and gate G3, Large release of VCM. G3 combines E2, Operator goes to the wrong reactor and believes it is the reactor in the cleaning process, and E3, Operator incorrectly uses the by-pass to drain the reactor.]
Figure 3.2: Fault tree representation of a large release of VCM scenario followed by
explosion and fire causing fatalities
The basic events are directly influenced by the root causes that contribute to the occurrence of the
top event (accident). Below is the list of root causes related to each basic event (Chemical Safety
and Hazard Investigation Board, 2007).
Basic Event E2 - Operator believes he went to the reactor which required cleaning, when in fact
he went to the reactor in operation
There is no status indicator in the reactor
Symmetrical layout of reactors
Similarity of reactors
Overload of blaster operator
Basic Event E3 - Operator uses the bypass valve to open the bottom valve of reactor in operation
Bottom valve of the reactor does not open (interlock system - pressure above 10 psi)
Existing system bypass
No physical control of air injection hoses of emergency
No bypass procedure during normal operation
Supervisor unavailable
Basic Event E4 - Employees fail to evacuate the area
Ambiguous procedures about how to control large releases of VCM
Insufficient evacuation training
No routine drills
3.2.2 IDA (Influence Diagram Analysis)
IDA allows a simplified and detailed view of the factors that influence the event (see Figure 3.3).
The main elements that affect the scenario are represented by ellipses, while the white square represents the uncertainty that led to the accident. The hexagons correspond to the possible investments that could be made; these investments are shown in blue. The IDA provides a
quick and practical decision model and the great value of the diagram is its power of
communication since it is easily understood and allows a large amount of information to be
considered.
Basic Event E1 (source of ignition) has an ignition probability of 30% (Uijt de Haag, 1999).
Basic Event E5 (operators present for the reactor cleaning process) has a probability of 4/24, representing 4 hours of presence on the lower level out of the 24-hour day (day and night).
Critical analysis of the probabilities of the basic events
Table 3.6 summarizes the probabilities of occurrence of the basic events.
Table 3.6: Probability of occurrence of the basic events
ID 3 – Operator uses the by-pass to open the bottom valve of a reactor in operation. Probability: 47%. Details: the probability of use of the by-pass valve to open the bottom of the reactor corresponds to 47%, which is a high value for the bypassing of safety systems. Normal safety standards do not allow safety systems to be shut down even during maintenance. Since this procedure of bypassing the safety valve was common at Formosa-IL, the value is quite representative.
ID 4 – Employees fail to evacuate the area. Probability: 27%. Details: normally the failure of operators to evacuate in major accidents should correspond to very low values; the calculated value of 27%, which corresponds to almost one failure every three times, is very representative.
ID 5 – Operators present for the reactor cleaning process. Probability: 16.7%. Details: it is considered that there are operators in the surrounding areas of the reactor during the cleaning process for approximately 4 hours of the day.
[Quantified fault tree: top event G0, P = 4.82E-4; gate G2, Presence of operators in the reactor building, P = 4.50E-2, from E4, Operators fail to evacuate, P = 2.70E-1, and E5, Operators executing the reactor cleaning process, P = 1.67E-1; gate G1, Explosion, P = 1.07E-2, from E1, Ignition source, P = 3.00E-1, and gate G3, Large release of VCM, P = 3.57E-2; G3 from E2, Operator goes to the wrong reactor, P = 7.60E-2, and E3, Operator incorrectly uses the by-pass to drain the reactor, P = 4.70E-1.]
Figure 3.4: Representation and quantification of fault tree of a large release of VCM
scenario followed by explosion and fire causing fatalities
3.3.2 Quantification of the IDA (MANAGEMENT FOCUS)
The management has no detailed information of operation; therefore, the decision making process
is based on general techniques that do not require specific information of the activity in question.
The general view allows an evaluation of the system as a whole, ensuring that the interactions of
various sectors occur in the best possible way.
Each recommendation was evaluated through a score. This technique can be performed by
different managers from different sectors through an individual assessment of the various
stakeholders, yielding a final average. Table 3.7 shows the weight of each recommendation
considered to quantify the IDA.
Table 3.7: Weight of evidence
Question (weight of evidence): Effective / Ineffective
o Procedures for the use of the by-pass in normal operation to ensure bypass of the interlock with safety: 0.3 / 0.7
o Implementation of the recommendations of the PHA 1992 to ensure bypass of the interlock with safety: 0.6 / 0.4
o Implementation of LOPA studies to ensure bypass of the interlock with safety: 0.8 / 0.2
o Increasing the availability of the supervisor to ensure bypass of the interlock with safety: 0.2 / 0.8
Table 3.8 shows the results of IDA quantification.
Table 3.8: Weight of evidence to conduct by-pass of the bottom valve of the reactor with
safety
Columns (in order): If D (the procedures for using the by-pass in normal operation); And C (implementation of the recommendations of PHA 1992); And B (implementing the LOPA studies); And A (increase the availability of the supervisor); Success; Fault; Total Weight; Weighted success; Weighted fault.
Effective Effective Effective Effective 0.95 0.05 0.0288 2.7% 0.1%
Effective Effective Effective Ineffective 0.90 0.10 0.1152 10.4% 1.2%
Ineffective Effective Effective Effective 0.90 0.10 0.0672 6.0% 0.7%
Ineffective Ineffective Effective Effective 0.90 0.10 0.0448 4.0% 0.4%
Ineffective Effective Effective Ineffective 0.85 0.15 0.269 22.8% 4.0%
Effective Ineffective Effective Effective 0.80 0.20 0.0192 1.5% 0.4%
Effective Ineffective Effective Ineffective 0.70 0.30 0.0768 5.4% 2.3%
Ineffective Effective Ineffective Effective 0.60 0.40 0.0168 1.0% 0.7%
Effective Effective Ineffective Effective 0.60 0.40 0.0072 0.4% 0.3%
Effective Effective Ineffective Ineffective 0.50 0.50 0.0288 1.4% 1.4%
Ineffective Ineffective Effective Ineffective 0.50 0.50 0.1792 9.0% 9.0%
Ineffective Effective Ineffective Ineffective 0.40 0.60 0.0672 2.7% 4.0%
Effective Ineffective Ineffective Effective 0.40 0.60 0.0048 0.2% 0.3%
Ineffective Ineffective Ineffective Effective 0.30 0.70 0.0112 0.3% 0.8%
Effective Ineffective Ineffective Ineffective 0.10 0.90 0.0192 0.2% 1.7%
Ineffective Ineffective Ineffective Ineffective 0.01 0.99 0.0448 0.0% 4.4%
Total 68.2% 31.8%
The Weighted Score Method determines the possible combinations of the recommendations and presents the weighted probability of success of each combination. Combinations that have the higher weighted success should have their cost of implementation verified. The implementation of recommendations B and C corresponds to the combination that attracts most managers and presents a weighted probability of success of 22.8%. The implementation of recommendation B only is very effective, but the weighted success of that option is only 9%, making it the third favourite. The second preferred combination corresponds to recommendations B, C and D, with a 10.4% probability of success. The implementation of all recommendations, which obtains the highest probability of success, is in the eighth position. Recommendation A on its own was considered of low efficiency (weighted success 0.3%) and consequently its implementation makes no significant contribution to the existing combinations. This analysis is based on the subjective judgment of the management group members, and the values used in this study were estimated. The weighted-score arithmetic is illustrated in the sketch below.
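A minimal sketch of this weighted-score arithmetic follows; it assumes the four recommendations can be treated as independent, so that the total weight of a combination is the product of the individual weights of evidence from Table 3.7, while the success scores remain judgment inputs.

    from itertools import product

    # Weight of evidence per recommendation (Table 3.7), in the order D, C, B, A:
    # (weight if judged effective, weight if judged ineffective)
    evidence = [("D", 0.3, 0.7), ("C", 0.6, 0.4), ("B", 0.8, 0.2), ("A", 0.2, 0.8)]

    def total_weight(states):
        """Joint weight of one effective/ineffective combination."""
        w = 1.0
        for (_, eff, ineff), effective in zip(evidence, states):
            w *= eff if effective else ineff
        return w

    # The 16 combinations sum to 1.0, a consistency check on Table 3.8.
    assert abs(sum(total_weight(s) for s in product([True, False], repeat=4)) - 1.0) < 1e-9

    # Two rows of Table 3.8; the success scores (0.85, 0.90) are the managers' judgments.
    for states, success in [((False, True, True, False), 0.85),
                            ((True, True, True, False), 0.90)]:
        tw = total_weight(states)
        print(states, f"total weight = {tw:.4f}", f"weighted success = {tw * success:.1%}")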
3.4 RECOMMENDATION IMPACT USING FTA (OPERATIONAL FOCUS)
For each proposed recommendation, the EPCs are re-assessed considering the reduction fraction in their values, and the fault tree of the top event is quantified once more. This way it is possible to observe how much each recommendation contributes to reducing the probability of occurrence of the top event.
Table 3.9 shows the probability of accident occurrence and the respective relative reductions considering the implementation of each recommendation. From the operational point of view, recommendation B gives the largest reduction in the probability, 92%, followed by a 50% reduction for recommendation A. The third largest reduction, 34%, is related to recommendation D.
Table 3.9: Impact of implementation of recommendations
ID | Recommendation | E2 | Reduction | E3 | Reduction | E4 | Reduction | FTA | Reduction
0 | Without recommendations | 0.076 | - | 0.47 | - | 0.27 | - | 4.82E-04 | -
B | Implement studies of LOPA | 0.076 | 0% | 0.34 | 29% | 0.03 | 89% | 3.88E-05 | 92%
A | Increase the availability of supervisor | 0.04 | 47% | 0.45 | 6% | 0.27 | 0% | 2.43E-04 | 50%
D | Procedures for use of by-pass in normal operation | 0.076 | 0% | 0.31 | 35% | 0.27 | 0% | 3.18E-04 | 34%
C | Implementation of Recommendations PHA 1992 | 0.076 | 0% | 0.35 | 26% | 0.27 | 0% | 3.59E-04 | 26%
- | A+B+C+D | 0.04 | 47% | 0.27 | 42% | 0.03 | 89% | 1.65E-05 | 97%
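The fault tree arithmetic behind Figure 3.4 and Table 3.9 can be reproduced with a few lines, assuming independent basic events and AND gates throughout, consistent with the multiplicative probabilities in Figure 3.4; the event names follow the tree and the recommendation-B values are taken from Table 3.9.

    def top_event_probability(e1, e2, e3, e4, e5):
        """Fatalities-and-injuries probability from the Figure 3.4 fault tree,
        treating all gates as AND gates with independent basic events."""
        g3 = e2 * e3            # large release of VCM
        g1 = e1 * g3            # explosion
        g2 = e4 * e5            # operators present in the reactor building
        return g1 * g2          # top event

    base = dict(e1=0.30, e2=0.076, e3=0.47, e4=0.27, e5=4 / 24)
    p0 = top_event_probability(**base)                 # about 4.8E-4, cf. Figure 3.4

    with_b = dict(base, e3=0.34, e4=0.03)              # recommendation B, cf. Table 3.9
    p_b = top_event_probability(**with_b)              # about 3.9E-5
    print(f"reduction with recommendation B: {1 - p_b / p0:.0%}")   # about 92%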
4 CONCLUSIONS
There are numerous studies related to human behavior and each one possesses specific characteristics. Basically, they are differentiated into an external (observable) focus and an internal (cognitive) one. The method to be selected for the analysis depends on the availability of information and the viability of a cognitive analysis.
The human error probability was calculated based on both the observable and the cognitive focus, following the structure of the SPEAR method. The observable factors were obtained from the HTA and the cognitive factors were analyzed with the application of PHEA. The most important step, which ensured that both kinds of factors were considered in the calculation of the probability of human error, was the development of the FTA based on the causes and consequences evidenced in PHEA. The development of the IDA is also based on the results of the task analysis and the analysis of human errors, which allows a visualization of the variables and uncertainties of the decision process that must be handled by managers. The results of the management focus can be less transparent than those of the operational focus, as they are more subjective and may be related to the interests of the decision makers.
The results of the operational focus take more objective factors into consideration, with more precise indicators, as the assessment is based on mental models of the plant process, which facilitates the evaluation. These different results demonstrate the need to consider the operating environment in decision making and show that operational factors are essential for the calculation of the probabilities of human errors. This study shows that cognitive studies are not simple and are not always feasible; the effort required to calculate the probability of human error should therefore be evaluated. Although the objective of this study was to assess the probability of human error, the results of this cognitive study provide information and possible recommendations that may contribute to reducing risks at the industrial plant.
5 REFERENCES
AICHE/CCPS. Tools for Making Acute Risk Decisions with Chemical Process Safety
Applications. New York, AIChE, 1994.
AICHE/CCPS. Guidelines for Preventing Human Error in Process Safety. New York, AIChE,
1994
CETESB. Manual de orientação para a elaboração de estudos de análise de riscos. São Paulo, 2003.
FEEMA: Instrução Técnica para elaboração de Estudo de Análise de Risco para
Instalações Convencionais.
IMA: Norma Técnica NT – 01/2009 – Gerenciamento de Risco no Estado da Bahia. 2009.
RASMUSSEN, J. Information Processing and Human-Machine Interaction. Amsterdam, North
Holland, 1986.
REASON, J. T. Human Error. Cambridge, Cambridge University Press, 1990.
Chemical Safety and Hazard Investigation Board, 2007. “Investigation Report – Vinyl Chloride
Monomer Explosion”, United States: s.n.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Since 2012, the American Fuel & Petrochemical Manufacturers (AFPM) and the American
Petroleum Institute (API) have been working together with industry members to improve the
industry’s process safety programs under the umbrella of Advancing Process Safety (APS)
programs. This paper presents “learning from experience” illustrating the ways that industry
participants are using the APS programs, what changes they have implemented, and what
improvements have been achieved by those changes. Examples include practice sharing
documents, training, industry safety bulletins, and Walk the Line. Most importantly, this paper
provides a variety of practical takeaways that highlight the sharing and application of industry
lessons learned.
Prior to 2010, there were several high-profile events involving the petroleum refining and petrochemical industries. Safety has always been a top priority, but it was obvious that industry needed to evolve if it was going to maintain its license to operate. In 2010, several Senior Executives from AFPM and API member companies saw an opportunity to share safety lessons learned and help all of industry. That was the first meeting of what is now called the Process Safety Advisory Group (PSAG). The PSAG members empowered their Process Safety Directors to develop tools to help industry improve. The Process Safety Directors created the Process Safety Workgroup (PSW), which oversees the APS resource development program.
Advancing Process Safety Guiding Principles:
o Focus on improving industry's process safety performance.
o Prioritize mitigating higher-consequence risks, supported by data and industry experience.
o Avoid being prescriptive and provide a range of tools to achieve desired outcomes.
o Protect companies' liability, intellectual property, and antitrust exposure.
The philosophy of the APS program is that it is in every company's best interest to improve
process safety across the industry. The PSAG challenged the legal subgroup to develop a way for
the programs to exist and thrive, while navigating the obstacles and risk landscape. After 18
months of development and the commitment from industry and senior leaders, AFPM and API launched
the APS program in the Spring of 2012.
Obstacles to Success:
• History of competitive silos
• Trade Group Logistics
• Legal concerns – Antitrust; Documenting areas for opportunity; Inadvertently setting industry
  standards and expanding RAGAGEP
• Sharing Safety Practices
These voluntary industry programs attract participants by providing high-value resources that are
relevant to petroleum refining and petrochemical facilities of all sizes. The APS programs
leverage the unique strengths of both AFPM and API member companies to ensure program growth and
success.
The APS programs are industry-led initiatives to continuously improve process safety
performance through enhanced information sharing, communication, and responsible
collaboration. Under the oversight of the PSAG and PSW there are seven formal subgroups that
develop and provide industry with tools to help improve process safety. The subgroups are
described below:
Each subgroup develops valuable resources based on its area of focus. By working to reduce
process safety events from a variety of perspectives, a wide audience of industry practitioners is
reached. These resources are disseminated through the AFPM Safety Portal, monthly webinars,
newsletters, and the process safety regional networks.
Figure 1 illustrates the continuous improvement of the APS program over the years.
[Figure 1 timeline milestones: the Practice Sharing process is developed and approved; Walk the Line becomes a subgroup and launches the first annual Workshop; Focused Improvement replaces the Training and Certification program; the Hazard Identification review program and Practice Sharing are incorporated into one subgroup (HIPS); Human Reliability and Mechanical Integrity become subgroups; quarterly API RP 754 webinars are expanded to a monthly webinar series with an additional focus on occupational safety and process safety topics; API develops Focused Process Safety Site Assessments.]
With a foundation of data-driven programs, APS has evolved since 2010 and will continually
improve.
2 Industry Performance
2.1 API RP 754 Process Safety Indicators
In 2010, the first edition of the ANSI/API RP 754 Process Safety Indicators for the Refining and
Petrochemical Industries was published; the second edition was published in 2016 [2]. This
recommended practice (RP) created a single common definition for a process safety event,
thereby allowing industry to collect, analyze, and benchmark process safety performance. The
ILO subgroup leverages this information to analyze and share learnings to aid in process safety
improvements.
Figure 2 shows the quantitative improvements in PSE rates between 2011 - 2018 (2018 is the
most recent data available). The combined Tier 1 and Tier 2 PSE rate per 200,000 employee
hours for both petroleum refineries and petrochemical facilities has decreased since 2011. Based
on 2018 data, the 3-year rolling average PSE rate per 200,000 workforce hours for each facility
type is noted below:
Figure 2: Process Safety Event Performance for Petroleum Refining and Petrochemical
Facilities.
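To make the rate convention concrete, the combined Tier 1 and Tier 2 PSE rate is the event count normalized to 200,000 workforce hours (roughly 100 full-time workers for one year). A minimal Python sketch of that arithmetic and of a 3-year rolling average is shown below; the counts and hours are invented placeholders, not the industry data behind Figure 2.

    # Hypothetical illustration of the API RP 754 rate convention:
    # rate = (Tier 1 + Tier 2 events) per 200,000 workforce hours.
    def pse_rate(tier1_events, tier2_events, workforce_hours):
        return (tier1_events + tier2_events) * 200_000 / workforce_hours

    def three_year_rolling_average(rates):
        # Average each year with the two preceding years, once three years are available.
        return [sum(rates[i - 2:i + 1]) / 3 for i in range(2, len(rates))]

    # Placeholder data for one hypothetical facility (not industry figures).
    annual = [(4, 9, 1_800_000), (3, 8, 1_750_000), (2, 6, 1_900_000)]
    rates = [pse_rate(t1, t2, hours) for t1, t2, hours in annual]
    print(rates, three_year_rolling_average(rates))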
2.2 Process Safety Site Assessments
Industry improvements are also reflected in the protocol performance from the API Process
Safety Site Assessments. Figure 3 illustrates how assessments performed in the 2016 to 2019
timeframe (green squares) are scoring better than those done earlier in the program (2012 to
2015 – blue dots), with the largest improvement in the Mechanical Integrity-Fixed Equipment
scores. Facility Siting is the exception as additional questions were added after RP 756 was
published. The belief is that sites that had an assessment performed in the later years have
benefitted from the programs and learnings of the APS effort. For example, Hazard IDs,
learnings from site assessments, API RP 754 metrics, discussions at Regional Networks, and the
efforts of the MI Subgroup, among others have assisted these later sites to strengthen their
process safety programs. Some sites are beginning to be assessed for the second time. This will
provide specific data points on site improvements.
Figure 3: Protocol Performance.
Beginning in 2020, API will offer PSSAP Focused, which aims to conduct more assessments at
small refining, petrochemical, and chemical facilities in order to improve process safety
performance across the industry. This new addition to the Process Safety Site Assessment
Program uses a tailored design to cover the seven original protocols and address key process safety
activities in a shorter time frame. Specifically, PSSAP Focused utilizes smaller assessment teams
and fewer protocol questions.
2.3 Walk the Line
The Walk the Line (WTL) program, developed by AFPM in coordination with Jerry Forest of
Celanese to share with industry, seeks to address Loss of Primary Containment (LOPC) causes
related to: open ended lines/ valve left open, operational readiness, and line-up error [3-7].
Annual analysis on the API RP 754 Tier 1 and Tier 2 PSE data indicates a downward trend in
WTL-related events, as shown in Figure 4. LOPC causes related to line-up errors have seen a
~90% reduction, whereas causes related to operational readiness and open-ended lines/valve left
open have seen a ~9% and ~20% reduction, respectively. Based on the number of events, the
greatest opportunity for improvement is with open ended lines/valve left open.
The remainder of the paper discusses examples and use cases from operating companies to
illustrate the use of APS to drive safety improvement.
The process safety event sharing database contains a collection of high-level, blinded, industry
events, submitted voluntarily by operating companies. Users can query the database, searching
on criteria that includes process type, API 754 consequences, mode of operation, equipment
type, cause, and keywords. Examples of how event sharing has been used include:
• Gathering related events to discuss in a Process Hazard Analysis (PHA)
• Searching for topics for safety meetings and toolbox talks
• Reviewing Crude Unit events to support the preliminary hazard review for a new Crude Unit design
• Discussing related events during design reviews for new or replacement equipment
• Searching related events and the associated corrective actions following PSE investigations,
  which may be included in the reports to potentially prevent a future event that has occurred in
  industry
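The database itself is proprietary, but the attribute-and-keyword queries described above can be pictured with a simple filter. The Python sketch below is only illustrative: the field names and the sample record are hypothetical stand-ins, not the actual AFPM schema or data.

    # Hypothetical sketch of querying blinded event records by attribute and keyword.
    events = [
        {"process": "Crude Unit", "tier": 2, "mode": "Startup",
         "equipment": "Pump", "cause": "Line-up error",
         "summary": "LOPC during pump swap; valve left open"},
        # ... additional blinded records ...
    ]

    def query(records, keyword=None, **attributes):
        """Return records matching all given attributes and an optional keyword."""
        hits = []
        for rec in records:
            if all(rec.get(field) == value for field, value in attributes.items()):
                if keyword is None or keyword.lower() in rec["summary"].lower():
                    hits.append(rec)
        return hits

    # Example: pull Crude Unit startup events mentioning "valve" for a PHA discussion.
    for rec in query(events, keyword="valve", process="Crude Unit", mode="Startup"):
        print(rec["summary"])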
3 Operating Company Testimonials on How the APS Products Have Helped
Improve Safety
The following testimonials are from operating companies on how the APS tools led to tangible
improvements.
This company participates in every aspect of the APS suite of programs and due to their
involvement has made enhancements to corporate and site procedures.
This company has incorporated the WTL philosophy and concepts corporate wide as a core pillar
of process safety. By leveraging WTL practices and Human Reliability principles, they have
been able to maximize their program effects on the front line. Examples include:
• Through translating the WTL terms into their company "language," they have found a greater rate
  of acceptance and adoption.
• A presentation at the 2019 WTL Workshop discussed the concept and implementation of a valve
  line-up board. The program focuses on operator ownership, having a clear visual to tag open
  bleeds, and having a simple system to audit. This board has been incorporated into one site's
  daily operating meetings. The presentation is available on the AFPM Safety Portal and a related
  practice share is currently in development. This is a great example of Operations knowledge
  sharing and improvement facilitated at the WTL Workshop.
3.3 Medium Company Testimonial: US Petroleum Refining Company
Through involvement with the WTL program, this company is better able to speak a common process
safety language among their petroleum refining, pipeline, and terminal business units. The
practical approach to addressing specific areas for improvement is applicable to all facility
types. Since implementing WTL, this company has seen noticeable improvements around culture and
operator ownership.
Process Safety Site Assessment Facility Testimonial: "… One of the most important takeaways of the
process was the opportunity for our staff to learn through engagement with the assessors … The
assessors had a wealth of plant experience that allowed them to dive into the key aspects related
to the health of our overall process safety management …"
As a mandatory step in their workflow, teams
updating safe work practices review the APS resources. By reviewing these materials, including
event lessons learned, Hazard Identification and Practice Sharing Documents, and Safety
Bulletins, they benefit from industry knowledge to help make their company safer. The useful
and relevant topics are then incorporated into their company practices. This has been an effective
way to help make their company safer and familiarize a wider audience with the APS resources.
Many petroleum refineries are in geographically remote areas. This can present a challenge for
companies to benchmark, learn, and improve. This small company utilizes the Process Safety
Regional Network meetings to develop a larger network of process safety peers and help close
any gaps, by leveraging lessons learned from network companies.
The Regional Networks provide an opportunity for talent development and training, as many
process safety professionals have an Engineering or Operations background. The meetings
provide practical examples from industry peers that help process safety professionals supplement
their technical training and help to further understand how to set up and roll out effective
programs. The peer-to-peer discussions also help with the cultural adoption of programs by
leveraging peer experiences.
Small companies may not have internal Subject Matter Experts (SMEs). The process safety
regional networks provide an opportunity for invited topical experts (e.g. Fired Heaters, Facility
Siting, Safety Instrumented Systems, Mechanical Integrity, Alarms, etc.) to interact with the
members. The subject matter covered at these meetings is both practical and challenging, giving
the participants a deeper understanding and better questions to ask their colleagues.
The annual API RP 754 PSE analysis and report allows companies and sites to benchmark their
process safety performance against industry.
This company also uses the industry data during internal investigation team training,
highlighting the importance of having good investigation recommendations and
implementing mitigations to minimize risks.
They also use the PSE metrics as a company and site key performance indicator (KPI).
This helps set goals and keeps the importance of minimizing PSEs at the forefront for all
employees, from the front line to senior leadership.
A focused learnings report was developed, based on 2018 PSE Tier 1 and Tier 2 Petroleum
Refining and Petrochemical data analysis (Figure 7). This two-page document provides a clear
and concise communication, identifying data trends and available resources to help. This report
highlighted industry focus areas and helped this company support budget requests for related
projects, such as high-high alarms on tanks to potentially help prevent overfill events.
This company developed and conducted an event lessons-learned training exercise at a process
safety regional network meeting. Participants were given a picture of the area after the event,
high-level details (facility type, equipment, type of LOPC, and Tier classification), and the cause
category descriptions. See Figure 8 below.
This small company has leveraged the APS programs to maximize their learnings and growth
opportunities through interactions with larger companies. While improvements have been
numerous, they have seen the greatest step changes in the following three areas:
• A Process Safety Site Assessment at one facility revealed a deficiency in their MI
  approach/program. Based on this learning, the company has created a corporate Director of MI
  (position filled and in place for 2.5 years) to provide a centralized approach to MI across all
  sites. They are currently implementing a formal Fixed Equipment Mechanical Integrity (FEMI)
  program.
• Participation in WTL has resulted in the creation of "WTL Process Safety Committees" at the
  sites. Based on the 2019 WTL Workshop presentation, the sites have incorporated the use of a
  committee to not overwhelm one person and keep the initiative fresh and moving forward. These
  committees are in addition to their Health & Safety Committee. Their Operators are interested in
  this program and are excited about the opportunity to participate. The presentation is available
  on the AFPM Safety Portal and a related practice sharing document is in development.
• A popular topic of focus at regional network meetings has been Human Factors and Learning Teams.
  The examples and lessons learned shared by larger companies have greatly benefited this smaller
  company. While the implementation of Learning Teams is in its infancy at this company, all sites
  have conducted one Learning Team following an event. The front-line employees are energized by
  this process and are learning focused.
Process Safety Site Assessment – PSSAP Assessor Testimonial: "I have taught over 100 process
safety courses. I think the API PSSAP has a much bigger impact and provides much better training
than any of the courses that I have taught. The hands in the field activities with industry
experts for 40+ hours with protocols of best practices creates a phenomenal learning environment
and opportunity."
4 Conclusion
Through the development and implementation of a collaborative safety program, the petroleum
refining and petrochemical industry has moved the needle in process safety performance.
Company executive leadership has influenced, encouraged, and supported a culture of
commitment by their employees to improve process safety. Individuals have utilized the
available resources such as event sharing, PSE data, Hazard Identification and Practice Sharing
documents to learn from industry events to help drive improvement at their sites.
The responsible collaboration developed through APS has evolved beyond process safety. The
Process Safety Regional Networks inspired the creation of additional regional networks for
topics like Occupational Safety, Mechanical Integrity, and Hydrofluoric (HF) Acid Alkylation
Operations. These new regional networks engage with existing industry groups, such as the
Mechanical Integrity RP standards committees and the HF Alky Safe Operations Forum. These
responsible collaborations further reinforce the importance of industry helping industry improve
safety. AFPM and API have also shared the APS model with other industry trade associations, as
improving safety is in everyone’s interest.
The knowledge sharing relationships extended beyond the AFPM and API member companies.
Beginning in 2018, industry Senior Executives have participated in “fly ins” with regulators in
Washington, DC. These fly ins provide an opportunity to develop relationships and share the
industry focus on safety improvements through the APS program. Proactive efforts such as these
illustrate that industry takes its commitment to safety and environmental stewardship seriously
and works to maintain its license to operate.
A successful program needs to continually improve and look forward. As the workforce
demographics change, so may the available training tools. To proactively address this need, the
AFPM Immersive Learning subcommittee was formed to build tools to help improve training.
There is also a drive to understand more about Process Safety Events, with a focus on potential
leading indicators. APS and the newest focus areas will continue to evolve to address the needs
of the industry.
These programs could not have developed into the impactful tools they are without widespread
industry support and collaboration. Advancing Process Safety has a strong foundation and will
continue to move the needle over the next ten years. Thank you to all the companies who help
advance process safety and have shared your testimonials. For more information and to get
involved, please email [email protected].
5 References
[1] “Mechanical Integrity: Fixed Equipment Standards & Recommended Practices (API),”
Jan 2019, http://mechanicalintegrity101.com/~/media/MISG/Mechanical%20Integrity%20Standards.pdf,
Accessed Jan. 25, 2020.
[2] API. API RP 754 Process Safety Performance Indicators for the Refining and
Petrochemical Industries. ANSI/API Recommended Practice 754, second edition, April
2016.
[3] J. Forest, Management discipline, Process Safety Progress 31(4) (2012), 334–336.
[4] M. Vela and J. Forest, “Conduct of Operations Best Practice Networks,” presented at
American Institute of Chemical Engineers 2013 Spring Meeting 9th Global Congress on
Process Safety, San Antonio, TX, April 28–May 1, 2013.
[5] J. Forest, Walk the line, Process Safety Progress 34(2) (2015), 126–129.
[6] J. Forest, Don’t Walk the Line—Dance it!, Process Safety Progress 37(4) (2018), 493–497.
[7] J. Forest, "Process Safety: Walk The Line," Available at
https://www.chemicalprocessing.com/articles/2016/process-safety-walk-the-line/, Accessed on Jan. 25, 2020.
[8] AFPM, “2018 Focused Learnings Report,” Available at www.afpm.org/safetyportal,
Accessed on Jan. 20, 2020 – Login credentials required. Based on analysis of API RP 754
Tier 1 and 2 Data, submitted to AFPM from Petroleum Refining and Petrochemical
Facilities
[9] AFPM, “Hazard Identification: Opening Flare System While in Service,” Available at
www.afpm.org/safetyportal, Accessed on Jan. 20, 2020 – Login credentials required.
[10] AFPM, “Practice Sharing: Open Valve Labeling and Management,” Available at
www.afpm.org/safetyportal, Accessed on Jan. 20, 2020 – Login credentials required.
[11] AFPM, “Presentation: What Went Wrong,” Available at www.afpm.org/safetyportal/
regionalnetworks, Accessed on Jan. 20, 2020 – Login credentials required.
Additional References Not Cited
[12] API. API RP 756 Management of Hazards Associated with Location of Process Plant
Tents. ANSI/API Recommended Practice 756, first edition, September 2014.
[13] L. Swett, “AFPM/API Advancing Process Safety – Practical Tools for Process Safety
Performance Improvement,” presented at American Institute of Chemical Engineers 2018
Spring Meeting 14th Global Congress on Process Safety, Orlando, FL, April 22–25, 2018.
6 Appendix
6.1 List of APS Documents Available
Abstract
The Bureau of Safety and Environmental Enforcement (BSEE) defines safety culture as “the core values
and behaviors of all members of an organization that reflect a commitment to conduct business in a
manner that protects people and the environment” (BSEE, 2013, p. 1). The Committee on Offshore Oil
and Gas Industry Safety Culture noted “Operators and contractors should assess their safety cultures
regularly as part of a safety management system” (Recommendation 5.1, emphasis added).
Interestingly, “regularly” is not defined; thus, it is unclear how frequently safety culture should be
assessed. In all fairness, despite 30 years of research on safety culture (Zohar, 2010), the science of
safety culture change is quite limited. In an effort to advise oil and gas companies on how frequently
they should assess safety culture, we review the empirical literature and identify all of the longitudinal
studies that have examined safety culture over time (i.e., minimum of two assessments). In our review,
we track the industry/jobs for each sample, the average time period over which safety culture has been
examined, as well as the full range of time lags tested. We summarize this literature and the conclusions to
date concerning the extent to which safety culture changes over time and identify the extent to which
further research is warranted.
Atif Mohammed Ashraf, Dr. Luc Vechot, Dr. Stephanie Payne, Dr. Tomasz Olewski
Mary Kay O'Connor Process Safety Center – Qatar; Department of Psychology, Texas A&M University
Texas A&M University at Qatar, Education City, PO Box 23874, Doha, Qatar; 4235 TAMU,
College Station, TX 77843-4235
[email protected]
Abstract
Substantial improvements have been made in the realm of safety performance by applying
inherently safer designs concepts and implementing effective HSE management systems.
However, one of the biggest challenges faced by multinational corporations today is strategically
managing their workforce – particularly a multicultural workforce. In addition to many
advantages, a multicultural workforce faces language barriers and cultural differences in
perceptions of safety-related phenomena including risks, hazards, and personal and process safety
which can inhibit critical safety communications, as well as the overall health of the organization’s
safety culture.
This presentation will provide a brief description of the administration of a safety climate
assessment across four different sites of a 1,200-employee organization in the Middle East. The
inter-disciplinary approach between psychology and engineering to formulate a science-based
safety climate survey will also be highlighted. Challenges including but not limited to achieving
maximum survey participation, overcoming language barriers and the involvement of contractors
will be discussed. Finally, a review of the strengths and areas in need of improvement concerning
safety at the respective sites will be presented. Apart from this, the relationship between national
culture values and safety-related psychological constructs will also be examined. This research
provides an initial peek at the influence of organizational safety culture over and above national
culture on safety knowledge, motivation, and behaviours, which has important theoretical and
practical implications for workplace safety.
Abstract
There is a dearth of research on digital (hand-held, interactive; not .pdf) procedures in the process
safety industries. This study surveyed employees (N = 32) at a large, chemical processing company
in both chemical processing and logistics divisions. The goal was to determine if there are
substantial differences in procedure quality perceptions between digital and paper procedure
formats in addition to differences in procedure deviation behavior. Preliminary results indicate
that, for those using both paper and digital formats, quality perceptions are significantly higher
for the digital format. Further, although not statistically significant, preliminary analyses
indicate workers deviate more frequently when using paper procedures vs. digital. Ongoing analyses
include a between-subjects analysis examining these variables for workers that use only paper
procedures vs. those that use both. We will compare
paper vs. paper and paper vs. digital for these analyses. Additionally, we will present data on
attitudes towards procedures regarding both utility (how useful they are) and compliance. Future
directions for procedure format will be discussed.
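For readers who want to picture the analysis structure, the within-subjects comparison maps onto a paired test and the planned between-subjects comparison onto an independent-samples test. The Python sketch below uses the widely available scipy.stats routines with invented ratings; it is an illustration of the analysis plan, not the study's data or code.

    from scipy import stats

    # Hypothetical quality ratings (1-7 scale); not the study data.
    paper_ratings_dual_users   = [4.1, 3.8, 4.5, 4.0, 3.9]   # workers who use both formats
    digital_ratings_dual_users = [5.0, 4.6, 5.2, 4.8, 4.9]
    paper_only_ratings         = [4.2, 3.7, 4.4, 4.1, 3.6]   # workers who use paper only

    # Within-subjects comparison (same workers rate both formats): paired t-test.
    print(stats.ttest_rel(digital_ratings_dual_users, paper_ratings_dual_users))

    # Between-subjects comparison (paper-only group vs. dual-format group): independent t-test.
    print(stats.ttest_ind(paper_only_ratings, paper_ratings_dual_users))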
Abstract
Operating procedures are used daily in high-risk industries. They are part of the interface between
users and other system components, and contribute to the user's understanding of those other
components. Ideally, operating procedures help users perform their tasks effectively, efficiently
and safely. Users differ from each other in many ways and, unfortunately, those who write
procedures are not usually trained to account for these differences.
Incident investigations show that a large part of adverse events in processing operations is related
to procedures. Specifically, systemic problems in written procedures negatively impact human
reliability and reduce process safety performance across organizations. There are a few simple
recommendations that, if followed by writers of operating procedures, can significantly improve
user performance, ultimately contributing to the safety and effectiveness of the operational system
as a whole.
This paper explains how people process information, describes how written information can lead
to human error, and offers practical tips that those writing procedures in the petroleum industry
can easily implement to help their readers avoid perception, interpretation and decision-making
errors. The selection of tips is based on the author’s experience in teaching the subject,
conducting incident investigations, and reviewing operating procedures in the aerospace and
petroleum industries for more than two decades.
Keywords: Human Error, Human Factors, Operating Procedures, Human Reliability, Process
Safety, System Safety.
Introduction
Procedures are used for many purposes and in many situations. Depending on the organization,
procedures can be used, for example, for:
• Construction, manufacturing and design.
• Testing, validation, verification and quality assurance.
• Normal operations.
• Emergency, abnormal and off-nominal operations or conditions.
• Interactions with contractors.
• Scheduling a conference room or fixing the photocopy machine.
• Daily living activities in conditions that are out of the ordinary such as now, with the
  Coronavirus pandemic.
Industrial organizations are required to use SOPs by statutory safety and health regulations and
by guidelines such as the U.S. Occupational Safety and Health Administration (OSHA) Process
Safety Management (PSM) and Guidelines for Risk Based Process Safety (RBPS). Users of
operating procedures differ from each other in countless ways including level of expertise, time
constraints, work styles, reading abilities, personal circumstances, experience and more. Writers
can make operating procedures easier to use by accounting for user differences. However, those
who write procedures are typically engineers who may not be qualified for technical writing, and
could be either very familiar with the job or writing instructions for a completely new job.
Problems with SOPs are regularly pointed out as one of the major causes of incidents in chemical
process industries. For example, in approximately 70% of over 60 incident investigations
reviewed by the U.S. Chemical Safety Board (CSB), SOPs were not properly developed,
incomplete, or not followed as instructed during the course of the incidents.
Because SOPs are so widely used, and because they continue to contribute to incidents, this paper
focuses on practical tips specifically selected for writing effective SOPs. Although these tips were
chosen with SOPs and the typical petroleum industry SOP writer in mind, they can also be used
for other types of user documentation including user guides and manuals, user handbooks and
general technical instructions, job performance aids, quick reference guides, and instruction
placards.
There are many writing standards and guidelines, some specific to particular industries such as
aviation or nuclear. This paper leverages those resources, applying the Pareto principle to offer a
practical selection of tips that the intended audience can most easily implement while significantly
improving the usability of their SOPs to reduce their organization’s susceptibility to human error.
The selection is limited to guidance specific to writing the SOP text and was made based on the
writing shortfalls most commonly observed by the author in the petroleum industry. Beyond the
writing standards and guidelines excluded from this selection, there are other aspects relevant to
the development of first-rate SOPs that fall outside the scope of this paper but are equally
important, including, but not limited to:
In order to recognize the importance of the writing tips that will be provided, it is helpful to
understand how people process information and how written information, in particular, can
easily lead to human error.
Information processing begins with input from the sensory organs. Once sensory stimuli (touch,
heat, sound waves, photons of light, etc.) are received by the person, they need to be interpreted
for their meaning to be understood. Countless factors affect this interpretation, resulting in
possibly different conclusions from the same stimuli depending on the individual and the
circumstances under which the stimuli are received. After a conclusion has been made on the
meaning of the stimuli, a person can then make a decision as to how to respond, if at all.
Depending on the stimuli, responses could also be involuntary (such as when a hand moves away
from a burning handle); but, for the purpose of this paper, the scope of the stimuli is limited to
information presented through written procedures, which will be addressed as SOPs.
Figure 1 illustrates how people interact with a system. In the case of reading an SOP, the reader
must first be able to see the information on the document (step 1 in the figure). Next, he must be
able to decode what he sees (words, numbers, pictures, graphs or anything else) and understand
what it means (2a). The decision on how to proceed (2b) is based on a combination of his
understanding of this information and many other factors (explained in the next section). The
reader will then act or not (3) based not only on the decision he made, but again on numerous
other factors including the capabilities of the individual and the object of the action.
[Figure 1: Human–system interaction cycle. The human (1) receives information, (2) processes it – (a) interpret/analyze, (b) decide – and (3) performs an action; the system component mirrors the same receive–process–perform steps.]
The bottom half of Figure 1 illustrates the same steps on the part of the system component on
which the person performed the action. That system component could be a thing or another
person. The resulting action, or lack thereof, provides feedback to the first person in the form of
another sensory stimulus, and the cycle continues.
This is, of course, a simplified illustration. In reality, people are continuously receiving sensory
stimuli and, therefore, constantly interpreting, making decisions and responding to information.
Myriad factors affect each of the information processing steps just described. In the case of
processing information from SOPs, those factors could negatively affect an individual’s ability to
see the information provided, causing errors of perception. Some of the factors leading to errors
of perception could be internal to the individual, such as the person’s vision (maybe he needs
glasses and doesn’t have them, or can’t differentiate the colors in a color-coded graph) or
preoccupation (causing him to be distracted and skip reading a step on the procedure). Some of
the factors leading to errors of perception could be external to the individual, such as the
document being unreadable after being exposed to the elements or due to poor lighting.
Similarly, factors internal to the individual could negatively affect how information provided in
SOPs is interpreted after being adequately perceived, leading to errors of interpretation. Cultural
background, knowledge, intelligence, memory, attention, ability to focus and ability to divide
attention are only some examples. Examples of external factors that could lead to errors of
interpretation are the amount of information provided in the procedure (both insufficient
information and too much information could be detrimental), the clarity of the information, the
quality of the information, the individual’s workload, and consistency in many respects, such as
the terminology used throughout the procedure.
Decision-making, the last step of information processing, is also highly susceptible to error due
to internal and external factors. Many of the internal and external factors that affect interpretation
also affect decision-making. Experience, knowledge, motivation, attitude, risk propensity,
fatigue and stress are internal factors that could negatively influence a person’s decision after
having adequately interpreted information. From his knowledge, a person could correctly
interpret the red traffic light that signals drivers to stop at an intersection; but that person could
decide to not stop because he also knows (same factor – knowledge) that there are rarely other
vehicles and never law enforcement at this intersection.
Examples of external factors that could affect decision-making are the clarity of the information,
the amount of information, the timeliness of the information, time constraints, having conflicting
goals, and workload. But also factors such as the existence of rules and their clarity, and the
enforcement of the rules strongly affect decision-making. The driver in the example would
probably stop at the red light if there was a police officer present.
Although most internal factors are difficult and some are not possible to control, they can, to an
extent, be predicted, taken into account and offset with external factors. Employers can hire
people adequately skilled and with sufficient experience (internal factors), and can also provide
training, additional experience, and instructions (external factors).
SOPs are external factors that organizations can manage and, when properly accomplished, can
prevent perception errors, interpretation errors and decision-making errors.
Writing Tips
Regardless of the intent of the writer, other elements such as how a procedure is written, its
design, the context in which it is used, and the physical and mental abilities of the person at the
time of use are ultimately going to dictate the extent to which the procedure is followed. This
section focuses on improving the content of SOPs specifically by helping the writer present
information more effectively.
Words matter. They are the most basic building blocks of SOPs and must be chosen carefully.
The following tips are a selection of standards and guidelines that help convey written
information, and meet the following criteria:
• Reputable source
• Prevent writer errors commonly found in petroleum industry procedures
• Reduce likelihood of perception, interpretation or decision-making errors
• Improve the usability of procedures
• Easy to understand without training
• Easy to implement without training
• Easy to remember with a simple checklist
For simplicity and brevity, tips that need no elaboration are followed only by examples, and
self-explanatory tips stand alone. Tips have been grouped into categories only to make reading
easier, so the titles of the groups are not precise.
Conciseness
• Use short sentences. Long sentences tend to be more complex and therefore difficult to
  understand. Question whether you need every word, especially modifiers like absolutely,
  actually, really and very.
• Express only one idea in each sentence. Break complex ideas into parts and make each one the
  subject of its own sentence.
• Use short paragraphs.
• Include only one topic in each paragraph.
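These conciseness checks also lend themselves to a rough automated first pass before human review. The Python sketch below flags overly long sentences in a draft step; the 20-word threshold is an arbitrary illustrative choice, not a standard from this paper.

    import re

    def flag_long_sentences(text, max_words=20):
        """Return sentences whose word count exceeds max_words (illustrative threshold)."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        return [s for s in sentences if len(s.split()) > max_words]

    draft = ("Close the suction valve. Confirm that the pressure gauge on the discharge "
             "line reads zero before loosening any of the flange bolts on the pump casing, "
             "which should only be attempted after the lockout has been verified by the "
             "shift supervisor and the permit has been signed.")
    for sentence in flag_long_sentences(draft):
        print("Consider splitting:", sentence)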
Clarity
Highlighting
• Use bold for emphasis. ALTHOUGH CAPITAL LETTERS DRAW THE READER'S ATTENTION, IT IS NOT A GOOD
  EMPHASIS TECHNIQUE BECAUSE IT MAKES IT HARDER TO READ. Underlining will also draw the reader's
  attention, but is harder to read and appears to be a link when read in electronic form.
• Limit emphasis to important information; otherwise, you'll dilute its impact.
• Use Warnings to alert the reader of potential hazards to personnel if instructions are not
  obeyed.
• Use Cautions to alert the reader of potential damage to equipment if instructions are not
  obeyed.
• Use Notes to call attention to important supplemental information.
• Make all Warnings, Cautions and Notes distinct from the rest of the text and distinct from each
  other. Example:
      WARNING
      Not making a Warning distinct from the rest of the text could cause the reader to
      inadvertently miss vital information, potentially resulting in equipment damage or
      personnel injury.
• Place Warnings, Cautions and Notes immediately before or after the step to which they apply. If
  they apply to a complete section, place them at the beginning of that section.
• Include the hazard and consequence of failure to obey instructions in Warnings and Cautions.
• Include in Warnings the necessary PPE for the task, if different from what is already in use.
• Do not allow Warnings, Cautions or Notes to be separated from the step to which they apply due
  to page breaks.
• Include only one topic in each Warning, Caution and Note.
Conditions
• Start conditional sentences with the "if" provision, followed by the "then" provision.
• If there are several "if" provisions, use a different sentence for every "if."
• If there are multiple "if" as well as "then" provisions, use an "if-then" table. Tables organize
  the material by a situation (if something is the case in one column) and the consequence (then
  something else happens in a parallel column).
• Use tables or bullet lists to present multiple exceptions or conditions. Examples:
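As a purely hypothetical illustration of the if-then table format (the scenario and values are invented for this example, not taken from any procedure):

    If the suction pressure is ...        Then ...
    below 2 barg                          stop the pump and notify the control room.
    between 2 and 5 barg                  continue operation and log the reading.
    above 5 barg                          open the recycle valve and recheck after 5 minutes.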
Lists
• Use numbers or letters to designate items in a list if future reference or sequence is
  important. Otherwise, use bullets (solid round or square preferably).
• Make sure each of the bullets in the list can make a complete sentence if combined with the
  lead-in sentence. The last bullet in the following example does not complete a sentence with the
  lead-in sentence.
      Vertical lists (lead-in sentence):
      – Highlight levels of importance.
      – Help the user understand the order in which things happen.
      – Make it easy for the user to identify all necessary steps in a process.
      – Add blank space for easy reading.
      – Are an ideal way to present items, conditions, and exceptions.
      – Use bullet lists to help your user focus on important material. (This does not make a
        complete sentence if combined with the lead-in sentence.)
Conclusion
Operating procedures are used daily as part of the interface between users and other system
components in high-risk industries. Investigations continue to point to procedures as important
contributors to incidents. This paper began by describing how people process information, and
explained how and why information processing is so vulnerable to human error. The purpose of
this preliminary information was to help readers recognize the importance of adhering to the
writing tips presented.
The purpose of the paper was to provide practical tips that SOP writers in the petroleum industry
can easily implement to help users avoid errors when following procedures. The paper can be
used as a training aid and the tips as a checklist for SOP writers. Because most of the tips apply
to other types of technical documents in addition to SOPs, the list can be used for other purposes
as well, eventually creating systemic changes that will not only reduce the likelihood of human
error, but also improve human reliability and performance across organizations.
Bibliography
Bates, S. and J. Holroyd, Human factors that lead to non-compliance with standard operating
procedures. 2012, Health and Safety Executive: Derbyshire, UK.
Baybutt, P., Insights into process safety incidents from an analysis of CSB investigations. Journal
of loss prevention in the process industries, 2016. 43: p. 537-548.
Center for Chemical Process Safety, Guidelines for Risk Based Process Safety. 2010, Hoboken,
NJ: John Wiley & Sons.
National Archives and Records Administration, Office of the Federal Register, Document
Drafting Handbook. 2018.
Procedure Professionals Association, Procedure Writer’s Manual, PPA AP-907-005. 2016,
Revision 2
U.S. Department of Labor, Occupational Safety and Health Administration, Process Safety
Management. 2000.
U.S. Energy Information Administration, Office of Communications, EIA Writing Style Guide.
April 2015. Available at: www.eia.gov/eiawritingstyleguide.pdf
Abstract
This study was designed to examine whether or not certain design elements of hazard statements
(HS) actually impact compliance rates. Participants (N = 52) were trained on how to carry out
eight tasks (each with an associated procedure) in a virtual Second Life® warehouse over the course
of 32 trials. We manipulated four HS elements (present vs. absent) – Icon, Number, Fill
(Highlight), and Boxed – leading to a 16-condition within-subjects design. We observed a range
of approximately 20 percentage points in compliance rates across the various conditions. Some
of the conditions on the lower end of compliance rates included elements such as a warning icon
and boxing elements. One interpretation proposed for this observed effect is banner blindness,
with the implication being individuals may be habituated to these types of elements and therefore
ignore them. Future research directions are discussed.
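The four binary design factors (Icon, Number, Fill, Boxed; each present or absent) cross to give the sixteen conditions noted above. A minimal Python sketch of that enumeration, with labels chosen only for illustration, is:

    from itertools import product

    factors = ["Icon", "Number", "Fill", "Boxed"]

    # Each condition is one combination of the four present/absent factors: 2**4 = 16.
    conditions = list(product([True, False], repeat=len(factors)))
    print(len(conditions))  # 16

    for combo in conditions:
        label = ", ".join(f"{name}={'present' if on else 'absent'}"
                          for name, on in zip(factors, combo))
        print(label)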
Keywords
Abstract
An important driver that influences the production profile of an offshore facility is the reliability
of production critical equipment. The improvement in production profile can be achieved by
minimizing the equipment downtime using reliable components, inclusion of redundant units and
effective-efficient inspection maintenance service. One technique of achieving an effective-
efficient inspection maintenance service is to employ Risk-Based Inspection (RBI). RBI is a
decision-making method for inspection planning based on risk – comprising the Consequence of
Failure (CoF) and the Probability of Failure (PoF) – and its effective implementation depends on a
thorough understanding of both components. The PoF is related to the extent of, and uncertainty
in, the degradation affecting the component's resistance to its loading. The calculation methods
for PoF are detailed in several industry-accepted recommended practices such as those from API,
DNV, etc. CoF is defined as the outcome of a leak given that the leak occurs, and its extent
depends on the inventory properties, storage and processing conditions, and the area surrounding
the loss of containment. This paper focuses on the application of effective CoF estimation
techniques for Topside Static Mechanical Equipment on offshore installations.
API 581 is a widely used industry guideline for the RBI technique and provides calculations for PoF
and CoF. The mechanical static equipment on a typical offshore installation will be located on
various decks (levels) and within an open or enclosed module of each deck, while the mechanical
static equipment on a typical onshore installation will be located at ground level. Hence, the
consequences of loss of containment on offshore installation can be observed at various
heights/decks (depending on the release locations); while that on onshore installation will be
predominantly at ground level. The API 581 calculation methods do not differentiate between the
type of installation. While the CoF calculations are extensive for application on onshore facilities,
their application on the offshore facilities might provide misleading results.
Many of the offshore installations have detailed consequence analysis performed as a part of
Quantitative Risk Analysis (QRA), Fire and Explosion Risk Analysis (FERA), etc. This paper
suggests a method to use results of the detailed consequence analysis as an input to the CoF
calculations in RBI Assessment. The CoF results calculated using this methodology have been
compared to those using the API 581 methodology for several offshore installations. The results
obtained by employing this technique suggest a considerable improvement in the CoF results. Since
the results of the existing consequence assessment are used and no new study or calculation needs
to be performed, this RBI technique is not considered cost intensive to implement.
In summary, using detailed consequence assessment results as an input to the RBI technique can aid
in developing a more cost-effective inspection program for Mechanical Static equipment on offshore
installations at no additional effort.
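In the RBI framing summarized above, the risk used to prioritize inspection is commonly computed as the product of PoF and CoF, and the suggestion here amounts to replacing a generic CoF estimate with one drawn from the installation's existing consequence studies. The Python sketch below illustrates that prioritization logic only; the tags, probabilities, and consequence values are placeholders, not API 581 calculations or results from this paper.

    # Hypothetical RBI prioritization sketch: risk = PoF x CoF, with CoF taken from
    # existing QRA/FERA consequence results instead of a generic estimate.
    equipment = [
        {"tag": "V-101", "pof_per_year": 1e-3, "cof_from_qra": 50.0},   # placeholder values
        {"tag": "P-203", "pof_per_year": 5e-4, "cof_from_qra": 200.0},
        {"tag": "E-310", "pof_per_year": 2e-3, "cof_from_qra": 10.0},
    ]

    for item in equipment:
        item["risk"] = item["pof_per_year"] * item["cof_from_qra"]

    # Inspect the highest-risk items first.
    for item in sorted(equipment, key=lambda x: x["risk"], reverse=True):
        print(item["tag"], item["risk"])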
Derek Yelinek
Siemens Process & Safety Consulting
4615 Southwest Freeway, 900, Houston, TX 77027
[email protected]
Abstract
In the oil, gas and chemical businesses it has been considered the nature of the business to delegate
responsibility and decisions for safety, maintenance and inspections to plant personnel.
Historically, this made sense.
Unfortunately, the degree of executive accountability is not always matched by the quality of
information, visibility and decision tools generally available. The good news is that mature
mechanical integrity programs and information systems can significantly improve this state.
We’ll look at three indicators that your Mechanical Integrity (MI) program might be limiting your
strategic capabilities:
1. Mechanical integrity is not a standardized program that is universally applied across the
organization
2. Traditional information practices are inadequate to meet new requirements
3. Arbitrary time-based inspection practices are still used
When each facility is left to establish and follow their own set of mechanical integrity practices,
organizations risk or experience adverse impact in:
• Lower Revenue due to interruptions, downtime, inefficiencies
• Higher operational costs due to reactive maintenance, ineffective inspections,
mismanaged equipment and personnel
• Higher organizational risks of incidents, compliance infractions, insurability issues
• Higher management risk due to lack of information and visibility needed for effective
management; commensurate with the level of accountability
Abstract
Intentionally opening a line that carries a hazardous substance — a procedure known as a line
break — is often necessary for performing maintenance activities on pipes, valves, pumps,
compressors, and other process equipment. However, inadequate or improper line break practices
may increase the risk for a loss of containment event, complicate troubleshooting efforts if a loss
of containment occurs, or inadvertently expose workers to hazardous materials. In this paper, best
practices and considerations for line breaks in hazardous piping systems and for responding to
suspected leaks of hazardous materials will be reviewed. Mitigation and prevention strategies
including mechanical integrity and leak detection programs will also be discussed. Then, a case
study that examines an incident related to a line break in a frozen foods warehouse will be
presented. The incident resulted in a reportable release of anhydrous ammonia from a bank of
parallel compressors. Just hours prior to the release, contractors conducted a line break to remove
an ammonia compressor from service. Gas detector alarms activated during post-shift hours in the
unmanned process building as anhydrous ammonia escaped the refrigeration system. Operators
discovered the release upon reporting to work the following morning and initiated an emergency
shutdown. After reviewing the work performed during the line break, personnel believed they had
identified the cause of the leak and restarted the system. However, the leak persisted after the
restart before being properly diagnosed during a second shutdown. The loss of containment event
described here provides valuable lessons that can aid in developing an effective procedure for safe
process operation following a line break, and illustrates the impact that improper line break
procedures can have on leak identification and system troubleshooting.
1 Introduction
The U.S. Occupational Safety and Health Administration (OSHA) and U.S. Environmental Protection
Agency (EPA) both require employers1 to develop and implement specific safe work practices to control
hazards at facilities covered by OSHA’s Process Safety Management (PSM) program and EPA’s Risk
Management Plan (RMP) rule [1,2]:
The [employer] shall develop and implement safe work practices to provide for the
control of hazards during operations such as lockout/tagout; confined space entry;
opening process equipment or piping; and control over entrance into a facility by
maintenance, contractor, laboratory, or other support personnel. These safe work
practices shall apply to employees and contractor employees. [emphasis added]
Recognized and generally accepted good engineering practices (RAGAGEP) are well-established for
developing lockout/tagout (LOTO) [3,4] and confined space entry [5,6] procedures. Additionally,
resources that aid the development of administrative controls for access to hazardous locations are
readily available [7,8]. RAGAGEP for opening process equipment or piping containing hazardous
chemicals, however, are less established and are often situationally dependent [9]. This paper explores
practices and considerations for intentionally opening piping that carries a hazardous substance,
commonly referred to as line breaking. A loss of containment incident from an ammonia refrigeration
system will also be reviewed, demonstrating potential consequences of line breaking and emphasizing
the importance of developing and properly implementing a line breaking procedure at PSM/RMP
covered facilities.
1 OSHA regulations refer to "employer," while RMP regulations refer to "owner or operator."
Figure 1. General steps for safe and successful line and equipment breaking
[10].
Common methods of isolation for line breaking, in order of increasing effectiveness in preventing
unintended leakage, include (i) valved isolation, (ii) double block and bleed, and (iii) positive isolation
(blinds and blanks) [14,15]. These three common isolation methods are shown in Figure 2 and detailed
below. Other specialized methods of isolation, such as pipe plugs or inflatable bags, are not described in
this paper and should be considered on a case-by-case basis. As previously discussed, the method of
isolation should be reviewed during the HIRA, prior to any work being performed. In addition to the
nature and severity of the process fluid’s hazards, additional considerations may influence the selection
of the method of isolation. These other factors may include, but not necessarily be limited to feasibility,
accessibility, and urgency.
Figure 2. Three common isolation strategies during line breaking are (a)
valved isolation, (b) double block and bleed, and (c) positive isolation using a
blind or blank (blank flange shown).
An isolation strategy that relies solely on an isolation valve as protection from hazardous substances
should be given careful consideration during the review process. In situations where valved isolation is
the only practical or feasible option, particularly for older equipment, additional levels of monitoring
and protection may be necessary. The Case Study presented below demonstrates how a loss of
containment event occurred through a closed isolation valve that was intended to contain the process
fluid (ammonia) during a line break. Additional monitoring and alternative protections that could have
been implemented to mitigate the consequences due to the leaky isolation valve are discussed in the
Lessons Learned section.
2.2.2 Double Block and Bleed
The double block and bleed method of isolation consists of an open bleed valve located between two
closed isolation (block) valves (see Figure 2b). The objective of the bleed valve is to vent hazardous
materials to a safe location (e.g., a vent header that is routed to environmental controls), so there will
be no pressure accumulation between the two isolation valves even if the upstream isolation valve leaks
[17]. However, a build-up of pressure may occur if the bleed line is too small, too long, or discharges into
a vent system that is susceptible to backpressure [14]. In one near-miss event, a compressor was
severely overpressurized with flammable gas by backflow through a double block and bleed
configuration [18]. Operators were unaware that the bleed valve, which had the intended purpose of
relieving pressure by routing material to a flare header, was still open as they began to de-isolate the
system after maintenance work was completed. Pressure increased in the flare header during the de-
isolation process, resulting in significant backpressure occurring between the two isolation valves and
ultimately leading to backflow of flammable gas.
An additional concern for the double block and bleed isolation strategy is the ability to adequately
handle the potential hazardous material at the safe bleed location. In ammonia refrigeration systems
like the one discussed in the Case Study, emergency relief vents typically emit directly to the
atmosphere. This may be an acceptable safe location for small leaks, but consequences of large or
sustained leaks through the upstream isolation valve should be considered for double block and bleed
configurations.
Lockout: The placement of a lockout device (such as a lock) on an energy isolating device to ensure
the energy isolating device and the equipment being controlled cannot be operated.
Tagout: The placement of a prominent warning (such as a tag) on an energy isolating device to
indicate that the energy isolating device and the equipment being controlled may not be operated
until the warning is removed.
While the purpose of both a LOTO procedure and a line breaking procedure is to control hazardous
energy, this paper seeks to emphasize and reinforce the distinctions between LOTO and line breaking. At
processing facilities with hazardous chemicals, a primary purpose of line breaking procedures is to
identify an appropriate hierarchy of controls that protect workers [9]. The hierarchy encompasses
engineering controls (e.g., energy isolation device), administrative controls (e.g., barricade to restrict
access), and PPE. LOTO is clearly an important administrative control during line breaking that helps
ensure hazardous materials remain isolated, but it is just one of many controls that should be
considered. It is important to recognize that LOTO is more than simply placing a lock and tag on a piece
of equipment or valve. Workers must first correctly identify the source(s) of the hazardous energy, so
they can be confident that their isolation strategy is appropriate. Next, after LOTO is performed, it is
common to attempt to verify (Try Out) the LOTO was successful. Incorporating these additional
considerations of LOTO into a line break hazard evaluation can further protect workers from the
hazardous energy source. However, as described in the Case Study, a properly implemented LOTO
program can be compromised if other inadequacies exist in the execution of a line break.
…A work authorization notice or permit must have a procedure that describes the
steps the maintenance supervisor, contractor representative or other person needs to
follow to obtain the necessary clearance to get the job started. The work
authorization procedures need to reference and coordinate, as applicable,
lockout/tagout procedures, line breaking procedures, confined space entry
procedures and hot work authorizations… [emphasis added]
Operating procedures are generally written in sufficient detail such that a qualified worker can
consistently and successfully perform the task [22]. But as the term “nonroutine” suggests, it is not
possible to develop a one-size-fits-all procedure to cover the potential hazards for every line break
scenario. The uncommon nature of nonroutine tasks also adds to the challenge of reliable execution,
since workers will likely only have limited or infrequent experience performing the job. Typically, line
breaking procedures require a review of potential hazards, such as those discussed in Section 2.1 above,
with the idea that a more detailed plan will be incorporated into the line breaking permit following the
review. This review also allows workers with limited experience opening a particular piece of equipment
or piping to become familiar with the potential hazards. Considerations for effective line breaking
procedures and permits are presented in the Lessons Learned from the Case Study.
[Figure 3 diagram labels: Freezer (-20°C), Cooler (5°C), Ice Cream Storage (-25°C); Screw Compressors #1, #2, #3.]
Figure 3. Simplified PFD for PSM/RMP covered anhydrous ammonia refrigeration system discussed in this Case Study.
The Permit did not include any additional details on how to accomplish the requirements listed above.
Notably, a procedure was not developed for effective isolation of the system. Once the Permit was
signed, contractors began work to remove Compressor #2. As shown in Figure 4, the suction and
discharge lines for Compressor #2 were isolated by closing and locking Gate Valves #1 and #2 on the
suction side and Y-pattern Globe Valve #1 on the discharge side. The disconnections occurred at
Compressor #2’s flanges, which were located downstream from Gate Valve #2 and upstream from Y-
pattern Globe Valve #1. Ammonia detectors in the compressor building briefly alarmed as contractors
removed Compressor #2, but all workers in the area were wearing appropriate PPE with the positive
forced ventilation system activated.
Figure 4. Isolation strategy used during the removal of Compressor #2.
The distributed control system (DCS) for the subject refrigeration process was monitored remotely by
the system vendor, with no operations staff onsite during the overnight hours. Shortly after workers left
the facility on the day Compressor #2 was removed, ammonia gas detectors began to continuously
alarm in the compressor area. This caused the positive forced ventilation system to activate
automatically. However, the gas alarms did not automatically trigger an emergency shutdown because
the system shutdown procedure required operators to manually close certain valves. Process data show
that the level in the High Temperature Receiver started to slowly drop following the initial detection of
ammonia. The gas alarms at the facility are recorded in the DCS, but the system did not notify local
operators of the alarm. Gas detectors continued to alarm throughout the night, at times reaching the
instrument’s upper detection limit. Level measurements in the Medium and Low Temperature Receivers
also began to drop.
A refrigeration operator reported to work the following morning and discovered an ammonia release in
progress upon opening the door to the mechanical building that housed the screw compressors. The
operator immediately evacuated the area and initiated an emergency shutdown of the refrigeration
system. The contractors who had performed the maintenance on Compressor #2 were called back to the
facility and informed that the refrigeration operator had observed a cloud of ammonia in the area of
Gate Valve #2. Based solely on the operator’s visual observation, it was determined that a small
ammonia leak had occurred through the closed and locked Gate Valve #2 on the suction side of
Compressor #2. Contractors removed and replaced Gate Valve #2, which had previously been scheduled
for replacement as part of the facility’s mechanical integrity program, and installed a blank on the
downstream side of the new valve. No additional leak investigation was conducted.
The system was restarted following the replacement of Gate Valve #2. No modifications were made to
the discharge side isolation strategy. After two hours of operation, ammonia detectors again alarmed in
the compressor area at the same time operators smelled ammonia and visually observed liquid droplets
at the Y-pattern Globe Valve #1. A second emergency shutdown of the refrigeration system was
initiated. Facility personnel determined that the source of the second ammonia leak was Y-pattern
Globe Valve #1, depicted in Figure 7. Contractors used a re-seating tool on the globe valve to ensure
uniform contact between the valve plug and seat ring. Inspection of the valve bonnet also revealed wear
on the packing. The facility did not have replacement packing in stock, so the bonnet of Y-pattern Globe
Valve #1 was capped to contain any possible leaks through the packing. The system was restarted for a
second time and no subsequent ammonia releases were detected.
4 Lessons Learned
4.1 Line Breaking and LOTO Procedures Are Related yet Distinct
A dedicated line breaking procedure that is distinct from LOTO procedures can help ensure consistent
safe work practices when opening piping that may contain hazardous materials. While a line breaking
procedure existed at the facility described in the Case Study, operators largely viewed it as an extension
of LOTO. The operators believed that properly implementing LOTO on the valves previously shown in
Figure 4 would provide continuous, reliable isolation, and therefore the only isolation requirement
included in the Line Breaking Permit was LOTO. A more thorough HIRA may have exposed limitations in
this isolation strategy.
Figure 7 provides an example of a line breaking procedure that was presented in a document produced
under OSHA’s grant program. While this example procedure is similar to the line breaking procedure
that was in place for the subject ammonia refrigeration system, the “Hazard Review” step in the sample
document was not listed in the subject facility’s line break procedure. Facilities should consider
emphasizing the importance of conducting a hazard review in their line breaking procedures, with the
goal of highlighting that effective isolation strategies commonly involve many other considerations in
addition to implementing a LOTO procedure. Additionally, incorporating more specific guidance on
possible isolation strategies into line breaking procedures (e.g., additional guidance if a simple valved
isolation is implemented) may assist facility personnel in making more informed decisions. Finally, a
requirement within the line breaking procedure to involve a SME in the review step may increase the
likelihood that an effective isolation strategy is developed.
Figure 7. Example Line and Equipment Opening (L.E.O.) procedure presented in a document produced under OSHA’s grant
program that requires a hazard review and the completion of a permit before a line break may be performed [10].
4.2 Permit Systems for Line Breaks Can Help Promote Safe Work Practices
In a 2019 investigation report, the U.S. Chemical Safety and Hazard Investigation Board (CSB) concluded
that “consistently using safe work practices, such as line-breaking permits, ensures that each time a
piece of process equipment is put into a nonroutine situation, personnel use effective safeguards” [29].
The CSB also noted that it is important to train personnel on different situations that would require a
line break permit to ensure the permit process is followed, particularly under nonroutine conditions.
The operators at the subject ammonia refrigeration system did in fact recognize that a line breaking
permit was required to remove Compressor #2. However, the completed permit only contained a
generic checklist that lacked detail on how to provide adequate safeguards. Facilities should consider a
requirement to attach the procedure developed during the HIRA to line breaking permits, which may
help promote a more effective permitting system. A permit that consists of only a simple checklist used
for all line breaks may not provide facility personnel with sufficient detail to enact appropriate controls
in every scenario.
A mechanism to ensure that equipment is maintained in a manner appropriate for its intended
application,
A preventative maintenance program that reduces the need for unplanned maintenance, and
An inspection and testing program that helps recognize when equipment deficiencies occur and
reduces the likelihood that deficiencies lead to serious accidents.
Maintenance records for Compressor #2 indicated that the suction and discharge valves were included
in the annual maintenance inspection program and were determined to be in satisfactory condition a
year prior to the incident. Both the suction and discharge valves, however, were nearing the end of their
expected useful lives and were scheduled to be replaced as part of the Compressor #2 maintenance
activity. Had the facility leveraged their mechanical integrity program records as part of a line break
HIRA, an alternative isolation strategy may have been selected that did not rely on isolation valves that
were scheduled for replacement. Implementing a robust mechanical integrity program, and using it
when reviewing hazards for a line break, can help manage risks when opening piping and equipment.
The release that was discovered the following morning, however, was unexpected. Had this release
triggered a leak investigation, facility personnel would likely have reached two significant conclusions:
(1) the leak was much larger than initially thought (due to the reduction in level in the HTR, MTR, and
LTR) and (2) the discharge isolation valve was also a possible source of the leak. Despite the operator’s
visual observation of an ammonia cloud near the suction isolation valve, any leak from the suction side
must have passed through two closed gate valves and a closed control valve. Additionally, the suction
side of the compressor operates at much lower pressure compared to the discharge side. While it
certainly is possible for a leak to occur on the suction side, the magnitude of the release should have
placed suspicion on the single discharge isolation valve that was responsible for containing the higher-
pressure side of the gas systems. Had operators fully appreciated the size of the leak, it is likely they
would have taken action to improve the isolation strategy on both the suction and discharge side
isolation valves before restarting the system.
5 Conclusion
Unfortunately, the Case Study described here is not an isolated occurrence — numerous unintended
ammonia releases have been documented that were purportedly associated with line breaking activities
[31,32]. More broadly, line breaking can be a nonroutine yet ubiquitous task at process facilities that is
inherently hazardous. Despite U.S. regulations requiring PSM/RMP facilities to control hazards during
line breaking, the regulations do not provide prescriptive requirements and RAGAGEP for line breaking is
not fully developed. This paper has presented several Lessons Learned from an ammonia release that
occurred following a line break on a refrigeration system. These learnings represent good engineering
practices that can help reduce risk associated with opening piping or equipment that potentially
contains hazardous material.
6 References
[1] Process Safety Management of Highly Hazardous Chemicals, 29 CFR § 1910.119(f)(4).
[3] NFPA 70E, Standard for Electrical Safety in the Workplace, 2018, Informative Annex G: Sample
Lockout/Tagout Program.
[4] The Control of Hazardous Energy (Lockout/Tagout), 29 CFR § 1910.147, Appendix A: Typical Minimal
Lockout Procedure.
[5] NFPA 350, Guide for Safe Confined Space Entry and Work, 2019, Annex B: Sample Confined Space
Pre-Entry Evaluation Form and Permit.
[7] U.S. Environmental Protection Agency. Standard Operating Safety Guides, Publication 9285.1-03,
June 1992, Chapter 4: Site Control.
[8] American Chemistry Council. Site Security Guidelines for the U.S. Chemical Industry, October 2001.
[9] Zimmerman, J., Haywood, B. “Process Safety Management Best Practice Line Break Program,” ASSE
Professional Development Conference and Exposition, Atlanta, June 26–29, 2016, Session No. 720.
[10] Georgia Tech Applied Research Corporation. “Process Safety Management of Highly Hazardous &
Explosive Chemicals Module 4: PSM Standard Operating Procedures,” osha.gov. Accessed May 1, 2020.
[11] Dee, S.J., Cox B.L., Walters M.S., Ogle R.A. “PPE – Can you have too much of a good thing?” 2019
Mary Kay O’Connor Process Safety Symposium, College Station, TX, October 22–24, 2019.
[12] U.S. Occupational Safety and Health Administration. “Three Employees Are Exposed To Phosgene
during Leak,” Accident: 201925278.
[13] Crowl, D.A., Louvar, J.F. Chemical Process Safety: Fundamentals with Applications, 3rd Ed., Boston,
MA: Pearson Education, 2011, Chapter 5, Section 5-5: Toxic Effect Criteria.
[14] Mannan, S. Lees’ Loss Prevention in the Process Industries: Hazard Identification, Assessment and
Control, Oxford, UK: Elsevier, 2012, Volume 2, Chapter 21, Section 21.4: Isolation.
[15] U.K. Health and Safety Executive. The Safe Isolation of Plant and Equipment, 2006.
[16] U.S. National Transportation Safety Board. Pipeline Accident Brief: Natural Gas Explosion at
Educational Facility, Minneapolis, Minnesota, January 2, 2020.
[17] Center for Chemical Process Safety (CCPS). “Double Block and Bleed,” Process Safety Beacon, March
2012.
[18] Australian National Offshore Petroleum Safety Authority. “Safety Alert 03: Compressor
Overpressure Incident — Near Miss.”
[19] ANSI/ASSP Z244.1, The Control of Hazardous Energy: Lockout, Tagout and Alternative Methods,
2016, Chapter 7: Control of Hazardous Energy.
[20] Center for Chemical Process Safety (CCPS). Guidelines for Auditing Process Safety Management
Systems, 2nd Ed. Hoboken, New Jersey: John Wiley & Sons, 2011, Chapter 12: Safe Work Practices.
[21] Process Safety Management of Highly Hazardous Chemicals, 29 CFR § 1910.119 Appendix C:
Compliance Guidelines and Recommendations for Process Safety Management (Nonmandatory).
[22] Center for Chemical Process Safety (CCPS). Guidelines for Risk Based Process Safety. Hoboken, New
Jersey: John Wiley & Sons, 2007, Chapter 10: Operating Procedures.
[23] American Petroleum Institute (API). API RP 2001, Fire Protection in Refineries, April 2012, Section
7.4: Loss of Containment.
[24] American Petroleum Institute (API). API RP 574, Inspection Practices for Piping System Components,
4th Ed., November 2016, Section 9.4: Investigation of Leaks.
[25] American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). ASHRAE
Position Document on Ammonia as a Refrigerant, February 1, 2017, pp. 1-14.
[26] NFPA 921, Guide for Fire and Explosion Investigations, 2017, Table 23.8: Combustion Properties of
Common Flammable Gases.
[27] Process Safety Management of Highly Hazardous Chemicals, 29 CFR § 1910.119 Appendix A: List of
Highly Hazardous Chemicals, Toxics and Reactives (Mandatory).
[28] Emergency Planning and Notification, 40 CFR Appendix A to Part 355: The List of Extremely
Hazardous Substances and Their Threshold Planning Quantities.
[29] U.S. Chemical Safety and Hazard Investigation Board, Investigation Report: Toxic Chemical Release
at the DuPont La Porte Chemical Facility, No. 2015-01-I-TX, June 2019, Section 6.5: Line-Breaking
Practices.
[30] Center for Chemical Process Safety, Guidelines for Mechanical Integrity Systems, John Wiley & Sons,
Inc., New Jersey, 2006.
[31] U.S. Occupational Safety and Health Administration. Accidents: 95869.015, 108981.015, 200357788,
and 201486065; Inspection: 1217909.015.
[32] U.S. Environmental Protection Agency. “Unified Grocers Settles EPA Claims for Delayed Reporting of
Ammonia Release, Risk Management, and Emergency Planning Violations,” October 1, 2015.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Process Related Incidents with Fatality and the Effectiveness of the Process
Safety Management Program
Abstract
A database of the Occupational Safety and Health Administration (OSHA) captures incident data
from investigations of fatal incidents and hospitalizations since 1984. OSHA Region 6 includes
five states, including Texas and Louisiana, where much of the US chemical manufacturing and
petroleum refining industry is located. An analysis of process-related investigations by OSHA in
Region 6 shows that large-scale multi-fatality incidents have decreased significantly since the
implementation of the Process Safety Management (PSM) program in 1995. It is notable that the
majority of fatalities now occur in single-fatality incidents. Our preliminary analysis suggests
that these individual process-related fatalities result from operating and maintenance activities
that are not well addressed by current process safety practices or by personal safety measures.
An analysis of such incidents and their circumstances will be conducted, providing
recommendations for improved performance to reduce single-fatality incidents.
Abstract
Applying mind-mapping to planning and executing workplace activities could provide a means of
reducing errors in common tasks such as welding/cutting, filling/emptying tanks and confined
space entry. Trevor Kletz, one of the “founding fathers” of process safety engineering (Vechot et
al, 2014), emphasizes the inability of organizations and individuals to learn from past mistakes in
his final book, “What Went Wrong?: Case Histories of Process Plant Disasters and How They
Could Have Been Avoided, 5th ed. (2009)”. Due to the complexity of both human and
organizational behavior, finding a solution to previous mistakes has been an immense challenge
for many companies. As a result, more effective methods to increase individual and organizational
learning are needed. Mind-mapping appears to offer a means to organize and assist in the recall
of hazards that have resulted in previous incidents.
This paper describes a methodology for examining previous incidents pertaining to certain
common tasks. The emphasis in this effort is to help identify hazards that are hidden from view,
and/or not normally encountered and to organize them in a way that is easy to recall or identify.
Applying this method can improve procedures, work permits, and training, and can serve as a check
by those directly involved before undertaking a hazardous task.
Keywords: Lessons Learned, Hazard Recognition, Hot Work, Permitting, Confined Space, Tank
Filling, Hazardous Substances, Work Control
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
In the early morning hours of Dec 3, 1984, a large toxic gas release from a Union Carbide India
Limited (UCIL) pesticide plant in Bhopal, India swept over a large, densely populated area south
of the plant. About 500,000 people downwind were exposed to the gas cloud. Thousands of people
died in the immediate aftermath (we don’t know how many) and tens of thousands were severely
injured.
Bhopal has changed the way we think about safety culture. PHAs such as HAZOPs have become
popular largely because of Bhopal. In this paper we ask and attempt to answer three questions:
Would a HAZOP have prevented Bhopal?
Would a LOPA have prevented Bhopal?
Would an STPA have prevented Bhopal?
To be clear, the question is not whether these technologies, applied today, would prevent a Bhopal-
like tragedy in the future. The question is, would any of these technologies, applied to the Bhopal
design in the 1960’s, have prevented the accident or mitigated the impact.
1 Background
1.1 Bhopal – The City
Bhopal is a bustling metropolis of 2 million people. The city and surrounding area are home to a
large open-air zoo, a fascinating museum of Indian tribal life, a collection of historical palaces
and temples and a cave with stone-age paintings that is a UNESCO World Heritage site.
It is located in the geographic center of India. Excellent rail connections make it an attractive
location for manufacturing. That is one reason why Union Carbide India Limited (UCIL) chose
to locate an agricultural chemicals plant there in the 1960’s.
2 Process/Plant Description
MIC is an intermediate in the production of a family of pesticides called carbamates, of which
Sevin was the principal product of the Bhopal pesticide plant.
Phosgene Plant: CO + Cl2 → Phosgene
MIC Plant: Phosgene + Methylamine → MIC [Continuous Plant]
1-Naphthol Plant (new technology, worked at pilot scale, never worked at full scale)
Sevin Plant: MIC + 1-Naphthol → Sevin pesticide [Batch Plant]
Figure 1 illustrates the important features of the pesticide production facilities focusing on MIC
storage.
MIC was produced onsite (MIC Production Plant). This was a continuous process.
MIC was consumed onsite as a raw material in the Pesticide Plant (MIC Consumer). This
was a batch process.
MIC storage was required between the continuous and batch plants. MIC storage was
supposed to be kept to a ‘minimum’.
A Caustic Scrubber was provided to neutralize MIC vented from the storage tanks.
A flare was provided to burn MIC vented from the storage tanks.
A refrigeration system was provided to keep the stored MIC cold. Keeping the MIC cold
decreases the reaction rate of MIC with water and other contaminants.
[Figure 1 graphic: MIC Production feeding the MIC storage tank, with Nitrogen padding via a PCV, a PSV relief path, the Caustic Scrubber (with Caustic Tank) and Vent, the Refrigeration system, and the MIC Consumer.]
3 The Accident
3.1 Initiating Event
People who believe in root causes also believe in initiating events or triggering events. The
triggering event for this accident was the introduction of a large amount of water into one of the
MIC storage tanks, E-610. The MIC-water reaction is highly exothermic, resulting in a massive
gas release.
There is dispute about how the water got into the tank. Early speculation centered on a filter
washing operation, without proper isolation, in another part of the plant. But I think it is now
well established that the triggering event was sabotage – water intentionally introduced into the
tank by a disgruntled employee.
The source of the water is not important for our current discussion. Water introduction into the
tank should not have created a catastrophe, regardless of the source of the water.
Columns (frequency, A through E): A = Happens in the Industry, B = Happens in our Company, C = Happens in our Plant, D = Likely to Happen, E = Likely to Happen Multiple Times. Rows (severity) with the risk ranking in each cell:
5 - CATASTROPHIC (multiple fatalities): Yellow | Yellow | RED | RED | RED
4 - MAJOR (single fatality): Green | Yellow | Yellow | RED | RED
3 - SEVERE (severe lost time injury): Green | Green | Yellow | Yellow | RED
2 - MINOR (reportable injury): Green | Green | Green | Yellow | Yellow
1 - SLIGHT (first aid): Green | Green | Green | Green | Yellow
Figure 2 – HAZOP Risk Matrix
5 Would a LOPA have Prevented it?
What follows is a simplified LOPA conducted via a Required Risk Reduction matrix (Duhon,
Cronin, 2015). The matrix, Figure 3, has the following properties:
1. Both the horizontal axis (frequency) and vertical axis (severity) have one order of
magnitude difference from one row/column to the next. Note that I have extended the
vertical axis to show 10, 100 and 1000 fatalities rather than the single ‘Multiple Fatalities’
severity used in the HAZOP matrix above.
2. Rather than red, yellow, green, the cells have numbers. Each number represents the required
order of magnitude of risk reduction in the given design.
This form of the risk matrix allows explicit identification of the unmitigated risk and explicit
accounting for the impact of safeguards.
Columns (frequency): A = 1/10000 Years, B = 1/1000 Years, C = 1/100 Years, D = 1/10 Years, E = 1/1 Year. Rows (severity) with the required orders of magnitude of risk reduction in each cell:
8 (1000 Public Fatalities): 4 | 5 | 6 | 7 | 8
7 (1000 Fatalities/100 Public Fatalities): 3 | 4 | 5 | 6 | 7
6 (100 Fatalities/10 Public Fatalities): 2 | 3 | 4 | 5 | 6
5 - CATASTROPHIC (10 Fatalities): 1 | 2 | 3 | 4 | 5
4 - MAJOR (single fatality): 0 | 1 | 2 | 3 | 4
3 - SEVERE (severe lost time injury): - | 0 | 1 | 2 | 3
2 - MINOR (reportable injury): - | - | 0 | 1 | 2
1 - SLIGHT (first aid): - | - | - | 0 | 1
Figure 3: Required Risk Reduction Matrix
Performing a LOPA via the RRR
Let's do a LOPA on the two scenarios we considered for the HAZOP:
Scenario 1: Large amount of water accidentally introduced into the tank via human error.
Likelihood: 1/100 years, 1/1000 years???
Severity: 100 fatalities, 1000 fatalities???
Conclusion: Somewhere in the B6 to C8 range with an RRR of 3 to 5.
Let’s assume that the LOPA team settled on an RRR = 4
Applying IPLs:
PSV/Flare: 2
PCV/Scrubber: 0 (the scrubber was sized for minor releases)
Pressure Alarm: 0 to 1 (probably 0; would have depended on identification of available responses such as addition of diluent)
Temperature Alarm: 0
Refrigeration: 0 or 1 (probably 0; refrigeration would have slowed the reaction initially, but eventually the runaway reaction would have overwhelmed the refrigeration system)
Buffer zone around plant: 1?? (would the LOPA team have considered the buffer zone?)
No vent or drain connections: 1??
Diluent: 0 (see pressure alarm; the LOPA team would have sought information on the nature of the diluent, its effect on the reaction rate, the ability of operators to implement it, etc.)
Sum of IPLs: 3 to 4
Scenario 2: Large amount of water introduced into the tank via sabotage.
This discussion would have proceeded much as in scenario 1 except that one safeguard would
not apply to this scenario – the lack of drain and vent connections which made human error less
likely, but did not make sabotage any harder or less likely.
Would a LOPA have prevented Bhopal? The most obvious impact of a LOPA would have been
to discount the scrubber as a safeguard/IPL for a major release. It was sized for minor venting.
It seems likely that a LOPA would not have found adequate safeguards to balance the threat.
Ergo, a LOPA would likely have driven process modifications of some sort.
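To illustrate the bookkeeping behind the Required Risk Reduction approach, the following minimal sketch (hypothetical Python, using the Scenario 1 numbers above; it is not the cited Duhon/Cronin implementation) compares the required orders of magnitude of risk reduction with the summed IPL credits:

# Hypothetical sketch of the Required Risk Reduction (RRR) bookkeeping used above.
# Each IPL credit is an order of magnitude of risk reduction; the scenario is
# adequately protected only if the summed credits meet or exceed the RRR.

def residual_gap(required_rrr, ipl_credits):
    """Orders of magnitude of risk reduction still needed after applying IPLs."""
    return max(0, required_rrr - sum(ipl_credits.values()))

# Scenario 1 (water into the MIC tank via human error), per the discussion above
required_rrr = 4
ipl_credits = {
    "PSV/flare": 2,
    "PCV/scrubber": 0,            # sized only for minor releases
    "pressure alarm": 0,
    "temperature alarm": 0,
    "refrigeration": 0,           # a runaway reaction would overwhelm it
    "buffer zone": 1,             # questionable credit
    "no vent/drain connections": 1,
}

print("Summed IPL credits:", sum(ipl_credits.values()))                  # 4
print("Remaining gap (orders of magnitude):",
      residual_gap(required_rrr, ipl_credits))                           # 0 with these optimistic credits

With the optimistic credits the gap closes, but dropping either questionable credit leaves a shortfall, which is the sense in which a LOPA would likely have driven process modifications.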
6 Would an STPA have Prevented it?
STPA is a systems-theory-based PHA methodology developed by Nancy Leveson (Leveson,
2011). It is currently mainly used in highly complex, highly hazardous industries such as
military, space, aeronautics, medicine and autonomous vehicles.
The guiding principle of STPA is that accidents happen when we lose control. The
methodology, in a nutshell, is:
1. Define the system and scope of study
2. Develop the control structures
3. For each control action identified in the control structure, identify how we could lose
control via:
a. Safe action not applied
b. Unsafe action applied
c. Action applied too early, too late or out of order
d. Action applied too long or not long enough
4. For each unsafe control action identified, identify causal factors
5. Make recommendations to prevent the unsafe control actions
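To make step 3 concrete, the sketch below (hypothetical Python, not part of the original study) simply enumerates the four loss-of-control guide phrases against a few of the control actions discussed later, producing a blank UCA worksheet for a team to fill in:

# Hypothetical sketch: enumerate the four loss-of-control guide phrases (step 3 above)
# against a few control actions from this paper to produce a blank UCA worksheet.

GUIDE_PHRASES = [
    "Safe action not applied",
    "Unsafe action applied",
    "Action applied too early, too late or out of order",
    "Action applied too long or not long enough",
]

# (controller, control action, control objective) - examples taken from the tables below
control_actions = [
    ("Operator", "Water wash filter", "Control water ingress"),
    ("Operators", "Sound alarm", "Control human exposure"),
    ("Government agencies and UCC/UCIL", "Provide emergency response plan", "Control human exposure"),
]

def blank_uca_worksheet(actions):
    """One empty row per (control action, guide phrase) for the STPA team to assess."""
    return [
        {"controller": c, "action": a, "objective": o, "guide_phrase": g, "uca": ""}
        for (c, a, o) in actions
        for g in GUIDE_PHRASES
    ]

for row in blank_uca_worksheet(control_actions):
    print(f"{row['controller']:35s} | {row['action']:30s} | {row['guide_phrase']}")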
STPA uses control structures to identify all control actions (process related, economic,
regulatory, social or otherwise), that directly or indirectly affect the control objective under
study. Such control structures help us think about control loops other than process control loops,
which is perhaps the most important impact of STPA.
Given the hazardous nature of MIC, the obvious control objectives that would have been
analysed by the STPA team were controlling water ingress and human exposure in case of a leak.
Figures 4 and 5 show the control structure (at two levels of detail) to represent these objectives.
Figure 4 – Bhopal MIC Release Control Structure (High Level)
Plan
Figure 5 – More Detailed MIC Release Control Structure for Emergency
In an STPA, all control actions identified in control structures are rigorously analysed to identify
any potential unsafe control actions (UCAs) and make recommendations to prevent them. Tables
1-6 show how UCAs are identified. The explicit emphasis on control would likely have led the
STPA team to evaluate several control functions more extensively than HAZOP or LOPA teams
would have, using UCA tables such as the following:
Table 1: Identifying Unsafe Control Actions (1/5)
Objective: Control Water Ingress
Controller: Operator
Action: Water Wash Filter
Providing causes hazard:
- Washing of filters causes hazard if the water system pressure is high enough for water to reach the MIC storage tank
- Washing of filters causes hazard if the MIC Storage Tank is not isolated
- Washing of filters causes hazard if the MIC Storage Tank isolation is not leak proof
(Not providing causes hazard; Providing too early/too late; Stopped too soon/Applied too long: none identified)
Water entered the tank either through a hose attached following removal of a pressure gauge
(sabotage) or via improper isolation for a maintenance operation. In either case, the STPA team
would likely have questioned the level of control provided for keeping water out of the tank. It
shouldn’t have been as easy as removing the pressure gauge! It should not have been as ‘easy’
as forgetting to install slip blinds!
Context: By explicitly identifying the Buffer Zone as a ‘control’ that can fail, rather than a
safeguard to be taken credit for, an STPA would have likely prompted the management to
enclose the entire Buffer Zone within the plant fence rather than depending on local and state
governments to police property rights.
Table 4: Identifying Unsafe Control Actions (4/5)
Objective: Control Human Exposure
Controller: Government Agencies and UCC/UCIL
Action: Developing and providing an emergency response plan
Not providing causes hazard:
- Not developing a response plan causes hazards if the medical professionals are not trained to treat MIC exposure
- Not developing a response plan causes hazards if the medical professionals do not have the supplies to treat MIC exposure
- Not developing a response plan causes hazards if the first responders are not trained for MIC release
- Not developing a response plan causes hazards if the people are not trained on how to respond to a release
Stopped too soon / Applied too long:
- Hazard develops if training to medical professionals is provided early, but not updated when required
- Hazard develops if supplies are provided early, but not maintained
- Hazard develops if first responder training is provided early, but not updated when required
- Hazard develops if people are trained early, but training is not maintained
(Providing causes hazard; Providing too early/too late: none identified)
Table 5: Identifying Unsafe Control Actions (5/5)
Objective: Control Human Exposure
Controller: Operators
Action: Sound Alarm
Not providing causes hazard:
- Not sounding an alarm causes hazard as people will not be aware of the leak
Providing causes hazard:
- Sounding an alarm causes hazard if the alarm is sounded too often and hence ignored
- Providing an alarm causes hazard if the people are not aware of how the leak-related alarm sounds
Providing too early / Too late:
- Sounding an alarm causes hazard if the alarm is sounded too late for effective response
Stopped too soon / Applied too long:
- Sounding an alarm causes hazard if the alarm is stopped too soon
Context: There should have been a unique alarm in place that the common people recognized.
There should have been a plan in place to mitigate the effects, even if a leak occurred. Citizens
were reported to be running on the streets, unaware of the consequences, helpless and
uninitiated. The simplest remedies like a wet towel on the face could have mitigated the effects.
Why were citizens not aware? Why were the most basic remedies for MIC exposure unknown?
Why couldn’t the police and the military mobilize the affected on time? An STPA would have
not only identified the need to develop an Emergency Response Plan, but also analysed the
developed plan to realize any potential gaps in emergency readiness.
Context: Perhaps the most distinctive feature of an STPA is its ability to address socio-political
aspects of a system. To this day, even after thorough investigation and multiple interviews with
the operators, we still do not have a clear explanation of what exactly happened in Bhopal.
Operators offered opposing explanations, some were hesitant to speak due to fear and some
others were perhaps just disgruntled by the state of affairs in the company. There was a clear lack
of communication and trust between the UCIL management and the plant operators. Did they
provide the management with previous incident reports? (minor leaks had occurred before the
major accident). Did they sound the alarm on time? Were they hesitant to tell the truth? All of
this boils down to the communication established between the operators and management. Had the
communication gaps been addressed earlier, perhaps the operators would have voiced their
grievances instead of acting out or covering up. By recognizing the
communication between operators and management as a control action, an STPA would have, at
the very least, initiated a discussion about such communication gaps that were prevalent.
7 Conclusion
The process industries began routinely doing HAZOPs following, and at least partially as a result
of, Bhopal and other significant accidents in that time period. I (Duhon) began to wonder if a
HAZOP performed in the 1960s on the Bhopal MIC Plant design would have prevented the
accident.
I found this question difficult to answer.
When we started doing LOPAs I asked the question anew. LOPA methodology corrects a
common blind spot. A HAZOP team will ‘take credit’ for almost anything that looks like a
safeguard. LOPA technology identifies and quantifies the effectiveness of each possible
safeguard. A LOPA would almost certainly not have taken credit for the MIC caustic scrubber
for example.
And yet, it still wasn’t clear to me that a LOPA team in the 1960s would have made the
necessary changes to prevent the accident or adequately mitigate the results.
And now that we are at least contemplating the use of STPAs I’ve asked myself the question
once more. Would an STPA, performed in the 1960s, have prevented Bhopal? This time it is
easier to have some confidence. The basic premise of the STPA methodology is that accidents
occur when we lose control. This starting point leads us to generate a much larger and richer set
of things that might go wrong.
It is much more likely that an STPA would have prevented Bhopal, and by extension, that it will
prevent the next ‘unimaginable’ tragedy.
8 Sources
D’Silva, 2007, The Black Box of Bhopal: A Closer Look at the World's Deadliest Industrial
Disaster, Trafford Publishing
Perrow, 1984, Normal Accidents, Living with High-Risk Technologies, Basic Books
Kalelkar, 1988, “Investigation of Large-Magnitude Incidents: Bhopal as a Case Study” Presented
at The Institution of Chemical Engineers Conference On Preventing Major Chemical Accidents,
London, England, May, 1988
Amnesty International, 2004, “Clouds of Injustice, Bhopal disaster 20 years on”, Amnesty
International Publications, London
Diane Vaughan, 1999, The Challenger Launch Decision: Risky Technology, Culture, and
Deviance at NASA, University of Chicago Press
Duhon, Cronin, 2015, “Risk Assessment in HAZOPs”, SPE-173544-MS, Presented at the SPE
E&P Health, Safety, Security, & Environmental Conference – Americas held in Denver, Colorado,
USA, 16–18 March 2015
Leveson, 2011, Engineering a Safer World: Systems Thinking Applied to Safety, MIT Press
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
EHS and APM software platforms fail to effectively couple asset integrity with a process safety
complement in the form of risk reduction metrics, KPI benchmarking, scorecards, dashboards,
reports, alerts, and other data displays and notifications.
“Data Rich & Insights Poor” is a characteristic observation in organizations not fully deploying
digital analytics and transformation tools. The ineffectiveness of incident reduction processes,
tools and software applications in use today can generally be characterized as follows:
• A lot of data is being generated, but is underutilized for data-driven analytics and systemic
root cause solutioning to enable whole classes of defects to be resolved
• Lots of emphasis on compliance, but too little focus on process safety risk reduction
• Other than API 754 PSE Tier 1 and 2 KPI comparisons, there is little evidence of
competitive KPI/indices benchmarking of the much more numerous near miss and unsafe
conditions data of Tier 3 and 4 PSEs
• Programs lack business perspective regarding the impact of asset integrity on process
safety and incident reduction as a function of mechanical availability and associated lost
production costs (a huge driver considering that every 1% gain in mechanical availability
is worth about $8 million of additional margin capture per year in a typical 200,000 bpd
refinery)
• Not utilizing a predictive approach involving algorithmic correlation relative to causation
• Incapacity to link condition monitoring and failure analysis for predictive analytics,
advanced pattern recognition, machine learning and artificial intelligence
So, with an IIoT predictive application environment as the backdrop and an asset integrity and
process safety analytic framework as the primary enabler, this paper discusses methods, metrics,
performance analyses, and KPI benchmarking techniques for driving Operational Excellence as it
relates to the ultimate concern of any PSM program, i.e., the loss of primary containment (LOPC)
and associated impacts to production, profitability and process safety.
Keywords: asset integrity, process safety, software, incident management, metrics, KPI
benchmarking
Introduction
Two high-level indicators used to evaluate manufacturing cost effectiveness are mechanical
availability and maintenance costs as a percent of replacement asset value (RAV). It is widely
accepted by Oil & Gas companies that world class manufacturing performance means operating
at or above 97% mechanical availability as well as spending less than 2% on maintenance as a
percent of replacement asset value (RAV).
In order to achieve such “best in class” targets, tools must be used to analyze and trend
performance relative to those measures. Deep-dive methods must surface indicators which drive
toward systemic root causes of inadequate performance and reveal both asset integrity and process
safety “AIPSM” incidents as a function of economic impact (lost production plus direct losses).
As such, lost profit opportunity ($LPO) becomes a measure of loss of primary containment
(LOPC) incidents and near misses characterized by equipment anomalies and upset/malfunction
operating conditions.
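As a simple illustration of these measures, the sketch below (hypothetical Python, with invented example figures) computes mechanical availability, maintenance cost as a percent of RAV, and the lost profit opportunity ($LPO) defined above as lost production plus direct losses:

# Hypothetical sketch (invented numbers) of the headline measures discussed above:
# mechanical availability, maintenance cost as a percent of RAV, and $LPO.

def mechanical_availability(uptime_hours, period_hours):
    """Fraction of the period the asset was mechanically available, as a percent."""
    return 100.0 * uptime_hours / period_hours

def maintenance_pct_of_rav(annual_maintenance_cost, replacement_asset_value):
    """Annual maintenance spend as a percent of replacement asset value (RAV)."""
    return 100.0 * annual_maintenance_cost / replacement_asset_value

def lost_profit_opportunity(lost_barrels, margin_per_barrel, direct_losses):
    """$LPO for an incident or near miss: lost production value plus direct losses."""
    return lost_barrels * margin_per_barrel + direct_losses

# Example (invented) figures for one year and one unplanned outage
print(mechanical_availability(uptime_hours=8500, period_hours=8760))    # ~97.0%
print(maintenance_pct_of_rav(40e6, 2.5e9))                              # 1.6% of RAV
print(lost_profit_opportunity(lost_barrels=400_000, margin_per_barrel=10,
                              direct_losses=1.5e6))                     # $5.5 million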
By way of example, five years after deploying an earlier scaled-down spreadsheet version of just
such an incident-focused asset integrity and process safety analytic framework at a multi-refinery
company, the following results were realized across the enterprise:
Safety: 27% reduction in Process Safety Management (PSM) and environmental incidents
24% reduction in fires and explosions
39% reduction in spills and releases
48% reduction in near misses
Additionally, a calibrated/weighted asset risk ranking tool and methodology facilitated the proper
allocation of tools and resources for identifying performance optimization opportunities and
driving Operational Excellence (OE) initiatives. Emphasizing the value of this asset integrity
approach drove the proper prioritization of opportunities and virtually guaranteed the successful
outcome of the exercise.
If 97% mechanical availability is now considered world-class asset integrity, could sustainable
98% or 99% availability be achieved by coupling the incident investigation and reporting
analytic framework being introduced here (which we will call PSM Plus for purposes of this paper) with
condition monitoring Industrial IoT (IIoT) technologies like predictive analytics, Advanced
Pattern Recognition (APR) and machine learning? Considering that every 1% gain in mechanical
availability is now worth about $8 million of additional margin capture per year in a typical
200,000 bpd refinery, the low-cost, high impact potential of a systemic RCFA approach like PSM
Plus is a logical next step for IIoT predictive analytics.
So, with IIoT predictive analytics as the backdrop and the asset integrity and process safety analytic
framework of PSM Plus as the primary enabler, this paper discusses methods, metrics,
performance analyses, key performance indicators (KPIs) and benchmarking techniques for
driving OE as it relates to the ultimate concern of any PSM program, i.e., the loss of primary
containment (LOPC) and associated impacts to production, profitability and process safety.
This paper reveals how PSM Plus ties together seemingly connected but functionally disparate
asset integrity and process safety fundamentals into a collaborative analytic framework involving:
Knowledge Management Systems Platforms
o Asset Integrity Management (AIM) Systems (like Metegrity Visions™)
o EHS Management Systems (like Enablon, Gensuite, Visium KMS)
Problem Solving Methodologies
o Continuous improvement programs (A3 problem-solving, 8Ds, TapRoot®,
DMAIC, Six Sigma, etc.)
Predictive Maintenance with APR and trending technologies
o DCS systems, operational data and data historians
Benchmarking Tools
o Gap analysis and regulatory reporting
o “Solomon Associates style” industry-wide process safety benchmarking
PSM Plus is at the core joining these systems and tools into a cohesive and synergistic set of
technologies for managing asset integrity and process safety. It includes several business methods
for evaluating people, processes and tools (and technology) and focuses on the three high value
OE business drivers of risk management, cost reduction, and productivity improvement.
“If you don’t measure it, you can’t manage it!” [1, 2]
The PSM rulemaking was exceptional in its vision some thirty years ago but could have been made
much better by the inclusion of metrics and KPI benchmarking. As is often said, “what gets
measured gets done,” and the absence of such measurement is likely a leading reason why so many
asset integrity and process safety programs have failed to grow and continuously improve relative
to industry best practices and OSHA expectations.
Even though API 754 (published 2010 and significantly revised in 2016) (Figure 1) makes the
case for establishing common metrics across industry with its public reporting of higher
consequence process safety events (PSEs), it is of limited usefulness as a meaningful
benchmarking tool without the public sharing and comparison of the much more numerous near
miss and unsafe conditions data of the Tier 3 and 4 PSEs.
Figure 1. API RP 754 “Process Safety Performance Indicators of the Refining and
Petrochemical Industries”
Although API 754 is a good benchmarking program overall, the US Chemical Safety Board (CSB)
characterizes its shortcomings (Gomez, 2012) [3] as follows:
1. The statistical power of the few higher severity Tier 1 and 2 events is insufficient to detect
an effect
2. The Tier 1 and 2 numbers are lagging indicators and thus of limited usefulness as
performance indicators
3. The lower consequence near miss and management system failure Tier 3 and 4 events occur
in larger number and are thereby more reflective of process failures, and yet are not publicly
reported for industry trend analysis, KPI benchmarking, and continuous improvement
Undoubtedly, by analyzing for systemic root causes and publicly benchmarking the much more
numerous “free lessons” of Tier 3 and Tier 4 PSE findings according to company size, type, peer
group and other deeper-dive comparators, specific performance improvement opportunities could be
better assessed and thereby further utilized to enhance best practice and regulatory
conformance.
Of the fourteen PSM elements, incident investigation is the one which provides the best window
on asset integrity, plant reliability and process safety risk management, and which gets the most
attention from regulators, especially the CSB. Incident analyses almost always show that loss of
primary containment (LOPC) is preventable, with mechanical failure far exceeding the next
highest categories of operator error, other/unknown and upset/malfunction, which together
constitute the leading process safety risk opportunities for improved performance in the process
industry today. Utilizing a data-driven predictive analytics/decision support framework like PSM
Plus for systemic root cause analysis drives incident and risk reduction by enabling whole classes
of defects to be resolved across an enterprise and throughout facilities company-wide.
Major process safety events often arise from the unforeseen interactions of human and
organizational factors, leading to what is called an “organizational incident.” The “people,
processes and tools” management system framework of the PSM rule focuses heavily on work
processes and program reviews, yet fails to adequately address the organizational “people”
aspects of leadership and management commitment, management reviews, authorities and
accountability, human factors, organizational culture as well as continuous improvement by way
of metrics/KPIs for assessing program effectiveness and maturity.
With respect to minimizing organizational incidents, any number of the following factors may
contribute to the ineffectiveness of incident management system (IMS) processes and tools
(software and other technologies) in use today, especially with Tier 3 and 4 PSE analysis:
Inadequate root cause failure analysis (RCFA) quality - Lack of critical thinking in
process safety event (PSE) causal analysis
o RCFA methods like the “Five-Whys” are utilized superficially and with minimal
causal factors analysis
o Search for a single root cause promotes a flawed reductionist view of incident
causation given that multiple root causal factors most often contribute to a PSE
o Human factors and equipment failure are not analyzed for systemic root cause(s) and
systems-focused solutions, thereby not enabling whole classes of defects to be
resolved
o Associations in the data are not sought out for high level systemic analysis and
planning
o EHS platforms lack a process safety complement and are not configured to
incorporate and display metrics, KPIs, scorecards, dashboards, reports, portals,
alerts, and analyses
o Programs lack business perspective regarding the impact of asset integrity on
process safety and incident reduction
Management systems are not properly designed for optimization planning and process
improvement
o Lots of emphasis on compliance, but too little focus on process safety risk reduction
o A more robust data collection, analysis and reporting Plan-Do-Check-Act (PDCA)
management system structure is necessary to satisfy regulatory expectations for the
demonstration of program effectiveness and maturity as well as for leadership and
management commitment
o Other than API 754 PSE Tier 1 and 2 KPI comparisons, there is little evidence of
competitive KPI/indices benchmarking of the much more numerous near miss and
unsafe conditions data of Tier 3 and 4 PSEs across the enterprise and company-wide
o Limited evidence of employee (frontline) involvement and “closest to the work”
mentality in incident reduction policies, practices and programs
o Insufficient evaluation of asset integrity and process safety performance relative to
economic impact (lost production plus direct costs)
o Lack of mechanisms for management system consistency and sustainability
A lot of data is being generated, but is underutilized for data-driven analytics and
solutioning - “Data Rich & Insights Poor” is a characteristic observation in organizations
not fully or properly deploying digital analytics and transformation tools
o Not utilizing a predictive approach involving algorithmic correlation relative to
causation
o Not leveraging data infrastructure and information systems like OSISoft PI and
CMMS platforms
o Limitations exist due to inflexible IT architectures which are not conducive to robust
analytics, especially predictive analytics and IIoT digital transformation processes
and technologies
o Limited use of mobile digital graphical interfaces to facilitate end user experience by
way of dashboards, scorecards and reports with data views, charts, maps, tables,
KPI’s and alerts built on a predictive analytics platform
o Inability to link condition monitoring and failure analysis for predictive analytics
The PSM Plus AI+PSM (and IIoT-enabled) coupling of process safety (incident management)
with asset integrity (= maintain + inspect + design + operate) utilizes a management system
framework (people, processes, tools/technology) to drive risk mitigation and management in
conformance with industry RAGAGEP (recognized and generally accepted good engineering
practice). A strategically designed 20-element assessment protocol based on AIChE CCPS book
“Risk Based Process Safety” as well as other industry best practices is used to analyze and identify
implementation opportunities for performance improvement, process optimization and systemic
risk reduction.
Investing more organizational effort into the design and implementation of incident management
system (IMS) programs is key to assuring that the right information is gathered from failure events
and is applied to managing risk systemically. In so doing, “lessons learned” becomes more than a
clichéd phrase but instead an integral part of the organizational incident reduction process. PSM
Plus was developed to address this overall industry-wide need.
Looks at process safety through the lens of asset integrity and lost production impacts
Establishes a common framework for assessing asset integrity and process safety
management effectiveness and maturity for KPI benchmarking between plants
Categorically risk ranks issues and guides problem solving teams to high value deep-dive
investigation of systemic problems, prioritized by safety as well as economic impact
Provides an analytic framework/filtering tool for root cause analysis teams now inundated
with API 754 PSE investigations
Makes the system point to root causes with data sufficient to facilitate systemic
analysis, thereby eliminating the need to review large numbers of individual detailed
reports
Gives specific attention to equipment failure analysis and human factors fundamentals
Utilizes FMEA algorithms to assist the user in root cause determination and solutioning
Limits the number of distinctive event cause categories with a coding structure that
connects industry-specific causal factors with a concise set of basic and root cause
categories
Prioritizes by API 754 PSE Tier definitions and measures by economic impact to the
corporation, providing a clear understanding of lost profit opportunities associated with
asset integrity and process safety incidents
Utilizes a data-driven predictive analytics/decision support framework for systemic root
cause analysis in order to enable whole classes of defects to be resolved across the
enterprise and throughout facilities company-wide
Provides early warning of precursor/incipient failure stages of equipment degradation and
associated impacts on mechanical availability, process safety and profitability
Provides for an added capability of linking condition monitoring and failure analysis to
advanced pattern recognition for predictive analytics
PSM Plus AIPSM (i.e., the spreadsheet version) is a field-tested and proven system for the
characterization, classification and categorization of asset integrity and process safety incidents
risk-ranked and prioritized by API 754 PSE potential as well as economic impact (production plus
direct losses). Sophisticated machine learning techniques scour historical incidents to find
meaningful patterns in the PSM Plus data to prioritize and guide investigative teams to high value
problem-solving exercises.
Most importantly, PSM Plus AIPSM prioritizes systemic operational problems and guides
engineers and process safety specialists to focus on high value investigation exercises as measured
by economic impact to the organization. It captures and structures the 20% of data that 80% of
operators, engineers, managers and corporate executives want to see by tapping into the data
rich potential of an enterprise asset management (EAM) system and surfacing that 20% of key
information as KPIs.
Figure 2. Business Model for PSM Plus and Predictive Maintenance with APR Analytics
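A minimal sketch of that prioritization idea is shown below (hypothetical Python; the incident records and fields are invented for illustration and are not the PSM Plus data model). It ranks records first by API 754 PSE tier and then by economic impact to the corporation:

# Hypothetical sketch: rank incident/near-miss records by API 754 tier and by
# economic impact (lost production plus direct losses), as described above.

incidents = [
    # identifier, API 754 tier (1 = most severe), lost production $, direct losses $
    {"id": "INC-101", "tier": 3, "lost_production": 250_000, "direct_losses": 20_000},
    {"id": "INC-102", "tier": 1, "lost_production": 4_000_000, "direct_losses": 600_000},
    {"id": "INC-103", "tier": 4, "lost_production": 0, "direct_losses": 5_000},
    {"id": "INC-104", "tier": 3, "lost_production": 900_000, "direct_losses": 75_000},
]

def economic_impact(record):
    """$LPO for one record: lost production plus direct losses."""
    return record["lost_production"] + record["direct_losses"]

# Sort by tier (Tier 1 first), then by economic impact (largest first)
prioritized = sorted(incidents, key=lambda r: (r["tier"], -economic_impact(r)))

for record in prioritized:
    print(record["id"], "Tier", record["tier"], "impact $", economic_impact(record))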
Design, Implementation and Partner Collaborations
Sophisticated machine learning technologies can be used to find meaningful patterns from the PSM
Plus data to prioritize issues and guide problem solving teams to high value investigation exercises.
Solutioning algorithms and methodologies such as A3, Eight Disciplines (8Ds), TapRoot® or ABS
RCA Handbook problem solving are used to conduct extensive investigations, invoking data from
the APR tools to conduct RCFA and establish patterns and trends to apply proactive measures to
predict and prevent future events from occurring.
This incident investigation and reporting analytic framework has the added capability of linking
condition monitoring and incident investigation to Advanced Pattern Recognition (APR) tools by
automatically establishing event frames that identify asset integrity and process safety events and
lost profit opportunity ($LPO). The APR tools use this information to mark the time series data in
a way that connects the anomaly or incident to the operational parameters leading up to the event.
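As a rough illustration of that linkage (a sketch only, not the interface of any particular historian or APR product), an event frame can be represented as a labeled time window used to slice out the operating data leading up to an event:

# Hypothetical sketch: an "event frame" as a labeled time window over historian data,
# used to pull the operating parameters leading up to an asset integrity / process
# safety event for pattern analysis. Illustrative only.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class EventFrame:
    asset: str
    event_type: str          # e.g., "LOPC near miss"
    start: datetime
    end: datetime
    lpo_dollars: float       # lost profit opportunity assigned to the event

def window_before(frame, samples, lead_time=timedelta(hours=24)):
    """Return timestamped samples in the lead-up window preceding the event frame."""
    cutoff = frame.start - lead_time
    return {t: v for t, v in samples.items() if cutoff <= t < frame.start}

# Example (invented) usage: seal vibration readings before a near miss on pump P-101
frame = EventFrame("P-101 seal", "LOPC near miss",
                   datetime(2020, 3, 14, 6, 0), datetime(2020, 3, 14, 9, 0), 120_000.0)
samples = {datetime(2020, 3, 13, 12, 0): 48.2, datetime(2020, 3, 14, 5, 0): 61.7}
print(window_before(frame, samples))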
Applying a rigorous structure and discipline to the investigation process, PSM Plus goes well
beyond analysis of the high severity Tier 1 and 2 PSEs to draw even more insight from the much
more numerous Tier 3 and 4 events. Every year in a typical refinery, Tier 3 and 4 numbers far
exceed Tier 1 and 2 events by many hundreds, and yet are often not adequately analyzed (or
reviewed at all) for root cause due to a lack of proper tools and resources. Obviously, the higher
incidence of these lower severity near misses and unsafe conditions is statistically more relevant
than the relatively few higher severity PSEs and is thereby more reflective of common cause
failures. The characterization, categorization, risk-ranking and prioritization of that abundance
of data (better seen as near-miss “free lessons”) (Figure 3) is especially critical for identifying
systemic problems and converting them into leading indicators of more serious PSE potential.
continuous improvement with mechanical availability and process safety, ALL incident data must
be analyzed for systemic effect in order to enable whole classes of defects to be resolved across an
enterprise and maximize the knowledge base necessary to reduce the risk of LOPC occurrence as
well as minimize lost profit opportunity ($LPO).
The PSM Plus analytic framework is the ideal complement for benchmarking process safety
management program effectiveness and maturity as well as the establishment of a potential
industry PSM accreditation model. Such a model might entail conformance with elements of the
AIChE CCPS book “Risk Based Process Safety” and also include a more robust capture, analysis,
and benchmarking of API 754 Tier 3 and Tier 4 PSEs relative to incident precursors, data patterns,
IOW excursions as well as other leading indicators.
Furthermore, with the proliferation of low-cost sensors and robust wireless communications
technologies, plant systems will soon be flooded with thousands more data points, alarms and alert
notifications. As such, knowing what data to capture, manage, display and control is essential to
proper metrics development and analysis. Without an understanding of the importance of data
collection and quality, that endeavor will just be an exercise in “garbage in, garbage out.”
Without quality data and problem-solving analytics, you will be on the wrong path with your
IIoT application and its expected return on investment (ROI). The role of PSM Plus in the IIoT
digital transformation journey is as an analytic framework (methodology) for incident
investigation and reporting which classifies and categorizes asset integrity incidents, and then risk-
ranks and prioritizes according to API 754 PSE potential as well as economic impact to the
corporation. Knowing “why it happened, and how to prevent it from happening again” is central
to what PSM Plus is, and what it does better, faster and smarter than any software analytics offering
on the market today (Figure 4).
Figure 4. The role of PSM Plus in the digital transformation journey… “Why did it
happen, and how to prevent it from happening again”
For companies already using user-configurable (in theory) software packages and the API 754
incident management/classification system, incorporating the PSM Plus predictive analytics
framework only involves reconfiguring fields and output to incorporate and display metrics, KPIs,
scorecards, dashboards, reports, portals, alerts, and analyses.
The user-configurable software capability greatly facilitates this initiative, and results in a low
cost, high impact opportunity for significantly enhancing risk/incident reduction efforts not only
at the site level, but also corporate-wide. Such an initiative greatly complements ongoing safety
culture improvement programs by further leveraging the often overlooked “people” soft risk/skills
applications and improvements among stakeholders at all levels.
Achieving Operational Excellence – A Technological Evolution [4]
The convergence of information technology (IT) and operations technology (OT) data has been
greatly facilitated by the proliferation of low-cost sensors and internet-protocol-enabled devices.
Connecting people, processes, machines and equipment via the internet is the next wave of the
industrial revolution which is being called the Industrial IoT (IIoT), or Industry 4.0. It is here, and
it will be the early adopters who gain the competitive advantage in this new frontier of the Oil &
Gas industry.
Chevron is one of those trailblazing pioneers, and has recently partnered with IoT services from
Microsoft to enable thousands of pieces of refining and oil field equipment with wireless sensors
by 2024 to predict exactly when equipment will need to be serviced. This is just part of Microsoft’s
$5 billion, four-year investment in the Oil & Gas sector to drive best-in-class improvements in
mechanical availability, profitability and ultimately process safety and risk management.
However, an analytic framework for extending Mean Time Between Failure (MTBF) and
thereby driving continuous improvement in mechanical availability must also be part of the
solution!
Process plants are built around a myriad of machine and equipment assets like pumps,
compressors, heat exchangers, piping, vessels, control valves and instrumentation, with the
integrity of those assets being key to managing plant reliability and process safety risk. Given the
demands of PSM program elements like mechanical integrity, hazard assessment, procedures,
change management, incident investigation, and information management, it can be difficult for
plant personnel to keep up. IIoT coupled with analytic tools holds the promise to help.
IIoT is converging IT with OT by connecting and enabling information exchange between sensors
and analytics utilizing the data. Pre-built process safety analytic software or “apps” running on
premise or in the cloud can quickly analyze data from performance and condition monitoring
sensors and automatically find trends to generate alerts and reports, thus freeing up engineers from
performing those same tasks manually.
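As a rough illustration of the kind of task such apps automate, the sketch below flags excursions outside an integrity operating window (IOW) on a stream of historian samples. The tag name, limit values and alert format are hypothetical placeholders for demonstration, not any vendor's actual interface.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class IOWLimit:
    tag: str      # historian tag name (hypothetical)
    low: float    # integrity operating window low limit
    high: float   # integrity operating window high limit

def check_excursions(limit, samples):
    """Return an alert record for every sample that falls outside the IOW."""
    alerts = []
    for timestamp, value in samples:
        if value < limit.low or value > limit.high:
            alerts.append({"tag": limit.tag,
                           "time": timestamp.isoformat(),
                           "value": value,
                           "window": (limit.low, limit.high)})
    return alerts

# Example: a hypothetical overhead-temperature tag with a 90-130 degC window.
limit = IOWLimit(tag="TI-1001.PV", low=90.0, high=130.0)
samples = [(datetime(2020, 10, 20, 8, 0), 128.5),
           (datetime(2020, 10, 20, 8, 5), 133.2)]  # second sample exceeds the high limit
print(check_excursions(limit, samples))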
As with Chevron and their Microsoft IIoT joint venture, many Oil & Gas companies are making
significant investments in systems and processes to better manage and enhance the performance
of their assets and ensure the safety of their work environments. These include investments in:
Enterprise Asset Management (EAM) systems that capture a wide variety of data related
to design, construction, commissioning, operations and maintenance of plant, equipment
and facilities
Digital Control Systems (DCS) that provide the automation necessary to control and
manage production…generating enormous volumes of data that is stored in data historians
Knowledge Management Systems that record incidents and act as repositories for capturing
lessons learned, developing best practices, and reporting for regulatory compliance
Universal Data Connector technologies like Eramosa eRIS™ acting as a “data broker” to
unlock data from proprietary platforms, thus enabling the extraction of data from any
database, software product or EAM system
Problem solving and RCFA methodologies that invoke a disciplined approach to resolving
issues and eliminating defects
Additionally, new investments are being made in emerging technologies where the convergence
of low-cost sensors with robust wireless communications technologies make it economically
feasible to outfit more equipment for condition-based monitoring. This proliferation of
instrumentation feeds directly into the creation of more data which can be used in new predictive
maintenance with Advanced Pattern Recognition (APR) systems. These systems, fueled by recent
strides in machine learning and artificial intelligence, hold incredible promise for unlocking
meaningful insights from historical data and recognizing trends within integrity operating
envelopes (API 584 IOWs) that could predict undesirable outcomes before they occur (Figure 5).
Undoubtedly, the promise of these technologies will change the way process and design engineers
interact with data. Having said that, it is equally important to establish confidence that the data
being stored for analysis accurately reflects the characteristics of the original signal. For this
reason, tools like Pattern Discovery Technologies CompressionInsight™ product should be
employed to monitor for data historian configuration issues and data point behavior anomalies.
Unfortunately, the majority of these systems or processes exist as islands. Incremental value can
only be realized when connected in a meaningful way that addresses asset integrity and process
safety risk relative to economic impacts. PSM Plus creates this value.
Summary
IIoT is still in its infancy and remains a mystery to most process plant managers who have either
not heard of it or do not understand its potential. Nevertheless, IIoT is clearly here and happening
now, and convincing people that this rapidly emerging technology is not just another pioneering
effort but instead “what good looks like” will be the challenge.
As such, and with an IIoT application environment in mind, PSM Plus is a predictive process
safety analytic framework which drives conformance to the safety management systems guidance
of OSHA 1910.119 (Process Safety Management), API 1173 (Pipeline Safety Management) and
API 754 (Process Safety Performance Indicators) by classifying and categorizing safety incidents
to uncover systemic problems that can be risk ranked and prioritized by lost-production impacts
to the organization. This guides deep-dive process and design engineering teams to high value,
high impact returns on problem solving exercises. In its use of enterprise-wide benchmarking
KPIs, the methodology has proven to dramatically reduce process safety and environmental
incidents, improve equipment reliability and reduce maintenance costs…saving millions of dollars
in annual production losses (every 1% gain in mechanical availability is worth about $8 million
of additional margin capture per year in a typical 200,000 bpd refinery).
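As a loose illustration of this classification and risk-ranking idea, the sketch below sorts a handful of invented incident records by a combined consequence-tier and economic-impact score. The tier weights and cost figures are illustrative assumptions only, not the actual PSM Plus scoring rules.

# Illustrative only: rank asset integrity incidents by API 754 tier and economic impact.
# The tier weights and the incident records are invented for demonstration.
incidents = [
    {"id": "INC-101", "tier": 2, "production_loss_usd": 1_200_000, "repair_usd": 150_000},
    {"id": "INC-102", "tier": 3, "production_loss_usd": 300_000,   "repair_usd": 40_000},
    {"id": "INC-103", "tier": 1, "production_loss_usd": 4_500_000, "repair_usd": 600_000},
]

TIER_WEIGHT = {1: 1000, 2: 100, 3: 10, 4: 1}  # higher PSE potential -> higher weight (assumed)

def risk_score(incident):
    economic_impact = incident["production_loss_usd"] + incident["repair_usd"]
    return TIER_WEIGHT[incident["tier"]] * economic_impact

for incident in sorted(incidents, key=risk_score, reverse=True):
    print(incident["id"], f"score = {risk_score(incident):,}")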
Figure 6. Asset Integrity Platform with PSM Plus Tools and Analytics
Rather than just another pioneering effort, PSM Plus is a proven management systems methodology (of people, processes and tools/technology) which can either be configured into an existing knowledge management system (like Metegrity Visions™, Enablon, Gensuite, Visium KMS, Sphera, etc.) or "bolted on" as a standalone module complementing APR and machine learning products (like TrendMiner, Seeq, Falkonry, ECG, GE SmartSignal/Predix, etc.). That combination is further enabled by the cost-effective proliferation of wireless sensor technologies (from Emerson, Honeywell, Siemens, ABB, Flowserve, Endress+Hauser, etc.).
Many market-leading AIM and EHS platforms as well as countless other disparate IIoT software
companies offer an array of predictive tools and analytics integrated with asset monitoring, but
none incorporate a “real world” analytic framework like that of PSM Plus which utilizes field-
tested and proven metrics, asset classification and categorization, risk ranking and prioritization,
KPI benchmarking and associated economic impacts (production plus direct losses) (Figure 6).
Rather than first understanding the application environment and then focusing on how software
solutions can help, many software designers start with a “solution” and then search for problems
to solve. To the contrary, and as based on decades of SME experience, PSM Plus was developed
with a hands-on, Reliability Centered Maintenance (RCM) “closest to the work” mentality as well
as a first-hand appreciation of management rank dynamics from field supervisor to department
manager to plant manager to corporate VP.
Figure 7. PSM Plus Incident Management System RCFA, Data Views, Reports
With that reporting hierarchy in mind, the primary goal of PSM Plus is to analyze and trend cost
minimization, drive asset optimization and conformance to process safety RAGAGEP (recognized
and generally accepted good engineering practice) not for just any one facility, but across all
facilities as well as enterprise-wide, and ultimately throughout industry (via API 754 adaptation).
“Following the leader” in a range of best-in-class to next-to-last is what RAGAGEP
conformance is all about, and in this highly regulated industry, there is strength as well as comfort
in numbers.
There are some 650 major refineries globally and many hundreds more petrochemical plants (not
to mention pipelines and midstream assets). Given the widespread application (now at nearly all
US facilities and growing internationally) of the relatively new API 754 tiered incident benchmarking
standard and the abundance of data being collected, the opportunities for an IIoT asset monitoring
application coupled with the incident investigation and reporting analytic framework of PSM Plus
are numerous and especially ripe for the early adopters (Figure 7).
Besides enhancements to process safety, just considering that every 1% gain in mechanical
availability is now worth about $8 million of additional margin capture per year in a typical
200,000 bpd refinery, the low-cost, high impact potential of PSM Plus is well worth exploring as
a complement to any predictive maintenance/analytics platform.
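As a rough back-of-envelope check of that figure, assuming an incremental gross margin of roughly $11 per barrel (an assumed value, not one given in the paper):

throughput_bpd = 200_000          # typical refinery size cited above
availability_gain = 0.01          # 1% additional on-stream capacity
margin_usd_per_bbl = 11.0         # assumed incremental gross margin

extra_barrels = throughput_bpd * 365 * availability_gain
extra_margin = extra_barrels * margin_usd_per_bbl
print(f"{extra_margin / 1e6:.1f} million USD per year")  # ~8.0 million USD per year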
Conclusion
The regulatory climate changed considerably following the highly publicized incidents at BP
Texas City in 2005, Tesoro Anacortes in 2010, Chevron Richmond in 2012, and ExxonMobil
Torrance in 2015. Each happened not due to a failure of equipment, instrumentation, facility siting,
operator, procedure, communication, supervision, or training, but rather a failure of all those things
together, i.e., a management system failure. In addition to tens of millions of dollars in
enforcement actions, legal consequences are now getting personal as was the case for plant
management in the aftermath of the 2011 Chevron Pembroke incident. Systemic failures were cited
in a May 2019 sentencing hearing at Swansea Crown Court with the plant declared "fundamentally
unsafe" due to a series of errors and failings that contributed to a multi-fatality incident.
“These systemic failures are manifest failings which paint a picture of a workplace which had
become, over time, fundamentally unsafe” …Chevron Pembroke refinery explosion, Swansea
Crown Court, May 2019 sentencing
“Should have been foreseen and acted upon”
So, could a Chevron Pembroke type incident be averted by your process safety programs as they
exist today? From an asset integrity perspective, do you have formalized programs and tools at
both the site and corporate levels for effectively identifying and communicating systemic failures?
Are you keeping up with industry pace-setters regarding asset integrity and process safety
RAGAGEP? And, how might regulators answer these questions for you should a catastrophic
incident occur? PSM Plus was designed to address these concerns.
References
[1] M.T. Marshall, Best Practices for an Evergreening Pressure Relief System Management
Program, AIChE 7th Global Congress on Process Safety, Chicago, IL, 2011
[2] M.T. Marshall, The Profit Motive: Process Safety Management, Loss of Containment and Lost
Profit Opportunity, Inspectioneering®, May/June Issue 2016
[3] M.R. Gomez, CSB Public Hearing on Process Safety Indicators July 23, 2012, Retrieved from
The US Chemical Safety Board: www.csb.gov, Accessed on October 15, 2015
[4] M.T Marshall, Enhance PSM Design with Metrics-Driven Best Practices, Hydrocarbon
Processing®, February Issue 2016
Michael Marshall, PE is an Oil & Gas industry consultant with Michael Marshall LLC (email:
[email protected] ), and has 39 years’ experience working in the downstream, midstream
and petrochemical industries. While working first with Chevron (10 years) and then Marathon
Petroleum Company (23 years - retired), he progressed through various in-plant and corporate
refining facility and project engineering, operations, maintenance, and equipment
inspection/reliability supervisory and managerial positions. It was Mike’s many years of hands-on
experience while serving in frontline engineering, operations, maintenance and inspection roles
which instilled in him the importance of properly designed risk minimization and management
systems, performance metrics and KPIs. He has unique insight and expertise in areas of risk-based
design relative to loss of containment (LOPC) damage mechanisms, safety integrity systems and
overpressure protection.
ABSTRACT
The overall number of incidents in the US petrochemical industry has varied over the years, but the attention drawn by a number of incidents in 2019 alone has been significant. A large share of these significant incidents occurred in Texas, especially in and around the oil and gas hub near Houston. These incidents involved fatalities and injuries, large fires, and in some cases explosions or long plumes hovering over the city of Houston. On one occasion, the emergency response extended over several days. As one incident made the news, the next raised further questions and concerns among regulatory authorities as well as the public. The recurrence of such major incidents in recent years may indicate a deficiency in the underlying process safety measures across the industry.
Process safety encompasses the safety triad: prevention, mitigation and response. Understanding what causes incidents and taking proactive measures is important to prevent them in the first place. Many companies have good safety programs in place to prevent incidents; however, incidents keep happening. It is essential to identify how to build a stronger safety triad and take proactive measures against the underlying issues to reduce incidents. The current paper examines the factors that significantly contribute to the failures behind these incidents and proposes measures to address them. From the analysis of the factors, it is evident that both short-term and long-term planning and implementation are required by companies in collaboration with regulatory agencies and academic institutions.
Abstract
Employee and operator safety at all petroleum facilities is a constant that must be maintained at
the highest level, regardless of production status or industry financial climate. Value engineering
offers protective construction solutions at reduced cost, decreased installation time and long
operational life expectancy. The Unified Wall Panel System (UWPS) is a value engineered
methodology that offers full-performance protective qualities in the industrial safety regimes of
blast overpressure, fire/thermal loading, seismic events, high wind, ballistic and fragmentation
resistance. The UWPS is a composite structure wall configuration developed through extensive
R&D, full-scale testing and real-world applications by industry leaders in hazard analysis, threat
mitigation and facility construction. New and innovative applications of technologies include
novel employment of high-strength cementitious structural paneling, utilization of non-aramid
advanced mineral fiber reinforcement, and metallic foam energy absorption. These cutting-edge
technologies provide key advantages in the modular UWPS concept currently being considered
for installation across a wide spectrum of applications, including military facilities, public utilities,
academic institutions, medical campuses and a variety of publicly or privately accessible locations.
The UWPS approach provides for the option of a shelter-in-place response for operational
continuity during a catastrophic event driven by manmade and natural incidents. Petroleum
industry safety in operational zones, control rooms, and administrative areas can be enhanced
through utilization of the UWPS and value engineering methodology.
Keywords: UWPS, Value Engineering, Fire, Blasts, Cementitious Panel, Mineral Fiber, Metallic
Foam
Victor H. Edwards
VHE Technical Analysis
P. O. Box 940849
Houston, TX 77094-7849
E-mail: [email protected]
Abstract
Climate change is occurring and some of the changes will impact the safety of process plants. This
paper summarizes potential adverse impacts on process plants and then outlines how to conduct a
Climate Risk Vulnerability Assessment (CRVA) for a process plant. One product of the CRVA is
a plan of action to eliminate or reduce adverse climate effects on the process plant.
Keywords: Climate change, Process plant, Climate Risk Vulnerability Assessment, Effects on
process plants, Hazards from climate change.
Figure 2 – Cooling tower fan housing, Lake Charles, LA, Hurricane Rita, 2005
Figure 3 – Tank Battery in Cameron, LA, after Hurricane Rita 2005.
Hurricane Rita came ashore on September 24, 2005 as a Category 3 hurricane with winds of 115
mph. It caused $18.5 billion (2005) in damage. Figures 1 through 3 show examples of damage.
Hurricane Laura came ashore at Cameron, Louisiana on Thursday, August 27, 2020 as a
Category 4 storm. Based on windspeed, Laura was the fifth strongest hurricane on record in US history. Laura’s 150 mph winds made it the strongest hurricane to pass over the state of Louisiana, matching the 1856 Last Island Hurricane in intensity. Process plants in the area experienced damage from Laura, but details were not available at the time of writing.
What changes should occur during the siting, design, and construction of
process plants?
• First, conduct a Climate Risk Vulnerability Assessment (CRVA). This will be illustrated by adapting the U.S. Climate Resilience Toolkit (https://toolkit.climate.gov/, accessed 12 September 2020) for potential plant sites in a five-step process; see Table 3.
______________________________________________________________________________
Table 3 – Five Steps to Resilience (Climate Risk Vulnerability Assessment)
______________________________________________________________________________
1. Explore hazards
2. Assess vulnerabilities and risk
3. Investigate options
4. Prioritize and plan
5. Take action
______________________________________________________________________________
Table 6 is a list of the column headings of a CRVA spreadsheet. This spreadsheet is adapted from
the U. S. Climate Resiliency Toolkit. Table 7 is the upper left portion of a typical CRVA
spreadsheet. Each line is for a specific asset. When an asset is at risk from more than one hazard,
additional lines are added for each hazard. For example, the plant administration building would
be at risk from both flooding (first line) and hurricane (second line).
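A minimal sketch of how such a spreadsheet could be represented in code is shown below, with one record per asset-hazard pair using the Table 6 column headings. The field names and the second (hurricane) example row are illustrative assumptions.

# One record per asset-hazard pair, mirroring the CRVA spreadsheet columns (Table 6).
# The hurricane row below is an illustrative placeholder.
crva_rows = [
    {"asset": "Plant administration building",
     "installed_condition": "1990/Fair",
     "hazard": "Record rainfall",
     "consequences": "Flooding causing structural damage; water damage",
     "climate_stressors_trend": "Rainstorms of increasing frequency and intensity; increasing",
     "non_climate_stressors_trend": "Upstream development of watershed expanding flood plain; increasing"},
    {"asset": "Plant administration building",   # same asset, second hazard gets its own row
     "installed_condition": "1990/Fair",
     "hazard": "Hurricane wind",
     "consequences": "Roof and cladding damage",
     "climate_stressors_trend": "More intense hurricanes; increasing",
     "non_climate_stressors_trend": "None identified"},
]

# Step 4 (prioritize and plan): group the hazards recorded against a given asset.
hazards = [row["hazard"] for row in crva_rows if row["asset"] == "Plant administration building"]
print(hazards)  # ['Record rainfall', 'Hurricane wind']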
1.4 Assess the plant’s current physical status
• For existing plants, inspect the facility to detect and document any deficiencies in assets, such as mechanical damage, corroded structures, vessels, or equipment, and the adequacy of drainage features. Record deficiencies in a brief report and highlight them in column two of the CRVA spreadsheet. Also indicate the year each asset was placed in service.
• For existing plants, review the design basis of structures, vessels, and equipment to see if existing facilities meet current building codes and equipment standards, and identify modifications necessary to meet load demands. Record deficiencies in a brief report and highlight them in column two of the CRVA spreadsheet.
• These two activities are best conducted by appropriate members of the CRVA team to build
team knowledge prior to team CRVA risk review meetings.
Key Assets or Resources | When Installed & Condition | Weather or Climate Hazard | Potential or Historical Consequences | Climate stressors and trend | Non-climate stressors and trend
Plant administration building | 1990/Fair | Record rainfall | Flooding causing structural damage; water damage | Rainstorms of increasing frequency and intensity; increasing trend | Upstream development of watershed expanding flood plain; increasing trend
3. Investigate options
Consider solutions to mitigate highest risk.
Check how others have responded to similar risks
Reduce your list to reasonable actions
5. Take action
Move forward with the stakeholders who accept responsibility and bring resources to take action.
Check to see if your actions are increasing your resilience.
As you move forward, you’ll monitor, review, and report on your project.
Summary and Conclusions
Climate change is happening and some of its effects present risks to process plants. Reported here is a method to conduct a Climate Risk Vulnerability Assessment (CRVA) for a plant at risk. The purpose of the CRVA is to identify those climate hazards and to plan modifications to eliminate or reduce those risks by adapting and using the U.S. Climate Resilience Toolkit.
Trish Kerin
IChemE Safety Centre
Level 7, 455 Bourke St Melbourne, VIC Australia
[email protected]
Abstract
While much research has been undertaken into natural hazards triggering technological disasters (Natech), it remains a challenging area. It can be difficult to move past the psychological bias and focus on the possible incident outcome without discounting a seemingly incredible cause. We
have seen some notable instances of Natech incidents, including the Fukushima Daiichi Nuclear
Power Plant meltdown following an earthquake and subsequent tsunami, as well as the impacts of
Hurricane Harvey on the Houston industrial facilities. More recently, there is a natural disaster of
significant proportions taking place in Australia, with a prolonged and intense bushfire season. As
at January 8, 2020 over 10,700,000 hectares have burnt across 7 states and territories (only one
territory remains unburnt) and the fires are not yet all under control. This has burnt a significant
range of environments, even razing whole towns. Twenty-eight people have lost their lives during
this fire season to date. These towns contain gas storage and water treatment facilities, which can
have ongoing process safety implications. Major cities are clogged by smoke, creating ongoing
health issues for the public. This paper will discuss the impact of the 2019-2020 Australia bushfires
and the assumptions made on how to prepare for a natural disaster.
Australian Bushfires
Bushfires, sometimes known as forest fires in other countries, are a natural part of the Australian
landscape. The season for the states in Southern Australia usually commences in summer
(December to February) and continues on to autumn or fall (March to May). In Queensland and
New South Wales, the season starts earlier in spring (September to November) or early summer.
The intensity of the fires is impacted largely by several factors, including fuel load, fuel moisture, wind speed, ambient temperature and relative humidity. While some fires are the result of arson, the most common source of ignition is natural, being a dry lightning strike [1]. As Australia is generally a hot, drought-prone land mass, it is at risk of fire each year.
Indigenous Australians have used fire as a land management and hunting tool for tens of thousands
of years. The National Museum of Australia states “The first bushfires in the colony were reported
in 1797. As Aboriginal people were driven off their land, their regime of low-intensity fire
management went with them, and bushfires became more prevalent. From then, the government
sought to limit Aboriginal and settler use of fire as an agricultural tool.” [2]
When the fires are particularly intense, a firestorm situation can develop, where the fire creates its own weather system. Colloquially known as a “pyrocumulus”, such a fire can create a “flammagenitus” formation [3]. See Figure 1 for formation details. Where the weather system is strong enough, fire tornadoes can also form [4].
Figure 1. How pyrocumulus develop [3]
Figure 2. Photo of fire tornado. Photo by Brett Hemmings ©2019 Getty Images
In a high wind situation, new fires can ignite well in advance of the fire front via wind-borne embers, effectively resulting in the fire advancing at very fast speeds [5].
These conditions can lead to catastrophic bushfire seasons in Australia. The five deadliest bushfires on record in Australia are shown in Table 1. It should be noted that this list does not include the fires of the 2019-2020 season. The total area burnt in the five deadliest bushfires noted in Table 1 is 2,955,300 hectares.
Figure 3. Impact of the fire on visibility during daylight, showing Mallacoota Pier
As a result of the fires, smoke haze impacted major cities along the South East of Australia. Figure
4 shows the smoke over Sydney Airport on the afternoon of 5 December 2019. The smoke posed
not only a health hazard due to airborne particulates but also triggered building and facility smoke
alarms in several cases [13].
References
[3] Australian Government Bureau of Meteorology, “When bushfires make their own
weather,” 2018. [Online]. Available: http://media.bom.gov.au/social/blog/1618/when-
bushfires-make-their-own-weather/. [Accessed 1 June 2020].
[4] ScienceAlert, “Australia's Deadly Bushfires Are So Big They've Started Generating Their
Own Weather,” 1 January 2020. [Online]. Available: https://www.sciencealert.com/the-
bushfires-in-australia-are-so-big-they-re-generating-their-own-weather. [Accessed 1 June
2020].
[5] Australian Academy of Science, “How does fire move through the landscape?,” 11
December 2017. [Online]. Available: https://www.science.org.au/curious/earth-
environment/things-you-need-know-about-bushfire-behaviour. [Accessed 1 June 2020].
[6] Forest Fire Management Victoria, “History and incidents: Ash Wednesday 1983,” 9 June
2017. [Online]. Available: https://www.ffm.vic.gov.au/history-and-incidents/ash-
wednesday-1983. [Accessed 1 June 2020].
[7] Forest Fire Management Victoria, “History and Incidents: Black Friday 1939,” 10 June
2017. [Online]. Available: https://www.ffm.vic.gov.au/history-and-incidents/black-friday-
1939. [Accessed 1 June 2020].
[8] Australian Institute for Disaster Resilience, “Black Tuesday bushfires, 1967,” [Online].
Available: https://knowledge.aidr.org.au/resources/bushfire-black-tuesday/. [Accessed 1
June 2020].
[9] Australian Institute for Disaster Resilience, “Victoria, February 1936: Bushfire - South
East Victoria,” [Online]. Available: https://knowledge.aidr.org.au/resources/bushfire-
south-east-victoria/. [Accessed 1 June 2020].
[12] T. McIlroy, “Financial Review: Triage planning under way for Navy Evacuation of
Mallacoota,” 1 January 2020. [Online]. Available:
https://www.afr.com/politics/federal/triage-planning-under-way-for-navy-evacuation-of-
mallacoota-20200101-p53o5f. [Accessed 1 June 2020].
[13] T. Jones, “Gizmodo: Sydney Smoke Is Triggering Indoor Alarms,” 10 December 2019.
[Online]. Available: https://www.gizmodo.com.au/2019/12/sydney-smoke-is-triggering-
indoor-alarms/. [Accessed 22 June 2020].
[14] A. Love, “Airport Technology: Australia's bush fires and the effect on airport visibility,”
10 March 2020. [Online]. Available: https://www.airport-
technology.com/features/australias-bush-fires-and-the-effect-on-airport-visibility/.
[Accessed 22 June 2020].
Abstract
The domino effect has been responsible for several catastrophic accidents that have occurred in
petro-chemical processes and the storage industry. In this study, 326 accidents involving the domino effect that occurred between 1961 and 2017 in process plants, storage plants and the transportation of hazardous materials were analysed. Incidents were coded based on data obtained from different sources. The domino incident database analysis includes several categories such as fatalities over time, incidents over time, and incidents with respect to location, materials involved, causes and consequences. The analysis has shown that explosions are the most frequent cause of the domino effect, followed by fires. The accidents involving domino effects show that process plants and the transportation sector are the most frequent settings for a domino accident (33%), followed by storage terminals (20%).
The domino effect sequences were analysed using relative probability event trees, which may be
useful in further work to understand the domino effect and reduce the probability of its occurrence
in future. In the present study, we have proposed a three-stage methodology for the assessment of
risk due to the domino effect. The results show that quantitative risk assessment of escalation
hazard is fundamentally important to address preventive measures.
Keywords: Domino effect; Accident database; Risk analysis; Major accident hazard; Fires;
Explosions
1. Introduction
The domino effect has been responsible for several catastrophic accidents that have occurred in
petro-chemical processes and the storage industry. The consequences of these accidents are at
various levels and may affect not only the industrial sites but also people, economy and the
environment. The destructive potential of such accidents is widely recognized, but relatively little attention has been paid to this subject in the scientific and technical literature [1-4]. In the area of
risk assessment, the domino effect has been documented in the technical literature since 1947 [4-
5]. However, no well-assessed procedures have been developed for the quantitative assessment of
risk caused by the domino effect. Therefore, the assessment of domino accidental events remains
an unresolved problem. Moreover, there is widespread uncertainty in the escalation criteria and in
the identification of the escalation sequences that should be considered in the analysis of domino
scenarios, either in the framework of quantitative risk assessment or in land-use planning. The
probability of domino effects is relatively high due to the development of industrial plants,
proximity of such facilities to other installations, their inventories and the transportation of
hazardous substances [5].
The severity of domino accidents has prompted legislation and technical standards aimed at the assessment and prevention of accident escalation. European legislation has recognized the assessment of domino hazards since the first “Seveso” Directive (Directive 82/501/EEC), which was adopted in 1982. Currently, these requirements have been extended to
the assessment of possible ‘‘domino’’ scenarios both on-site and off-site. Such requirements are
compulsory for industrial sites falling under the obligations of the ‘‘Seveso-II’’ Directive (Council
Directive 96/82/EC), as amended by Directive 2003/105/EC [6-7]. Therefore, the domino effect is
a significant concern in risk analysis. A good understanding of the main hazards and features of
this phenomenon can help identify additional safety measures and inform facility siting studies, such as minimum safe distances between certain types of equipment.
In spite of the attention dedicated to it in legislation, there is no well-accepted approach
to date for the analysis of domino related hazards. Several authors have analysed the categories
involved in domino accidents. Bagster & Pitblado [8] and Khan and Abbasi [9] analysed the
probability of occurrence and adverse impacts of such ‘domino’ or ‘cascading’ effects. Cozzani
and Salzano [10-11] studied the contribution of a blast wave as a primary event and assessed the
overpressure threshold values for damage to equipment caused by blast waves originating from
primary accidental scenarios. Reniers [12] analysed the efficiency of current risk analysis tools for
preventing external domino accidents. They proposed a meta-technical framework for optimizing the prevention of external domino accidents and emphasized the importance of combining inherent safety criteria with conventional active and passive protection [12-13]. Antonioni et al. [14]
developed a methodology for quantitatively assessing the contribution of domino effects to overall
risk in an extended industrial area. Subsequently, several technical standards have introduced
preventive measures, such as safety distances, thermal insulation or emergency water deluges, etc.
to control and reduce the probability of domino events. However, a relevant uncertainty exists in
the threshold values assumed in such assessments [2,11].
Similar studies on pipeline domino effects were conducted by Ramírez-Camacho et al. [15]. The authors studied the possibility of a domino effect in parallel pipelines with different configurations. They concluded that the main risks associated with the domino effect are erosion by fluid-sand jets and the thermal action of jet fires on pipelines. Ramírez-Camacho et al. [16] analyzed about 1063 pipeline incidents to illustrate the associated risk. The Pipeline and Hazardous Materials Safety Administration (PHMSA) maintains a public database of pipeline incidents; a 20-year trend analysis by PHMSA on significant incidents reveals losses in the pipeline industry amounting to about 7 billion dollars in the USA alone [17]. Similar major losses due to pipeline incidents have been reported regularly by the European Gas Pipeline Incident Data Group (EGIG) in Europe, and by the Transportation Safety Board (TSB) and National Energy Board (NEB) of Canada [18-19].
In view of the above, the domino effect is an important aspect of risk assessment because an understanding of the main hazards and features of the phenomenon can be used to introduce additional safety measures. Past accident analysis in the chemical process industries places great importance on identifying accident triggers, sequences, and consequences. Retrospection can provide
pointers for developing accident prevention strategies.
Due to its importance, the compilation and analysis of 326 past accidents involving domino effects
have been carried out in this paper to study their behaviour. The analysis reveals that explosions
were responsible for domino effects in almost 57% of the cases, followed by fires (43%).
Explosions and fires can cause subsequent accidents, and their physical effects can trigger a
domino sequence [20]. The severity of the ensuing scenario can considerably increase the
influence of a domino effect. A historical analysis of domino effects carried out by Darbra et al.
[3] shows that 59.5% of accidents in seaport areas were due to fires, 34.5% were explosions and 6% were toxic clouds. The assessment carried out in the present study shows that 34% of domino accidents have occurred in process plants and the transportation sector, whereas 20% occurred in storage terminals of hazardous materials. Storage areas, which usually contain large amounts of hazardous materials, are also common settings for domino effect scenarios. This is evident from the recent incident in Tianjin, where a series of explosions killed 173 people and injured hundreds of others at
a container storage station [21].
The domino effect sequences were analysed using relative probability event trees. The most
frequent sequences were i) explosion→ fire (26%), ii) fire→ explosion (20.3%), and iii) fire→ fire
(12%). In the last decade, three major petroleum storage area accidents occurred in Buncefield,
UK (2005), Puerto Rico, USA (2009), and Jaipur, India (2009) [22]. In addition to this, Amuay
refinery accident occurred in Venezuela on 25 August 2012 [23]. A similar study on 226 domino incidents, mainly focused on developing countries, concluded that the most frequent domino accident sequence was explosion→fire (24.8%), followed by explosion→fire→explosion at about 8% [24].
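To illustrate how such relative sequence frequencies can be tallied from a coded accident database, a minimal sketch follows; the three records are invented placeholders, whereas the study itself coded 326 accidents.

from collections import Counter

# Invented placeholder records; each coded accident carries its domino sequence.
accidents = [
    {"id": 1, "sequence": ["explosion", "fire"]},
    {"id": 2, "sequence": ["fire", "explosion"]},
    {"id": 3, "sequence": ["explosion", "fire"]},
]

counts = Counter(" -> ".join(a["sequence"]) for a in accidents)
total = sum(counts.values())
for sequence, n in counts.most_common():
    print(f"{sequence}: {n / total:.0%}")  # relative probability of each observed sequence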
Few authors have analysed historical surveys of the domino effect. For example,
Abdolhamidzadeh et al. [1] have presented an inventory of 224 major process industry accidents
involving ‘domino effect’. Darbra et al. [3] examined 225 accidents involving the domino effect,
which occurred from 1961 to 2007. The aspects analysed included the accident scenario, the type
of accident, the materials involved, the causes and consequences, and the most common accident
sequences. Kourniotis et al. [25] examined a set of 207 major chemical accidents that occurred
between 1960 and 1998, 114 of which involved a domino effect according to their criteria. Ronza
et al. [26] performed a survey of 828 accidents in port areas and constructed relative probability
event trees to analyse the sequence of the 108 accident scenarios in which a domino effect was
observed. This paper addresses the development of revised criteria to assess the possibility of
escalation of accidental scenarios, resulting in domino accidental events. The main purpose of the
study was to obtain a better understanding of the causes of hazardous event escalation and
mitigation measures that prevent transforming minor accidents into disasters.
Lee [27] defines the domino effect as “a factor to take into account of the hazards that can occur
if leakage of a hazardous material can lead to the escalation of the incident”.
Delvosalle [28] considers all of the aspects and defines domino accidents as “a cascade of events
in which the consequences of a previous accident are increased both spatially and temporally by
the following ones, thus leading to a major accident”.
The AIChE-CCPS (American Institute of Chemical Engineers - Centre for Chemical Process
Safety) [29] defines a domino effect as “an incident that starts in one item and may affect nearby
items by thermal, blast or fragment impact, causing an increase in consequence severity or in
failure frequencies”.
A recent definition provided by Cozzani & Salzano [10] is: “a domino accidental event will be
considered as an accident in which a primary event propagates to nearby equipment, triggering
one or more secondary event resulting in overall consequences more severe than those of the
primary event”.
For pipelines, the domino effect can be defined as an event in which, for two or more pipes installed in the same corridor, a jet released from one of the pipes can seriously damage another one by abrasion (underground pipes) or thermal action (jet fire) [15-16].
Therefore, these definitions are used as a framework for the selection of accidents. Based on these
definitions, one can say that a relatively minor accident can initiate a sequence of events that causes damage over a larger area and leads to several severe consequences, which is typically referred to as a domino effect.
According to Reniers [12], domino effects are classified into two categories: single-company
(internal) domino effects and multi-company (external) domino effects. Internal domino effects
signify an escalation accident occurring inside the boundaries of one chemical plant. In external
domino effects, one or more secondary accidents occur outside the boundaries of the plant where
the primary event occurs. Although external domino effects often have more severe consequences
than internal domino effects, this phenomenon has received less attention from prevention
managers in existing chemical clusters. The reason for this relatively extraordinary observation is
threefold [12]. First, they are less frequent; second, their modelling is highly complex; and third,
they are difficult to investigate because several companies are involved. The analysis of technical
literature and case histories concerning past accidents shows that all of the accidental sequences,
where a relevant domino effect took place, have three common features, namely event occurrence, propagation and escalation vectors, as shown in Fig. 1 and discussed below [30].
Thus, it is important to understand that the propagation sequence is relevant only if it results in an
‘escalation’ of the primary event, i.e., triggered by an ‘escalation vector’ originating from the
primary scenario. By these definitions, all knock-on accidents including the accidents that occur
within a single process unit would fall under the umbrella term ‘domino effect’ [1].
To study the historical analysis of domino accidents, one of the critical tasks is to establish the
criteria for differentiating domino accidents from non-domino accidents. The hurdles involved in
interpreting records of past accidents create difficulties in conducting past accident analysis (PAA)
for stand-alone accidents and an increased level of difficulty for the PAA of domino events.
Therefore, it is important to develop the most appropriate definition for the domino effect. There
are certain well-known difficulties associated with the task of obtaining records of past accidents
[1, 31-32] as listed below:
a) a well-established mechanism was not developed for reporting and maintaining records of
domino accidents that occurred in many countries, particularly in the previous century;
c) integral inaccuracy of several available records; for example, explosion and fire accidents were
often recorded in a generic sense, and in several situations it was difficult to identify the
specific event type;
In developing countries, the lack of proper documentation and inventory of accidents obscure the
involvement of the domino effect. Hence, there is no method to confirm whether the accident had
involved a ‘domino effect’. To classify a series of accidents as a domino event, it is necessary to
establish that the event conforms to the definition of a domino effect. The usually incomplete and imprecise records of past accidents make it complicated to determine whether more than one
process unit was involved in an incident involving multiple accidents. We conducted this study
recognising these limitations and surveyed the records of the following sources:
In the present literature review, only accidents that occurred over the past 50 years were considered. Accidents that occurred prior to 1961 were excluded, as they happened in a technological environment in which safety measures and risk planning were not comparable with those currently in place. Although this reduced the number of accidents studied, it increased the quality and significance of the sample.
21 Sep 2001 | Petrochem | Explosion/Fire | Toulouse, France | 610 | 3000/30
19 Jan 2004 | Gas Processing | Fire/Explosion | Skikda, Algeria | 580 | 74/27
23 Mar 2005 | Refinery | Fire/Explosion | Texas, United States | 1500 | 170/15
11 Dec 2005 | Petroleum | Fire/Explosion | Hertfordshire, England | 1443 | 43/0
23 Oct 2009 | Refinery | Fire/Explosion | Bayamon, Puerto Rico | 6.4 | --/--
29 Oct 2009 | Petroleum | Explosion/Fire | Jaipur, India | 32 | 150/11
02 Apr 2010 | Refinery | Fire/Explosion | Washington, United States | - | 4/--
21 Apr 2010 | Upstream | Fire/Explosion | Gulf of Mexico, USA | 590 | --/--
As shown in Table 2, there has been a significant increase in the number of accidents over the years from the 1960s up to 2010, with the exception of the 1991-2000 decade. The implementation of the Clean Air Act in 1991 and of Process Safety Management programs could be one reason for fewer incidents during the 1991-2000 decade. During the implementation of the Clean Air Act, there were also significant changes in incident reporting, which could account for the decreasing trend in that decade. The increase in the remaining decades can be attributed to two main reasons. First, the chemical industry has undergone continuous expansion: more and larger process plants and storage areas have been created that are more prone to fire and explosion hazards. Second, access to information about accidents has improved gradually over time. A considerable number of accidents that occurred during the 1960s and before were not recorded, and the information was lost.
The number of fatalities has also been increasing every decade, with the exception of the 1991-2000 decade (see Fig. 2). The decade from 1981-1990 showed an exceptionally high number of fatalities due to two of the biggest accidents. The Mexico City accident in 1984 led to 650 deaths, which accounts for more than 60% of the total fatalities in that decade. This decade also saw the worst industrial accident (the Bhopal gas tragedy in 1984), but it is not factored into the computations because it was not a domino incident [36]. Even during the years 2001-2010, two major accidents in Neyshabur and Zahedan (Iran) accounted for more than 45% of the total fatalities in that decade. Thus, one or two major accidents in a decade have typically driven the increased number of fatalities. Major accidents, which are found to be domino in nature, should be controlled at the initial stages to minimize fatalities. Despite the number of accidents, the domino effect has received much less attention than other aspects of risk assessment. The number of fatalities has also been high in the 2001-2010 decade. Since 2011, the number of accidents has decreased every year. This decreasing trend could be due to increasing automation of industries, new strict regulations and prompt action in case of emergencies.
Fig. 2. Number of domino accidents in the chemical industry in each decade
3.2 Location-specific accidents
Accidents were divided into the following categories according to the country in which they
occurred:
the European Union (~16%; France, Italy and Germany have a larger number of domino accidents than the rest of the EU nations);
other developed countries (~58%; United States, Canada, Australia);
developing countries (~26%; Asia and North African countries).
A certain degree of bias may exist because preference was given to information on accidents that
occurred in Europe and the United States. This is because most of the institutions that manage the
databases used in the study are based in these countries and the information on them is generally
more exhaustive.
It has been observed that about 74% of accidents involving domino effects were recorded in developed countries, as illustrated in Fig. 4. The large scale of process plants and associated storage and transportation facilities in developed countries is likely the major contributor to this high percentage. In addition, most of the accident data were obtained from organizations based in developed countries, while possible loss of data and incomplete records in developing countries could contribute to their lower percentage. However, the available data seem sufficient to show the overall trend, and a representative sample is shown in Fig. 4.
3.3 Substance Type
Most of the domino accidents involved more than one substance. Although several substances may be involved in an accident, only the substance involved in the primary accident is recorded categorically. In domino accidents, the substance in the primary event may draw other substances into secondary or further events, leading to the involvement of a large number of substances in some of the worst accidents. A relatively small number of accidents involved only one substance. Flammable substances were involved in most of the accidents (89%) and were the substances most frequently found in domino accidents [3].
The analysis of 166 domino accidents shown in Table 3 illustrates that crude oil is by far the most frequently involved substance (43 cases, 26%), followed by natural gas (19%), propane (11%), LPG (10%), gasoline (10%) and diesel oil (4%). LNG and vinyl chloride incidents contributed about 4% of the incidents. Ethylene, chlorine, hydrogen and methanol were each involved in the same number of accidents (2% each).
equipment and their connectivity. The pipeline industry contributes to the major incidents in the transportation sector, with losses amounting to about 7 billion dollars in the USA [6]. Storage terminals are at risk due to the large amounts of chemicals retained in one place. Recent incidents like the Tianjin, China (2015) explosion and the West, Texas, United States (2013) explosion highlight the impact that storage terminals can pose.
3.5 Causes
The cause of the primary accident is a significant aspect of the analysis of domino effect accidents.
The Major Hazard Incident Data Service reported several generic causes: external events, human
error, impact failure, mechanical failure, instrument failure, violent reaction (runaway reaction),
upset process conditions and services failure [37]. Although some of the generic causes for
accidents are self-explanatory (for example, violent reaction), the accidents due to human error
have greater complexity because other causes, such as violent reaction or mechanical failure could
also be a result of human error.
Of all the external causes, accidents (fire and explosion) in other plants were the most frequent
types. When the primary event was an explosion, it was typically impossible to ascertain from the
information available whether it was the blast wave or the fragment projection that caused the
secondary accident. When human error was the generic cause of the accident, general operations,
general maintenance, overfilling and procedural failures were the main specific causes. The
specific causes are shown in Fig. 6.
Figure 6. Specific causes of domino accidents by generic cause: External (fire, explosion, lightning, flooding, sabotage or vandalism, temperature extremes); Human (general operation, general maintenance, overfilling, management, procedural failures, design error, accidental venting, failure to connect/disconnect, draining accident); Impact (ship-to-ship collision with barges, heavy object, other vehicle, rail accident with no other vehicle).
4. The domino effect methodology
The methodology proposed here for evaluating the potential for a domino effect involves a three-
stage procedure as illustrated in Fig. 7. This staged approach has an increasing degree of
complexity. In any hazard analysis, it is initially prudent to evaluate whether it is possible to
demonstrate acceptability on the basis of the “consequences” being tolerable or non-hazardous
(i.e. acceptable) followed by a second stage that considers whether the “probability or frequency”
is tolerable. The third stage involving risk assessment is only necessary if it is not possible to show
that the site separation was acceptable from a consequence and a frequency viewpoint. If it is still
not possible to demonstrate risk tolerability at the third stage, then it will be necessary to
investigate and include risk mitigation measures. This approach is reflected in the proposed
methodology for domino assessment.
Stage 1 includes an assessment of the maximum hazard ranges for sites A and B and an evaluation
of whether these hazard zones extend to susceptible critical plants on their site,
Stage 2 includes an assessment of whether the frequencies of all incidents affecting critical plants
exceed some notional threshold value, and
Figure 7. Flowchart of the three-stage domino assessment methodology: assess maximum hazard ranges for Sites A and B; check whether the frequency of interaction exceeds standard or critical limits; if so, apply risk mitigation measures, otherwise the outcome is acceptable with no adverse effect.
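A minimal sketch of this staged screening logic follows, under assumed placeholder thresholds; the hazard-range, separation and frequency values shown are illustrative, not values prescribed by the methodology.

def domino_assessment(hazard_range_m, separation_m,
                      interaction_freq_per_yr, freq_threshold_per_yr=1e-4,
                      risk_tolerable=False):
    # Stage 1: consequence screening - do the hazard zones reach the neighbouring critical plant?
    if hazard_range_m < separation_m:
        return "Acceptable: no adverse effect (Stage 1)"
    # Stage 2: frequency screening - is the interaction frequency below a notional threshold?
    if interaction_freq_per_yr < freq_threshold_per_yr:
        return "Acceptable: interaction frequency below threshold (Stage 2)"
    # Stage 3: detailed risk assessment; if risk remains intolerable, add mitigation and reassess.
    if risk_tolerable:
        return "Acceptable after detailed risk assessment (Stage 3)"
    return "Not acceptable: apply risk mitigation measures and reassess"

print(domino_assessment(hazard_range_m=450, separation_m=400,
                        interaction_freq_per_yr=5e-4))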
4.1 Input data for analysis
Accurate data are necessary to evaluate the correct effects of any accident. If additional and
accurate data are available, the prediction of impacts will have higher reliability. The information
and data required for the domino assessment methodology are listed below.
After the identification of primary accidental events, the escalation vectors associated with each
scenario should be defined (step 2 in Fig. 1). The propagation of the primary event due to the
escalation vectors and its effects typically generate at least one secondary target. Thus, the physical
effect due to the primary event that caused damage to the exposed individuals is often different
from that responsible for escalation. Therefore, it is crucial to understand that each accidental
scenario should be associated with a “vulnerability vector” (used to estimate the damage to the
exposed individuals) and to one or more ‘escalation vectors’.
Any scenario can generate the following three escalation vectors. Escalation vectors and criteria for the primary and probable secondary scenarios are shown in Table 4 [30].
Table 4. Escalation vectors and criteria for the primary and probable secondary scenarios
Primary scenario | Escalation vector | Escalation criterion | Expected secondary scenarios
Mechanical explosion | Fragments, overpressure | 16 kPa | Pool fire, jet fire, BLEVE, toxic release
Confined explosion | Overpressure | Fragment impact | All
BLEVE | Fragments, overpressure | Fragment impact | All
VCE | Overpressure, fire impingement | 16 kPa | All
Pool fire | Radiation, fire impingement | 15 kW/m2 | All
Jet fire | Radiation, fire impingement | 15 kW/m2 | All
Flash fire | Fire impingement | LFL | Tank fire
Fireball | Radiation, fire impingement | Engulfment | Tank fire
Toxic release | - | - | -
Therefore, the selection of credible escalation scenarios based on reliable models for equipment
damage is a central issue to allow the assessment and the control of risk due to domino accidents.
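As a simple sketch of how the Table 4 criteria could be applied in a screening tool, the snippet below checks whether the physical effect of a primary scenario at a target exceeds its escalation threshold; the effect values passed in would come from separate consequence models and are illustrative only.

# Escalation criteria taken from Table 4 (overpressure in kPa, heat flux in kW/m2).
ESCALATION_CRITERIA = {
    "mechanical explosion": ("overpressure_kPa", 16.0),
    "VCE":                  ("overpressure_kPa", 16.0),
    "pool fire":            ("heat_flux_kWm2", 15.0),
    "jet fire":             ("heat_flux_kWm2", 15.0),
}

def escalation_credible(primary_scenario, effects_at_target):
    """Return True if the primary scenario's escalation vector exceeds its Table 4 threshold."""
    vector, threshold = ESCALATION_CRITERIA[primary_scenario]
    return effects_at_target.get(vector, 0.0) >= threshold

# Example: a jet fire producing 22 kW/m2 at a nearby vessel exceeds the 15 kW/m2 criterion.
print(escalation_credible("jet fire", {"heat_flux_kWm2": 22.0}))  # True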
5. Conclusion
Domino effects have featured in the tragic history of many past accidents, and their analysis offers a more realistic way of addressing the intrinsic risks of chemical and petrochemical plants. Assessment of domino accidents is a difficult task due to the involvement of more than one flammable substance and secondary or tertiary events. Due to the complex nature of such accidents, very few studies on their analysis have been published. The present review and analysis of published literature has compiled records of 326 major domino events spread over five decades. The domino accidents have been summarized, indicating the number of fatalities, locations, substances involved and domino sequences.
The number of accidents over five decades has been increasing due to the expansion of chemical
industries and improvements in accessing information on accidents. In addition, we observed that
one or two major accidents in a decade led to a higher number of fatalities in the last five decades.
More than 75% of domino accidents occurred in developed countries, which seems rational due to
a large number of industries located there. There is also the possibility of loss of data regarding
accidents in developing countries, thereby leading to a lower percentage. However, the review and
analysis has also shown that domino accidents in underdeveloped countries have higher severity
compared with countries that are technologically more advanced.
The analysed accidents have indicated that fires and explosions are the primary domino effect
events. Thus, precautionary measures should be adopted while handling flammable materials,
which are the most common substances in domino accidents. The most frequent sequences are
explosion→fire, fire→ explosion and fire→fire. The results show that the quantitative assessment
of escalation hazard is a key tool to understand the credible and critical domino scenarios in
complex industrial sites. Therefore, significant efforts should be devoted to improving safety in
such operations, especially in storage facilities where most transfer operations are performed.
Past accident analysis enables an understanding of how accidents occur and provides useful inputs
for the development of loss prevention strategies, and therefore, it is an important component of
loss prevention Research and Development (R&D) activities. Consequence analysis in the case of
domino accidents is a complex task as no clear guidelines for identifying it are available. The
escalation criteria described in this study may represent an initiating point in quantifying risk of
domino effects.
The domino effect is an important aspect in risk analysis, as knowledge of the main hazards and
features of this phenomenon can be used to identify additional safety measures. However, risk
assessment techniques have intrinsic limitations due to the complexities introduced by event
interactions and multi component or multiphase systems encountered in real situations. Thus, it is
imperative to study the risk assessment of major past accidents and thereby to take appropriate
measures to prevent major accidents in the future.
References
[1] B. Abdolhamidzadeh, T. Abbasi, D. Rashtchian, and S. A. Abbasi, Domino effect in process-
industry accidents - An inventory of past events and identification of some patterns. Journal
of Loss Prevention in the Process Industries, 24(5), 575–593, 2011, doi:
http://doi.org/10.1016/j.jlp.2010.06.013
[2] V. Cozzani, G. Gubinelli and E. Salzano, Escalation thresholds in the assessment of domino
accidental events. Journal of Hazardous Materials, 129(1–3), 1–21, 2006, doi:
http://doi.org/10.1016/j.jhazmat.2005.08.012
[3] R. M. Darbra, A. Palacios, and J. Casal, Domino effect in chemical accidents: Main features
and accident sequences. Journal of Hazardous Materials, 183(1–3), 565–573, 2010, doi:
http://doi.org/10.1016/j.jhazmat.2010.07.061
[4] F. Kadri, E. Chatelet and G. Chen, Method for quantitative assessment of the domino effect
in industrial sites. Process Safety and Environmental Protection, 91(6), 452–462, 2013,
doi: http://doi.org/10.1016/j.psep.2012.10.010
[5] F. Kadri and E. Chatelet, Domino Effect Analysis and Assessment of Industrial Sites : A
Review of Methodologies and Software Tools. International Journal of Computers and
Distributed Systems, (2), 1–10, 2013
[7] Major Accident Hazards Bureau, Guidance on the preparation of a safety report to meet the
requirements of Directive 96/82/EC as amended by Directive 2003/105/EC (Seveso II).
Institute for the Protection and Security of the Citizen, 2005, doi: http://doi.org/92-79-
01301-7
[9] F. I. Khan and S. A. Abbasi, An assessment of the likelihood of occurrence, and the damage
potential of domino effect (chain of accidents) in a typical cluster of industries. Journal of
Loss Prevention in the Process Industries, 14(4), 283–306, 2001, doi:
http://doi.org/https://doi.org/10.1016/S0950-4230(00)00048-6
[10] V. Cozzani and E. Salzano, The quantitative assessment of domino effects caused by
overpressure: Part I. Probit models. Journal of Hazardous Materials, 107(3), 67–80,
2004a, doi: http://doi.org/10.1016/j.jhazmat.2003.09.013
[11] V. Cozzani and E. Salzano, Threshold values for domino effects caused by blast wave
interaction with process equipment. Journal of Loss Prevention in the Process Industries,
17(6), 437–447, 2004b, doi: http://doi.org/10.1016/j.jlp.2004.08.003
20
[12] G. Reniers, An external domino effects investment approach to improve cross-plant safety
within chemical clusters. Journal of Hazardous Materials, 177(1), 167–174, 2010, doi:
http://doi.org/https://doi.org/10.1016/j.jhazmat.2009.12.013
[13] V. Cozzani, A. Tugnoli and E. Salzano, Prevention of domino effect: From active and
passive strategies to inherently safer design. Journal of Hazardous Materials, 139(2), 209–
219, 2007, doi: http://doi.org/10.1016/j.jhazmat.2006.06.041
[14] G. Antonioni, G. Spadoni and V. Cozzani, V., Application of domino effect quantitative risk
assessment to an extended industrial area. Journal of Loss Prevention in the Process
Industries, 22(5), 614–624, 2009, doi: http://doi.org/10.1016/j.jlp.2009.02.012
[17] US DOT Pipeline and Hazardous Materials Safety Administration (PHMSA). Pipeline
Significant Incident 20 Year Trend. Retrieved December 6, 2017, from
https://hip.phmsa.dot.gov/analyticsSOAP/saw.dll?Portalpages
[18] EGIG. “Gas Pipeline Incidents 9th Report of the Europen Gas Pipeline Incident Data
Group 1970 -201” (No. EGIG 14.R.0404). 2015. Groningen, Netherlands. Retrieved from
https://www.egig.eu/uploads/bestanden/ba6dfd62-4044-4a4d-933c-07bf56b82383
[19] National Energy Board. (2017). 2000-2013 Pipeline Incident Reporting. Retrieved
December 6, 2017, from https://www.neb-
one.gc.ca/sftnvrnmnt/sft/archive/pplnncdntgrprtng/pplnncdts/pplnncdts-eng.html
[20] J.P. Gupta, G. Khemani, and M. Sam Mannan, Calculation of Fire and Explosion Index
(F&EI) value for the Dow Guide taking credit for the loss control measures. Journal of
Loss Prevention in the Process Industries, 16(4), 235–241, 2003, doi:
http://doi.org/https://doi.org/10.1016/S0950-4230(03)00044-5
[21] C. Neame, Tianjin port explosions. International Commerce (Vol. August). London, 2015,
Retrieved from http://www.hfw.com/downloads/HFW-Tianjin-Port-explosion-August-
2015-1.pdf
[22] K.B. Mishra, K.D. Wehrstedt and H. Krebs, Lessons learned from recent fuel storage fires, J.
Fuel Processing Technology.107, 166–172, 2013.
21
[23] K.B. Mishra, K.D. Wehrstedt and H. Krebs, Amuay refinery disaster: The aftermaths and
challenges ahead. Fuel Processing Technology, 119(Supplement C), 198–203, 2014, doi:
http://doi.org/https://doi.org/10.1016/j.fuproc.2013.10.025
[24] Y. Chen, M, Zhang, P. Guo and J. Jiang, Investigation and analysis of historical Domino
effects statistic. Procedia Engineering, 45(1998), 152–158, 2012, doi:
http://doi.org/10.1016/j.proeng.2012.08.136
[25] S.P. Kourniotis, C.T. Kiranoudis, and N.C. Markatos, Statistical analysis of domino
chemical accidents. Journal of Hazardous Materials, 71(1), 239–252, 2000, doi:
http://doi.org/https://doi.org/10.1016/S0304-3894(99)00081-3
[26] A. Ronza, S. Félez, R.M. Darbra, S. Carol, J.A Vílchez and J. Casal, Predicting the
frequency of accidents in port areas by developing event trees from historical analysis.
Journal of Loss Prevention in the Process Industries, 16(6), 551–560, 2003, DOI:
http://doi.org/https://doi.org/10.1016/j.jlp.2003.08.010
[27] F.P. Lees, Lee's Loss Prevention in the Process Industries (Eds. S. Mannan), 3th ed., Elsevier
Butterworth-Heinemann, Burlington, MA. 2005
[29] CCPS, Guidelines for Evaluating the Characteristics of Vapour Cloud Explosions, Flash
Fires, and BLEVEs. (A. I. of C. Engineers, Ed.) (2nd ed.). New York, 2000.
[30] V. Cozzani, G. Gubinelli, and E. Salzano, Escalation thresholds in the assessment of domino
accidental events. Journal of Hazardous Materials, 129(1–3), 1–21, 2006, doi:
http://doi.org/10.1016/j.jhazmat.2005.08.012
[31] P.M.W. Körvers and P.J.M. Sonnemans, Accidents: A discrepancy between indicators and
facts! Safety Science, 46(7), 1067–1077, 2008, doi:
http://doi.org/https://doi.org/10.1016/j.ssci.2007.06.004
[32] T. van der Schaaf and L. Kanse, Biases in incident reporting databases: an empirical study
in the chemical process industry. Safety Science, 42(1), 57–67, 2004, doi:
http://doi.org/https://doi.org/10.1016/S0925-7535(03)00023-7
[33] T.A. Kletz, Lessons from Disaster: How organisations have no memory and accidents
recur. Institution of Chemical Engineers Symposium Series (Vol. 1). Wiley Subscription
Services, Inc., A Wiley Company, 1993, doi: http://doi.org/10.1002/apj.5500020211
22
[35] F.P. Lees, Chapter 24 - Emergency Planning A2 - Mannan, Sam BT - Lees’ Loss Prevention
in the Process Industries (Fourth Edition). Oxford: Butterworth-Heinemann, 2012, doi:
http://doi.org/https://doi.org/10.1016/B978-0-12-397189-0.00024-0
[36] J.P. Gupta, The Bhopal gas tragedy: could it have happened in a developed country?
Journal of Loss Prevention in the Process Industries, 15(1), 1–4, 2002, doi:
http://doi.org/https://doi.org/10.1016/S0950-4230(01)00025-0
[37] HSE, Major Hazard Incident Data Service (MHIDAS). OSH-ROM. London, UK: UK HSE,
2007.
23
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Storage tank farms are essential industrial facilities for accumulating oil, petrochemical, and gaseous products. Since tank farms contain huge quantities of fuel and hazardous materials, they are prone to serious accidents such as fires, explosions, spills, and toxic releases, which may cause severe impacts on human health, the environment, and property.
This article investigates the flame characteristics of the gasoline tank fires in the 2009 IOCL Jaipur incident. Although a safe layout cannot prevent initiating accidents, it effectively controls and reduces their adverse impact. Therefore, appropriate modeling studies are needed to simulate the potential threat of large-scale pool/tank fires in fuel storage depots for quantitative consequence analysis and effective preventive measures. The estimated flame height to diameter ratio lies between 0.9 and 1.5, which is within the observed range, while the estimated surface emissive power lies between 27 and 123 kW/m² with respect to time, as determined by adopting various models for large-scale tank fires. The results indicate a turbulent flame with a constant mass burning rate per unit area, different flame heights, and different heat release rates. The irradiances (E_r) are assessed with a point source model and are validated against the DNV PHAST risk assessment software with a maximum percentage error of 25%.
This paper highlights the scale of threat posed to people, assets and the environment by
hydrocarbon storage tank fires, and discusses tank fire hazards and key factors that influence the
prevention and suppression of oil storage tank fires.
Keywords: Hydrocarbon pool fire; Flame height; Thermal radiation; Surface emissive power; Fire
safety
1. Introduction
With the rapid development of the petrochemical industry and the completion of petroleum oil's
strategic reserve, the safety of large oil depots and large storage tanks has become more and more
critical. However, in recent years fire and explosion accidents have occurred frequently in large-scale storage tanks, and these accidents often cause severe losses. In the last decade, three major large-scale storage tank fire accidents exhibited striking similarities: the Buncefield oil storage depot accident in the UK on 11 December 2005, the Caribbean Petroleum Corporation fuel depot accident in Puerto Rico, USA, on 23 October 2009, and the Indian Oil Corporation Ltd (IOCL) accident in Jaipur, India, on 29 October 2009 [1]. The Amuay refinery accident in Venezuela on 25 August 2012 showed similar features [2]. A wide range of similarities has been
observed among these accidents, most notably the vapour cloud explosion (VCE) followed by the
multiple tank fires. These accidents demonstrate the large-scale destruction of the surroundings
and serious environmental implications and underline the necessity of appropriate measures to
prevent such devastating accidents [3]. Hence, learning from past accidents is essential for the
future safe operations of storage tanks.
Many past accident reports state that tank fires are a common form of disaster in the petroleum industry and produce intense radiation and tall flames, which can seriously affect surrounding personnel and equipment and can also lead to a boiling liquid expanding vapor explosion or a vapor cloud explosion [4].
Of the different types of fire that may occur at a petroleum facility - jet fire, flash fire, fireball, and pool fire - the pool fire is of greatest importance because it is the most frequent, constituting more than 60% of all fire incidents [5]. Pool fires often trigger explosions, just as new pool fires often result from explosions. Pool fires can be vast and persistent, and challenging to douse [6]. A buoyancy-
driven, turbulent non-premixed flame is formed above the pool. The resulting fire is distinguished
from other fires by a very low initial momentum and the propensity to be strongly influenced by
buoyancy effects. Therefore, the research about hazard analysis and damaged area of the pool fire
in tank fire is of great significance for its prevention.
Compared to other types of fires, full-surface fires in a petroleum oil storage tank are characterized by a high burning rate, high flame temperature, and intense radiant heat, and they pose a considerable challenge for the safety assessment of the tank after the fire [7]. Moreover, the wind's direction
and magnitude often play a decisive role in the progress and spread of the fire. Once the fire
becomes challenging to control, firefighters will encounter many difficulties in the process of
firefighting and rescue [8].
The hazard assessment of pool fire incidents comprises an estimation of the mass burning rate, flame geometry, flame temperature, and, most importantly, the radiation emitted by the flame [9]. The thermal radiation evaluation plays a significant role in assessing the resistance of equipment in the proximity of the fire and in verifying the possibility of domino effects [10]. A realistic incident assessment is important to avoid overly conservative results that would impose uneconomic geometric constraints, such as spacing or barriers, and other potential active and passive preventive measures.
Additionally, fire risk evaluation involves the application of developed risk criteria to assess the risk level. Fire risk treatment improves existing risk control measures, develops new ones, and implements these measures to reduce fire risk [11]. Fire risk analysis is therefore only one part of the fire risk management process; it serves as the foundation of regulatory decision-making on whether to implement risk-reducing actions or to choose appropriate risk treatment measures [12-13]. Research related to fire risk analysis is therefore critical and essential.
One of the motivations of this study was to expand the fundamental understanding of fuel fire
dynamics to improve the ability to predict hazards from a fire in the given accident scenario,
establish the utility of forensic tools, and validate empirically-based correlations used to model
fire scenarios. In order to analyse the given scenario, one must first determine, by modeling or by
direct observation, how massive the fire is and how intensely it is burning, requiring knowledge
of such characteristics as flame geometry, flame temperature, and heat release rate. Once the fire's
size and intensity are established, heat transfer models can be used to predict hazard levels to the
fire surroundings [17-21]. The ability to simulate real-life accident scenarios has been limited, however, because tests with small-scale fires do not fully reproduce the physics of large-scale fires, and outdoor tests with larger fires are subject to poorly controlled ambient conditions [19].
In most cases, photographic or video images were used to characterize the fire's shape, yet visual
methods have limited effectiveness in large and sooty fires. Differences in the methods used to
define and measure flame geometry have also contributed to significant scatter in the results.
Through characterization of the temperature field in the fire plume, the overall geometry of the
fire could be described in greater detail than by looking at video images alone.
The affected plant, shown in Fig. 1, covered 485,625 m² and contained 11 tanks with a total capacity of about 110,370 m³ to store gasoline, diesel, and kerosene. In addition, there were five underground tanks, each of 70 m³ capacity, to store gasoline and anhydrous alcohol [22]. Seven buildings, a lubricating oil warehouse, and a truck loading facility also fell within the affected area. A 3 m high compound wall enclosed the entire site, which was bordered by trees and buildings.
Fig. 1. Pre-accident view of IOCL Jaipur depot tank storage site and the immediate
neighbourhood
3.2. Jaipur IOCL Incident
On October 29, 2009, at 19:30 hrs (IST), a series of explosions in rapid succession was followed by fires that engulfed 11 large fuel storage tanks - a significant proportion of the IOCL Jaipur oil storage and transfer depot in the Sitapura industrial area, India. A powerful explosion occurred, followed by a large fireball with a temperature of about 1727 °C that covered the entire installation [23]. The consequent fires involved a number of fuel storage tanks on the site. The first explosion was followed by further explosions leading to multiple tank fires, which involved 9 tanks containing different materials immediately after the explosion. Two additional tanks subsequently became involved owing to the highly intense heat radiation from the 9 burning tanks [22]. Thus, 11 large storage tanks of various sizes exploded and caught fire, resulting in the complete destruction of the facilities and buildings within the premises of the terminal, as shown in Fig. 2. The fire spread to the 2 neighbouring tanks without explosions because of the high thermal radiation. For large, black, smoky hydrocarbon fires, the estimation of the critical thermal
separation distance is not only dependent on the total fire but also on the height of a hot and clear
burning zone. Additionally, for multiple tank fires, as occurred in Jaipur, there is a considerable
increase in the mass burning rate, the flame height, the surface emissive power, as well as the
thermal separation distance [24].
Subsequently, the depot was completely destroyed and widespread damage was caused to the
neighbouring properties. This devastating accident resulted in 11 fatalities and injuries to more
than 150 people. About 5000 people had to be evacuated from their homes in the adjacent area.
The fire burned for 11 days, destroying most of the site and emitting a large plume of smoke. The
cost of the incident in terms of damage to property and loss of business is estimated to be
approximately USD 60 million. In the aftermath of the incident, a committee of the Ministry of Petroleum and Natural Gas (MoPNG) was formed to oversee the investigation.
The management of IOCL took the decision to allow the petroleum products to burn out in order
to avoid further aggravation of the accident in the interest of public safety [22]. IOCL personnel
and local firefighters were trained only for a worst-case scenario involving one tank on fire, rather
than 11 tank fires at the same time caused by a vapor cloud explosion. Without sufficient
equipment or training, local responders attempted to fight the multiple tank fires but failed as the
fire encompassed more tanks.
a) Insufficient equipment: Tank terminals like IOCL, Jaipur were not required to conduct a risk analysis in which they consider the potential of a vapor cloud explosion and multiple tank fires. Neither the IOCL Jaipur depot nor the fire department had the requisite amount of foam and adequate equipment to effectively fight and control a fire involving multiple tanks.
b) Insufficient preplanning with local fire departments or firefighter training at the site level:
IOCL, Jaipur did not preplan with local emergency responders, set up mutual aid with
other hazardous materials sites, or adequately train facility personnel to address a tank
farm fire involving multiple tanks. In fact, training for IOCL, Jaipur terminal personnel
was limited only to fighting a fire involving one tank, not an incident involving multiple
tanks.
c) Limited emergency preparedness: Local fire departments did not have sufficient training
or resources to respond to industrial fires and explosions, which resulted in firefighting
delays from insufficient foam and equipment. The limited training and resources of the
local fire departments resulted in an inefficient firefighting operation.
Fig. 2. IOCL Jaipur Depot on the (a) first night and (b) second day of the fire that followed the explosion. Multiple tank fires are clearly visible.
Fig. 3. Damaged tanks after the incident (a, b)
Fig. 4. Warehouse and crushed drums on the loading bay
4. Analysis of tank oil fire
During combustion, part of the heat is lost to the surroundings by thermal radiation and convection, while the rest is fed back to the oil surface by heat conduction through the tank wall, convection from the hot smoke, and thermal radiation from the flame. This feedback keeps flammable vapor steadily evolving from the surface and sustains the combustion. Vapor combustion, heat feedback, and evaporation thus form a closed loop; only by intervening in this loop and slowing it down can the combustion be stopped.
A high-level tank fire (the distance between the fuel surface and the tank top is D/10 to D/5) is likely to behave as a pool fire. Its flame is typically a turbulent buoyant diffusion flame, with a cone-like base. A negative pressure of about 0.5 bar is created at the tank center near the fuel surface. This negative pressure draws the surrounding air towards the tank's axis, but because the fuel level is high, air cannot be drawn into the tank, and the air forms only an inverted circular valley, as shown in Figs. 6a and 6b [25].
When the fuel level decreases to the range D/5 to D/3 (a middle-level tank fire), air is drawn into the tank as ullage forms, but at the same time the burning gas rises, and the cold air and hot gases form an irregular interlocking pattern. Many "fireballs" appear at the top of the tank. The flame has a neck and also shows apparent pulsation with mushroom clouds, as shown in Fig. 6c [25].
Once the fuel level is lower than D/2, cold air enters the tank but has difficulty penetrating the flame. As the oxygen concentration in the ullage declines, the combustion and the tank's negative pressure become progressively weaker, and a large quantity of fuel-rich combustion products and smoke escapes from the tank along the wall. The flame is shorter, and the neck disappears. There are some non-burning, fuel-rich "black holes" near the perimeter of the tank, as shown in Fig. 6d [25].
Fig. 6. Typical tank fire flames: a. high level, b. some fuel burnt out, c. middle level, d. low level [25]
The study of a pool fire in a burning tank mainly covers the geometry of the liquid pool; the shape, height, and temperature of the flame; the heat radiation; and the degree of harm caused by the flame. According to how the liquid pool's geometry changes over time, pool fires within a fire barrier can be divided into two types: those with a constant pool geometry and those whose pool geometry changes over time. Among the combustion parameters that determine the overall structure of a pool fire, the most important are the flame height and the radiation intensity.
The time-averaged mass burning rate $\bar{m}_f''$ can be calculated by multiplying the time-averaged burning velocity by the liquid density of the fuel:
$\bar{m}_f'' = \rho_f\,\bar{v}_f$  (1a)
This equation is valid for a wide range of gaseous and liquid fuels [28].
For the maximum mass burning rate, the following equations are given [28]:
$\bar{m}_f''(D) = \bar{m}_{f,\max}''\,(1 - e^{-k\beta D}) = \bar{m}_{f,\max}''\,\varepsilon_F$  (1b)
$\bar{m}_f''(D) = \rho_f\,\bar{v}_{f,\max} \approx 1.27\times10^{-6}\,(\Delta H_c/\Delta H_v)\,(1 - e^{-k\beta D})\,\rho_f$  (1c)
The estimated $\bar{m}_f''(D)$ for a large gasoline tank fire is 0.055 kg/(m² s). The maximum mass burning rate of gasoline in a large tank on fire reported by various authors ranges from 0.055 to 0.083 kg/(m² s) [16,29].
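As a rough illustration of Eq. (1b), the following Python sketch evaluates the diameter dependence of the mass burning rate for gasoline. The maximum rate of 0.055 kg/(m² s) is the value quoted in the text, whereas the extinction-coefficient product kβ ≈ 2.1 m⁻¹ is an assumed literature-style value for gasoline, not a number given in this paper.

import math

def mass_burning_rate(D, m_max=0.055, k_beta=2.1):
    """Diameter-dependent mass burning rate, Eq. (1b): m''(D) = m''_max * (1 - exp(-k*beta*D)).

    m_max  : maximum mass burning rate in kg/(m^2 s) (0.055 for gasoline, from the text)
    k_beta : extinction-coefficient / mean-beam-length product in 1/m (assumed value)
    """
    return m_max * (1.0 - math.exp(-k_beta * D))

if __name__ == "__main__":
    for D in (1.0, 5.0, 24.0):
        # For a 24 m tank the exponential term is negligible, so m'' is essentially m''_max.
        print(f"D = {D:5.1f} m  ->  m'' = {mass_burning_rate(D):.4f} kg/(m^2 s)")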
The flame height is a dynamic parameter, and the flame tip is often taken to be the point of 50%
intermittency [28], because the visual effect of the flame is not entirely representative of its
height. The best known and most widely adopted correlation for calculating the ratio between the
flame height and the diameter of a circular pool is described by Bubbico et al. [30] and Chen and Wei [31].
The flame height is generally taken as the maximum visible height or the time-averaged visible height [32]. The time-averaged relative visible flame height $\bar H/D$ and the maximum relative visible flame height $(H/D)_{\max}$ depend on the Froude number $Fr_f$ and the dimensionless wind velocity $u_W^{*}$ and can be estimated by the following correlations [33]:
$\bar H/D = a\,Fr_f^{\,b}\,(u_W^{*})^{c}$  (2a)
and
$(H/D)_{\max} = a\,Fr_f^{\,b}\,(u_W^{*})^{c}$  (2b)
There are further correlations with other empirical parameters; the experimental parameters a, b, and c are given in Table 1 [28].
Table 1. Parameters for the determination of the dimensionless visible flame heights used in Eqs. (2a, b) [34]
Correlation | a | b | c | Comment
Muñoz | 8.44 | 0.298 | -0.126 | Measured on gasoline and diesel pool fires: (H/D)max
The height of the visible flame is a function of the pool diameter and the burning velocity [35].
For the IOCL Jaipur incident, an assessment of the maximum, visible and relative flame heights
of gasoline tank fires was conducted assuming that the ‘c’ parameter in Eq. (2b) was zero because
there was no wind effect. The modified equation can therefore be written as:
$(H/D)_{\max} \approx a\,Fr_f^{\,b} = a\left(\dfrac{\bar m_f''}{\rho_a\sqrt{gD}}\right)^{b}$  (3a)
Thus, the estimated $(H/D)_{\max}$ ratio for the gasoline tank (D = 24 m) fire is 1.5. For a large hydrocarbon pool fire with D ≥ 9 m, the time-averaged relative flame height $\bar H/D$ is calculated using Eq. (2a) [33] and Table 1 and approximates to
$(\bar H/D)_{\rm calc} \approx a\,Fr_f^{\,b} = 7.74\left(\dfrac{\bar m_f''}{\rho_a\sqrt{gD}}\right)^{0.375} = 0.9$  (3b)
With $\bar m_{f,\max}''$ (D = 24 m) = 0.055 kg/(m² s) for a gasoline pool fire, $\rho_a$ = 1.29 kg/m³, and the parameters a and b from Table 1, the calculations using Eqs. (3a, b) give the values quoted above, $(H/D)_{\max} \approx 1.5$ and $\bar H/D \approx 0.9$.
An empirical relationship was observed between the maximum and average flame height. Thus, a single correlation could be used to estimate both dimensions [16]:
$(H/D)_{\max} \approx 1.6\,(\bar H/D)$  (4)
The empirical relationship in Eq. (4) was also considered valid for the IOCL Jaipur tank fires.
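The flame height correlations of Eqs. (3a) and (3b) can be evaluated directly. The sketch below is a non-authoritative illustration that uses the still-air parameters a = 8.44, b = 0.298 (Table 1) for the maximum height and a = 7.74, b = 0.375 (Eq. (3b)) for the time-averaged height, and reproduces the ratios of about 1.5 and 0.9 quoted above.

import math

RHO_AIR = 1.29   # ambient air density, kg/m^3 (value used in the text)
G = 9.81         # gravitational acceleration, m/s^2

def froude(m_burn, D):
    """Combustion Froude number Fr_f = m'' / (rho_a * sqrt(g D)), cf. Eq. (3a)."""
    return m_burn / (RHO_AIR * math.sqrt(G * D))

def flame_height_ratio(m_burn, D, a, b):
    """Still-air flame height to diameter ratio, H/D = a * Fr_f**b (Eqs. 3a, 3b)."""
    return a * froude(m_burn, D) ** b

D, m_burn = 24.0, 0.055                                     # gasoline tank in the Jaipur case
hd_max = flame_height_ratio(m_burn, D, a=8.44, b=0.298)     # ~1.5, Eq. (3a)
hd_avg = flame_height_ratio(m_burn, D, a=7.74, b=0.375)     # ~0.9, Eq. (3b)
print(hd_max, hd_avg, hd_max / hd_avg)                      # compare the last ratio with the factor of about 1.6 in Eq. (4)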
5.2.2. Height of the Clear Burning Zone by MSFM
Eq. (5a) is valid only for gasoline and kerosene fires. Within the range of validity of the MSFM exponential correlation between $\bar\eta_{\rm rad}^{\rm exp}$ and the pool diameter [36], the following holds:
$\bar\eta_{\rm rad,cl}(D) = \bar\eta_{\rm rad}^{\rm exp}(D) = 0.35\,e^{-0.05 D}, \qquad \overline{SEP}_{\rm cl}^{\,ma} \approx 100~\mathrm{kW/m^2}$  (5b, c)
Combining Eq. (5a) with Eq. (5b) results in a numerical value equation for the relative height of the clear burning zone. For a large gasoline tank fire, $\bar m_{f,\max}''\,(-\Delta H_c) \approx 3630~\mathrm{kW/m^2}$ [16].
The $\overline{SEP}_{\rm cl}^{\,ma}$ defines the relative height $\bar H_{\rm cl}/D$ of the clear burning zone, as well as $\bar\eta_{\rm rad,cl}(D)$, and hence the effect of the thermal radiation of a large tank fire on the neighbourhood, e.g., the contents of a neighbouring tank (burning or not burning).
5.2.3. Height of the Clear Burning Zone by considering the (C/H) ratio
The modelling of the clear flame length has been proposed by Pritchard and Binding [37] and
Ditali [38]. It was reported that the height of the clear flame varied by approximately 30% of the
maximum flame length for fires up to 25 m in diameter to 0% for fire diameters of 5 m or more
[39]. The hydrocarbon fuel plays a major role in the production of smoke within the fire, which affects the height of the clear flame. The (C/H) ratio is used to indicate the degree of saturation of a hydrocarbon fuel and its tendency to generate soot [38].
$m^{*} = \bar m''\big/\big(\rho_a\,(gD)^{1/2}\big) = 6\times 10^{-3}$  (6b)
and
$U^{*} = U\Big/\Big(\dfrac{g\,\bar m''\,D}{\rho_a}\Big)^{1/3} \approx 0.5$  (6c)
Comparison of the above two correlations with clear flame data shows that the Pritchard and
Binding correlation provides a better prediction than the Ditali correlation [32]. Hence, the
Pritchard and Binding correlation represents the best available method for predicting clear flame
height.
$T_f(t,H) = \dfrac{10^{4}\,t}{34 + 210\,H + 8.51\,t} + 298$  (8)
In the IOCL Jaipur incident, the estimated flame temperature of the gasoline tank (D = 24 m) was
approximately 957⁰C (1230 K), which lies within the range of 1100 K-1240 K as reported by
various researchers for large-scale gasoline pool fires [17,40-43].
The heat radiation flux on the flame's surface is usually associated with the fuel properties, the extent of burning, the geometry, size, and location of the flame surface, and the flame shape and temperature. Different mathematical models should be selected to calculate the heat radiation flux on the flame's surface, depending on the pool diameter. Generally, the energy is assumed to radiate uniformly from the top and side faces of the cylindrical flame to the surroundings. The surface thermal radiation flux should be calculated according to the Mudan model. The heat release rate is obtained from the mass burning rate, the heat of combustion, and the pool area (cf. Eq. (12b)).
A key parameter for the estimation of the thermal radiation of tank or pool fires is the Surface
Emissive Power (SEP) [34,44, 46]. It is usually defined as the heat flux due to thermal radiation at
the surface of the flame in kW/m2 [47]. The flame surface area (𝐴𝐹 ) should be considered in the
calculations of SEP because it depends on the geometry of the flame. The thermal radiation, SEP,
of a tank or pool fire can be calculated using the radiation models, such as the Solid Flame Model
(SFM), the Modified Solid Flame Model (MSFM), the Two-zone Radiation Model (TRM) and
Thermal Radiation for Single and Multiple Tank Fires Model (TRSMFM). These models consider
the effect of heat feedback enhancement on SEP.
Fig. 8. Solid flame radiation model: The flame is an equally radiant cylinder [33]
$\overline{SEP}_{\rm SFM}^{\,ma} = \bar\varepsilon_F\,\sigma\,(T_f^{4} - \bar T_a^{4}) \neq f(D_f)$  (9a)
With the calculated flame temperature of 1230 K from Eq. (8), the constant surface emissive power is estimated as 123 kW/m².
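A minimal sketch of the SFM estimate of Eq. (9a) is given below; the effective flame emissivity of 0.95 is an assumed value chosen only to illustrate how a value near the reported 123 kW/m² follows from the 1230 K flame temperature.

SIGMA = 5.67e-11   # Stefan-Boltzmann constant in kW/(m^2 K^4)

def sep_sfm(T_flame, T_ambient=298.0, emissivity=0.95):
    """Solid Flame Model surface emissive power, Eq. (9a):
    SEP = eps_F * sigma * (T_f^4 - T_a^4); the emissivity of 0.95 is an assumption."""
    return emissivity * SIGMA * (T_flame**4 - T_ambient**4)

print(sep_sfm(1230.0))   # roughly 123 kW/m^2 for the flame temperature from Eq. (8)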
In this model, the flame is divided into two parts: a luminous part where the flame can be clearly
seen with high emissive power and an upper part where dark smoke covers the flame with sudden
bursts of luminous flames, as shown in Fig. 9. The moving border between these two parts depends
on the fuel, pool diameter, and oxygen content of the burning zone [44]. Especially for large pool diameters, an alternative equation for the time-averaged maximum surface emissive power $\overline{SEP}_{\rm MSFM}^{\,ma}(D,\eta)$ is proposed in [34] (Eq. (10a)).
Fig. 9. MSFM: the flame is divided into a clear luminous zone with high radiation (LZ) and a non-radiating soot zone (SZ) [33]
$\overline{SEP}_{\rm MSFM}^{\,ma}(D,\eta) = \dfrac{\bar\eta_{\rm rad}(D,\eta)\;\bar m_f''\,(-\Delta H_c)}{4\,\bar H(D)/D}$  (10a)
McGrattan et al. [48] found an exponential relationship between $\eta_{\rm rad}$ and the pool diameter (cf. Eq. (5b)).
The MSFM is a two-zone radiation model in which the SEP of the lower clear burning zone (LZ) is denoted by $\overline{SEP}_{\rm cl}^{\,ma}$ (Eq. (10c)), whereas the SEP of the upper black soot zone (SZ) is denoted by $\overline{SEP}_{\rm u}$. The SEP of the two zones, depending on the area fraction of the smoke zone ($\bar a_{\rm SZ}$), can be calculated according to Eq. (10d) [32].
From a hazard prediction point of view, the summation of thermal radiation from black soot and
radiation from the luminous spots on an equivalent area basis is used to reach an average emissive
power for the fire. If we consider two assumptions of 35% and 65% for the surface area covered
with black smoke and the remaining part with luminous spots, the time average emissive power is
given by the following expressions (Eqs.10e - 10f).
where gasoline pool fires show (1) $\bar a_{\rm SZ} = 0.35$ and $\overline{SEP}_{\rm MSFM}^{\,ma} = 140~\mathrm{kW/m^2}$ for Eq. (10e), and (2) $\bar a_{\rm SZ} = 0.65$ and $\overline{SEP}_{\rm MSFM}^{\,ma} = 120~\mathrm{kW/m^2}$ for Eq. (10f), with $\overline{SEP}_{\rm SZ} = 20~\mathrm{kW/m^2}$ in Eqs. (10e, 10f) and $k \approx 2.0$.
$\overline{SEP}_{\rm act}(D) = \overline{SEP}_{\rm LS}^{\,ma}\,\bar a_{\rm LS}(D) + \overline{SEP}_{\rm SA}\,\big(1 - \bar a_{\rm LS}(D)\big)$  (11a)
where the area fractions are estimated by Eq. (11b).
According to Mudan and Croce [49], a uniform surface emissive power of flames for smoky
hydrocarbon fuels can be determined by Eq. (11c). Although the thermal radiation from black soot
is low, the hot spots appearing on the flame surface due to turbulent mixing have a higher emissive
power.
$\overline{SEP}_{\rm act}(D) = \overline{SEP}_{\rm LS}^{\,ma}\,e^{-sD} + \overline{SEP}_{\rm SA}\,(1 - e^{-sD})$  (11c)
$\overline{SEP}_{\rm act}(D) = 140\,e^{-0.12 D} + 20\,(1 - e^{-0.12 D}) = 27~\mathrm{kW/m^2}$  (11d)
Mudan and Croce [49] proposed an actual $\overline{SEP}_{\rm act}(D)$ averaged over the flame surface based on the mean values $\overline{SEP}_{\rm LS}^{\,ma} = 140~\mathrm{kW/m^2} \neq f(D,\eta)$ and $\overline{SEP}_{\rm SA} = 20~\mathrm{kW/m^2} \neq f(D,\eta)$. For larger pool fires with D ≥ 20 m, $\overline{SEP}_{\rm act}(D) \approx 20\,(1 - e^{-0.12 D})$ is also valid, so that for larger pool and tank fires the hot and luminous spots [the first term on the right-hand side of Eqs. (11a, c)] are effectively eliminated.
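The two-zone averaging of Eqs. (11c, 11d) is simple to evaluate; the sketch below uses the 140 and 20 kW/m² zone values and s = 0.12 m⁻¹ from Eq. (11d) and reproduces the value of about 27 kW/m² quoted for D = 24 m.

import math

def sep_actual(D, sep_luminous=140.0, sep_soot=20.0, s=0.12):
    """Actual surface emissive power averaged over the flame, Eqs. (11c, 11d):
    SEP_act = SEP_LS * exp(-s D) + SEP_SA * (1 - exp(-s D))."""
    f = math.exp(-s * D)
    return sep_luminous * f + sep_soot * (1.0 - f)

print(sep_actual(24.0))   # ~27 kW/m^2 for the D = 24 m gasoline tank, as in Eq. (11d)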
5.4.4. Thermal Radiation for Single and Multiple Tank Fires (TRSMF)
For multiple tank fires, as occurred in the IOCL Jaipur incident, the interaction of neighbouring
tank fires has a considerable effect on the SEP of the individual tank fires due to heat feedback
enhancement. To determine the surface emissive power of a flame, the flame surface area 𝐴𝐹 has
to be calculated. The thermal radiation, that is, the maximum surface emissive power $SEP^{\,ma}$ (without blocking by black soot), of a tank fire can be calculated with [28, 32-33, 44-45, 49], with
$\bar Q_c = \bar m_f''\,(-\Delta H_c)\,A_P$  (12b)
$\bar A_F = \pi D\,\bar H(D) + \pi D^{2}/4$  (12c)
The time-averaged 𝐴̅𝐹 is determined from the instantaneous area 𝐴𝐹 , which is influenced by the
flame fluctuations. According to Eq. (12a), doubling of 𝑚𝑓" as a result of the interaction brings
about a doubling of the thermal radiation of the hot spots. These types of effects were investigated
theoretically and experimentally by Gawlowski et al., [46].
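Eqs. (12b) and (12c) can be evaluated as follows; the heat of combustion of 43.7 MJ/kg for gasoline and the use of the time-averaged flame height from Eq. (3b) are assumptions made for illustration, not values stated in the paper.

import math

def heat_release_rate(m_burn, dH_c, D):
    """Total combustion heat release, Eq. (12b): Q_c = m'' * (-dH_c) * A_P with A_P = pi D^2 / 4."""
    A_pool = math.pi * D**2 / 4.0
    return m_burn * dH_c * A_pool          # kW if m'' is in kg/(m^2 s) and dH_c in kJ/kg

def flame_surface_area(D, H):
    """Cylindrical flame surface area, Eq. (12c): A_F = pi D H + pi D^2 / 4."""
    return math.pi * D * H + math.pi * D**2 / 4.0

D = 24.0
H = 0.9 * D                                    # time-averaged flame height from Eq. (3b)
Q_c = heat_release_rate(0.055, 43.7e3, D)      # ~1.1e6 kW; 43.7 MJ/kg is an assumed heat of combustion
A_F = flame_surface_area(D, H)                 # ~2.1e3 m^2
print(Q_c, A_F)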
6. Irradiance
The received thermal flux, i.e., the irradiance at any point, is calculated by a point source model,
which assumes that heat radiation of the flame is irradiated from a point that equally disperses in
a radial direction from the emission point as a sphere, as shown in Fig. 11. [47].
The point source radiation model (PSM) calculates the mean irradiance (received thermal radiation
flux) from the following relationships [49]
$E_r = \tau_a\,\bar\eta_{\rm rad}\,\bar m_f''\,\Delta H_c\,A_F\,F_P$  (13a)
The view factor 𝐹𝑃 is calculated according to the fundamental relation of view factor with
respect to distance (Eq. 13b).
$F_P = 1\big/(4\pi x^{2})$  (13b)
The PSM, however, has only a very limited range of validity. In particular, in the near field, great
uncertainties exist.
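A hedged sketch of the point source model of Eqs. (13a, b) is shown below. The atmospheric transmissivity, the radiative fraction taken from Eq. (5b), and the flame area are assumed illustrative inputs; the sketch is meant only to show how the terms combine and decay as 1/x², not to reproduce the PHAST comparison in Fig. 12.

import math

def view_factor(x):
    """Point source view factor, Eq. (13b): F_P = 1 / (4 pi x^2)."""
    return 1.0 / (4.0 * math.pi * x**2)

def irradiance(x, m_burn, dH_c, A_F, tau_a, eta_rad):
    """Received thermal flux by the point source model, Eq. (13a):
    E_r = tau_a * eta_rad * m'' * dH_c * A_F * F_P."""
    return tau_a * eta_rad * m_burn * dH_c * A_F * view_factor(x)

# Assumed illustrative inputs (not values stated in the paper):
tau_a   = 0.7                                  # atmospheric transmissivity
eta_rad = 0.35 * math.exp(-0.05 * 24.0)        # radiative fraction, Eq. (5b), for D = 24 m
A_F     = 2.1e3                                # flame surface area from Eq. (12c), m^2
for x in (50.0, 100.0, 200.0):
    # roughly 12, 3 and 0.7 kW/m^2 with these assumed inputs
    print(x, irradiance(x, 0.055, 43.7e3, A_F, tau_a, eta_rad))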
In the IOCL Jaipur incident, the mean irradiance $E_r$ versus distance was calculated for the gasoline tank (D = 24 m) fires with the point source (PS) radiation model and was validated against the DNV PHAST risk assessment software estimation, as shown in Fig. 12. The percentage error between the estimated and calculated irradiance is 17% at a distance of 100 m from the centre of the flame.
[Figure: radiation level (kW/m²) versus distance downwind (m) for gasoline (D = 24 m), (H/D)max = 1.5, comparing the point source model and PHAST 6.51]
Fig. 12. Estimation of the received thermal flux from a gasoline (D = 24 m) tank fire with the point source model and PHAST software
The $(H/D)_{\max}$ ratio computed by the Munoz correlation is 1.5, whereas the observed value lies between 1.0 and 1.7; the calculated value is therefore within the observed range. The average value of $(\bar H/D)_{\rm calc}$ is 0.9. The clear burning zone heights ($H_{\rm cl}/D$) were obtained by various models, namely the MSFM, Pritchard and Binding, and Ditali models. The Pritchard and Binding and Ditali models use (C/H) ratios to indicate the saturation of the hydrocarbon fuel. The MSFM gives a maximum value of 0.6, whereas the Pritchard and Binding and Ditali models predict 0.3 and 0.4, respectively. This trend shows that the flame height in the Jaipur incident case was unusually large.
The estimated maximum, relative, and clear burning zone flame heights using the above
correlations for the IOCL Jaipur incident are shown in Table 2.
Table 2. Flame heights for a gasoline tank (D = 24 m) on fire in the IOCL Jaipur incident
Ratio | Value | Flame height (m) | Model
(H/D)max, observed* | 1.0 - 1.7 | 24 - 40 | Observed range
(H/D)max | 1.5 | 36 | Thomas relation
(H̄/D)calc | 0.9 | 22 | Thomas relation
H̄cl,MSFM/D | 0.6 | 15 | MSFM
HCl/D | 0.3 | 7 | Pritchard and Binding correlation
HCl/D | 0.4 | 10 | Ditali correlation
* Observed flame height at the time of the IOCL Jaipur incident
Table 3. Surface emissive power (SEP) of a gasoline tank on fire in the IOCL Jaipur incident
Model | Quantity | SEP (kW/m²)
SFM | SEP_SFM^ma | 123
MSFM | SEP_MSFM^ma (a̅_SZ = 0.35) | 98
MSFM | SEP_MSFM^ma (a̅_SZ = 0.65) | 55
TZM | SEP_act | 27
TRSMFM | SEP^ma | 114
The point source model and the PHAST software were used to estimate the thermal heat flux as a function of distance. At a distance of 50 m, the estimated heat flux was approximately 15 kW/m² by the point source model and about 20 kW/m² by PHAST, while at 100 m it was about 5 kW/m² by the point source model and 6.5 kW/m² by PHAST. There is roughly a 20% error at shorter distances, whereas the values at larger distances are almost equal.
Decisions regarding investments into fire safety generally have to be made under uncertainty [50-
51]. This stems both from the inherent randomness of large fire events and from the fact that we
are not able to fully understand and model the underlying phenomena. The overall goal of
quantitative fire risk assessment is to support decisions on risk reduction measures by estimating
their impact on the expected consequences (e.g. financial losses or human fatalities) of all possible
fire scenarios [50,52-53]. The basic requirement for the fire risk models to be used for decision-
making is to assess a risk as a function of the safety measures installed by incorporating decision
variables. Another important requirement is to model risk-relevant characteristics of the fuel
storage terminals and installations. Finally, the model should assess the risk as accurately as
possible. The available fire risk reduction measures are grouped into the following main categories,
as shown in Fig. 13.
[Fig. 13. Main categories of fire risk reduction measures, including safe distance, bund design, tank design, double deck roof, geodesic domes, arson hood at exhaust nozzle, flame detectors, alarming systems, foam systems, gaseous agent systems, water spray systems, water monitors, oxygen masks, personal protective equipment, and firefighting equipment]
9. Conclusion
In this paper, a fire risk analysis implemented with quantitative consequence models is presented. Large-scale pool fires of liquid hydrocarbons show fundamentally different characteristics, such as generally much higher mass burning rates, large flame heights, and high irradiances. To measure, calculate, and study the fire characteristics, simulations and modelling of large-scale hydrocarbon pool fires were performed using various models such as the SFM, MSFM, and TZM. The present simulations are based on the assumption of complete combustion without wind influence.
The computed mass burning rate and flame temperature, based on the data collected, are within the ranges reported in the peer-reviewed literature. The modelling analysis of the Jaipur accident revealed that the flame height to diameter ratio lies between 0.9 (time-averaged) and 1.5 (maximum); the maximum is approximately 1.6 times the average value and lies well within the observed values. For the computation of the clear burning zone, the Pritchard and Binding correlation gives better values than the Ditali and MSFM models.
The surface emissive power of a tank fire was calculated by varying the percentage of black smoke and of luminous spots covering the flame. Four models, namely the SFM, MSFM, TRM, and TRSMFM, were used to predict SEP values representative of the most likely fire scenarios during the accident. The irradiances at various distances based on the point source model were compared with the DNV PHAST 6.51 risk assessment software, and the results agree within a maximum error of 20%.
A physical explanation of the Jaipur accident with regard to the relative flame heights H/D and the thermal radiation (SEP, E_r) is in principle possible, in particular when considering effective consequence models that include the observations regarding multiple tank fires. However, it is necessary to overcome the lack of field data, especially with regard to H/D, SEP, and E_r for larger individual as well as multiple tank and pool fires, for their more realistic characterization.
Furthermore, continuous efforts are required to improve large tank fire modelling so that it can simulate real-life scenarios while coping with changes in the technology and management of petrochemical storage terminals. Accordingly, future research can be carried out on verification methods for the key parameters, on methods for measuring the impacts of fire on the business community, heritage, and the environment, and on appropriate risk management strategies to reduce fire risks in storage terminals.
Nomenclature
Greek symbols
σ [kW m⁻² K⁻⁴] Stefan-Boltzmann constant
ρ_a [kg/m³] ambient density
ρ_f [kg/m³] density of fuel
Subscripts
a ambient conditions
act actual quantity (i.e., the luminous flame is partly obscured by black
smoke)
calc calculated or predicted quantity
cl hot, clear burning zone of the height or height 𝐻cl
exp experimental quantity
hs hot spots
kβ mean beam length corrector extinction coefficient product, m-1
la laminar
LS yellow luminous spots
ma maximum SEP, i.e. the flame is not obscured with black smoke
max maximum value of a quantity
P pool
SA area of black smoke
sp parcel of black smoke
SZ zone of black smoke
References
[1] R.K Sharma, N. Gopalaswami, B.R. Gurjar and R. Agrawal. Assessment of failure and
consequences analysis of an accident: a case study. Engineering failure analysis, 109,
p.104192, 2020.
[2] K. B. Mishra, K. D.Wehrstedt and H. Krebs. Amuay refinery disaster: The aftermaths and
challenges ahead, J. Fuel Processing Technology,119, 198–203, 2014.
[3] R. Pitblado, Global process industry initiatives to reduce major accident hazards, J. Loss
Prevention in the Process Industries, 24, 57-62, 2010
[4] B. Zheng, GH. Chen. Storage tank fire accidents. Process Safety Progress. Sep;30(3):291-3,
2011.
[5] Z. Miao, S. Wenhua, W. Ji and C. Zhen, Accident consequence simulation analysis of pool fire
in fire dike, Procedia Engineering, 84, 565 – 577, 2014
[6] T. Abbasi, H.J. Pasman and S.A. Abbasi. A scheme for the classification of explosions in the
chemical process industry. Journal of Hazardous Materials, 174, 270-280, 2010.
[7] F. Zhou. Numerical Simulation of Large Crude Oil Storage Tank Fire under Various Wind
Speeds. In Journal of Physics: Conference Series, vol. 1300, no. 1, p. 012003. IOP
Publishing, 2019.
[8] L. Zhuang, G.Q. Chen, Z.Y. Sun, L. Feng, S.X. Lu. On the damage study of the thermal
radiation of the large oil-tank fire accidents. Journal of Safety and Environment.;4. 2008.
[9] C.L. Beyler, Fire hazard calculations for large, open hydrocarbon fires. In SFPE handbook of
fire protection engineering. Springer, New York, NY. (pp. 2591-2663), 2016.
[10] G. Landucci , G. Gubinelli, G. Antonioni, The assessment of the damage probability of storage
tanks in domino events[J]. Accident; analysis and prevention, 41(6):1206-1215. 2009.
[11] T. Abbasi, S.A. Abbasi, Accidental risk of superheated liquids and a framework for predicting
the superheat limit. Journal of Loss Prevention in the Process industries.1;20(2):165-81.
2007.
[12] T. Bedford and R. M. Cooke, Probabilistic Risk Analysis: Foundations and Methods,
Cambridge University Press, 2001.
[13] X. Jing and C. Huang, Fire risk analysis of residential buildings based on scenario clusters
and its application in fire risk management. Fire Safety, J.62, 72-78, 2013.
[14] K. Hiroshi, Combustion properties of large liquid pool fires, J. of Fire technology. 25, 241-
255, 1989
[15] L. A. Gritzo, Y. R. Sivathanu and W. Gill. Transient Measurements of Radiative Properties,
Soot Volume Fraction and Soot Temperature in a Large Pool Fire, J. Combustion Science
and Technology,139:1, 113-136, 2007.
[16] M. Muñoz, J. Arnaldos, J. Casal and E. Planas, Analysis of the geometric and radiative
characteristics of hydrocarbon pool fires, J. Combustion and Flame. 139, 263–277, 2004.
[17] K. Mudan, Thermal radiation hazards from hydrocarbon pool fires, J. Prog. Energy Combust.
Sci., 10, 59–80, 1984.
[18] L. Gritzo, P. Senseny, Y. Xin and J. Thomas, The international FORUM of fire research
directors: a position paper on verification and validation of numerical fire models, Fire
Saf., J. 40, 485–490, 2005.
[19] C. S. Lam and E. J. Weckman, Wind-blown pool fire, Part I: Experimental characterization
of the thermal field. Fire Saf, J. 75, 1-13, 2015.
[20] M. Considine, Thermal radiation hazard ranges from large hydrocarbon pool fires, Safety and
Reliability Directorate SRDR 297, United Kingdom Atomic Energy Authority, Culcheth,
Warrington, 1984.
[21] A. Rajendram, F. Khan and V. Garaniya, Modelling of fire risks in an offshore facility, Fire
safety, J.71, 79-85, 2015.
[22] MoPNG (Ministry of Petroleum and Natural Gas) Committee, Independent inquiry committee
report on Indian oil depot fire at Jaipur on 29.10.2009.
http://oisd.nic.in/index.htm (Last Accessed:21.02.10)
[23] R.K. Sharma, B.R. Gurjar, S.R. Wate, S.P. Ghuge and R. Agrawal, Assessment of an
accidental vapour cloud explosion: Lessons from the Indian Oil Corporation Ltd. accident
at Jaipur, India, J. Loss Prevention in the Process Industries, 26, 82-90, 2013.
[24] S. Vasanth, S.M. Tauseef, T. Abbasi, and S.A. Abbasi. Multiple pool fires: Occurrence,
simulation, modeling and management. Journal of Loss Prevention in the Process
Industries, 29, 103-121. 2014.
[25] Z.X Wang, A Three Layer Model For Oil Tank Fires. Fire Safety Science 2: 209-220, 1989.
[26] P. Joulain, Behavior of pool fires: State of the art and new insights. In Proceedings of the 27th
Symposium International on Combustion, Boulder, CO, USA, USA, 2–7 August 1998; pp.
2691–2706, 1998.
[27] D.M. Yu, C.G Feng and Q.X Zeng, Pool Fire in Open Air and the Hazard Analysis, Combust.
Sci. Technol, 2 (2), 95-103, 1996.
[28] I. Vela: CFD prediction of thermal radiation of large, sooty, hydrocarbon pool fires, PhD
Thesis University of Duisburg-Essen, Germany, 1-184, 2009.
[29] J. M. Chatris, J. Quintels, J. Folch, E. Planas, J. Arnaldos and J. Casal, Experimental Study
of Burning Rate in hydrocarbon Pool Fires, J. Combustion and Flame, 126, 1373-1383,
2001.
[30] R. Bubbico, G. Dusserre and B. Mazzarotta, Calculation of the flame size from burning liquid
pools. Chem. Eng. Trans, 53, 67–73, 2016.
[31] Z. Chen and X. Wei, Analysis for combustion properties of crude oil pool fire. Procedia Eng.
84, 514–523, 2014.
[32] P. J. Rew, W. G. Hulbert and D. M. Deaves, Modeling of thermal radiation from external
hydrocarbon pool fires, J. Process Safety and Environmental Protection, 75, 81-89, 1997.
[33] M. Hailwood, M. Gawlowski, B. Schalau and A. Schönbucher, Conclusions Drawn from the
Buncefield and Naples Incidents Regarding the Utilization of Consequence Models, J.
Chem. Eng. Technol., 32, 207-231, 2009.
[34] M. Muñoz, E. Planas, F. Ferrero and J. Casal, Predicting the emissive power of hydrocarbon
pool fires, Journal of Hazardous Materials. 144, 725–729, 2007.
[35] G. Atkinson, S. Betteridge, J. Hall, J. R. Hoyes and S. E. Gant, Experimental determination
of the rate of flame spread across LNG pools, Symposium Series No 161 Hazards 26, 2016.
[36] H. Schmitz and H. Bousack, Modeling a Historic Oil-Tank Fire Allows an Estimation of the
Sensitivity of the Infrared Receptors in Pyrophilous Melanophila Beetles, PLoS ONE, 5,
2012.
[37] M.J. Pritchard and T.M. Binding, FIRE2: A new approach for predicting thermal radiation
levels from hydrocarbon pool fires, IChemE Symposium Series.130, 491-505, 1992.
[38] S. Ditali, A. Rovati and F. Rubino, Experimental model to assess thermal radiation from
hydrocarbon pool fires, 7th IntSymp on Loss Prevention and Safety Promotion in the
Process Industries, Taormina, Italy, 1992.
[39] E. Planas-Cuchi and J. Casal, Modeling Temperature Evolution in Equipment Engulfed in a
Pool-fire, J. Fire Safety, 30, 251-268, 1998.
[40] V. Babrauskas, Estimating Large Pool Fire Burning Rates, J. Fire Technology.19, 251-261,
1983.
[41] A. P. Croce and K. S. Mudan, Calculation impacts for large open hydrocarbons fires, J. Fire
saf., J. 11, 99-112, 1986.
[42] H. Koseki, Combustion Properties of Large Liquid Pool Fires, J. Fire Technology, 25, 241-
255, 1989.
[43] H. Chun, K. D. Wehrstedt, I. Vela and A.Schönbucher, Thermal radiation of di-tert-butyl
peroxide pool fires—Experimental investigation and CFD simulation, J. Hazardous
Materials, 167, 105–113, 2009.
[44] P.K. Raj, Large hydrocarbon fuel pool fires: Physical characteristics and thermal
emission variations with height. J. Hazardous Materials, 140, 280–292, 2007.
[45] J. A. Fay, Model of large pool fires, J. Hazardous Materials, 136, 219- 232, 2006.
[46] M. Gawlowski, M. Hailwood, I. Vela and A. Schönbucher, Deterministic and Probabilistic
Estimation of Appropriate Distances: Motivation for Considering the Consequences for
Industrial Sites, J. Chem. Eng. Technol., 32, 182–198, 2009.
[47] W.F.J.M. Engelhard, Heat Flux from Fires, in: C.J.H. van den Bosch, R.A.P.M. Weterings
(Eds), Methods for the calculation of Physical Effects, Publicatiereeks Gevaarlijke Stoffen,
The Netherlands, pp. 6.1- 6.130, 2005.
[48] K. B. Mc Grattan, H. R. Baum and A. Hamins, NISTIR 6546, Fire Safety Engineering
Division Building and Fire Research Laboratory, NTIS, U.S. Department of Commerce,
Springfield, VA, 2000.
[49] K. S. Mudan and P.A. Croce, Fire hazard calculations for large open hydrocarbon fires, The
SFPE Handbook of Fire Protection Engineering, First ed., Section 2 (NFPA, Quincy, MA,
USA), 1988.
[50] K. Fischer, G. D. Sanctis, J. Kohler, M. H. Faber and M. Fontana, Combining engineering
and data-driven approaches: Calibration of a generic fire risk model with data. Fire Safety
J. 74, 32-42, 2015.
[51] A. T. Murray, Optimising the spatial location of urban fire stations. Fire Safety J. 62, 64-71,
(2013)
[52] R. K. Sharma, B. R. Gurjar, A. V. Singhal, S. R. Wate, S. P. Ghuge and R. Agrawal,
Automation of emergency response for petroleum oil storage terminals, Safety Sci. J. 72,
262-273, 2015.
[53] B. R. Gurjar, R. K. Sharma, S. P. Ghuge, S. R. Wate and R. Agrawal, Individual and Societal
Risk Assessment for a Petroleum Oil Storage Terminal, J. Hazardous, Toxic, and
Radioactive Waste. 2015.
doi: http://dx.doi.org/10.1061/(ASCE)HZ.2153-5515.0000277
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Safety during the construction phase of a large chemical complex is critical in terms of injuries and property damage. In this paper, the risks posed by critical operations carried out during the construction phase of a project are considered and are categorised using a novel method based on the Intuitionistic Fuzzy Analytical Hierarchy Process. The categories are Critical, Serious, and Minor. The Analytical Hierarchy Process (AHP) has the unique advantage of comparing parameters that have no units or scale of measurement. The Intuitionistic Fuzzy Analytical Hierarchy Process (IFAHP) further improves the AHP in handling vagueness, uncertainty, and imprecise data.
1. Introduction
Construction activities are mostly labour intensive, and construction is the largest economic activity after the agriculture sector. The construction sector is mainly divided into building, infrastructure, and industrial construction. Within the industrial construction sector, we consider the chemical industry, where piping, reactors, vessels, and other equipment are installed. These construction activities are mainly carried out through the unorganised sector. In the unorganised sector in countries like India, the labourers are mainly migrants who speak different languages and are often illiterate, so ensuring and practicing safety in this sector is a difficult task. The evaluation and data obtained through this study cannot be generalized; rather, they should be reviewed when preparing future general safety plans at the regional level.
This study applies multi-criteria decision-making techniques such as the intuitionistic fuzzy analytical hierarchy process (IFAHP) to safety engineering. This is a natural way to quantify a parameter that cannot be measured, or is difficult to measure, using any fundamental or derived scale such as the kilogram, metre, or kilowatt; instead, the parameter is quantified by comparing intensities or changes in it. The AHP is a technique that uses pairwise comparison to prioritise a component from a group of components that are tangled together like strands of pasta, in which the individual strands are nonetheless separate. In a pairwise comparison, the elements are compared in pairs against a given criterion. The intuitionistic fuzzy analytical hierarchy process improves the AHP in terms of handling vagueness, uncertainty, ambiguity, and imprecise data.
2. Literature Review
The IFAHP has been utilized in various domains such as supply chain management, engineering, e-commerce, banking, and risk assessment. Some applications of IF-AHP in these disciplines are described below.
Sadiq [16] applied IFAHP to select the best drilling fluid in an environmental decision-making process. A similarity measure was used to reduce the number of alternatives by grouping different alternatives into a single class or cluster, and generalized means and standard deviations were used for ranking.
Jian Wu [19] applied interval-valued IFAHP in the e-commerce domain, constructing score judgement matrices and obtaining the associated interval multiplicative matrices to calculate the priority vector.
Cengiz Kahraman [7] applied an intuitionistic-fuzzy-originated interval type-2 FAHP to dam-less hydroelectric power plants. A triangular IF linguistic evaluation scale is used in the pairwise comparison, which is transformed into a triangular type-2 fuzzy (TT2F) pairwise comparison matrix; classical AHP is then applied to find the best alternatives, and finally the TT2F values are defuzzified.
Nirmala [11] proposed a triangular IFAHP with a location index number and a fuzziness index function to represent the TIFN, applied to selecting the best computer.
Zeshui Xu [20] proposed an algorithm to repair inconsistencies in intuitionistic preference relations for global supplier selection.
Yao Yu [21] ranked the risk factors in transnational public-private partnership projects based on IFAHP. Consistency is checked using the distance between a given intuitionistic preference relation and its perfect multiplicative consistent intuitionistic preference relation, and an algorithm is proposed to repair inconsistent relations.
2.2 IFAHP in risk estimation
Hoang Nguyen [10] introduced IFAHP in ship system risk estimation; the priority vector of consequences is determined, and a membership knowledge measure of IFVs is utilized for the final ranking.
Selcuk Cebi [2] proposed IVIFAHP for warehouse risk estimation, where a score judgement matrix and a possibility degree matrix were utilized for prioritization.
Safety and Health in Construction by the ILO [5] and Health and Safety in Construction [17] give an overall picture of safety at construction sites. Every year many construction site workers are killed or injured as a result of their work, and others suffer ill health such as musculoskeletal disorders, dermatitis, or asbestosis. The hazards are not, however, restricted to those working on sites: children and other members of the public are also killed or injured because construction activities have not been adequately controlled. The construction sector should therefore be analysed for improvements in health and safety.
3. Basic concepts
The basic concepts of the intuitionistic fuzzy AHP adopted in this study are described here. The concept of fuzzy sets given by Zadeh was generalised by Atanassov [1] by introducing a membership function and a non-membership function for the elements of the universe of discourse. An Intuitionistic Fuzzy Set A in X is defined as an object of the following form:
A = {⟨x, µ_A(x), ν_A(x)⟩ | x ∈ X}
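As a minimal illustration of this definition, the following Python sketch represents a single intuitionistic fuzzy value by its membership and non-membership degrees and derives the hesitancy degree π = 1 − µ − ν used later in the distance measure; the class name is purely illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class IFV:
    """Intuitionistic fuzzy value (mu, nu) with 0 <= mu + nu <= 1."""
    mu: float   # membership degree
    nu: float   # non-membership degree

    def __post_init__(self):
        if not (0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0 and self.mu + self.nu <= 1.0):
            raise ValueError("membership and non-membership must satisfy 0 <= mu + nu <= 1")

    @property
    def pi(self):
        """Hesitancy (indeterminacy) degree pi = 1 - mu - nu."""
        return 1.0 - self.mu - self.nu

# An intuitionistic fuzzy set over a finite universe is then just a mapping x -> IFV:
A = {"x1": IFV(0.6, 0.3), "x2": IFV(0.5, 0.4)}
print(A["x1"].pi)   # 0.1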
3.2 IF Operations [20]
Humans have the ability to perceive things and ideas, to identify them, and to communicate what they observe. Our mind structures complex reality into its constituent parts, these in turn into their parts, and so on hierarchically; the number of parts usually ranges between five and nine. By breaking reality down into homogeneous clusters and subdividing these clusters into smaller ones, we can integrate a large amount of information into the structure of a problem and form a more complete picture of the whole system.
Humans also have the ability to perceive relationships among the things they observe, to compare pairs of similar things against certain criteria, and to discriminate between both members of a pair by judging the intensity of their preference for one over the other. The relationships between the elements of each level of the hierarchy are established by comparing the elements in pairs. These relationships represent the relative impact of the elements of a given level on each element of the next higher level. The latter element, which serves as a criterion, is called a property.
The result of the discrimination process is a vector of priorities, i.e., of the relative importance of the elements with respect to each property. This pairwise comparison is repeated for all the elements in each level. The final step is to come down the hierarchy, weighing each vector by the priority of its property. This synthesis results in a set of net priority weights for the bottom level. The element with the highest weight is the one that merits the most serious consideration for action, although the others are not ruled out entirely.
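A minimal sketch of this synthesis step is given below; the property names, element names, and numbers are purely illustrative.

def synthesize(property_weights, local_priorities):
    """Weigh each local priority vector by the priority of its property and sum,
    giving net priority weights for the bottom-level elements."""
    elements = next(iter(local_priorities.values())).keys()
    return {
        e: sum(property_weights[p] * local_priorities[p][e] for p in property_weights)
        for e in elements
    }

# Illustrative numbers only:
property_weights = {"severity": 0.6, "likelihood": 0.4}
local_priorities = {
    "severity":   {"op A": 0.5, "op B": 0.3, "op C": 0.2},
    "likelihood": {"op A": 0.2, "op B": 0.5, "op C": 0.3},
}
print(synthesize(property_weights, local_priorities))   # op A: 0.38, op B: 0.38, op C: 0.24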
4.3 Establish priorities
In a decision problem, the first step in establishing the priorities [14] of elements is to make pairwise comparisons, that is, to compare elements in pairs against a given criterion. For pairwise comparison, a matrix is the preferred form. The matrix is a simple, well-established tool that offers a framework for testing consistency, obtaining additional information by making all possible comparisons, and analysing the sensitivity of the overall priorities to changes in the judgements.
To begin with the pairwise comparison process, start at the top of the hierarchy to select the
criterion ‘C’. That will be is used for making the first comparison. Then from the level immediately
below, take the elements to be compared A1, A2, …. An
In the matrix compare the element A1 in the column on the left with element A1, A2, …. An in
the row on the top with respect to the property ‘C’ in the upper left-hand corner. Then repeat the
column elements A2 and so on. To compare elements, ask how much more strongly does these
elements or activity possess or contribute to, dominate, influence, satisfy or benefit, the property
then does the element with which it is being compared.
To fill in the matrix for pairwise comparison, we use numbers to represent the relative importance
of one element over another with respect to the property.
Intensity      Definition
1              Equal importance
3              Moderate importance
5              Strong importance
7              Very strong importance
9              Extreme importance
2, 4, 6, 8     For comparisons between the above values
Reciprocals    If activity 'i' has one of the above non-zero numbers assigned to it when compared with activity 'j', then 'j' has the reciprocal value when compared with 'i'
1.1-1.9        When elements are close and nearly indistinguishable (e.g. 1.3 moderate importance, 1.9 extreme importance)
Experience has confirmed that a scale of nine units [15] is reasonable and reflects the degree to
which one can discriminate the intensity of relationships between elements.
When using the scale in a social, psychological, or political context, express the verbal judgement
first and then translate it to the numerical value. The numerically translated judgements are
approximations, and their validity can be evaluated by a test of consistency.
The perfect multiplicative consistent intuitionistic preference relation is constructed element-wise as

$$\bar{\mu}_{pq} = \frac{\sqrt[q-p-1]{\prod_{r=p+1}^{q-1} \mu_{pr}\,\mu_{rq}}}{\sqrt[q-p-1]{\prod_{r=p+1}^{q-1} \mu_{pr}\,\mu_{rq}} + \sqrt[q-p-1]{\prod_{r=p+1}^{q-1} (1-\mu_{pr})(1-\mu_{rq})}}, \quad q > p+1$$

$$\bar{\upsilon}_{pq} = \frac{\sqrt[q-p-1]{\prod_{r=p+1}^{q-1} \upsilon_{pr}\,\upsilon_{rq}}}{\sqrt[q-p-1]{\prod_{r=p+1}^{q-1} \upsilon_{pr}\,\upsilon_{rq}} + \sqrt[q-p-1]{\prod_{r=p+1}^{q-1} (1-\upsilon_{pr})(1-\upsilon_{rq})}}, \quad q > p+1$$

$$\bar{a}_{pq} = a_{pq}, \quad q = p+1; \qquad \bar{a}_{pq} = \left(\bar{\upsilon}_{qp},\, \bar{\mu}_{qp}\right), \quad q < p$$
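As an illustration of how these expressions can be evaluated, the following is a minimal sketch (not part of the original paper) that builds the perfect multiplicative consistent relation from an intuitionistic preference relation stored as an n×n array of (µ, υ) pairs; the function name and array layout are assumptions made for this example.

```python
import numpy as np

def perfect_consistent_relation(A):
    """Build the perfect multiplicative consistent relation A_bar.

    A is an (n, n, 2) array with A[p, q] = (mu_pq, nu_pq).
    Entries with q = p or q = p + 1 are copied unchanged; entries with
    q > p + 1 are recomputed from the intermediate elements, and the
    mirror entries below the diagonal take the reversed pair.
    """
    n = A.shape[0]
    A_bar = A.astype(float).copy()
    for p in range(n):
        for q in range(p + 2, n):
            k = q - p - 1
            mu_prod = np.prod([A[p, r, 0] * A[r, q, 0] for r in range(p + 1, q)])
            mu_comp = np.prod([(1 - A[p, r, 0]) * (1 - A[r, q, 0]) for r in range(p + 1, q)])
            nu_prod = np.prod([A[p, r, 1] * A[r, q, 1] for r in range(p + 1, q)])
            nu_comp = np.prod([(1 - A[p, r, 1]) * (1 - A[r, q, 1]) for r in range(p + 1, q)])
            mu_bar = mu_prod ** (1 / k) / (mu_prod ** (1 / k) + mu_comp ** (1 / k))
            nu_bar = nu_prod ** (1 / k) / (nu_prod ** (1 / k) + nu_comp ** (1 / k))
            A_bar[p, q] = (mu_bar, nu_bar)
            A_bar[q, p] = (nu_bar, mu_bar)   # mirror entry below the diagonal
    return A_bar
```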
By applying these equations, we need only update fewer than half of the elements of the original
intuitionistic preference relation to construct the perfect multiplicative consistent intuitionistic
preference relation
Ā = (ā_pq)_{n×n} for A.
The intuitionistic preference relation A is an acceptable multiplicative consistent intuitionistic
preference relation if d(A, Ā) < τ,
where d(A, Ā) is the distance between the intuitionistic preference relation A and its corresponding
perfect multiplicative consistent intuitionistic preference relation Ā, which can be calculated by

$$d(\bar{A}, A) = \frac{1}{2(n-1)(n-2)} \sum_{p=1}^{n}\sum_{q=1}^{n}\left( \left|\bar{\mu}_{pq}-\mu_{pq}\right| + \left|\bar{\upsilon}_{pq}-\upsilon_{pq}\right| + \left|\bar{\pi}_{pq}-\pi_{pq}\right| \right)$$
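A corresponding sketch of the distance measure and the acceptance test, under the same assumed array layout (and with τ = 0.1 as in the procedure of Section 5), might look like:

```python
import numpy as np

def consistency_distance(A, A_bar):
    """Distance d(A_bar, A) between an IPR and its perfect consistent relation."""
    n = A.shape[0]
    pi = 1.0 - A[..., 0] - A[..., 1]              # hesitation degrees of A
    pi_bar = 1.0 - A_bar[..., 0] - A_bar[..., 1]  # hesitation degrees of A_bar
    total = (np.abs(A_bar[..., 0] - A[..., 0]).sum()
             + np.abs(A_bar[..., 1] - A[..., 1]).sum()
             + np.abs(pi_bar - pi).sum())
    return total / (2 * (n - 1) * (n - 2))

def is_acceptably_consistent(A, A_bar, tau=0.1):
    return consistency_distance(A, A_bar) < tau
```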
Hence, when A is not acceptably consistent, it is beneficial to combine the initial intuitionistic
preference relation A and the corresponding perfect multiplicative consistent intuitionistic
preference relation Ā into a fused intuitionistic preference relation Ã,
where t is the number of iterations and σ ∈ [0, 1] is a controlling parameter determined by the
decision-maker: the smaller the value of σ, the closer Ã is to A; if σ = 0, Ã = A; if σ = 1, Ã = Ā.
The fused relation Ã is itself an intuitionistic preference relation.
In general, the combined intuitionistic preference relation Ã contains not only the original
preference information of the intuitionistic preference relation A but also the preference information
of its corresponding perfect multiplicative consistent preference relation Ā. The controlling
parameter σ also represents, to some extent, the preference of the decision-maker. Based on the
above analysis, this fusion equation is used to repair inconsistent intuitionistic preference relations.
Through this equation we can improve the consistency level of any intuitionistic preference
relation without losing much of the original information, and this iterative method saves the
decision-maker a considerable amount of time.
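The fusion equation itself is not reproduced in the text above. As an illustrative sketch only, the following implements the repair iteration using the weighted geometric fusion of A and Ā described in Xu and Liao [20]; the function names, the default σ and the iteration cap are assumptions of this example, and it relies on the perfect_consistent_relation and consistency_distance sketches given earlier.

```python
import numpy as np

def fuse(A, A_bar, sigma):
    """One repair step: weighted-geometric fusion of A with A_bar (per [20])."""
    fused = np.empty_like(A, dtype=float)
    for k in (0, 1):                         # k = 0: membership, k = 1: non-membership
        num = A[..., k] ** (1 - sigma) * A_bar[..., k] ** sigma
        den = num + (1 - A[..., k]) ** (1 - sigma) * (1 - A_bar[..., k]) ** sigma
        fused[..., k] = num / den
    return fused

def repair_until_consistent(A, sigma=0.5, tau=0.1, max_iter=20):
    """Iteratively fuse A with its perfect consistent relation until d < tau."""
    for _ in range(max_iter):
        A_bar = perfect_consistent_relation(A)   # from the earlier sketch
        if consistency_distance(A, A_bar) < tau:
            return A
        A = fuse(A, A_bar, sigma)
    return A
```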
The intuitionistic preference relations do not directly give the priorities. According to Saaty's
concept, an n-dimensional vector ω = (ω1, ω2, ..., ωn) is estimated from the multiplicative
preference relation, where ωp is the weight that represents the relative dominance of the
alternative Ap among the alternatives in A.
For the intuitionistic preference relation A = (a_pq)_{n×n} with a_pq = (µ_pq, υ_pq), since
µ_pq, υ_pq ∈ [0, 1] and µ_pq + υ_pq ≤ 1, we have µ_pq ≤ 1 − υ_pq. Each membership and
non-membership pair can therefore be represented as the interval [µ_pq, 1 − υ_pq], so the
intuitionistic preference relation A = (a_pq)_{n×n} is transformed into an interval-valued
preference relation A′ = (a′_pq)_{n×n} = ([µ_pq, 1 − υ_pq])_{n×n}. Based on the operational
laws of intervals, the priority weights are estimated using the formula
$$\omega_p = \left( \frac{\sum_{q=1}^{n} \mu_{pq}}{\sum_{p=1}^{n}\sum_{q=1}^{n} \left(1-\upsilon_{pq}\right)},\; 1 - \frac{\sum_{q=1}^{n} \left(1-\upsilon_{pq}\right)}{\sum_{p=1}^{n}\sum_{q=1}^{n} \mu_{pq}} \right)$$
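A direct translation of this weight formula, assuming the same (µ, υ) array layout used in the earlier sketches, is shown below.

```python
import numpy as np

def priority_weights(A):
    """Intuitionistic priority weights w_p = (mu_p, nu_p) from an IPR A."""
    mu_rows = A[..., 0].sum(axis=1)               # sum over q of mu_pq for each p
    one_minus_nu_rows = (1 - A[..., 1]).sum(axis=1)
    w_mu = mu_rows / one_minus_nu_rows.sum()
    w_nu = 1 - one_minus_nu_rows / mu_rows.sum()
    return np.stack([w_mu, w_nu], axis=1)
```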
For an intuitionistic fuzzy set, π_A(x) = 1 − µ_A(x) − υ_A(x) is the degree of uncertainty
(hesitation) of the membership of element x ∈ X to the set A. For an ordinary fuzzy set
π_A(x) = 0 for every x ∈ X, whereas here π_A(x) ∈ [0, 1] for all x ∈ X. Let α = (µ_α, υ_α, π_α)
be an intuitionistic fuzzy value.
Szmidt and Kacprzyk [4] proposed a relation to rank intuitionistic fuzzy values:
ρ(α) = 0.5 (1 + π_α)(1 − µ_α)
The smaller the value of ρ(α), the greater the intuitionistic fuzzy value α, in the sense of the
amount of positive information included and the reliability of that information.
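The ranking measure translates directly into code; the short sketch below is illustrative, assumes a (µ, υ) tuple layout, and orders a list of intuitionistic fuzzy values so that smaller ρ (i.e., a "greater" IFV) comes first.

```python
def rho(alpha):
    """Szmidt-Kacprzyk ranking measure for an IFV alpha = (mu, nu)."""
    mu, nu = alpha
    pi = 1.0 - mu - nu
    return 0.5 * (1.0 + pi) * (1.0 - mu)

# Smaller rho means a "greater" (more reliable, more positive) IFV.
ranked = sorted([(0.55, 0.25), (0.40, 0.40), (0.25, 0.55)], key=rho)
```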
5. Procedure for intuitionistic fuzzy analytic hierarchy process
The procedure for the intuitionistic fuzzy analytic hierarchy process is described below; a minimal
end-to-end sketch combining the earlier code fragments follows the list.
1. Identify the objective, safety factors and sub-factors, and construct the hierarchy of the problem.
2. Evaluate an intuitionistic preference relation (IPR) by pairwise comparison of the factors and
sub-factors against the respective criteria of severity, occurrence and detectability.
3. Evaluate the perfect multiplicative consistent preference relation from each IPR and measure
the distance to check consistency.
4. If an IPR is inconsistent (d(A, Ā) ≥ τ = 0.1), repair it by applying the correction formula
iteratively until d(A, Ā) < 0.1.
5. Calculate the priority vector for each consistent IPR.
6. Find the overall weights from the lowest level to the highest level by intuitionistic fuzzy (IF)
operations.
7. Multiply the severity, occurrence and detectability weights by IF operations to obtain the risk
priority number.
8. Classify the safety factors according to the decided boundary values as critical, major or minor,
and finally rank the safety factors using the ranking equation.
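As referenced above, a minimal end-to-end sketch (illustrative only) tying the earlier fragments together for a single criterion is given below; the 3×3 relation is invented for the example and the helper functions are the hypothetical ones sketched earlier.

```python
import numpy as np

# Illustrative 3x3 intuitionistic preference relation of (mu, nu) pairs.
A = np.array([[(0.40, 0.40), (0.55, 0.25), (0.70, 0.10)],
              [(0.25, 0.55), (0.40, 0.40), (0.55, 0.25)],
              [(0.10, 0.70), (0.25, 0.55), (0.40, 0.40)]])

A = repair_until_consistent(A, sigma=0.5, tau=0.1)       # steps 3-4
weights = priority_weights(A)                             # step 5
ranking = np.argsort([rho(tuple(w)) for w in weights])    # final ranking (step 8)
```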
6. Illustrations
Ensuring the safety of a construction site is a complex activity involving many factors and criteria,
and multi-criteria decision-making techniques can be applied to analyse the problem. In this work
the intuitionistic fuzzy analytic hierarchy process (IFAHP) is utilized for the risk assessment of a
large chemical complex during its construction phase.
Construction activities mainly include building construction, infrastructure development and
industrial construction. In this problem we studied the construction of a large chemical complex.
The construction activities considered here include only civil works such as excavation, road
construction and buildings; fabrication and erection of structures, platforms, piping, vessels and
other equipment; and inspection and pre-commissioning activities such as water flushing, air
blasting, leak testing, and storage and handling of chemicals.
The method adopted in this study consists mainly of analysing past incidents that occurred at
different construction sites and recording expert opinion [13]. Around twenty-one risk factors
were identified and can be explained in detail. The risk factors identified are clustered into five
groups as the primary risk factors, each with its respective sub-factors.
Table 2. Risk factors
Working at Height
1 Scaffolding, ladders
2 Falling from height, slips
3 Dropping Objects from Height
4 Safety belt & other protective gears
Confined space
8 Lighting & ventilations
9 Standby / watch
10 Simultaneous operations inside
11 Administrative control
Electrical
12 Dragging of live cables
13 Energization of equipment / loto
14 Circuit breaker/earthing/extension
15 Short circuit/insulation failure
16 Striking underground/overhead lines
The various issues noticed under the sub-factors [6] are:
Scaffolding and ladders: unhealthy or substandard scaffolding (without bottom plates, toe guards
or handrails), and ladders not provided for climbing the scaffolding.
Falling from height and slips, dropping of objects from height, and safety belts and other protective
gear [3] are self-explanatory.
Manual lifting and moving [12]: this is one of the prime areas of concern; correct posture during
manual lifting is very important to avoid injuries, sprains, etc. Often people carrying a load and
moving alone, or as a group, are involved in incidents due to slips and imperfect perception.
Lifting equipment: lifting tackles, chain blocks, pulleys, pads, etc. should take the loads
successfully, but due to wear and tear or other damage they may not take the certified loads; this
should be checked before starting the work. Lifting equipment should undergo the statutory load
test within the stipulated time. Cranes mostly have interlocks and other integrated safety features,
and bypassing of these features is alarming.
Vehicles and other moving equipment: vehicles and other moving equipment pose a great threat at
the construction site due to poor visibility, constraints in passages, reverse movements, signalling
problems, and communication and perception problems.
Ventilation and lighting: in confined space work, ventilation is the most important factor. At
construction sites, due to scarcity of utilities, eductors or other artificial air supplies are mostly not
provided. Poor lighting also causes incidents inside confined spaces, such as falls deep inside a
vessel.
Standby / watch: in many cases it is found that the standby person is assigned many other duties.
It is most important to provide a full-time dedicated standby person and watch.
Simultaneous operations inside confined space: it has been seen that contradicting SIMOPS are
carried out in confined spaces, for example welding and flushing, or welding and painting.
Administrative control: these are measures such as entry registers, gas checking, etc.
Dragging of live electrical cables: dragging live cables from one location to another is found to be
a common activity at construction sites. Often joints or areas with damaged insulation are found
sparking while dragging, especially welding cables.
Energisation of equipment / LOTO [18]: partial energisation is used to check equipment such as
pumps, compressors and fans, and at this stage the LOTO system is not implemented, so
identification of energised equipment becomes difficult. Also, each item of equipment receives
three types of supply: the main power supply, the control supply, and ancillary supplies to ancillary
equipment such as lube pumps and space heaters. Often it is difficult to identify whether an
equipment supply is live or not.
Circuit breakers / earthing / extensions: on many occasions circuit breakers are found unhealthy
and do not serve their function when activated. Earthing and bonding are also found to be improper,
and multiple and series extensions are taken without circuit breakers.
Short circuits and insulation failure: short circuits are mainly due to wetting of electrical equipment
by water leaks, rain, flooding, etc., and insulation failure is due to dragging or crushing of lines.
Striking overhead and underground lines: this happens mainly while moving material from one
location to another, during excavation, crane movements, etc.
Work permit system: it has been seen that work permit practice at construction sites is very lenient.
Usage of proper personal protective equipment (PPE): usage of all mandatory PPE is found to be
liberal, although basic PPE such as helmets and shoes is strictly followed.
Barricading / sign boards: barricading is often provided impractically, and people try to take
shortcuts because of the practical difficulties it creates.
Housekeeping: this is generally a weak area at construction sites.
The intuitionistic preference relation under the criterion of severity for the primary factors is
tabulated below.
Table 4. Intuitionistic preference relation under the criterion of severity for the primary factors
Applying the expression for $\bar{\mu}_{pq}$ (q > p + 1) given in Section 3, let us consider ā15:
μ̄15 = 0.545814
ῡ15 = 0.146296
Table 5. Perfect multiplicative consistent intuitionistic preference relation for the primary factors
under severity (each row lists five (µ, υ) pairs)
0.400 0.400 0.250 0.550 0.289 0.289 0.366 0.231 0.546 0.146
0.550 0.250 0.400 0.400 0.550 0.250 0.449 0.182 0.599 0.100
0.289 0.289 0.250 0.550 0.400 0.400 0.400 0.400 0.449 0.182
0.231 0.366 0.182 0.449 0.400 0.400 0.400 0.400 0.550 0.250
0.146 0.546 0.100 0.599 0.182 0.449 0.250 0.550 0.400 0.400
$$d(\bar{A}, A) = \frac{1}{2(n-1)(n-2)} \sum_{p=1}^{n}\sum_{q=1}^{n}\left( \left|\bar{\mu}_{pq}-\mu_{pq}\right| + \left|\bar{\upsilon}_{pq}-\upsilon_{pq}\right| + \left|\bar{\pi}_{pq}-\pi_{pq}\right| \right)$$
Table 6. Perfect multiplicative consistent intuitionistic preference relation for the primary factors
under severity
0.4 0.4 0.25 0.55 0.337 0.281 0.401 0.235 0.578 0.136
0.55 0.25 0.4 0.4 0.55 0.25 0.469 0.194 0.62 0.1
0.281 0.337 0.25 0.55 0.4 0.4 0.4 0.4 0.469 0.194
0.235 0.401 0.194 0.469 0.4 0.4 0.4 0.4 0.55 0.25
0.136 0.578 0.1 0.62 0.194 0.469 0.25 0.55 0.4 0.4
Table 7. Perfect multiplicative consistent intuitionistic preference relation for the primary factors
under severity
0.400 0.400 0.250 0.550 0.327 0.283 0.394 0.234 0.572 0.138
0.550 0.250 0.400 0.400 0.550 0.250 0.465 0.192 0.616 0.100
0.283 0.327 0.250 0.550 0.400 0.400 0.400 0.400 0.465 0.192
0.234 0.394 0.192 0.465 0.400 0.400 0.400 0.400 0.550 0.250
0.138 0.572 0.100 0.616 0.192 0.465 0.250 0.550 0.400 0.400
$$\omega_p = \left( \frac{\sum_{q=1}^{n} \mu_{pq}}{\sum_{p=1}^{n}\sum_{q=1}^{n} \left(1-\upsilon_{pq}\right)},\; 1 - \frac{\sum_{q=1}^{n} \left(1-\upsilon_{pq}\right)}{\sum_{p=1}^{n}\sum_{q=1}^{n} \mu_{pq}} \right)$$
Ranking of the intuitionistic fuzzy value: ρ(α) = 0.5 (1 + π_α)(1 − µ_α) = 0.50169564.
Products of the risk priority numbers greater than 0.5015 are considered critical.
Products of the risk priority numbers between 0.501 and 0.5015 are considered major.
Products of the risk priority numbers less than 0.501 are considered minor.
Table 9. Pairwise comparison under severity (each cell is a (µ, υ) pair)
Working at Height Lifting & Moving Confined Space Electrical Site Management
Working at Height 0.4 0.4 0.25 0.55 0.55 0.25 0.55 0.25 0.7 0.1
Lifting & Moving Equipment 0.55 0.25 0.4 0.4 0.55 0.25 0.55 0.25 0.7 0.1
Confined Space 0.25 0.55 0.25 0.55 0.4 0.4 0.4 0.4 0.55 0.25
Electrical 0.25 0.55 0.25 0.55 0.4 0.4 0.4 0.4 0.55 0.25
General Site Management 0.1 0.7 0.1 0.7 0.25 0.55 0.25 0.55 0.4 0.4
WORKING AT HEIGHT scaffolding, ladders falling from height dropping objects belt & protective gears
scaffolding, ladders 0.4 0.4 0.4 0.4 0.25 0.55 0.55 0.25
falling from height, slips 0.4 0.4 0.4 0.4 0.25 0.55 0.7 0.1
dropping objects from height 0.55 0.25 0.55 0.25 0.4 0.4 0.7 0.1
safety belt & other protective gears 0.25 0.55 0.1 0.7 0.1 0.7 0.4 0.4
CONFINED SPACE lighting & ventilations standby / watch simultaneous operations administrative control
lighting & ventilations 0.4 0.4 0.55 0.25 0.55 0.25 0.55 0.25
standby / watch 0.25 0.55 0.4 0.4 0.25 0.55 0.25 0.55
simultaneous operations inside 0.25 0.55 0.55 0.25 0.4 0.4 0.55 0.25
administrative control 0.25 0.55 0.55 0.25 0.25 0.55 0.4 0.4
ELECTRICAL dragging of live cables energization/LOTO breaker/earthing short circuit/insulation striking lines
dragging of live cables 0.4 0.4 0.25 0.55 0.25 0.55 0.25 0.55 0.1 0.7
partial energization of equip/LOTO 0.55 0.25 0.4 0.4 0.4 0.4 0.4 0.4 0.55 0.25
circuit breaker/earthing/extension 0.55 0.25 0.4 0.4 0.4 0.4 0.4 0.4 0.55 0.25
short circuit/insulation failure 0.55 0.25 0.4 0.4 0.4 0.4 0.4 0.4 0.55 0.25
striking Underground/Overhead lines 0.7 0.1 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
GENERAL SITE MANAGEMENT work permits usage of proper PPEs Barricade/sign boards House keeping handling of chemicals
work permits 0.4 0.4 0.25 0.55 0.25 0.55 0.25 0.55 0.55 0.25
usage of proper PPEs 0.55 0.25 0.4 0.4 0.55 0.25 0.55 0.25 0.55 0.25
Barricading/ sign boards 0.55 0.25 0.25 0.55 0.4 0.4 0.55 0.25 0.55 0.25
House keeping 0.55 0.25 0.25 0.55 0.25 0.55 0.4 0.4 0.55 0.25
handling of chemicals 0.25 0.55 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
Table 10. Pairwise comparison - occurrence
Working at Height Lifting & Moving Confined Space Electrical Site Management
WORKING AT HEIGHT 0.4 0.4 0.55 0.25 0.7 0.1 0.55 0.25 0.7 0.1
LIFTING & MOVING EQUIPMENT 0.25 0.55 0.4 0.4 0.7 0.1 0.7 0.1 0.7 0.1
CONFINED SPACE 0.1 0.7 0.1 0.7 0.4 0.4 0.55 0.25 0.25 0.55
ELECTRICAL 0.25 0.55 0.1 0.7 0.25 0.55 0.4 0.4 0.25 0.55
GENERAL SITE MANAGEMENT 0.1 0.7 0.1 0.7 0.55 0.25 0.55 0.25 0.4 0.4
WORKING AT HEIGHT scaffolding, ladders falling from height dropping objects belt & protective gears
scaffolding, ladders 0.4 0.4 0.4 0.4 0.1 0.7 0.55 0.25
falling from height, slips 0.4 0.4 0.4 0.4 0.25 0.55 0.7 0.1
dropping objects from height 0.7 0.1 0.55 0.25 0.4 0.4 0.7 0.1
safety belt & other protective gears 0.25 0.55 0.1 0.7 0.1 0.7 0.4 0.4
LIFTING & MOVING EQUIPMENT lifting/moving lifting equipment moving equipment
manual lifting/moving 0.4 0.4 0.7 0.1 0.55 0.25
lifting equipment 0.1 0.7 0.4 0.4 0.25 0.55
vehicle and other moving equipment 0.25 0.55 0.55 0.25 0.4 0.4
CONFINED SPACE lighting & ventilations standby / watch simultaneous operations administrative control
lighting & ventilations 0.4 0.4 0.55 0.25 0.55 0.25 0.55 0.25
standby / watch 0.25 0.55 0.4 0.4 0.4 0.4 0.25 0.55
simultaneous operations inside 0.25 0.55 0.4 0.4 0.4 0.4 0.55 0.25
administrative control 0.25 0.55 0.55 0.25 0.25 0.55 0.4 0.4
ELECTRICAL dragging of live cables energization/LOTO breaker/earthing short circuit/insulation striking lines
dragging of live cables 0.4 0.4 0.55 0.25 0.25 0.55 0.25 0.55 0.7 0.1
partial energization of equipment/LOTO 0.25 0.55 0.4 0.4 0.25 0.55 0.4 0.4 0.55 0.25
circuit breaker/earthing/extension 0.55 0.25 0.55 0.25 0.4 0.4 0.4 0.4 0.55 0.25
short circuit/insulation failure 0.55 0.25 0.4 0.4 0.4 0.4 0.4 0.4 0.55 0.25
striking Underground/Overhead lines 0.1 0.7 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
GENERAL SITE MANAGEMENT work permits usage of proper PPEs Barricade/sign boards House keeping handling of chemicals
work permits 0.4 0.4 0.25 0.55 0.25 0.55 0.25 0.55 0.55 0.25
usage of proper PPEs 0.55 0.25 0.4 0.4 0.4 0.4 0.55 0.25 0.55 0.25
Barricading/ sign boards 0.55 0.25 0.4 0.4 0.4 0.4 0.55 0.25 0.55 0.25
House keeping 0.55 0.25 0.25 0.55 0.25 0.55 0.4 0.4 0.55 0.25
handling of chemicals 0.25 0.55 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
Table 11. Pairwise comparison - detection
Working at Height Lifting & Moving Confined Space Electrical Site Management
WORKING AT HEIGHT 0.4 0.4 0.4 0.4 0.25 0.55 0.4 0.4 0.4 0.4
LIFTING & MOVING EQUIPMENT 0.4 0.4 0.4 0.4 0.25 0.55 0.4 0.4 0.55 0.25
CONFINED SPACE 0.55 0.25 0.55 0.25 0.4 0.4 0.25 0.55 0.55 0.25
ELECTRICAL 0.4 0.4 0.4 0.4 0.55 0.25 0.4 0.4 0.4 0.4
GENERAL SITE MANAGEMENT 0.4 0.4 0.1 0.7 0.25 0.55 0.4 0.4 0.4 0.4
WORKING AT HEIGHT scaffolding, ladders falling from height dropping objects belt & protective gears
scaffolding, ladders 0.4 0.4 0.25 0.55 0.25 0.55 0.4 0.4
falling from height, slips 0.55 0.25 0.4 0.4 0.4 0.4 0.55 0.25
dropping objects from height 0.55 0.25 0.4 0.4 0.4 0.4 0.55 0.25
safety belt & other protective gears 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
CONFINED SPACE lighting & ventilations standby / watch simultaneous operations administrative control
lighting & ventilations 0.4 0.4 0.55 0.25 0.55 0.25 0.55 0.25
standby / watch 0.25 0.55 0.4 0.4 0.25 0.55 0.55 0.25
simultaneous operations inside 0.25 0.55 0.55 0.25 0.4 0.4 0.55 0.25
administrative control 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
ELECTRICAL dragging of live cables energization/LOTO breaker/earthing short circuit/insulation striking lines
dragging of live cables 0.4 0.4 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
energization of equipment/LOTO 0.55 0.25 0.4 0.4 0.55 0.25 0.55 0.25 0.55 0.25
circuit breaker/earthing/extension 0.55 0.25 0.25 0.55 0.4 0.4 0.4 0.4 0.55 0.25
short circuit/insulation failure 0.55 0.25 0.25 0.55 0.4 0.4 0.4 0.4 0.55 0.25
striking Underground/Overhead lines 0.4 0.4 0.25 0.55 0.25 0.55 0.25 0.55 0.4 0.4
GENERAL SITE MANAGEMENT work permits usage of proper PPEs Barricade/sign boards House keeping handling of chemicals
work permits 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.25 0.55
usage of proper PPEs 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4
Barricading/ sign boards 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.25 0.55
House keeping 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.25 0.55
handling of chemicals 0.55 0.25 0.4 0.4 0.55 0.25 0.55 0.25 0.4 0.4
Table 12. Ranks of risk factors (each row lists the (µ, υ) weights under severity, occurrence and detection, the combined risk priority number pair, the ρ value, the rank, and the classification)
CONFINED SPACE
8 lighting & ventilations 0.01559 0.86425 0.01284 0.88738 0.01737 0.85478 0.00000348 0.99778 0.50110664 5 Major
9 standby / watch 0.00998 0.89817 0.00958 0.90816 0.01166 0.88795 0.00000112 0.99895 0.50052284 18 Minor
10 simultaneous operations inside 0.01595 0.86717 0.01101 0.90119 0.01666 0.86302 0.00000292 0.99820 0.50089596 7 Minor
11 administrative control 0.01121 0.89149 0.00812 0.91614 0.01090 0.89479 0.00000099 0.99904 0.5004777 20 Minor
ELECTRICAL
12 dragging of live cables 0.00906 0.89932 0.00780 0.91569 0.01377 0.86780 0.00000097 0.99888 0.5005601 16 Minor
13 energization of equipment / LOTO 0.01510 0.87300 0.00598 0.93136 0.02390 0.83422 0.00000216 0.99855 0.50072041 14 Minor
14 circuit breaker/earthing/extension 0.01505 0.87374 0.00852 0.91555 0.01722 0.86115 0.00000221 0.99852 0.50073805 13 Minor
15 short circuit/insulation failure 0.01510 0.87300 0.00846 0.91616 0.01810 0.85786 0.00000231 0.99849 0.5007544 11 Minor
16 striking underground/overhead lines 0.01009 0.89390 0.00515 0.93307 0.01182 0.88248 0.00000061 0.99917 0.50041668 21 Minor
7.1 Limitations
1. It is influenced by regional standards, practices, and organizational cultures.
2. Results cannot be generalized.
3. It is a perception-based assessment.
References
[1] K.T. Atanassov, Intuitionistic Fuzzy Sets: Theory and Applications, Physica-Verlag,
Heidelberg, 1999.
[2] S. Cebi, E. Ilbahar, Warehouse risk assessment using interval valued intuitionistic
fuzzy AHP, Int. J. Anal. Hierarchy Process. 10 (2018) 243–253.
[3] DOSH Malaysia, Guidelines for Safety & Health on Construction Sites, (n.d.) 1–9.
[4] E. Szmidt, J. Kacprzyk, Distances between intuitionistic fuzzy sets, Fuzzy Sets Syst. 114 (2000)
505–518.
[5] ILO, Safety and health in construction, 1992.
[6] K. Jones, Top Causes of Construction Accident Injuries and How to Prevent Them |
ConstructConnect.com, 2015 (2017).
[7] C. Kahraman, B. Öztaysi, S.Ç. Onar, O. Dogan, Intuitionistic fuzzy originated interval
type-2 fuzzy AHP: An application to damless hydroelectric power plants, Int. J. Anal.
Hierarchy Process. 10 (2018) 266–292.
[8] H. Liao, Z. Xu, Consistency of the fused intuitionistic fuzzy preference relation in
group intuitionistic fuzzy analytic hierarchy process, Appl. Soft Comput. J. 35 (2015)
812–826.
[9] M.M. Xia, Z.S. Xu, On consensus in group decision making based on fuzzy preference
relations, Int. J. Intell. Syst. 26 (2011) 787–813.
[10] H. Nguyen, An application of intuitionistic fuzzy analytic hierarchy process in ship
system risk estimation, J. KONES Powertrain Transp. 23 (2016).
[11] G. Nirmala, G. Uthra, Triangular Intuitionistic Fuzzy Ahp and Its Application To
Select Best Product of Notebook Computer, Int. J. Pure Appl. Math. 113 (2017) 253–
261.
[12] OSHA, Worker Safety Series Construction, Osha 3252-05N 2005. (2005) 1–18.
[13] İ. Otay, B. Oztaysi, S. Cevik Onar, C. Kahraman, Multi-expert performance evaluation
of healthcare institutions using an integrated intuitionistic fuzzy AHP&DEA
methodology, Knowledge-Based Syst. 133 (2017) 90–106.
[14] T.L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
[15] T.L. Saaty, Decision Making for Leaders, RWS Publications, Pittsburgh, 1995.
[16] R. Sadiq, S. Tesfamariam, Environmental decision-making under uncertainty using
intuitionistic fuzzy analytic hierarchy process (IF-AHP), Stoch. Environ. Res. Risk
Assess. 23 (2009) 75–91.
[17] UK Government, Health and safety in construction, 3rd ed., 1996.
[18] F.K.W. WONG, A.P.C. CHAN, A.K.D. WONG, C.K.H. HON, T.N.Y. CHOI,
Electrical and mechanical safety in construction, Icsu. 2015 (2015) 106.
[19] J. Wu, H. bin Huang, Q. wei Cao, Research on AHP with interval-valued intuitionistic
fuzzy sets and its application in multi-criteria decision making problems, Appl. Math.
Model. 37 (2013) 9898–9906.
[20] Z. Xu, H. Liao, Intuitionistic fuzzy analytic hierarchy process, IEEE Trans. Fuzzy
Syst. 22 (2014) 749–761.
[21] Y. Yu, A. Darko, A.P.C. Chan, C. Chen, F. Bao, Evaluation and ranking of risk factors
in transnational public-private partnerships projects: Case study based on the
intuitionistic fuzzy analytic hierarchy process, J. Infrastruct. Syst. 24 (2018) 1–13.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Onder Akinci*1, Andrew Staszak2, Hyun-Su Kim1, Connor Rivard, Michael Stahl2
1 Daros Consulting
2 SciRisq Inc.
[email protected]
Abstract
Demand for natural gas and increased production in the US have enabled development of new LNG
export facilities. In parallel with the development of new facilities, federal and local codes in the US
have evolved significantly in the last decade, and today's standards require the design and
construction of robust facilities. Depending on the location of the LNG plant, effects of hurricanes,
earthquakes, hydrocarbon accidents or floods are typically considered in the design. This study
examines the evolution of resilient design requirements and best practices for LNG export terminals.
Case studies are presented to demonstrate the impact of resilient design requirements on the design
of a facility.
LNG export terminals are considered to be some of the safest hydrocarbon processing facilities in
the industry. Global experience indicates that LNG facilities can be successfully constructed and
safely operated in some of the harshest environments: close to the Arctic Circle, near flood zones,
on offshore vessels, or in high-seismicity regions.
standards and mature liquefaction technologies ensure that LNG terminals are capable of
withstanding major accidents or natural events. Additionally, Facility Siting and Consequence
Analysis for LNG plants require design and process safety measures against fires and explosions.
When these onerous design cases are compared with those for typical upstream and downstream
facilities or civil infrastructure, it can further be inferred that risks to the public are generally smaller
for LNG plants. Plants designed according to current standards using state-of-the-art tools are
expected to be resilient against natural hazards and accident events.
Keywords: LNG, Resilience, Inherent Safety, Facility Siting, Blast, Fire, Cryogenic, PFP, Safety
Critical Elements
1. Introduction
Resilience for Oil&Gas facilities can be defined as the capability to recover from or adjust easily
to extreme events or changes beyond typical design conditions. Considering the importance of Oil
and Gas infrastructure and assets, it is imperative to have a better understanding of resilience in
the context of process safety and operations. Resilience expectations and requirements for
designated critical infrastructure sectors including the energy sector were also captured in
Presidential Policy Directive (PPD-21) Critical Infrastructure Security and Resilience [1]. Roles
and responsibilities of owners and the government agencies were listed in PPD-21 to ensure
advancement of a national unity of effort to strengthen and maintain secure, functioning, and
resilient critical infrastructure.
Design codes and standards developed by industry organizations including ASME, API, NFPA
and ASCE provide provisions for conventional and extreme event design of equipment and
structures at the facilities. These codes (or recommended practices) also cover design for extreme
events such as high magnitude earthquakes, floods, fires, explosions, line break and
overpressurization scenarios. These code provisions are typically intended for the protection of life
and assets. However, the plant may not be fully functional after a major event, and in some cases it
may take months or years to return to service. There are no widely accepted guidelines in the
industry for the resilient design of LNG plants. Use of best practices, provision of extra design
margins, and lessons learned from previous projects are regarded as mitigation measures against
extreme events. However, these types of measures do not provide the assurance required to confirm
the resiliency of a facility.
To design a resilient facility, the applicable risks and response limits should be established.
Additionally, the level of risk tolerance and design objectives shall be laid out at an early stage. It
is understood that expecting no or minimal damage after any applicable catastrophic event may
not be feasible. Guidelines and recommendations are presented in this study for development of
LNG facilities in a practical way. The following steps are discussed for resilient design:
Identify major risks and consequences
Design and implement barriers to mitigate risks
Evaluate benefits of barriers and test against scenarios beyond the design basis to
understand failure mechanisms
Quantify the limit states and return periods for governing cases
The steps can be integrated with or added as a design iteration after designing the facility for
conventional loads.
The assessment and quantification of process safety risks throughout the energy industry is
considered a well understood topic. Extensive reviews, papers, analyses, regulations, and guidance
documents have been developed over the years and have covered most aspects of the topic.
LNG-specific risk has only recently become a more significant focus, likely spurred by the
significant increase in LNG demand, which in turn has driven an increase in LNG project
developments.
In general, the basic processes of identifying hazards, quantifying their extents, and developing
frequencies and probability of events are no different for LNG facilities. There are numerous
papers and resources which cover these topics [2 to 5]. A major difference is due to the need to
address cryogenic hazards facility-wide. There are other significant differences in the elements
that should be considered as standard practice, or at a minimum addressed qualitatively, in the
assessment of process safety risks for resilient LNG design.
Before we proceed, the topic of criteria must be addressed. As noted in the introduction, general
risk guidelines and approaches focus on the safety of personnel but allow significant leniency in
protection of the facility. This is not unintentional nor incorrect, but resilient LNG facility design
requires a much stronger focus on the protection of assets. This focus leads to a number of
distinguishing elements when addressing process safety risk and will be the focus of this section.
In an effort to review these critical differences and outline elements for consideration, we will
summarize risk simply as the product of Likelihood and Consequence. Students of risk analysis
acknowledge that risk calculation is much more complex, but the focus of this paper is on a higher
level than nuanced and detailed calculations. Accepting our simplified model, we can examine
representations of Likelihood and Consequence separately and discuss those elements that are
relevant to the design of resilient LNG facilities.
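As a purely illustrative sketch of this simplified view, and not a method from the paper, the facility-level measure is simply a frequency-weighted sum of scenario consequences; the scenario names and numbers below are invented for the example.

```python
# Simplified risk model: risk = likelihood x consequence, summed over scenarios.
# Frequencies are per year; "consequence" is any consistent measure (e.g. expected
# repair cost or outage days) chosen by the owner for resilience screening.
scenarios = {
    "small LNG leak, pool fire":   {"frequency": 1e-3, "consequence": 5.0},
    "vapor cloud explosion":       {"frequency": 1e-5, "consequence": 400.0},
    "design-exceeding hurricane":  {"frequency": 2e-3, "consequence": 60.0},
}

total_risk = sum(s["frequency"] * s["consequence"] for s in scenarios.values())
print(f"Frequency-weighted consequence: {total_risk:.3f} per year")
```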
Figure 2. Example of transient release modelling and the effect on fire size [6].
2.1.3 Physical Barrier Influences
Essentially all of the LNG specific elements discussed here are connected, thus
highlighting their criticality in assessment of a resilient facility. Physical barrier influences
are an explicit example of that, as they can play a major role in the transient effects of
consequences. In general, for process safety risk assessments, a majority of the
consequence assessment will be based on phenomenological modelling. These are usually
open-field calculations, where adjustments can be made to account for phenomena such as
dispersion limitations for indoor releases or pool spread due to bunding. In continuing with
the theme of an analysis that can assess or fully influence the design of assets, risks should
consider more elements of physical barrier influences. Examples of these would be the use
of extensive curbing and drain-ways in LNG facilities and their effect on pool formation
and duration, the use of flange guards to limit cryogenic spray and similar events.
2.1.4 Active Mitigation
A resilient facility will utilize active mitigation systems both to limit process hazards from
occurring (e.g. process controls, blowdown, redundancies) and limit consequences
resulting from process failures (e.g. isolation, foam, water spray). In order to design these
systems, they must be assessed and/or their effect accounted for within process safety risk.
There is an extensive number of approaches that can be used to address these elements, which are
not the focus here, but the choice should be consistent with the overall approach of the risk
assessment (its introduced error and its level of assumptions and simplifications).
2.1.5 Performance and Optimization
As discussed above, active mitigation and control are critical to the design of a resilient facility
and are typical of LNG facilities. Asset and general risk can be greatly reduced through the use of
optimized process safety controls, which encompass systems such as isolation, shutdown,
blowdown, detection, and other active mitigations. The assessment approach must consider, or
provide the means to evaluate, their effectiveness. Through these evaluations, both performance
verification and optimization of the system design can be conducted. It is important to note that in
many cases (depending on the facility and overall design philosophy) these elements may be
addressed through additional studies and analyses, where the supporting data are taken from the
process safety risk assessment.
2.1.6 Receptor and Equipment Response
Specific to asset protection as it relates to process safety, elements of receptors and
equipment should include a more detailed or robust examination. Examples of these are
critical valve survivability and structural integrity of piperacks and process module decks.
If process controls are being used for a resilient design, the risk assessment should not only
account for their operation in the calculations but should also provide an assessment (and
design criteria as necessary) to ensure that those elements will survive and can perform
their function. Additionally, this type of analysis is required to assess the risks of escalation
events. Consequences must be determined in a way that allows for escalation prediction.
Figure 3 shows an example of geometric effects on LNG vapor cloud explosion events.
When evaluating and determining the design limits or performance requirements of these
critical receptors, those conducting the risk assessment should be able to identify when
more complex modelling is required. In cases such as that shown below, critical receptors
located in the "U"-shaped portion of the structure could be under-evaluated with simple
modelling approaches, resulting in systems unable to perform their intended functions
following an initiating event.
Figure 3. Example of LNG vapor cloud explosion effects including complex geometric effects.
Hazards associated with location of process plant permanent and temporary buildings are
evaluated according to API 752 [7] and 753 [8] for onshore facilities in the US. These standards
were mainly intended for petrochemical facilities. Some LNG plants follow API 752 to locate the
occupied buildings outside the fire and blast zones. However, the applicability of these facility siting
standards to LNG facilities can raise questions. The current framework for facility siting of LNG
plants accounts for risk to the public outside the plant boundaries, yet risks to plant buildings and
personnel are critical elements of resilient design. It is therefore recommended to perform facility
siting studies for LNG plants, so that the location and blast (and fire) rating of buildings can be
specified accordingly. This can also be integrated with a comprehensive risk-based approach (QRA)
and checked against the applicable risk criteria as discussed in NFPA 59A-2019 [9].
Facility siting standards and recommended practices target improving safety of public and plant
personnel. For the case of unoccupied buildings such as equipment rooms or process modules,
facility siting guidelines offer neither mitigation measures nor require protection against extreme
loads. This critical aspect of resilient design is left to the plant owner and the approach can vary
significantly based on the operator’s experience and risk criteria. As noted, a robust QRA
addressing all aspects of risk (process safety, environmental, siting, etc.) is the most effective form
of evaluation. That analysis must address the specifics of LNG facilities and processes, such as
those discussed in this paper; otherwise the result is unlikely to meet the requirements of a
"resilient LNG facility design."
4. Environmental Risks
Resilience of infrastructure against environmental loads has been studied extensively, but not all
industries have participated in these efforts. Flooding of energy and oil facilities during the storms
of the last decade points to areas for improvement in resilient design against environmental
conditions. Designing a major petrochemical plant beyond the requirements of conventional design
code allowables depends on the owner's design criteria. Also, changes in design codes and criteria
over time can be a critical limitation for aging plants. Failures at major LNG facilities due to
natural events are very rare and not well documented, but lessons learned from previous events at
other types of facilities provide valuable information about potential vulnerabilities [10].
Conventional design load cases for LNG facilities may include earthquake, wind, flood and
tsunami depending on the location of facility as discussed in NFPA 59A [9] and ASCE 7 [11].
These codes provide the applicable load cases and refer to design codes from ACI, API, AISC and
other industry organizations. For some cases, government agencies or owners specify additional
load cases such as wind-borne debris and blast projectile impact [12]. The main objective of design
against environmental loads is protecting safety of the public and staff on site. This is achieved by
checking capacity of structural systems and barriers against specified loads. Some examples
include Operating Basis and Safe Shutdown earthquakes or rare hurricanes.
Natural hazards have varying return periods. For less frequent events the allowable design limits
may be higher, whereas for short-return-period events more stringent design criteria apply. It is
believed that use of lower allowable stresses limits the damage due to a more frequent event and
results in a resilient design. However, the inherent safety factors in design codes do not always
ensure resilience. It is recommended to assess and quantify the level of damage to critical systems
due to credible events so that the plant can return to service after a short outage period.
If production at a facility depends on external or an independent power generation unit, design
basis and reliability of those systems need to match the plant’s availability targets. For a
hypothetical case, some elements of the power system could be designed for earthquake or wind
loads corresponding to a relatively lower return period. The plant may then lose power if the design
basis of power supply or generation system is exceeded, and it can take a considerable time to
restore production operations.
Combining the actual environmental risks with process safety risks can give a more refined risk
profile for the plant’s safety and availability. Thus, the original design or upgrades can account for
resiliency in a more structured and quantified way. This approach can help owners, investors,
buyers, and insurers better understand the plant’s characteristics.
In many cases, produced LNG must be transported to areas where consumption and demand are
high, so transportation plays a critical role in successful LNG operations. More efficient
transportation methods for LNG have long been sought, since liquefied gas requires special
facilities for safe delivery and for protection against losses due to regasification. It is well
established to transport LNG via a large ship with an LNG cargo containment system, the LNG
carrier. Recently, floating units with LNG production, storage, and offloading (LNG FPSO) or with
LNG storage and regasification facilities (FSRU) have been newly constructed or converted from
existing gas carriers and installed at nearshore or offshore sites. Several international and national
codes and standards apply to the transportation of LNG to ensure the safety of the vessel, the
terminals, and the communities near the shipping channels. Additionally, these provisions ensure
high operability rates and rapid recovery from adverse effects.
For the design and construction of LNG carrying ships, it is mandatory to conform to the
International Code for the Construction and Equipment of Ships Carrying Liquefied Gases in Bulk
(IGC Code) under the 1974 SOLAS Convention [13]. LNG FPSOs and FSRUs are also
recommended to be designed following the IGC Code. Classification societies (e.g., ABS, BV,
DNV GL) provide specific design guides in their steel ship design rules for new construction of
LNG carriers, while also prescribing classification rules for the design, construction, and inspection
of offshore LNG FPSOs. For LNG carriers operated in North America, ship management procedures
and emergency plans are required to be submitted to and approved by the US Coast Guard. ISO
28460 relates to marine operations during the LNG carrier's port transit and the cargo transfer at the
ship-shore interface [14].
International Maritime Organization (IMO) classifies LNG carriers together with their cargo
containment system (CCS), which consists of insulated cargo tanks inside the inner hull. The LNG
carrier types by CCS are presented in Figure 6 at a high level. It is worth noting that in the last
decade, the membrane type (No96 and Mark III) for LNG carrier’s CCS became more popular to
maximize the cargo volume. For all types, the liquefied gas carrying ship is constructed to adopt
double hull structures that provide hold spaces such as the ballast tank or cofferdam between the
inner hull and outer shell. The hold spaces between the inner hull and outer shell are typically
required to have 4 to 6 ft widths for protection of the cargo tanks in case of ship side-on collision
and grounding accidents.
To design resilient floating LNG units (LNG FPSO, FSRU, etc.) and LNG carriers, several technical
challenges have to be considered. These include sloshing impact on the cargo containment system,
structural integrity under accidental collision events, cryogenic-spill-induced brittle failure of
materials, fire risk escalated by flammable gas formation as spilled cryogenic liquid warms, and
explosion risk due to the rapid liquid-to-gas expansion. In the past couple of decades, many studies
have been carried out on the sloshing response and collision behaviour of gas carrying or storing
facilities. The main defence against accidental events should be eliminating the risk, and safeguards
should be implemented to mitigate damage to safety critical systems. This in turn would ensure
continuation of service or limited damage after an event.
Figure 7. Deformed shape of bow striking ship and membrane type LNG carrier side [24].
As discussed in this section, there have been significant improvements in the design of LNG
carriers. The rulesets required by the class societies and owners result in a robust hull design that can
withstand damage or recover from accidental conditions with minor repairs. Experience gained in
the ship building industry demonstrates how resilient design practices can be beneficial to overall
supply chain by increasing reliability even under adverse conditions.
Barriers at LNG plants protect the staff, public and the asset from accidental events. These can be
grouped into several categories depending on the threat that is considered. Common applications
include blast resistant components, fire protection and cryogenic spill protection. Elimination of
threat should be the first line of defence, but it would not be practical for many scenarios.
Therefore, blast and fire resistant design plays a critical role in safety and resilience of plants. For
onshore and offshore LNG facilities, different design criteria apply. When resilience is
considered, onshore plants can implement some of the best practices from offshore applications.
For the plant to survive an accident with minimal damage, it is imperative to protect people. In
addition to this, protecting the asset and mainly functionality of the Safety Critical Elements
(SCEs) should be considered. The scope of this effort can include process modules, equipment,
piping systems, support structures, tanks, and buildings. An initial screening analysis can be
performed to group the systems into categories. Then, critical systems can be designed to withstand
extreme events for a predefined failure frequency based on plant owner’s risk acceptance criteria.
Some examples of this comprehensive approach are listed below for illustration purposes.
Fire Hazards and Protection: Design for credible events including jet fire as applicable.
Typically, API 2218 [25] and UL 1709 [26] are followed for passive fire protection.
However, it is well known that gas plants are also susceptible to jet fire risks. API RP 2FB
[27] guidelines can be followed at a high level to mitigate jet fire risks at onshore plants.
Also, jet fire rated PFP per ISO 22899-1 [28] should be specified for modules and
equipment in areas with jet fire risks. Use of risk based PFP optimization methods as
discussed in a previous study by Akinci et al. [29] can avoid overconservatism and enable
placement of Passive Fire Protection (PFP) to where it is really needed for safety and
resilience.
Cryogenic Spill Protection: Determine areas with cryogenic spill risk and specify
Cryogenic Spill Protection (CSP) accordingly. The selected CSP product should be tested
and certified per ISO 20088-1 [30] rather than assuming a certain product can resist
cryogenic spill. Also, effect of water ingress or presence of moisture (if applicable) in the
CSP system should be considered as rapid cooling and freezing expansion can have an
adverse effect on the integrity. Additionally, the cryogenic spill might be followed by a fire
and dual systems (CFP + PFP) are required for some applications.
Piping Systems: Design critical piping systems for blast and fire loads and protect structural
supports (e.g. piperacks) against applicable fire scenarios including jet fire. This does not
only protect the asset, but also minimizes the risk of escalation. FABIG guidelines [31]
recommend use of different loads and acceptance criteria based on the return periods of
events. Use of linear static analysis methods can be acceptable for events with 100 or 200
year return periods, but transient non-linear analysis methods are recommended for
extreme events to avoid conservatism [29].
Active Fire Protection: Design deluge systems for blast loads to survive the initial event.
Protecting only certain parts of the systems limits availability after an event.
Selection of PFP: Selected PFP products can have a dual purpose and resist both fire and
cryogenic spill events. Spray applied systems or removable systems can be specified. Use
of removable systems (e.g. flexible jackets) can be advantageous in some cases. These
systems enable inspection of protected structural members, equipment, and piping. Also,
these systems can be replaced in case of damage after an accident event instead of replacing
the protection and protected member depending on the intensity of fire.
Safety Critical Elements: SCEs at a plant can consist of mechanical, electrical, and
structural systems. Control of process systems is critical to safe shutdown. Most plants are
designed to be fail safe. This approach successfully mitigates escalation risks, but the plant
may still suffer major damage. Protection of SCEs can reduce the damage in a plant
due to an extreme event. This in turn increases resilience of critical systems. Emergency
Systems Survivability Analysis (ESSA) can provide guidance for identification,
assessment and qualification of critical systems to perform the design functions.
Design of Equipment Buildings: Buildings (occupied and unoccupied) at Oil and Gas
facilities have been historically designed using ASCE blast guidelines [32] when they are
in blast zones. The intent of this design guide and the focus are on protection of building
occupants. Therefore, functionality and survivability of equipment buildings or modules
need to be addressed separately. A simplified sketch of a typical equipment room is
provided in Figure 8. For most plants, the focus of fire and blast design has been protection
of the building enclosure. However, for equipment buildings, contents of the building can
be equally critical. The functionality of these buildings can be maintained or restored in a
short period of time with proper design measures. The building exterior can protect against
ingress of blast overpressures or heat, but inertial loads experienced by the internal
components can be detrimental. The following recommendations are expected to increase
resilience of elevated equipment buildings in fire and blast zones:
o Design building enclosure for low damage level [32] to minimize permanent
damage to structural members.
o Provide adequate spacing between building walls and equipment to allow blast
deformation, and do not attach critical equipment on the walls.
o Check spacing between equipment and the potential interaction effects due to
inertial effects.
o Add blast and fire resistant skirts around the base of building to prevent gas
accumulation underneath, and to protect the cables and cable trays from blast (drag)
and fire loads.
o Provide fire and blast dampers at the penetrations to avoid ingress of blast pressures
and flame.
o Check accelerations of elevated buildings due to blast loads and design columns
and bracing systems to limit internal damage.
o Procure seismically qualified type equipment per IEEE [33] or similar standards,
and design equipment anchors for inertial loads due to blast deformations. This will
increase survivability of critical equipment.
o Check dynamic response of cable trays and supports such as hangers due to inertial
loads (e.g. rapid deformation of building roof induces inertial loads on the cable
trays).
o Qualify elevated floor and cable penetration details for blast deformations and
inertial loads (cables might be damaged if there isn’t sufficient slack).
o Provide backup power systems to maintain functionality of equipment if power
supply might be interrupted due to an accidental event.
o Check internal temperature of building due to an external fire event and required
survivability times under full load considering heat generated by the equipment.
Consider qualifying the HVAC and power supply for blast and fire cases if required
to maintain functionality.
o Perform a comprehensive ductility level analysis using detailed models to capture
interaction of structural, mechanical and electrical systems [29].
Key to designing resilient LNG facilities is a thorough assessment of the range of conditions the
facility can experience. Process hazard analyses (PHA) are critical studies to be performed that
systematically identify scenarios which may exceed the current design limits, highlight what
preventative and mitigative barriers are in place, and identify gaps in the design that may warrant
the implementation of additional barriers. Hazard and Operability Studies (HAZOP), Layers of
Protection Analysis (LOPA), and other hazard analyses are powerful tools to identify hazards with
the goal of reducing the risk from these hazards to as low as reasonably practicable (ALARP). The
Center for Chemical Process Safety (CCPS) has published guidance on a variety of hazard analysis
techniques [34] and simplified risk assessment methods [35].
To conduct robust hazard analyses, it is essential that the teams performing these analyses are
multidisciplinary. Representation from engineering, control systems, operations, commissioning,
technology licensors, equipment specialists, etc. should be considered based on the process and
systems being assessed. The hazardous scenarios that are identified often form the basis for which
scenarios are considered for detailed consequence modelling, which ultimately are used to
establish design requirements in the event of fires, explosions, or cryogenic spills. Design and
operations procedures accounting for these risks are expected to result in more robust and resilient
plants. Use of appropriate methods and numerical tools (CFD, FEA etc.) in calculation of hazards
and analysis of safeguards can help to optimize the design without overconservatism.
Several process safety risk assessment and design aspects have been discussed in this study for
resilient design. While separated as Consequence and Frequency contributors to the risk equation,
it is clear that these are interconnected. Any element included in one aspect of the risk equation
must be appropriately addressed in the other half. For example, an assessment or design can only
gain insight and value from the inclusion of safety system intervention into the effect of release
rates over time, if these elements are addressed on the probabilistic side of the risk equation.
It is important to note that any robust design is not just about the evaluation or assessment of
process safety risks; the assessment should also be used in sensitivities, ALARP determination,
cost-benefit analysis and the like. It is only through these mechanisms that a resilient design can truly be
developed. In most cases this means that there will be some design cycles, or iteration during the
design phase, which ultimately will lead to optimization and proof of performance (i.e. verification
of a resilient design and operation).
Resilient design can be achieved practically if proper analysis methods and tools are used.
Quantification of risks and effective mitigation systems are key to success in this context. Owners
can make informed decisions and set the design criteria based on their acceptable risk tolerance.
Use of advanced analysis methods (CFD, FEA etc.) and risk-based design (with QRA) reduce the
conservatism and have been proven to be an effective approach for Oil and Gas facilities. The
entire supply chain from production to regasification can be analyzed and made more resilient.
Resilient design guidelines and methods discussed in this study can be implemented for other
process plants as well.
References
[1] Presidential Policy Directive/PPD-21, “Critical Infrastructure Security and Resilience,”
The White House, Office of the Press Secretary, February 12, 2013.
[2] RR 151, “Good practice and pitfalls in risk assessment,” HSE, 2003.
[3] M.D. Christou, A. Amendola and M. Smeder, “The control of major accident hazards: The
land-use planning issue,” Journal of Hazardous Materials, Vol 65, 1999, pp 151-178.
[4] Guidelines for Quantitative Risk Assessment, TNO Purple Book, 1st Edition, 1999.
[5] Guidelines for Chemical Process Quantitative Risk Analysis, Center for Chemical Process
Safety, American Institute of Chemical Engineers, Second Edition, 2000.
[6] S. Ganjam, A. Staszak, and R. Rodriguez, “Use and Comparison of Different Passive Fire
Protection Assessment Methods for LNG Plants,” 14th Global Congress on Process Safety,
2018.
[7] API RP 752, Management of Hazards Associated with Location of Process Plant Buildings,
American Petroleum Institute, 2009.
[8] API RP 753, Management of Hazards Associated with Location of Process Plant Portable
Buildings, American Petroleum Institute, 2007.
[9] NFPA 59A, Standard for the Production, Storage, and Handling of Liquefied Natural Gas
(LNG), National Fire Protection Association, 2019.
[10] N. O. Akinci, “An Investigation on Seismic Resistance of Reinforced Concrete Chimneys,”
Proceedings of ASCE Structures Congress, 2009.
[11] ASCE 7, Minimum Design Loads and Associated Criteria for Buildings and Other
Structures (ASCE/SEI 7-16), 2017.
[12] A. Kohout, P. Jain, and W. Dick, “Review, identification and analysis of local impact of
projectile hazards in the LNG industry,” Journal of Loss Prevention in the Process
Industries, Vol. 57, January 2019.
[13] IGC Code, International Code for the Construction and Equipment of Ships carrying
Liquefied Gases in Bulk, International Maritime Organization, 2016.
[14] EN ISO 28460, Petroleum and Natural Gas Industries – Installation and Equipment for
Liquefied Natural Gas – Ship-to-shore Interface and Port Operations, European Standard
Committee, 2010.
[15] C. Guerro, “Innovative Solutions for LNG Carriers – A Classification Society View,”
International LNG Congress – London, March 14, 2016.
[16] D.H. Lee, M.K. Ha, S.Y. Kim, and S.C. Shin, “Research of design challenges and new
technologies for floating LNG.” International Journal of Naval Architecture and Ocean
Engineering (IJNAOE), 2014.
[17] H. Lee, J.W. Kim, and C. Hwang, “Dynamic Strength Analysis for Membrane Type LNG
Containment System Due to Sloshing Impact Load,” International Conference on Design
and Operation of Gas Carriers, London, 2004.
[18] J.M. Sohn, D.M. Bae, S.Y. Bae, and J.K. Paik, “Nonlinear structural behaviour of
membrane-type LNG carrier cargo containment systems under impact pressure loads at
−163 °C,” Ships and Offshore Structures, Vol. 12, pp. 722 – 733, 2017.
[19] Bureau Veritas, Strength Assessment of LNG Membrane Tanks Under Sloshing Loads
(BV Guideline Note NI 564), 2011.
[20] DNV GL, Sloshing Analysis of LNG Membrane Tanks (DNVGL-CG-0158), 2016.
[21] Lloyds Register, Sloshing Assessment Guidance Document for Membrane Tank LNG
Operations (LR Ship Right – Additional Design Procedures), 2009.
[22] R.M. Pitblado, J. Baik, G.J. Hughes, C. Ferro, and S.J. Shaw, “Consequences of liquefied
natural gas marine incidents.” AlChE/Process Safety Progress, Vol.24-2, pp. 108 – 114,
2005.
[23] H. Bogaert and B. Boon “New collision damage calculation tool used for quantitative risk
analysis for LNG import terminal,” 4th International Conference on Collision and
Grounding of Ships (ICCGS), 2007.
[24] S.R. Cho, K.W. Kang, J.H. Kim, J.S. Park, and J.W. Lee, “Optimal Soft Bow Design of an
LNG Carrier,” 4th International Conference on Collision and Grounding of Ships (ICCGS),
2007.
[25] API RP 2218, Fireproofing Practices in Petroleum and Petrochemical Processing Plants,
American Petroleum Institute, 2013.
[26] UL 1709, Standard for Rapid Rise Fire Tests of Protection Materials for Structural Steel,
Underwriters Laboratories Inc., 2017.
[27] API RP 2FB, Recommended Practice for the Design of Offshore Facilities against Fire and
Blast Loading, American Petroleum Institute, 2012.
[28] ISO 22899-1, Determination of the Resistance to Jet Fires of Passive Fire Protection
Materials, International Standard, 2007.
[29] N.O. Akinci, K. Parvathaneni, A. Kumar, H.S. Kim, M. Stahl, and X. Dai, “Advanced Fire
Integrity Analysis and PFP Optimization Methods for Petrochemical Facilities,” Mary Kay
O’Connor Process Safety Center 21st Annual International Symposium, 2018.
[30] ISO 20088-1, Determination of the resistance to cryogenic spillage of insulation materials,
International Standard, 2016.
[31] FABIG TN-08, Protection of Piping Systems Subject to Fires and Explosions, Steel
Construction Institute, 2005.
[32] ASCE Design of Blast-Resistant Buildings in Petrochemical Facilities, American Society
of Civil Engineers, Second Edition, 2010.
[33] IEEE 344, Standard for Seismic Qualification of Equipment for Nuclear Power Generating
Stations, Institute of Electrical and Electronics Engineers, 2013.
[34] Guidelines for Hazard Evaluation Procedures, Center for Chemical Process Safety,
American Institute of Chemical Engineers, Third Edition, 2011.
[35] Layers of Protection Analysis – Simplified Process Risk Assessment, Center for Chemical
Process Safety, American Institute of Chemical Engineers, Third Edition, 2001.
Abstract
In order to address onsite hazards and risk per American Petroleum Institute Recommended
Practices (API RP) 752 and 753, most United States-based companies and sites are conducting
detailed facility siting studies using either a consequence-based or risk-based approach. These
detailed analyses can give companies valuable feedback concerning the overall risk profile of their
facilities with respect to corporate and industry best practice risk tolerance criteria. However,
many companies are left wondering, “What next?” In other words, once the hazard and risk profiles
have been determined, owners/operators are struggling with implementing a prioritized action item
list to systematically drive down the site risk profile to As Low As Reasonably Practicable
(ALARP).
In order to reduce risk to ALARP, companies are gravitating towards the implementation of risk
mitigation programs. Such programs can involve multi-year programs and require significant
investment across a number of company facilities. If a quantitative risk assessment (QRA) is
available, it can be used as a powerful tool to develop cost-effective risk mitigation programs. A
QRA provides useful information about the dominant hazards (explosion, fire, toxic, etc.) and
highest risk receptors, and allows a company to prioritize investment across all of its assets, or at
individual facilities as needed.
This paper will utilize example case studies to demonstrate how a quantitative-risk-based approach
can be leveraged in a risk mitigation program to optimize risk mitigation solutions such as building
reinforcement, building replacement, and/or scenario mitigation. Also, the paper will present
examples of facility siting issues that the processing industries struggle with, such as focusing on implementing solutions to mitigate explosion hazards while neglecting other equally high or higher risk hazards, or implementing solutions company-wide that might only be effective for some assets, which results in unnecessary costs that do not mitigate the risk effectively.
Keywords: Risk mitigation, quantitative risk analysis, facility siting, risk-based building design.
Layers of Protection Analysis (LOPA) has become ubiquitous in the process industries as a risk
assessment and management tool. Designed to bridge the gap between fast but fully qualitative
PHA methodologies like Hazard and Operability (HAZOP) studies and more refined but
cumbersome quantitative risk assessments (QRA), LOPA provides an economical means of
quickly analyzing the risk of a system in a manner that is both reproducible and defensible.
However, LOPA is not a panacea. While many, even most, risk analysis scenarios can be
adequately covered by LOPA, due to the assumptions inherent in LOPA, many safeguards and
hazard scenarios cannot be adequately represented in a traditional LOPA. Worse, when people
or organizations dictate that LOPA shall be used as the risk analysis tool of choice, these
limitations and lack of understanding by those implementing LOPA can result in flawed analysis
and possible exposure to risk above tolerable limits.
Many of the assumptions used in LOPA are around how Independent Protection Layers (IPLs) are
identified and how risk reduction is allocated for them. IPLs should be independent, specific,
effective/reliable, and auditable. Does this mean that safeguards which do not meet one or more
of these requirements do not provide any benefit with respect to safety? Does a shared
component between a control loop which initiates a scenario and an interlock designed to stop
the scenario, such as a valve or transmitter, prevent the interlock from stopping the developing
scenario if a non-shared component is the cause of the failure? If fire detection initiates water
deluge, is there no value in the deluge if it only reduces the severity of the scenario and does not
eliminate it? Obviously not, but the LOPA rules for IPLs ensure that the assumptions required to allow for the simplifications built into LOPA are maintained, and that the results obtained will not overestimate the risk reduction provided by those IPLs.
One of the most important requirements is that IPLs be effective/reliable. This is generally interpreted to mean that an IPL should provide a risk reduction factor of at least 10. This is in keeping with LOPA's general order-of-magnitude level of analysis and is typically stated as a requirement that an IPL have a PFD of less than
0.1. This is true for preventative protective layers, for which successful activation prevents the
consequence from occurring. Taking a simple example of failure of a pump seal, we can represent
the scenario using a bow-tie diagram as shown in Figure 1:
The left side of the bow tie diagram, essentially a fault tree, includes initiating events and
preventative protective layers. To the right of the primary loss of containment event, what is
essentially an event tree shows the impact protective measures have on the ultimate
consequence, with multiple consequences being possible. With LOPA's assumptions, the event tree is simplified, with the results being reduced to only two options: the worst case consequence occurs, or no consequence occurs. Mitigative protective layers are effectively removed.
Figure 2: Pump Seal Failure Bowtie Diagram
Only the preventative protection layers, located on the fault tree side of the bowtie diagram, are
typically considered during a LOPA study. While some conditional modifiers (occupancy,
probability of ignition) may occasionally be included, this is only done when the effect of the
mitigation is to effectively eliminate the consequences. From the example, we can see that LOPA
gives us a more conservative estimation of risk.
Preventative protection layers are still accounted for as usual, applying to both the mitigated and
unmitigated scenario outcomes. If both the mitigated and unmitigated risk are tolerable, then
no further action is needed. If either (or both) risk ranks are intolerable, then additional risk
reduction measures can be recommended to address the risk gap.
There are a number of advantages to performing this analysis. The first is that we can more
accurately assess risk for scenarios which rely more heavily on mitigation. This can reduce the
potential for costly, over-engineered safeguards while, unlike a full QRA, being easily integrated
and applied during the LOPA. Consider a tank with a berm or dike around it. There are a number
of ways it can be modelled in LOPA using simplifications. Some companies, viewing it as a
mitigation and not preventative, do not credit dikes in their LOPA, especially in cases where, even when contained, ignition of the liquid could result in a potential injury. Others, viewing the dike
as a reliable safeguard against a large spill that could impact other units, may assume the dike
always works and assume that the dike contains the spill as part of the consequence. These are
both simplifications used to examine the scenario within LOPA, but both can result in inaccurate
analysis. A typical LOPA scenario for dealing with overfill and spillage of liquid hydrocarbon from
the tank, in which the dike is not credited for risk reduction, is represented in Figure 5.
Assuming a target risk of 1E-4, an analysis of the scenario including preventative safeguards
results in a risk gap of 2 orders of magnitude. Typically, this would require implementation of
two additional orders of magnitudes of risk reduction, such as a SIL 2 high level interlock, to
prevent overfill of the tank. Now consider taking into account the benefits of the dike, which the team determines would significantly reduce the consequence of the release, with ignition expected to result in an injury instead of a fatality. Typical guidance for LOPA, such as that presented in
Guidelines for Initiating Events and Independent Protection Layers in Layers of Protection
Analysis published by CCPS, would not credit such a dike as it does not fully eliminate the
consequence. Assuming that the dike meets the other requirements (properly maintained /
inspected / drained, sufficient containment volume, and wall height to prevent slosh over the
walls from hydraulic waves), it is not unreasonable to assign some level of risk reduction to the
containment.
Splitting the mitigation impacts into mitigated and unmitigated risk ranks, and assuming that we can use the PFD value of 0.01 typically recommended for dikes that could be considered preventative, we can see that the frequency of a potential fatality has been reduced from 1E-2 to 1E-4, but that we also now have an injury potential with a frequency of 9.9E-3.
Figure 6: Bowtie Diagram of Tank Overfill with Dike
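For readers who want to check the arithmetic, the order-of-magnitude bookkeeping behind this split can be written out explicitly. The sketch below is a minimal illustration using the frequencies and PFDs quoted above (initiating event 0.1/yr, alarm PFD 0.1, dike PFD 0.01); it is not part of any LOPA tool or worksheet format.

```python
# Minimal sketch of the mitigated / unmitigated split for the tank overfill example.
# Values are the ones quoted in the text; real LOPA worksheets would also carry
# enabling conditions and conditional modifiers where applicable.
initiating_event_freq = 0.1   # per year, failure of level control LT-101
alarm_pfd = 0.1               # independent high level alarm LT-102
dike_pfd = 0.01               # PFD-style credit assigned to the dike (mitigative)

# Frequency of the loss-of-containment outcome before any dike credit
unmitigated_freq = initiating_event_freq * alarm_pfd            # 1e-2 per year

# Split the outcome: dike fails (fatality potential) vs dike works (injury potential)
fatality_freq = unmitigated_freq * dike_pfd                     # 1e-4 per year
injury_freq = unmitigated_freq * (1.0 - dike_pfd)               # 9.9e-3 per year

print(f"Fatality outcome frequency: {fatality_freq:.1e} /yr (TMEL 1e-4)")
print(f"Injury outcome frequency:   {injury_freq:.1e} /yr (TMEL 1e-3)")
```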
Note that the fatality event is now at a tolerable risk level, while the injury level likelihood is
above tolerable by roughly an order of magnitude. Looking at a risk matrix, we can see the
same:
Figure 7: Risk Matrix of Tank Overfill with Dike
We can again see that the fatality risk is tolerable while the injury risk is above tolerable. In this analysis, we would still need to make a recommendation, but it would be for only a single order of magnitude of risk reduction instead of the two orders of magnitude we previously recommended when the dike was not considered. From a LOPA worksheet standpoint, the
scenario is split into two outcomes. The higher severity outcome, with the TMEL of 1E-4, credits the dike and provides the residual risk, while the lower severity outcome shows the consequences that exist when the dike functions as intended, and provides our mitigated risk. The LOPA worksheets for this are shown in Figure 8.
Figure 8: LOPA Worksheets for Tank Overfill with Dike
Deviation: 1.1 High Level
Cause: 1.1.1 Failure of Level Control LT-101 (initiating event frequency 0.1/yr)

Consequence 1.1.1.1: Potential overfill of tank. Potential release of hydrocarbon liquids and subsequent ignition. Potential for significant fire and possible fatality.
Category: Safety; Severity: H; TMEL: 1E-4
IPLs: 1. Independent high level alarm LT-102 (PFD 0.1); 3. Dike around tank would contain liquid and limit fire to dike area (PFD 0.01)
MEL: 1E-4; Required RRF: 1

Consequence 1.1.1.2: Potential overfill of tank. Potential release of hydrocarbon liquids to dike. Potential ignition and pool fire within the dike, resulting in possible injury to personnel.
Category: Safety; Severity: M; TMEL: 1E-3
IPLs: 1. Independent high level alarm LT-102 (PFD 0.1)
MEL: 1E-2; Required RRF: 10
The second, and perhaps more important, reason to incorporate mitigative functions into LOPA
is when the LOPA is specifically analyzing mitigative safety instrumented functions (SIF).
While uncommon, there are SIF which are purely mitigative in nature. This can lead to
confusion when companies and LOPA practitioners attempt to define a SIL level for these
functions, as required by IEC / ISA 61511 – Functional safety – Safety instrumented systems for
the process sector. Consider a low pressure shutdown on a pipeline. The purpose of such an
interlock is to act as an isolation in the event of a significant breach and loss of containment,
limiting the losses and reducing the potential size of such a release (the definition of a mitigation
safeguard). Assuming a required risk reduction factor of 100, it is easy and tempting to just
assign a SIL 2 target to the SIF.
The problem is that, much like with the dike, even if we were to apportion a SIL 2 PFD value of 0.01 to the low pressure SIF, it may not actually address the risk if the consequence of the pipeline failure, even when the SIF takes its safety action, is sufficiently severe. This exact scenario has
been presented to the author, and the company felt that they were following the correct course of
action by assigning a SIL 2. However, analysis of the LOPA revealed that the potential outcome
for the unmitigated release was a multiple fatality event, as it would likely result in the formation
of a large gas cloud which would impact a number of units if ignited, and thus likely catch a
number of operators within the potentially fatal blast zone. A mitigated release would
significantly reduce this, but the team could not rule out the potential for an operator to be in the
area of the release, so while the low pressure SIF would prevent the large VCE, it would not
prevent a smaller VCE/flash fire in the unit where the pipeline came into the facility, and would
likely result in a fatality.
Looking at the risk matrix in Figure 9, we can see the risk associated with the multiple fatality
event has been addressed by implementing a SIL 2, however, we still have unacceptable risk in
the form of a single fatality event. Note that further reducing the PFD of the SIF does not impact
this risk, as a SIL 3 function would still result in unacceptable risk, as shown in Figure 10.
Figure 9: Risk Matrix for SIL 2 Mitigative Function
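A short numerical sketch, using an assumed (illustrative, not from the paper) pipeline breach frequency, shows why tightening the SIF target does not close the residual gap: the frequency of the mitigated outcome is dominated by the demand rate, not by the SIF PFD.

```python
# Illustrative only: the breach frequency below is an assumption, not a value from the study.
breach_freq = 1e-2  # per year, assumed frequency of a significant pipeline breach

for pfd, label in [(1e-2, "SIL 2"), (1e-3, "SIL 3")]:
    multi_fatality_freq = breach_freq * pfd         # SIF fails: large VCE, multiple fatalities
    single_fatality_freq = breach_freq * (1 - pfd)  # SIF works: smaller VCE / flash fire
    print(f"{label}: multi-fatality {multi_fatality_freq:.1e} /yr, "
          f"single-fatality {single_fatality_freq:.2e} /yr")
```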
Abstract
In the world of consequence and risk analysis, there are many variables and parameters that we
study, modify, and debate. A significant variable that is often under-appreciated is release hole
size. Most work in consequence and risk analysis is focused on loss of containment (LOC) events.
For these evaluations, the hole size used to represent that LOC, when combined with the released
material’s thermophysical properties, is critical in determining a mass release rate, which then
directly affects the magnitude of the flammable or toxic hazards that are calculated. Accordingly,
matters regarding hole size have a significant effect on the predictions produced by consequence
analysis. The selected hole size for a single release scenario directly affects the impacts of a
hazard; and the hole size associated with the maximum hazard extent may vary based on the hazard
being evaluated. When an evaluation turns to a quantitative risk analysis (QRA), the importance
of hole sizes is compounded due to the added effects of frequencies that are assigned to the various
hole sizes. This issue of hole size is explored with an evaluation of some common practices and
assumptions, and some of the associated pitfalls. The effect on predicted consequences for various
hazards, and how a range of sizes selected in a QRA could affect the risk predictions, are also
explored. It will be clear that hole size matters.
Abstract
Several Recognized and Generally Accepted Good Engineering Practices (RAGAGEPs) exist to
help someone make their selection and placement of gas detectors (e.g. ISA-TR84.00.07, NFPA
72, UL-2075). However, no single consistent approach is widely used across companies.
Historically, gas detection has been selected based on rules of thumb and largely dependent on
experience. Over the last several years there has been a growing interest in determining not only
the confidence but also the effectiveness of those gas detection systems. In fact, incorrect detector
placement far outweighs the probability of failure on demand (of the individual system
components) in limiting the effectiveness of the gas detection system.
The Gas Detection Philosophy clearly specifies the chemicals of concern and the intended
purposes, i.e. detection of toxic or combustible levels, voting requirements, alarm rationalization,
and control actions.
Appropriate Detector Technology Selection includes consideration of the target gas and the
required detection concentration levels.
The primary approaches for Detector Placement are geographic and scenario-based coverage.
Geographic coverage places detectors on a uniform grid, and sometimes areas risk ranked to reduce
the number of detectors required. Scenario-based coverage has a range of leak models and places
gas detectors based on the dispersion modeling results.
All three elements for effective gas detection (philosophy, technology, and placement) are
interdependent but understanding their relationships is of paramount importance to design an
effective gas detection system.
The intention of this paper is to present the main considerations that design engineers and process
safety professionals should address for each gas detection system element in order to obtain the
best return on investment when placing gas detectors.
Industry standards followed for gas detection include ANSI/ISA-TR12.13.02 & 03, ANSI/ISA
84.00.01 & 07, API RP 14C, API RP 500, and NFPA 72. Basic rules of thumb applied for
placement include locating detectors at breathing height for toxic gases, 1-2 feet above ground for
heavy gases such as propane, and for gases that are lighter than air either above the leak source or
as high as possible if those gases may accumulate in specific areas such as hydrogen in a battery
room. Additional considerations should also be made for conditions that may cause the gas to
behave differently, such as cryogenic conditions, as both liquefied natural gas (LNG) and liquefied
ammonia are known to disperse low to the ground while the vapors are cold. Other important rules of thumb are to place detectors near air ductwork intakes or room outlets, in areas accessible for maintenance, away from locations that can be damaged by general maintenance or frequent traffic, and not in areas where flooding can disable or damage the equipment.
Two RAGAGEPs for placing gas detectors that are mentioned in these standards include scenario
and geographic-based methods. Older plants that have not used either of these methods are
oftentimes found to have large gaps in detector coverage.
A commonly employed strategy is to place gas detectors to ensure detection of a 5-meter cloud for
combustible hazards. The geographic method simply applies this strategy globally, so that in all areas that handle a specific hazardous material a hypothetical leak producing a 5-meter diameter cloud is covered, resulting in a uniform 5-meter spacing of detectors. The basic rules of thumb discussed earlier are then applied. Developing a gas detector placement drawing from this method is very simple and cheap.
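As a rough illustration of why the geographic method is simple but detector-hungry, the following sketch counts detectors on a uniform 5-meter grid over a hypothetical rectangular process area; the area dimensions are made up purely for illustration.

```python
import math

# Uniform-grid detector count for a rectangular area, assuming the 5-meter target
# cloud translates directly into 5-meter detector spacing (illustrative only).
def grid_detector_count(length_m: float, width_m: float, spacing_m: float = 5.0) -> int:
    rows = math.ceil(length_m / spacing_m) + 1
    cols = math.ceil(width_m / spacing_m) + 1
    return rows * cols

# Hypothetical 60 m x 40 m unit handling a flammable inventory
print(grid_detector_count(60, 40))  # 13 x 9 grid = 117 detectors
```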
While geographic methods are successful at leak detection, they can result in more detectors than are necessary, which in turn leads to higher installation and plant operating expenses. This can
especially lead to excessive gas detectors for toxic hazards, as the 5-meter cloud methodology was
intended for combustible clouds. As a result, many companies prefer to use scenario-based
coverage over geographic methods. Scenario-based methods use dispersion modeling to guide
detector placement decisions. Scenario model selection involves identifying a variety of leak
points, hole sizes, and leak directions. After performing dispersion modeling, detectors are placed in the optimal locations, based on the leak points and the locations of critical receptors of concern, to detect the hazardous gas.
Tools used for gas dispersion prediction include Gaussian plume, empirical models, and
computational fluid dynamics (CFD) models. Gaussian plume models are the simplest of the three and utilize basic equations and constants. Empirical models add more resolution by making predictions calibrated against experimental observations and include software such as the DNV-GL PHAST unified dispersion model (UDM), which works in a 2-dimensional field. CFD models use a full suite of transport equations while maintaining conservation of momentum, mass, and energy in a 3-dimensional field that includes geometric and topographic interactions.
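To make the distinction concrete, the simplest of the three model classes can be written in a few lines. The sketch below is a textbook ground-level Gaussian plume centerline estimate using Briggs open-country dispersion coefficients for stability class D; the release rate, wind speed and distance are illustrative assumptions, and a momentum jet or dense cloud would require the more detailed models described above.

```python
import math

# Ground-level centerline concentration for a continuous release, Gaussian plume model
# with Briggs open-country sigmas for Pasquill stability class D (rough, far-field only).
def gaussian_plume_centerline(Q_kg_s, u_m_s, x_m, release_height_m=0.0):
    sigma_y = 0.08 * x_m / math.sqrt(1.0 + 0.0001 * x_m)   # crosswind spread, m
    sigma_z = 0.06 * x_m / math.sqrt(1.0 + 0.0015 * x_m)   # vertical spread, m
    # Receptor at z = 0 with total reflection from the ground
    return (Q_kg_s / (math.pi * u_m_s * sigma_y * sigma_z)
            * math.exp(-release_height_m**2 / (2.0 * sigma_z**2)))

# Illustrative: 0.5 kg/s leak, 3 m/s wind, receptor 100 m downwind, ground-level source
print(f"{gaussian_plume_centerline(0.5, 3.0, 100.0):.3e} kg/m^3")
```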
Outdoor leaks are largely found to follow the prevailing wind direction. The reason is that the mass and momentum of the air are far greater than those of the leak itself, so the wind will eventually carry the released gas out of the plant in that direction. However, there are exceptions to this typical behavior. For example, it has
been found from dispersion modeling that leaks which are especially large and at high pressures
have a very large initial momentum and can travel a good distance on their own, especially if along
the ground. Sometimes this can result in toxic lethality and flammable thresholds being primarily
dictated by leak direction rather than wind direction. Also, topography plays a very large role in
dispersion especially for cryogenic plumes, or heavier than air gases. Trenches and diking as well
as buildings and large pieces of equipment in plants very frequently dictate gas dispersion
behaviors at ground level. Open path gas detectors (OPGDs) are often used around the perimeter of secondary containment
of hazardous chemicals due to the ability of the geometry to contain and channel gases. Also,
wind currents can interact with large buildings and tanks in such a way that turbulence eddies and
recirculation zones influence the dispersion in different ways depending on wind direction.
Depending on gas density, outdoor vertical leaks in the air may not be detectable from ground-
based detectors and an elevated open path detector beam, as well as point detectors at the air intakes
of potentially affected buildings, may be needed. All these behaviors can be difficult to predict
from simply looking at the geometry, and as such, dispersion modeling that incorporates CFD may
be critical to understanding these behaviors and making better decisions in detector placements on
a case by case basis.
Indoor leaks are largely found to follow the initial leak direction and are eventually drawn out by
room out-take ducts. Room ventilation rates are many times smaller than outdoor air movement, but the room airflow patterns still influence the dispersion of leaks. When there are many rooms to
evaluate, one of the more useful tools for predicting dispersion patterns in a room is a basic airflow
model which can be quickly performed using CFD models. Observing these patterns as well as
the locations of all equipment handling highly hazardous chemicals, one can make reasonable
predictions as to where the vapors of a release will eventually travel as the room exhaust ventilates
the air. Incorporating different leak scenarios into these models helps support and confirm these
decisions, as well as finding common pathways that multiple vapor clouds may take. Modeling
can also expose counter-intuitive behaviors. For example, while typical detector placement practice for lighter than air gas releases is to place the detectors as high as possible, an air supply
coming from the ceiling with exhaust ducts along the ground has shown ammonia gas clouds being
pushed away from the ceiling and spreading to other areas along the ground. Indoor models can
also help in revealing regions with still air (dead zones) where gas clouds may not migrate while
dispersing. Finally, heavier than air releases are often found to have the highest concentrations
and largest footprint at grade level, suggesting that detectors as low to the ground as possible
should typically be used for heavy gases.
Some detector technologies can have additional constraints to be mindful of. For example,
electrochemical based detectors contain an electrolytic solution that is directly exposed to the air.
Electrochemical detectors must be sheltered from the rain to avoid degradation. If the air that this
solution is exposed to has a very high velocity, it can lead to rapid evaporation and depletion of
the solution, which can lead to problems such as increased maintenance and inaccurate readings.
On the other hand, placement in a dead zone can lead to non-detection of hazardous gases. For
these situations, gas dispersion models using CFD are quite useful in finding the right balance with
gas detector placement decisions. Considerations for the leak models are numerous, and it can be
easy to take different paths. Underspecifying the models can lead to a suboptimal solution with
gaps in coverage, which is especially problematic if the gas detection system (GDS) is being treated as an IPL and it
requires a high amount of coverage. On the other hand, over specifying the models can result in
more analysis and computational time than is required. As such, a balanced approach is
recommended, and a series of steps can be followed to come up with an effective solution to
detector coverage that is not as time intensive, including with CFD based methods. The flow chart
in Figure 1 demonstrates this proposed scenario-based work flow.
Figure 1: FGS scenario-based work flow. The work flow proceeds from the FGS philosophy report (chemicals list; IDLH, PEL and LEL levels; alarm criteria; voting criteria; detector technology criteria) through scenario development (FGS scenario list, FGS zone definition drawing, scenario locations, leak hole sizes, leak hole directions, air speeds) and the FGS design basis (statement of requirements, FGS zone coverage targets, FGS zone availability targets) to detailed engineering deliverables (FGS basis of design, FGS safety requirement specification, preliminary detector placement, FGS SRS data sheets, detector performance report, preliminary IO list, detector plot plans).
It is important to note that beyond having a highly effective detector coverage, there are limitations
of what a GDS can reasonably do. For example, consider following ISA-TR84.00.07 and
employing voting to improve reliability. After the percent coverage is determined, a fault tree may be used to estimate the overall effectiveness of the GDS.
To achieve an overall risk reduction of SIL-2 or higher, one would need well over 99% coverage
and for all components to be SIL-2 capable. Trying to achieve SIL-2 with a GDS would be over
specifying performance. Rather, ISA-TR84.00.07 advises that a GDS not be considered as an IPL
if the overall effectiveness is less than 90%. One can see in the fault tree example in Figure 2
how even 90% coverage with 2ooN voting and 95% coverage with 1ooN voting will result in an
overall effectiveness under 90% if the entire system is limited by a mitigation effectiveness of
90%. Consequently, both the detector coverage and the mitigation effectiveness should be above
90% to be credited as an IPL, which can necessitate modeling of the mitigation system as well to
prove a higher level of effectiveness.
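The multiplicative logic of that fault tree can be sketched in a few lines: the overall risk reduction credited to the GDS is bounded by the product of detector coverage, detection system availability, and mitigation effectiveness. The availability figure below is an assumption for illustration; the coverage and mitigation-effectiveness values mirror the ones discussed above.

```python
# Overall GDS effectiveness as a simple product of contributing terms (illustrative).
def gds_overall_effectiveness(coverage, availability, mitigation_effectiveness):
    return coverage * availability * mitigation_effectiveness

cases = [
    ("2ooN voting, 90% detector coverage", 0.90, 0.99, 0.90),
    ("1ooN voting, 95% detector coverage", 0.95, 0.99, 0.90),
]
for label, cov, avail, mit in cases:
    eff = gds_overall_effectiveness(cov, avail, mit)
    print(f"{label}: overall effectiveness {eff:.1%}")  # both fall below the 90% IPL threshold
```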
Conclusions
All three elements required for effective gas detection (philosophy, technology, and placement)
are interdependent but understanding their relationships is of paramount importance in designing
an effective gas detection system. In addition, modeling provides additional details that support
other important plant safety decisions. Model driven sensor locations enable informed emergency
planning. For example, ground level detectors located between the source of the release and the
critical receptors can provide early warning to building occupants to take the specified emergency
action. Modeling can also reveal gases that may be drawn into a building air intake that can be a
considerable distance from the leak source. 3D modeling that incorporates wake effects from
buildings can show the plume reaching areas that may not be immediately intuitive such as air
handlers on the back side of a building. Room ventilation patterns may also result in several non-
intuitive behaviors. It is therefore essential to understand hazardous chemical properties, potential
release sources, release directions, and the ventilation patterns in the room for proper gas detector
placement. For some clients, the plant fence line is an important consideration for potential impact
to public receptors, and dispersion models provide valuable information on time for emergency
response and likely concentration impacts.
Abstract
The numerical simulation of gas dispersion and estimation of consequence impact is of importance in the oil and gas industry's process safety management. For natural gas fields with toxic components
like Hydrogen sulfide, the toxicity impact zone drives business decisions related to equipment
design, facility siting, layout, land use planning, and emergency response measures. Proprietary
tools or empirical models which are calibrated using experiment database are often used for
carrying out consequence modeling.
The selection of a tool and a suitable dispersion model, based on the cloud behavior, at the source
of dispersion is critical for the impact zone estimation. It is observed that the fluid phase and the
cloud density are key for determining the appropriate dispersion model. Incorrect parameter
selection could lead to an inaccurate consequence impact zone estimation which could result in
disproportionate risk management efforts.
This paper summarizes the methodology and results from an extensive set of consequence modeling studies done for potential release events associated with gas exploration and production. The
analysis focuses on the significance of composition, temperature, pressure and hole sizes in release
source term determination as well as the atmospheric parameters that could impact the dispersion
and estimation of impact zones. The output of the study and analysis provides (i) input and
guidance on selection of dispersion model to represent appropriate cloud behaviors (e.g. buoyant
or dense gas dispersion) and (ii) critical parameters that should be included in the sensitivity
assessment to determine the consequence impact zone.
Keywords: Consequence Modeling, Facility Siting, Toxicity, Hydrogen sulfide, Impact zone,
Parameter sensitivity, Natural gas composition
1. Introduction and motivation
Understanding of the process related risks is key in natural gas exploration and transportation
process safety management. Several major toxic natural gas release incidents (see Table 1) have
happened in the recent past resulting in human fatality, environmental damage and asset loss
(BSEE 2014, Jianwen 2011). Predictive risk assessments are carried out to determine the extent of
hazardous level distances (impact zone) and how frequently the event occurs (Nair & Wen 2019).
Estimation of the potential impact zone from different accident scenarios through scenario-based
consequence modeling forms an integral part of process risk assessment (US DoT 2018). An
important contribution to the calculation of the impact zone comes from the modeling of
atmospheric dispersion following the accidental release of toxic fluids. Impact zone estimation by
consequence modeling is typically carried out using proprietary tools or empirical models (Hanna
et.al. 1982, Nair & Wen 2019). These models and tools have a range of applicability and are
validated using experiments (Hanna 1982, Pandya 2012). Preventive and mitigation measures for
risk management are prioritized using the study outputs. Variations in model inputs impact the results, and different parameters have different degrees of influence on the results (US DoT 2018). Incorrect
selection of the approach, tool and uncertainty in the input could lead to an inaccurate impact zone
estimation which could result in disproportionate risk management efforts. This challenge can be
addressed by better understanding of the cloud behaviour following release and sensitivity analysis
of the modeling inputs and parameters.
Table 1: Toxic natural gas Incidents
Incident: 1950, Poza Rica, Mexico
Consequence and description: Twenty-two persons died and 320 were hospitalized as a result of exposure to hydrogen sulfide for a 20-minute period during a low altitude temperature inversion.

Incident: 1974-1991, Sour gas gathering line releases, USA (EPA records)
Consequence and description: 11 incidents; multiple fatalities; unspecified number of wildlife died.

Incident: 1992, Gezi, The Zhao 48# well blowout; H2S gas
Consequence and description: 6 fatalities and 24 poisoning; under pit operation corporation, Petroleum administration, Bureau of North China.

Incident: 2003, Kaixian blowout (Chongqing "12.23" incident), high sulphur gas
Consequence and description: 240+ fatalities, 2000+ hospitalizations, 65,000 evacuated; direct economic loss of $900 million.

Incident: 2006, Sichuan (The Luo 2# well)
Consequence and description: About 10,000 people evacuated.

Incident: 2010-2014, Southeast Saskatchewan, Canada
Consequence and description: 43 sour gas leaking facilities (with average H2S concentrations at 30,000 ppm).

Incident: 2013, Kashghan field, Kazakhstan
Consequence and description: 200 km of leaking pipeline, $3.6 billion to replace.
(Methodology overview: 1) Hazard, scenario and inputs; 2) Receptors, criteria and tools; 3) Simulations and assessment; 4) Sensitivity analysis; 5) Situational application.)
Figure 2: Scenario – toxic natural gas release from a pipeline and dispersion
Table 2: Input, parameters and sensitivity values
Toxic natural gas from eight reservoirs across different geographic regions was analysed. The natural gases considered (represented as S1 to S8), with H2S compositions ranging from 2% to 28%, are shown in Table 3. The gas densities are shown at 700 psia, the gathering system supply pressure for a typical reservoir.
Table 3: Toxic natural gas composition (mol %)
Table 4: Hazardous levels of pipeline release of natural gas
Component: Natural gas (see values in Table 5)
Accidental consequence: Flash fire (flammable vapor cloud distance)
Level-3: Upper Flammability Limit – 16% CH4; 9.5% C3H8
Level-2: Lower Flammability Limit (LFL) – 4% CH4; 2% C3H8
Level-1: 50% LFL – 2% CH4; 1% C3H8

Component: Hydrogen sulfide (H2S)
Accidental consequence: Toxic concentrations of exposure resulting in health effects or death
Level-3: 500 ppm – potential for respiratory arrest, loss of consciousness
Level-2: 100 ppm – Immediately dangerous to life and health (IDLH); coughing, dizziness
Level-1: 75 ppm – Acute Exposure Guideline Level #3; loss of sense of smell in minutes
Figure 3: Multi-component Phase Diagrams
The dispersion results were recorded as the maximum concentration along the downwind centerline for an averaging time of 60 seconds. A typical output from Canary is given in Figure 4.
Figure 4: Hydrogen sulfide momentum jet cloud - dispersion isopleths. (a) Overhead (plan) view: illustrates the toxic natural gas footprint of three H2S cloud concentrations in the downwind dispersion along the central line, with the cloud width. (b) Side view: illustrates the elevation cross section of the cloud dispersion with the cloud height (black line indicates cloud central line).
It should be noted that for risk assessment applications, the width of the cloud and the averaging time play a significant role (US EPA 2017).
2.4 Uncertainty and sensitivity analysis
To develop confidence in and understanding of a model, one evaluates how variations in the model's outputs can be apportioned to variations in its inputs, an exercise often referred to as sensitivity analysis. The approach adopted here varies one input parameter at a time while holding the other parameters at their central values; the sensitivity outcomes are therefore dependent on these central values. Each of the eight toxic gas compositions was subjected to sensitivity analysis over the range of values for the inputs and parameters. The results are presented using histograms or quantitative measures to compare the sensitivity to each uncertain input and parameter.
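A minimal sketch of this one-at-a-time scheme is shown below. The model function is a stand-in placeholder (the actual outputs in this study come from the HYSYS/Canary workflow), and the central values and ranges echo the medium/low/high values used later in the paper.

```python
# One-at-a-time (OAT) sensitivity sketch: vary each input over its range while the
# other inputs stay at their central values, and record the spread in the output.
def model(params: dict) -> float:
    # Placeholder response surface; in practice this would be a call to the
    # release / dispersion tools returning, e.g., a downwind distance to a hazard level.
    return (2.0 * params["pressure_psia"] ** 0.5
            + 50.0 * params["hole_size_in"]
            - 5.0 * params["wind_speed_mph"])

central = {"pressure_psia": 117.0, "hole_size_in": 2.0, "wind_speed_mph": 13.0}
ranges = {
    "pressure_psia": [50.0, 117.0, 500.0],
    "hole_size_in": [1.0, 2.0, 3.0],
    "wind_speed_mph": [3.4, 13.0, 20.0],
}

base = model(central)
for name, values in ranges.items():
    outputs = [model({**central, name: v}) for v in values]
    spread = max(outputs) - min(outputs)
    print(f"{name}: output spread {spread:.1f} around base value {base:.1f}")
```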
3. Results and discussion
This section reports the results of the simulations and discusses the sensitivity to the inputs and parameters. The aim is to identify the most important parameters from amongst a large number
that affect model outputs. This will help in optimizing the time and resource usage for consequence
modelling in risk assessment. The analysis is carried out on two sets:
Material and release conditions (Source term): fluid composition, hole size, temperature,
pressure, release orientation
Environmental conditions: atmospheric stability, wind speed, humidity, terrain
3.1 Sensitivity: Fluid composition
The compositions analyzed include toxic gases with molar mass lower, similar and higher than
that of air (28.9 g/mol). A comparison of the molar mass and H2S composition used in this study
is given in Figure 5.
Over the years, certain heuristics have been used as source term input parameters for modeling
multiphase releases and ensuing dispersion. Examples of these heuristics include choosing a pure
component of the same molecular weight in place of the mixture, distilling mixture composition
to a handful of components, choosing to model natural gas as pure methane, etc. Although
convenient, these modeling assumptions can result in hazard estimations that diverge from reality
with the biggest problem being the inability to accurately account for thermodynamic effects like
phase splits and composition changes during release conditions (Johnson and Marx, 2003).
Figure 6: Phase equilibrium curves for methane (blue), Methane-ethane-Hydrogen sulphide (green), and S4 sample (yellow)
Consider the case in Figure 6 of a natural gas pipeline operating at 50 degF and 300 psia with the S4 composition in Table 5. Going by the popular heuristic of modeling natural gas as 100% methane (blue line), it was observed that at pipeline operating conditions the release is purely vapor with buoyant properties. Similarly, if the natural gas mixture is simplified to three components (78% CH4, 8% C2H6 and 14% H2S; green line), the fluid at pipeline conditions is also predicted to be mostly vapor. However, the detailed composition of the mixture (S4, Table 5) reveals that the release contains vapor, aerosol, and liquid phases which were missed earlier, highlighting the importance of composition in dispersion modeling and the need to perform sensitivity analysis.
The phase envelope of eight toxic natural gas compositions given in Figure 7 illustrates that the
phase of a multicomponent toxic natural gas could vary (liquid, 2-phase or vapor) with a change
in the composition, temperature and pressure.
Figure 7: Phase equilibrium curve – toxic natural gas compositions
The density of the fluid and the related buoyancy (positive, neutral, negative) play a major role in selecting the dispersion modelling approach (passive, dense, etc.) for estimating downwind distances (Nair & Wen, 2019). The released fluid density is driven by the fluid's molar mass, release pressure and temperature. The bubble and dew curves shift towards the right with an increase in molar mass (S1, S7, S8). This is due to the higher molar mass from the higher composition of C4+ hydrocarbons and the hydrogen sulphide contribution. The phase of the released material is critical since it determines the release and dispersion model used (e.g. heavy gas vs. Gaussian); an inappropriate selection can lead to erroneous results. For example, the fluid phases of S5 (MW 24.2) and S6 (MW 26.7), with similar molecular mass, could yield different results for a given pipeline operating pressure and temperature (at 800 psig and 100 oF, S5 will be vapor, whereas S6 will be 2-phase). Sensitivity to the changes in temperature and operating pressure was further analysed and is given in section 3.3.
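A crude screen of that buoyancy question, assuming ideal-gas behavior once the cloud has expanded to ambient pressure, can be sketched as below. The thresholds and the cold-release temperature are illustrative assumptions; a real assessment would use the phase envelope and aerosol content discussed above.

```python
# Rough buoyancy screen: compare the cloud density (ideal gas at ambient pressure)
# with ambient air to suggest a dense-gas vs passive/Gaussian dispersion approach.
R = 8.314  # J/(mol K)

def gas_density(molar_mass_g_mol, T_K, P_Pa=101_325.0):
    return P_Pa * (molar_mass_g_mol / 1000.0) / (R * T_K)

def buoyancy_class(molar_mass_g_mol, cloud_T_K=288.15, air_T_K=288.15):
    ratio = gas_density(molar_mass_g_mol, cloud_T_K) / gas_density(28.9, air_T_K)
    if ratio > 1.05:
        return "negatively buoyant - consider a dense-gas model"
    if ratio < 0.95:
        return "positively buoyant"
    return "near neutral"

# Molar masses quoted in the text; the 220 K case illustrates a cold, post-expansion cloud.
for mw, t in [(24.2, 288.15), (26.7, 288.15), (39.0, 288.15), (24.2, 220.0)]:
    print(f"MW {mw} g/mol at {t} K: {buoyancy_class(mw, t)}")
```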
The downwind dispersion distances to toxic and flammable hazard levels for the eight toxic natural gas compositions are given in Figure 8. The downwind distances to LFL range from 27 ft (S4, S5) to 60 ft (S1), and to the 100 ppm H2S level from 820 ft (S2) to 1775 ft (S8). The following observations were inferred from the results:
i. The distance to the H2S toxic hazard level is significantly larger than to the flammability hazard levels. For example, for toxic gas composition S2 the toxicity downwind distances are 261 ft to 500 ppm and 820 ft to 100 ppm, whereas the flammable cloud downwind distances are 8 ft to UFL and 30 ft to LFL.
ii. The downwind distance of toxic dispersion is greatest for releases with higher compositions of H2S (S7, S8) and with higher molar mass (S8, S7, S1).
iii. The downwind distance of toxic cloud dispersion is higher for toxic gas with higher H2S composition (S8), while the downwind distance of flammable cloud dispersion is higher for compositions with higher molar mass (S1, S7).
Figure 8: Sensitivity composition: Downwind distance to (a) H2S concentration, (b) Flammable cloud
Consequence modeling was performed using Canary to assess the impact of water saturation on the downwind dispersion distance to H2S hazard levels (see Figure 9). The results suggest that water saturation of the natural gas is not a significant parameter for downwind dispersion to H2S hazard levels.
3.3 Release source terms and sensitivity
3.3.1 Sensitivity – Release hole size: A representative hole size is assumed to represent a release
resulting from loss of fixed equipment (pipeline) integrity (e.g. corrosion, erosion) or from
operational upsets (e.g. blocked outlet). Three representative hole sizes (small, medium and large) were considered for the study. Release rates from the three hole sizes (1, 2 and 3 inch) and the downwind dispersion for the eight toxic gas compositions were estimated, see Figure 10. The following
observations were inferred from the results:
i. Release rates grow significantly with increase in hole size irrespective of the composition.
For S1 composition, the release rates varied from 2.9 lb/s to 21.5 lb/s.
ii. Release rates were higher for compositions with larger molar masses (S1, S7) and the
difference is significant for larger hole sizes.
iii. Release rates were similar (e.g. 11.5 to 12.7 lb/s for the 3-inch hole) for compositions with molar mass less than 27 g/mol (S2, S3, S4, S5, S6) across all hole sizes. However, the release rate was significantly higher (21.5 lb/s) for S1 with molar mass 39 g/mol.
iv. Downwind dispersion distance to 500ppm H2S concentration from 1-inch hole releases was
proportional to the H2S composition.
v. For 3-inch releases, the longest downwind dispersion to the 500 ppm H2S concentration was for the S8 composition (28% H2S, molar mass = 29 g/mol), while the longest to 100 ppm was for S7 (18% H2S, molar mass = 34 g/mol). Downwind dispersion following release from larger hole sizes is influenced by both H2S concentration and molar mass.
Figure 10: Sensitivity – Hole size: (a) molar mass and release rate; (b) downwind distance to H2S concentration
Downwind dispersion of the toxic cloud is dependent on hole size, release rate and composition. The failure mechanism and related hole size (small, medium or large) need to be appropriately determined. For larger hole releases, the composition and molar mass are significant, whereas for smaller hole releases, differences in composition did not have a significant impact on the downwind dispersion distances.
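The scaling behind observations i-iii can be illustrated with a simple choked-orifice estimate: for sonic gas flow the rate grows with the hole area (the square of the diameter) and with the square root of molar mass. The sketch below assumes ideal-gas choked flow with an assumed discharge coefficient and ratio of specific heats, at the 115 psia / 77 oF condition used elsewhere in the paper; it is not the source-term model used in the study.

```python
import math

R = 8.314  # J/(mol K)

def choked_mass_flow(P0_Pa, T0_K, molar_mass_g_mol, hole_d_in, gamma=1.3, Cd=0.85):
    """Ideal-gas choked (sonic) orifice mass flow in kg/s; Cd and gamma are assumptions."""
    M = molar_mass_g_mol / 1000.0                      # kg/mol
    area = math.pi * (hole_d_in * 0.0254) ** 2 / 4.0   # hole area, m^2
    term = gamma * M / (R * T0_K) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0))
    return Cd * area * P0_Pa * math.sqrt(term)

# 115 psia ~ 7.93e5 Pa, 77 F ~ 298 K; molar masses bracketing the study's compositions
for mw in (24.2, 29.0, 39.0):
    for d_in in (1.0, 2.0, 3.0):
        m_dot = choked_mass_flow(7.93e5, 298.0, mw, d_in)
        print(f"MW {mw:>4} g/mol, {d_in:.0f}-inch hole: "
              f"{m_dot:5.2f} kg/s ({m_dot * 2.205:5.1f} lb/s)")
```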
3.3.2 Sensitivity – Pressure: Release and dispersion at three pressure conditions at release (low = 50 psia, medium = 117 psia, high = 500 psia) for the eight natural gas compositions were compared. The release rates for the S8 composition varied from 2.6 lb/s (low pressure) to 50 lb/s (high pressure), and the downwind distances to 500 ppm varied from 435 ft (low pressure) to 4050 ft (high pressure). The following observations are inferred from the results given in Figure 11:
i. The release rates (2 to 3 lb/s) for all compositions were similar at low pressure;
ii. The release rates (4.5 to 5.5 lb/s) were comparable for compositions S2 to S8 at medium
pressure, but higher (11.8 lb/s) for S1 with highest molar mass.
iii. For high pressure, the compositions (S1, S7, S8) with higher molar mass (>29 g/mol) have significantly higher release rates (>38 lb/s) compared to the compositions (S2, S3, S4, S5, S6) with lower molar mass (<29 g/mol).
iv. For high H2S compositions (S7, S8), the dispersion distances were significantly longer for
high pressure releases (500 ppm exceeds 2750 ft compared to less than 1000 ft for natural
gas with less than 18% H2S).
Figure 11: Sensitivity – Pressure: (a) molar mass and release rate; (b) H2S downwind distances
Release rates are dependent on operating pressure and increase with pressure for all compositions considered. Downwind dispersion for high pressure releases is particularly sensitive for compositions with greater than 18% H2S content. For such cases with a significantly larger impact zone, further analysis should be carried out before implementing risk reduction measures.
3.3.3 Sensitivity – Temperature: Release and dispersion at three temperatures (low = 20oF, medium = 77oF, high = 120oF) for the different natural gas compositions were compared. The release rates varied from 2.2 lb/s (S4 low pressure) to 50 lb/s (maximum) and the downwind distances to 500 ppm varied from 68 ft (S1 low pressure) to 4050 ft (S8 high pressure). For the study, the atmospheric temperature and surface temperature have also been assumed to be the same as the fluid temperature. The following observations are inferred from the results given in Figure 12:
i. For medium and high temperature conditions, the release rates are similar irrespective of
the compositions.
ii. Similar release rates (~5 lb/s) were estimated for compositions with molar mass 29 g/mol
and less for the range of temperatures evaluated.
iii. Significantly higher release rates were estimated for compositions with molar mass greater
than 30 g/mol under low temperature conditions.
iv. For all three temperature conditions, the downwind dispersion distances are similar for all compositions with molar mass less than 30 g/mol.
v. Downwind dispersion distances for compositions with molar mass greater than 30 g/mol are similar for medium and high temperatures, whereas they are significantly higher for low temperature releases.
Figure 12: Sensitivity – Temperature: (a) molar mass and release rate; (b) H2S downwind distances
Release rates and downwind dispersion are sensitive to low temperature for those compositions with molar mass >30 g/mol. For such cases with a significantly larger impact zone, further analysis should be carried out before implementing risk reduction measures.
3.3.4 Sensitivity – Release orientation: Release and dispersion from two release orientations, horizontal and upwards (at 45 deg from horizontal), for the eight natural gas compositions and from a 2-inch hole at 77oF and 115 psia were compared. The release rate for each composition is the same for both orientations. The orientation options were limited to two as the scenario considered was at ground level. The 500 ppm downwind distance for the horizontal orientation ranges from 260 ft (S2) to 668 ft (S8), whereas for the upwards orientation it ranges from 20 ft (S1) to 335 ft (S8). The following observations were inferred from the results given in Figure 13:
i. Downwind dispersion distances are higher for horizontal orientation compared to upwards
orientation for all compositions.
ii. For both orientations, downwind dispersion distances for 500 ppm and 100 ppm were similar
for compositions with H2S concentrations 14% to 18% (S4, S5, S6, S7), but significantly
lower for compositions with low (<10%) H2S concentrations (S1, S2) and significantly
higher for compositions with high (>20%) H2S concentrations (S8).
iii. For dispersion from upwards releases, the downwind dispersion distance increases with the
increase in H2S concentration.
iv. For dispersion from horizontal release, S1 with 2.6% H2S (highest molar mass and release
rate) dispersion distances are higher than S2 (8% H2S) and S3 (10% H2S).
Figure 13: Sensitivity – Release orientation: H2S downwind distances
The analysis implies that downwind dispersion is sensitive to the orientation of release. Hence,
appropriate orientation based on the failure mode and expected location (elevation) of the receptors
of concern should be used for consequence modeling.
3.4 Impact zone: Environmental parameters
3.4.1 Sensitivity – Atmospheric stability and wind speed: Dispersion for the different natural gas compositions from a 2-inch hole at 77oF and 115 psia under three combinations of atmospheric stability and wind speed (3.4F: stable and low wind speed; 13D: neutral and medium wind; 20C: slightly unstable and high wind) was compared. The following observations were inferred from the results given in Figure 14:
i. The longest downwind dispersion irrespective of the composition was recorded for stable
conditions and low wind speed.
ii. For dense gas (negatively buoyant) compositions (S1, S7) with higher molar mass (>29
g/mol), the downwind dispersion for Neutral and Medium wind (13D) was significantly
higher.
iii. For lightly unstable and high wind speed (20C) conditions, the downwind distances for 100
ppm was less than 200 ft for all compositions whereas for stable and low wind speed (3.4F)
conditions, the distances exceeded 800 ft.
iv. For compositions with molar mass <29 g/mol (positively buoyant), the downwind distance for 20C conditions is higher than for 13D conditions. Under these conditions, the cloud behaves more as a heavy gas and stays closer to ground level, whereby the higher concentration cloud travels further downwind.
Figure 14: Sensitivity – Atmospheric stability and wind speed: H2S downwind distances
Note: The Canary tool couples (transitions) from jet dispersion to heavy gas dispersion when the central line touches ground
level. This modeling factor is reflected in results for S7 under 13D conditions and for all compositions under 20C
conditions.
For the S8 composition with higher H2S concentration, the downwind distance to 100 ppm extends to 1775 ft at low wind and stable conditions (3.4F) compared to 390 ft and 220 ft at neutral stability and higher wind speeds. For a location with predominantly neutral stability and medium wind speed (like 13D), if risk management bases the impact zone distance on the worst-case stability and wind (1775 ft), which is about 5 times the typical value (390 ft), then the risk management measures (e.g. emergency planning) will incur significantly higher cost and effort. This comparison highlights the importance of determining a wind speed and stability class appropriate for the location. It is, however, advisable to use a range of stability and wind speed conditions to represent the variations during 24 hours and through the year.
3.4.2 Sensitivity – Terrain: Dispersion for the different toxic gas compositions from a 2-inch hole at 77oF and 115 psia over three different terrains (mud flat, level country or cut grass, urban area) was compared. The terrains were considered flat (without obstructions) and the turbulence from the terrains was addressed by the surface roughness parameter given in Table 2. The following observations are inferred from the results given in Figure 15:
(i) With an increase in surface roughness, the downwind dispersion decreases. Downwind dispersion distances for the urban area were significantly lower (less than 1/3rd) for all compositions except S1.
(ii) Downwind dispersion distances for mud flat and cut grass are similar for all compositions except S1, which has the highest molar mass. This implies that dispersion of toxic gas with molar mass less than 35 g/mol is not sensitive to surface roughness <0.2 inch.
3.4.3 Sensitivity – Humidity: Dispersion for the different toxic gas compositions from a 2-inch hole at 77oF and 115 psia at three humidity conditions (low = 20%, medium = 50%, high = 80%) was compared. The results given in Figure 16 imply that humidity has no significant impact on the downwind dispersion of toxic natural gas.
Figure 16: Sensitivity – Humidity: H2S downwind distances
3.5 Discussion
The statement is often made that natural gas is lighter than air and that the properties of a mixture are determined by the mathematical average of the properties of the individual constituents. Such mathematical bravado and inconsistency of thought is detrimental to safety and must be qualified (Speight, 2011). During expansion from elevated pressure, released toxic gas can be colder and heavier than air close to the release source, with the potential to accumulate in low-lying areas (Nair & Wen 2019).
3.5.1 Findings: From the range of simulations (using HYSYS) and consequence modelling (using Canary), it was concluded that for a similar type of release event, the toxic hazard impact zone could be orders of magnitude different. A comparative study was carried out for eight different toxic natural gas compositions with H2S concentrations ranging from 2.6% to 29%. It was observed that the downwind distances to hazardous levels range from less than 50 ft to more than 5000 ft for a loss of containment from a toxic natural gas pipeline transfer line. The range of results was obtained by varying inputs on the release (source term) conditions and certain environmental conditions. From the parametric sensitivity analysis for a release event from a natural gas transfer pipeline at ground level, using the eight different compositions, the following observations and recommendations were made:
1. Downwind dispersion distances to concentrations of interest (impact zone) are dependent on the natural gas composition and release rates. Detailed assessment should be carried out taking account of the phase and component characteristics of the fluid in question.
2. Selection of appropriate evaluation criteria (hazardous levels – toxic, flammable) is critical
in determining the impact zone from accidental releases. For natural gas with toxic (H2S)
component, the potential impact zone of concern will be dominated by toxicity.
a. Flammable cloud dispersion distance is dependent on the molar mass of natural gas
composition; significant longer distances for natural gas with molar mass greater
than air (>29 g/mol).
b. Toxicity impact zone is dependent on the H2S composition along with the molar
mass. Significantly longer downwind distance of toxic cloud dispersion for natural
gas with higher (>18 mol%) H2S concentration.
3. The phase equilibrium properties of the release should be considered in determining the
release phase, as low-temperature and high-pressure releases can have longer impact zone
distances. A detailed review (prior to implementing risk mitigation) should be carried out for
high-pressure releases of compositions with >18 mol% H2S and molar mass >29 g/mol, and
for low-temperature releases of compositions with molar mass >30 g/mol.
4. Selection of a representative release hole size and orientation has a significant effect on the
impact zone estimate. The cause and mode of failure need to be evaluated to determine a
representative source term (see the illustrative source-term sketch after this list).
5. Natural gas cloud dispersion is sensitive to the turbulence-related parameters, i.e. stability
class, wind speed and surface roughness. The following environmental parameter selections
and sensitivity evaluations are suggested:
a. For dense (molar mass > 30 g/mol) toxic natural gas, terrain effects should be evaluated
by selecting surface roughness values for low (cut grass) and medium (process plant /
urban) terrain.
b. A site-specific set of atmospheric stability and wind speed should be selected to
represent the predominant conditions. Sensitivity should be evaluated for higher and
lower wind speeds and the corresponding atmospheric stabilities.
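As an illustration of finding 4, the sketch below estimates a choked (sonic) gas release rate through a hole using the standard orifice relation; the discharge coefficient, compressibility factor, ratio of specific heats and molar mass are assumed values, not the study's source-term inputs.

```python
"""Illustrative choked-flow source term for a hole in a pressurized gas line.
The gas properties and discharge coefficient below are assumptions used only
to show the calculation, not the inputs used in the study."""
import math

R = 8.314  # J/(mol K)

def choked_mass_flow(p0_pa, t0_k, molar_mass_kg_mol, gamma, hole_d_m, cd=0.85, z=0.95):
    """Sonic (choked) discharge rate in kg/s through a sharp-edged hole."""
    area = math.pi / 4.0 * hole_d_m ** 2
    flow_factor = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0_pa * math.sqrt(gamma * molar_mass_kg_mol / (z * R * t0_k)) * flow_factor

if __name__ == "__main__":
    # 2-inch hole at 115 psia and 77 F; molar mass 0.020 kg/mol and gamma 1.3 are illustrative.
    w = choked_mass_flow(p0_pa=115 * 6894.76, t0_k=298.15,
                         molar_mass_kg_mol=0.020, gamma=1.3, hole_d_m=0.0508)
    print(f"Estimated release rate ~ {w:.1f} kg/s")
```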
3.5.2 Application: The role of consequence modeling results in risk management efforts is
analysed using the study findings. The results from the parameter sensitivity analysis for natural
gas composition S4 were transposed onto a geographical location. The potential impact to the public
(personnel) corresponding to each impact zone radius was estimated to compare the levels of risk. A
comparison with composition S7 and possible risk management considerations are also discussed.
The downwind distances to the 100 ppm H2S cloud are summarized in Table 7.
Table 7: Natural gas (S6) compositions (mol%) and downwind distance to 100ppm H2S
Figure 17 summarizes the parameter sensitivity of the H2S downwind distances and potential impacts:
the yellow pin corresponds to the release point and the colored circles represent the impact zones for
the different sets of inputs and parameters given in Table 7. The impact area for an actual release
event will be a sector of the circle, with its orientation dependent on the wind direction.
Figure 17: Parameter sensitivity summary - H2S downwind distances and potential impacts
The representative set of cases with impact zones, corresponding potential consequences and risk
management considerations is given in Table 8. For the base case impact zone (orange, radius
1185 ft), the 100 ppm H2S cloud (IDLH concentration level) could reach an office building or
residential area. This implies that in the event of a release under the base case conditions with a
southerly wind (blowing towards the north), more than 500 people could be exposed to a natural gas
cloud at 100 ppm or more until the release is isolated, and such an exposure could result in coughing
and dizziness. Risk reduction considerations should aim to reduce the impact zone radius, for
example by reducing the pipeline diameter or restricting the horizontal release orientation (e.g.
laying the pipeline underground). However, when the modeling used the site-specific representative
wind speed and atmospheric stability (13D – medium wind and neutral) instead of the worst-case
conditions (3.4F – stable and low wind), the estimated impact zone was much smaller (300 ft, green).
The impact zone was then limited to the facility surroundings (without personnel exposure), and the
risk management effort was limited to maintaining the exclusion zone (restricting personnel access /
habitation). The impact zones and potential consequences for operating at higher pressure and for
the S7 composition are likewise given in Table 8.
Table 8: Natural gas impact zone – parameter sensitivity and risk management considerations
C4: Pressure – High (500 psia) | Blue | 2,000+ (2 x office, 100+ houses) | Operational controls (e.g. at lower pressure)
C7: Pressure – High (500 psia) | Red | 25,000+ (ball park, supermarket, neighbourhoods) | Elevated risk, consider alternate route
A worst-case consequence modelling estimate may not be the best basis for risk management;
instead, a ‘credible’ worst-case scenario needs to be determined and subjected to consequence
modeling. The credibility of a set of modeling inputs should be determined considering the site-
specific operating conditions, fluid characteristics, type of failure and likelihood of the environmental
conditions. Once the risk levels are evaluated, sensitivity analysis on the modeling inputs can be used
further to determine the risk management efforts.
4. Concluding remarks
Numerical simulation of the release and dispersion of natural gas provides enhanced information
on the potential impact zone, which forms an essential part of risk-based decision making,
especially in engineering projects and emergency planning. For toxic natural gas, with components
like hydrogen sulfide (H2S), the toxicity impact zone drives business decisions related to
equipment design, facility siting, layout, land use planning and emergency response measures. The
study focused on potential accidental releases from a pipeline transferring toxic natural gas at ground
level. Eight natural gas compositions were subjected to a range of release source term and
environmental parameter sensitivity analyses. The multi-component phase diagram was developed
using HYSYS, and the release and subsequent dispersion were estimated using Canary. The analysis
was carried out by changing one parameter at a time for the release and environmental conditions.
The analysis concludes that the release and dispersion of toxic natural gas is significantly impacted
by (i) fluid composition, molar mass and fluid phase, (ii) release hole size and orientation, (iii) low
temperature and high pressure, and (iv) surface roughness, wind speed and stability. The study
findings highlight the significance of using a multicomponent consequence model when the
potential for the formation of a two-phase system exists. Sensitivity modelling of the key
parameters is the suggested approach to address this challenge. Incorrect selection of the
modeling approach, inputs and environmental parameters could lead to an inaccurate consequence
impact zone estimate, which could result in disproportionate risk management efforts. This
challenge can be addressed by a better understanding of the cloud behaviour following release and
by sensitivity analysis of the modeling inputs and parameters.
References
1. AspenTech, 2013, pp. 1–117, Aspen Physical Property System: Physical Property Methods.
Technology, Inc, Aspen.
2. Bariha N., Mishra I.M., Srivastava V.C., 2016, Hazard analysis of failure of natural gas and
petroleum pipelines, Journal of Loss Prevention in the Process Industries, 40, pp. 217–226
3. Guerra M. J., 2006, Aspen HYSYS Property Packages: Overview and Best Practices for
Optimum Simulations, AspenTech, pp. 1–36
4. Hanna, S.R., Briggs, G.A., and Hosker, R.P. Jr., 1982, Handbook on atmospheric diffusion.
United States: N. p.,.Web. doi:10.2172/5591108
5. IOGP, 2010, Report No. 434-7, Risk Assessment Data Directory - Consequence Modelling,
OGP
6. Jianwen, Z., Da, L., Wenxing, F., 2011, Analysis of chemical disasters caused by release of
hydrogen-sulfide bearing natural gas, First International Symposium on Mine Safety Science
and Engineering, Procedia Engineering 26, pp. 1878–1890
7. Johnson, D. W. and Marx, J. D., 2003, The importance of multiphase and multicomponent
modeling in consequence and risk analysis, Journal of Hazardous Materials, Vol. 104, pp. 51–64.
8. Kelley, B.T., Valencia, J.A., Northrop, P.S. and Mart, C.J., 2011, Controlled Freeze Zone for
developing sour gas reserves, Energy Procedia 4, pp. 824–829
9. Muhlbauer W.K., 2004, Pipeline risk management manual ideas, techniques and resources, 3rd
Edition, ISBN: 0-7506-7579-9, 14/327 – 328.
10. Nair S., Wen J., 2019, Uncertainties in Sour Natural Gas Dispersion Modelling, Chemical
Engineering Transactions, 77, 355-360, https://doi.org/10.3303/CET1977060
11. Pandya N., Gabas N., Marsden E., 2012, Sensitivity analysis of Phast’s atmospheric dispersion
model for three toxic materials (nitric oxide, ammonia, chlorine), Journal of Loss Prevention
in the Process Industries 25(1), DOI: 10.1016/j.jlp.2011.06.015
12. Pandya N., Gabas N., Marsden E.,2013, Uncertainty Analysis of Phast's Atmospheric
Dispersion Model for Two Industrial Use Cases, Chemical Engineering Transactions, AIDIC,
vol. 31, pp. 97-102. DOI:10.3303/CET1331017
13. Speight, J. G., 2007, Natural Gas – A basic handbook, Gulf publishing company, ISBN 1-
933762-14-4
14. Stephens M. J., 2000, GRI-00/0189 - A model for sizing high consequence areas associated
with natural gas pipelines, Gas Research Institute
15. US DoT, 2018, Pipeline Risk Modeling – Overview of Methods and Tools for Improved
Implementation, Pipeline and Hazardous Materials Safety Administration
16. US EPA, 2017 Guideline on Air Quality Models ("Appendix W" to 40 CFR Part 51)
17. Zhang Jianwen, Lei Da, Feng Wenxing, Analysis of Chemical Disasters Caused by Release of
Hydrogen Sulfide-bearing Natural Gas, Procedia Engineering, Volume 26, 2011, Pages 1878-
1890 https://doi.org/10.1016/j.proeng.2011.11.2380
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
With the rapid development and advancement of computing power, modelling and simulation
(M&S) has demonstrated its vast potential in predicting the properties of energetic materials and
in helping to design them. One such application is predicting crystal packing and crystalline
structure from first-principles simulation. This technique has demonstrated the ability to distinguish
different polymorphs of the same energetic molecule and to accurately predict the crystal structure
and density. In addition to predicting the detonation pressures and velocities of the more established
classes of energetic materials with thermochemical codes or empirical equations, M&S has also
demonstrated its ability to screen designed energetic materials for potential applications. The
application of M&S vastly improves the safety of developing potential energetic materials: the
ability to screen candidates based on M&S-predicted heats of formation and detonation properties
means that fewer hazardous experiments need to be conducted, which also reduces development cost.
Keywords: energetic material, safety, modelling and simulation, density, heat of formation,
detonation property
1. Introduction
Energetic materials, such as propellants, pyrotechnics, and explosives, belong to a special class
of materials that have many industrial and civic applications but pose dangers and hazards at
the same time. In recent decades, active research into the design and synthesis of novel high energy
density materials (HEDMs) has led to the development of many new compounds; for example,
hexanitrohexaazaisowurtzitane (HNIW or CL-20) [1], 1,3,3-trinitroazetidine (TNAZ) [2],
octanitrocubane (ONC) [3] and 1,1-diamino-2,2-dinitroethene (FOX-7) [4] were designed and
synthesized. However, the experimental exploration of HEDMs can be a rather hazardous process.
With the rapid development of simulation techniques and computing power, modelling and
simulation (M&S) has become an important auxiliary tool for the design and development of
HEDMs. Theoretical M&S can evaluate HEDM material properties more safely, efficiently and
economically; these properties include crystal form and density, heat of formation, and detonation
pressures and velocities. Thus, M&S is not only a precise predictor of the material properties of
classical and newly synthesized energetic materials, but also a screening tool that selects promising
high energy density compounds (HEDCs) from a large number of theoretical targets for future
experimental synthesis. This M&S screening capability helps to reduce the number of hazardous and
time-consuming experiments, and the associated cost, in the rapid advancement and development of
HEDMs.
The crystal density is a very important parameter for HEDMs, and, at the present stage, new
energetic materials are expected to achieve a density of more than 1.90 g/cm3. To date, several
theoretical methodologies have been developed to predict the crystal densities of HEDMs based on
their geometric or electronic structures.
A frequently used method is the group additivity method (GAM) [5]. GAM divides an energy
density molecule into several appropriate functional groups; the volume of the molecule is then
evaluated by summing the volumes of these fragments. In recent years, the GAM has been updated
to consider intermolecular interactions of molecules in the solid state. However, it does not include
the effect of the crystal form on the crystal volume. Moreover, the GAM relies heavily on
experimental data: when a novel, promising HEDC possesses a group beyond the GAM database,
the method often finds itself limited in application.
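A minimal sketch of the GAM bookkeeping is shown below; the group volume increments and the example molecule breakdown are invented placeholders for illustration, not the published GAM parameters of reference [5].

```python
"""Sketch of the group additivity method (GAM) for crystal density:
molar volume = sum of group volume increments, density = molar mass / molar volume.
The group volumes and molecule breakdown below are invented placeholders."""

GROUP_VOLUMES_CM3_PER_MOL = {"C_ring": 9.0, "N_ring": 7.0, "NO2": 23.0, "H": 3.0}

def gam_density(group_counts, molar_mass_g_per_mol):
    """group_counts: dict of group label -> number of that group in the molecule."""
    molar_volume = sum(GROUP_VOLUMES_CM3_PER_MOL[g] * n for g, n in group_counts.items())
    return molar_mass_g_per_mol / molar_volume  # g/cm3

if __name__ == "__main__":
    # An RDX-like molecule sketched as 3 ring C, 3 ring N, 3 NO2 groups and 6 H atoms.
    rho = gam_density({"C_ring": 3, "N_ring": 3, "NO2": 3, "H": 6},
                      molar_mass_g_per_mol=222.1)
    print(f"GAM density estimate ~ {rho:.2f} g/cm3")
```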
Based on quantum mechanics (QM) calculations and analysis of the electron cloud of an HEDC,
an early theoretical method for predicting the density of HEDCs was proposed [6-7]. The accuracy
of the QM density derives entirely from the appropriate theoretical method and a high calculational
level, rather than from a fit to experimental data. The theoretical molecular density of each target
HEDC requires an accurate evaluation of the volume enclosed by the 0.001 electrons Bohr-3
electron density envelope, which is computed by Monte Carlo integration. To ensure the accuracy
of the Monte Carlo integration, the QM volume of an HEDC is obtained as the arithmetic average
of many single-point molar volumes, for example more than 100 evaluations. The QM density of
HEDCs is highly consistent with the experimental data (see Table 1) and has therefore been widely
adopted at the exploration stage of promising HEDMs.
Table 1. Experimental (a) and theoretical densities (in g/cm3), calculated at the B3LYP level, of TNT, RDX and HMX.
(a) See reference [8].
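The following sketch shows one way the Monte Carlo volume integration could be organized; the electron density callable is a stand-in (in practice it would come from a quantum chemistry calculation), and the toy density in the example exists only to make the sketch runnable.

```python
"""Sketch of the Monte Carlo estimate of the volume enclosed by the 0.001 electron/bohr^3
density isosurface (the quantity averaged over many evaluations in the QM density method).
`electron_density` is a stand-in callable; a real calculation would query a QM code."""
import math
import random

def mc_isosurface_volume(electron_density, box_min, box_max,
                         iso=0.001, n_points=200_000, seed=0):
    """Volume (bohr^3) = fraction of random points with density >= iso, times box volume."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_points)
        if electron_density(tuple(rng.uniform(lo, hi)
                                  for lo, hi in zip(box_min, box_max))) >= iso
    )
    box_volume = 1.0
    for lo, hi in zip(box_min, box_max):
        box_volume *= (hi - lo)
    return inside / n_points * box_volume

if __name__ == "__main__":
    # Toy spherically symmetric density, used only to make the sketch runnable.
    def toy_density(p, amplitude=2.0):
        return amplitude * math.exp(-sum(x * x for x in p))

    volume = mc_isosurface_volume(toy_density, (-6.0, -6.0, -6.0), (6.0, 6.0, 6.0))
    print(f"Volume inside the 0.001 e/bohr^3 isosurface ~ {volume:.1f} bohr^3")
    # The QM method averages many such single-point volumes (e.g. >100 evaluations)
    # and converts molar mass / molar volume into a density in g/cm3.
```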
Since the density predicted from QM calculations does not take into account intermolecular
interactions and crystal packing, and the QM method is also unable to distinguish between different
polymorphs of the same HEDC, we proposed a new technique to evaluate the crystalline density of
HEDMs and increase the accuracy of the prediction. It includes two consecutive steps: crystal
packing of the molecules followed by first-principles simulation of the crystalline HEDM [9].
Our calculations have shown that the crystal form of the same HEDC molecule is an important
factor in determining the crystal density of the HEDM [9]. It is even more remarkable that the new
method can discover promising crystal forms of synthesized energetic materials that are more
thermodynamically stable, possess higher density, and yet correspond to experimentally elusive
polymorphs [10].
The heat of formation (HOF) is a key parameter for predicting the explosive performance of
HEDMs. Since experimental measurement of the HOF of HEDMs is difficult and hazardous,
theoretical evaluation using computer codes has become a popular approach for exploring the HOF
of energetic materials. In general, the HOF of HEDMs is calculated at the level of density functional
theory (DFT) [11-12], which can accurately estimate the HOF and avoids the shortcomings of other
theoretical methods. Since DFT includes electronic correlation and its calculations are not
expensive, DFT methods, especially B3LYP [13-15], can be employed to estimate the HOF of most
HEDMs. However, the approach needs specifically designed isodesmic reactions, in which the
target energetic molecule is broken down into several small molecules containing the same
component bonds; consequently, different isodesmic reactions can generate different HOF values
for the same molecule at the same computational level. In addition, accurate experimental HOF
values must exist for the small molecules used in the isodesmic reactions. Thus, some disadvantages
remain in the isodesmic reaction method.
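A minimal sketch of the isodesmic-reaction bookkeeping is given below, assuming hypothetical QM enthalpies and reference heats of formation; it only illustrates how the computed reaction enthalpy and the experimental HOF values of the reference species combine to give the HOF of the target.

```python
"""Sketch of the isodesmic-reaction route to a gas-phase heat of formation for
Target + reference reactants -> reference products.  All enthalpies and experimental
HOF values below are hypothetical placeholders used only to show the bookkeeping."""

HARTREE_TO_KJ_PER_MOL = 2625.5

def hof_from_isodesmic(h_target_hartree, reference_reactants, reference_products):
    """reference_*: lists of (qm_enthalpy_hartree, experimental_hof_kj_per_mol) tuples."""
    dh_rxn = (sum(h for h, _ in reference_products)
              - h_target_hartree
              - sum(h for h, _ in reference_reactants)) * HARTREE_TO_KJ_PER_MOL
    return (sum(hof for _, hof in reference_products)
            - sum(hof for _, hof in reference_reactants)
            - dh_rxn)

if __name__ == "__main__":
    target_enthalpy = -600.1200                       # hartree (hypothetical)
    reactants = [(-40.5000, -74.6)]                   # e.g. one CH4 with its experimental HOF
    products = [(-95.8000, -20.0), (-544.8000, 110.0)]
    print(f"HOF(target) ~ {hof_from_isodesmic(target_enthalpy, reactants, products):.1f} kJ/mol")
```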
At present, a theoretical method with sufficient accuracy [16] is the atomization scheme
together with high-level calculational models, which include the Gaussian-n (G2, G2(MP2), and
G3) [17-19] and complete basis set (CBS-4M, CBS-Q, and CBS-QB3) [20-22] models; these have
been well developed to evaluate the HOF of energetic materials. The method can estimate the HOF
with mean absolute deviations of less than 4.0 kJ/mol from experimental data (see Table 2).
Table 2. Experimental and theoretical heats of formation (in kJ/mol), calculated at the G2 and CBS-Q
levels, of oxazole, pyrazole and 1H-tetrazole.
Since experimental data are lacking for new HEDMs, the detonation velocity and detonation
pressure of organic CHNO explosives are traditionally predicted by applying the empirically derived
Kamlet–Jacobs equations [25]:
P = 1.558 ρ0^2 N M^(1/2) Q^(1/2)    (2)
where N is the moles of detonation gases per gram of explosive, M is the average molecular weight
of those gases, Q is the chemical energy of detonation, and ρ0 is the density of the explosive.
Although the Kamlet–Jacobs equations were derived decades ago, they are still applied to predict
the detonation properties of many CHNO HEDMs, especially new and theoretically designed
potential HEDMs. For example, the Kamlet–Jacobs detonation velocity and detonation pressure of
RDX are estimated as 9.03 km/s and 35.2 GPa [26], respectively, which are close to the experimental
values of 8.754 km/s and 33.8 GPa [27].
Table 3. Detonation velocity (D, in km/s) and detonation pressure (P, in GPa) of TNT, RDX, HMX
from experiment (a) and EXPLO5™ calculation (a).
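The sketch below evaluates the Kamlet–Jacobs expressions in their commonly quoted form (Eq. 2 for P together with the companion expression for D), using approximate RDX-like inputs chosen for illustration rather than the exact values behind the 9.03 km/s estimate cited above.

```python
"""Sketch of the Kamlet-Jacobs estimate of detonation velocity and pressure.
Units follow the common convention: N in mol gas per g explosive, M in g/mol,
Q in cal/g, rho0 in g/cm3, giving D in km/s and P in GPa.  The RDX-like inputs
below are approximate, illustrative values."""
import math

def kamlet_jacobs(n_mol_g, m_bar_g_mol, q_cal_g, rho0_g_cm3):
    phi = n_mol_g * math.sqrt(m_bar_g_mol) * math.sqrt(q_cal_g)
    d_km_s = 1.01 * math.sqrt(phi) * (1.0 + 1.30 * rho0_g_cm3)
    p_gpa = 1.558 * rho0_g_cm3 ** 2 * phi
    return d_km_s, p_gpa

if __name__ == "__main__":
    d, p = kamlet_jacobs(n_mol_g=0.0338, m_bar_g_mol=27.2, q_cal_g=1500.0, rho0_g_cm3=1.80)
    print(f"D ~ {d:.2f} km/s, P ~ {p:.1f} GPa")  # roughly 8.8 km/s and 34 GPa with these inputs
```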
6. Concluding Remarks
Although experimental data on energetic materials are preferred over modelling and simulation
data, reliable experimental data for HEDMs are often unavailable because the experiments are
always hazardous. Research on new HEDMs is therefore chronically short of experimental data,
especially in the exploration of novel HEDM materials.
With the rapid development and advancement of simulation techniques and computing power,
M&S has shown its vast potential in predicting the properties of energetic materials and in screening
promising HEDCs from a large pool of theoretical targets. As M&S results corroborate the
experimental data, M&S has become an indispensable tool for predicting material properties with
accuracy and for checking the reliability of theoretical targets.
Since M&S initially evaluates the material properties and predicts detonation performance based
only on quantum mechanical calculations, without experiments, it can effectively avoid the potential
hazards of HEDM experiments. At the same time, M&S can reduce the cost of experimental design
and synthesis of potential HEDMs.
References
[3] M.-X. Zhang, P. E. Eaton, R. Gilardi, Hepta- and Octanitrocubanes, Angew. Chem. Int.
Edit., 39 (2000), 401–404.
[5] C. M. Tarver, Density estimations for explosives and related compounds using the group
additivity approach, J. Chem. Eng. Data, 24 (1979), 136-145.
[6] G.-X. Wang, C.-H. Shi, X.-D. Gong, H.-M. Xiao, Theoretical Investigation on Structures,
Densities, Detonation Properties, and the Pyrolysis Mechanism of the Derivatives of HNS, J. Phys.
Chem. A, 113(2009), 1318–1326.
[7] Y. Liu, X. D. Gong, L. J. Wang, G. X. Wang, H. M. Xiao, Substituent effects on the properties
related to detonation performance and sensitivity for 2,20,4,40,6,60-Hexanitroazobenzene
derivatives, J. Phys. Chem. A, 115(2011), 1754–1762.
[8] B. M. Dobratz, Properties of chemical explosives and explosive simulants, United States: N. p.,
1972, Chapter 4.
[9] H.-W. Xi, H. W. Goh, J. Z. C. Xu, P. P. F. Lee, K. H. Lim, Theoretical design and exploration of
novel high energy density materials based on silicon, (2017), 291-301.
[10] H.-W. Xi, S. Z. B. M. Mazian, H. Y. S. Chan, H. H. Hng, H. W. Goh, K. H. Lim, Theoretical studies
on the structures, material properties, and IR spectra of polymorphs of 3,4-bis(1H-5-tetrazolyl)furoxan,
J. Mol. Model., 25 (2019), 51.
[11] Koch W, Holthausen MC (2000) A Chemist’s guide to density functional theory. Wiley-VCH,
Weinheim.
[12] Parr RG, Yang W (1989) Density-functional theory of atoms and molecules. Oxford
University Press, Oxford.
[13] A. D. Becke, Density-functional thermochemistry. III. The role of exact exchange, J. Chem.
Phys., 98 (1993), 5648-5652.
[14] C. Lee, W. Yang, R. G. Parr, Development of the Colle-Salvetti correlation-energy formula
into a functional of the electron density, Phys. Rev. B, 37 (1988), 785-789.
[15] S. H. Vosko, L. Wilk, M. Nusair, Accurate spin-dependent electron liquid correlation energies
for local spin density calculations: a critical analysis, Can. J. Phys., 58 (1980), 1200-1211.
[16] T. Wei, J. J. Zhang, W. H. Zhu, X. W. Zhang, H. M. Xiao, J. Mole. Struct.: THEOCHEM, 956
(2010), 55–60.
[21] J. A. Montgomery Jr., M. J. Frisch, J. W. Ochterski, G. A. Petersson, A complete basis set model
chemistry. VII. Use of the minimum population localization method, J. Chem. Phys., 112 (2000),
6532-6542.
[23] R.L. David, Handbook of Chemistry and Physics, 84th ed., CRC Press, Boca Raton, 2003
(Section 5).
[24] J. A. Dean, LANGE’S Handbook of Chemistry, 15th ed., McGraw-Hill Book Co., New York, 1999
(Chapter 6).
[26] J. J. Xiao, J. Zhang, D. Yang, H.-M. Xiao DFT comparative studies on the structures and
properties of heterocyclic nitramines, Acta Chimica Sinica 60(2002), 2110-2114.
[27] B. M. Dobratz LLNL Explosives Handbook. Properties of Chemical Explosives and Explosive
Simulants, UCRL-52997, Lawrence Livermore National Laboratory, 1981, (Chapter 6).
Cuixian Yang*, Ralph Zhao, Amanda Peterman, Analisse Rosario, Josh Bader, Tom Vickery, Megan
Roth, Adam Fine
Abstract
Commercially available azo-type low temperature radical initiators provide efficient initiation of many
chemical reactions. However, the azo group initiators are energetic compounds that also have thermal
stability issues at ambient or even sub-ambient temperatures. These initiators can also generate nitrogen
gas during slow decomposition under heat and/or light, which could present a safety challenge for shipping,
storage and usage. In order to define safe storage and handling conditions, a variety of calorimetry studies
were carried out. Exotherm and pressure data were collected from these studies in an effort to gain a better
understanding of the decomposition kinetics. Thermal-kinetics and thermal safety model simulations were
then used to obtain the self-accelerating decomposition temperature (SADT) and decomposition activation
energy for the azo-type initiator. This methodology for determining thermal decomposition kinetics data
and parameters, acquired with 5 mg to 1 g scale samples, enables safe storage, handling, and scale-up
process preparation.
Keywords: Azo Radical Initiator, Self-accelerating Decomposition Temperature (SADT), Storage and
Handling, Thermal-Kinetics Simulation, Thermal Instability
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Pressure can rise in chemical equipment when an upset occurs during reaction,
pressurization or transfer operations. Safety valves are therefore installed on equipment to
prevent rupture. A pressure rise caused by a runaway reaction is difficult to estimate because
two-phase flow might occur, so it is hard to say that all installed safety valves are adequately
sized. A vent sizing method for two-phase flow caused by runaway reactions was developed
by DIERS and established in 1987, and ISO 4126-10, published in 2010, became the global
standard. However, the ISO model often overdesigns the vent size because of the assumption
that the liquid level in the reactor does not change during the runaway reaction. In this study,
the reaction rate of a runaway reaction was analysed with ARSST tests, and the pressure rise
and depressurization phenomena were then simulated dynamically in Aspen. Considering the
liquid level decrease in the reactor during the runaway reaction gives a more accurate design
of the vent size.
Keywords: Reaction Runaway, Vent Sizing, Two Phase Flow, ISO 4126-10, Aspen,
Dynamic Simulation, Liquid Decrease
1. Introduction
In chemical plants, safety valves are installed to prevent the rupture of equipment caused by
an undesired pressure rise due to reaction upsets or failures during pressurization or transfer
operations. Typical initiating abnormal events leading to a pressure rise are malfunction of
control valves, failure of the reflux or internal-coil cooling system of a reactor, tube failure of a
heat exchanger, and fire cases. Although there are established considerations for those cases, the
pressure rise caused by a runaway reaction is difficult to predict. In particular, when two-phase
flow occurs during a runaway reaction, it is not easy to estimate an adequate vent size. A vent
sizing method for two-phase flow was developed by DIERS under the auspices of AIChE in 1987,
and ISO 4126-10 [1], Safety devices for protection against excessive pressure – Part 10: Sizing
of safety valves for gas/liquid two-phase flow, was published in 2010. Subsequently, JIS B 8227
[2], a translation of ISO 4126-10, was published in Japan in 2013. However, few organizations
are able to implement vent sizing with the JIS method because each analysis procedure is hard to
understand. Meanwhile, explosion accidents due to runaway reactions have continued to occur,
and it is considered important to review the protection layers. It is therefore necessary to review
the safety protection layers for such equipment, especially the safety valve. When vent sizing is
calculated with the ISO model, the required diameter is sometimes larger than the diameter of
the reactor itself, making it impossible to install such a safety valve on the reactor. The ISO
method assumes that the equipment is confined and that the inventory does not change during the
runaway reaction. However, actual equipment has gas lines such as exhaust lines or reflux lines.
Vaporization of solvents and the resulting decrease in inventory would temper the runaway by
evaporative cooling and by preventing two-phase flow, which could make the required vent size
much smaller. Before investigating the effects of gas lines, this paper constructs a detailed
simulation that accounts for the mass balance, reaction rate, and temperature and pressure
changes during the runaway reaction.
Figure 2. The procedure for constructing the estimation model for the safety valve diameter in Aspen
simulation result matches the experimental one. On the other hand, a safety valve model is not
included in Aspen Dynamics, and it is necessary to build one in Aspen Custom Modeler.
r = d/dt [ n0|MEKPO · (1 − (T − T0)/ΔTmax) ] = A_pre · exp(−Ea / (R·T))    (Eq. 2)
Figure 5. Corrected temperature data estimated from the reaction rate and the cooling curve
In Fig. 6, the blue line is the Arrhenius line estimated with the corrected temperature data and the
red line is the original line. When the reaction rate is estimated from the experimental data alone,
the slope of dT/dt is small in the region where the violent runaway occurred, and the runaway
behaviour would be underestimated.
Figure 6. Estimated reaction parameter with and without corrected temperature data
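A minimal sketch of how the Arrhenius parameters A_pre and Ea in Eq. 2 could be regressed from rate–temperature pairs of the kind plotted in Fig. 6 is given below; the data in the example are synthetic placeholders, not the MEKPO measurements.

```python
"""Sketch of regressing the Arrhenius parameters in Eq. 2 from (T, rate) pairs,
e.g. rates derived from ARSST self-heat data.  The data below are synthetic."""
import math

R = 8.314  # J/(mol K)

def fit_arrhenius(temps_k, rates):
    """Least-squares fit of ln(r) = ln(A) - Ea/(R*T); returns (A_pre, Ea in J/mol)."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return math.exp(intercept), -slope * R

if __name__ == "__main__":
    # Synthetic data generated from A = 1e12 1/s and Ea = 100 kJ/mol, with no noise.
    temps = [330.0, 345.0, 360.0, 375.0, 390.0]
    rates = [1e12 * math.exp(-100e3 / (R * t)) for t in temps]
    a_pre, ea = fit_arrhenius(temps, rates)
    print(f"A_pre ~ {a_pre:.2e} 1/s, Ea ~ {ea / 1000:.1f} kJ/mol")
```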
3.1.1 Estimation of heat of reaction
The heat of reaction is calculated from the difference between the total heats of formation before
and after the reaction, as shown in Equations 3 and 4. MEKPO and RESIDUE are not in the Aspen
physical property library, so it is necessary to estimate their heats of formation. The heat of
formation of MEKPO was calculated with the PM3 model in SPARTAN, a molecular orbital
calculation package. The heat of formation of RESIDUE was calculated from the relationship
between the heats of formation of the other substances, the composition, the specific heat, and the
adiabatic temperature rise obtained from the ARSST test, as shown in Eq. 3, Eq. 4 and Fig. 7.
ΔTmax = Q_reac / (Cp · M_test)    (Eq. 4)
Figure 9. The interaction between Aspen Dynamics and Aspen Custom Modeler
Ψ = 2·εd / (1 − C0·εd)    (Eq. 5)
Ψ = εd·(1 − εd)² / [(1 − εd³)·(1 − C0·εd)]    (Eq. 6)
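The sketch below simply evaluates Eq. 5 and Eq. 6 over a range of void fractions; the two forms are commonly associated with the churn-turbulent and bubbly flow regimes, respectively, and the C0 values used here are typical assumptions rather than the study's inputs.

```python
"""Sketch evaluating the dimensionless superficial-velocity functions of Eq. 5 and Eq. 6
for a range of void fractions.  The C0 values below are typical assumed values."""

def psi_eq5(void, c0=1.0):
    """Eq. 5: psi = 2*eps_d / (1 - C0*eps_d), commonly used for churn-turbulent flow."""
    return 2.0 * void / (1.0 - c0 * void)

def psi_eq6(void, c0=1.2):
    """Eq. 6: psi = eps_d*(1-eps_d)^2 / ((1-eps_d^3)*(1-C0*eps_d)), commonly used for bubbly flow."""
    return void * (1.0 - void) ** 2 / ((1.0 - void ** 3) * (1.0 - c0 * void))

if __name__ == "__main__":
    for eps in (0.1, 0.3, 0.5, 0.7):
        print(f"eps_d={eps:.1f}: psi_eq5={psi_eq5(eps):.2f}, psi_eq6={psi_eq6(eps):.3f}")
```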
The value of the vertical axis, tanh_void, is calculated from the relationship between void_crit and
void, and the mass flow rate through the safety valve, F, is then calculated with Eq. 8.
The default hyperbolic tangent function varies from −1 to 1, but a value ranging from 0 to 1 is
needed to switch between the equations, so the tanh value is rescaled and shifted by 0.5 as in Eq. 7.
In the same way, when the safety valve opens, a transition occurs in the mass flow rate from no
flow (valve closed) to flow (valve open). In this case, however, the safety valve opened before the
pressure in the reactor reached the valve set pressure, as shown in Fig. 13, so the pressure could not
rise to the set pressure. Therefore, the hyperbolic function was redefined as shown in Fig. 9 and
Fig. 14, i.e. the value used is the maximum of 0 and the hyperbolic value, like an activation function
used in machine learning.
When the pressure in the reactor is below the set pressure of the safety valve, the valve remains
closed because of the max function. If the pressure in the reactor then rises above the set pressure,
the valve opens immediately because tanh_Pset goes to 1, which solves the problem of pressure
leaking off before the set pressure is reached.
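One possible form of the clipped, rescaled tanh switch described above is sketched below; the sharpness factor and the flow expression are illustrative assumptions, not the actual Aspen Custom Modeler implementation.

```python
"""Sketch of a smoothed safety-valve switching function: a tanh centred on the set
pressure, rescaled to the 0-1 range and clipped with max() so no flow 'leaks'
below the set pressure.  Parameter values are illustrative assumptions only."""
import math

def valve_opening(p, p_set, sharpness=50.0):
    """Returns a value in [0, 1]: ~0 below the set pressure, rising smoothly to 1 above it."""
    smooth = 0.5 * math.tanh(sharpness * (p - p_set) / p_set) + 0.5
    return max(0.0, 2.0 * (smooth - 0.5))  # clipped so the valve stays fully shut below p_set

def relief_flow(p, p_set, full_open_flow_kg_s):
    """Mass flow through the valve, scaled by the smoothed opening fraction."""
    return valve_opening(p, p_set) * full_open_flow_kg_s

if __name__ == "__main__":
    for p in (0.90, 0.99, 1.00, 1.02, 1.10):   # pressures as multiples of the set pressure
        print(f"P/Pset={p:.2f}: opening fraction = {valve_opening(p, 1.0):.3f}")
```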
For the ISO method, there is only a single point, at the set pressure. For detailed simulation case 1,
there is a series of points over the course of the runaway reaction: when the reaction rate is slow the
points lie on the left side of the graph, and as the reaction rate increases the points move to the right.
The slight increase in liquid level in case 1 is caused by liquid expansion with the temperature rise.
The end point of case 1 is at almost the same position as the ISO method. Since the equations used
in the detailed simulation are the same as in the ISO method, the agreement of these results is one
validation of the formulation of the equations, physical properties and experimental parameters. The
estimated safety valve diameters from detailed simulation case 1 and the ISO method are 2.2 m and
2.3 m, respectively.
Figure 15. Comparison of calculation results between detailed simulation case 1 and ISO
method
Figure 16. Comparison of calculation results between detailed simulation case 1, 2 and
ISO method
Although verification tests of the accuracy of the detailed simulation will be carried out in future
work, here the accuracy is checked against the benchmark test data in the literature [5], as shown in
Fig. 17. The critical conditions for vent sizing are plotted as the benchmark test results, and the black
line in the graph is the critical borderline for vent sizing. Actual rupture incident data are also plotted
on the graph, below the critical borderline, which means that the safety valve diameters in those
rupture incidents were underestimated. The calculation results of detailed simulation cases 1 and 2
are also plotted on the graph. Case 2 lies on the borderline and case 1 is above the borderline. This
means that case 2 would give an accurate vent size, whereas case 1, which is almost the same as the
ISO method, would give a larger vent size.
Figure 17. Comparison of vent size among benchmark test and the calculation results of
case 1 and 2
References
[1] International Standard ISO 4126-10, Safety devices for protection against excessive
pressure Part10: Sizing of safety valves for gas/ liquid two-phase flow (2010)
[2] JIS B 8227, Sizing of safety valves for gas/ liquid two-phase flow (2013), in Japanese
[3] H. K. Fauske, Revising DIERS’ two-phase methodology for reactive systems twenty
years later, Process Safety Progress, Vol. 25, No. 3 (2006), 180-188.
[4] H. K. Fauske, Properly size vents for nonreactive and reactive chemicals, Chemical
Engineering Progress (2000) 17-29
[5] Guidelines for Pressure Relief and Effluent Handling Systems (2017)
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Tom Shephard
Safety and Automation Consultant
Houston, Texas
*Presenter E-mail: [email protected]
Abstract
For the first time in human history, it is now possible to take a comprehensive, cognitive-
focused, first principles approach to safety critical design, the topic of this presentation. Why is
this important? Experts assessing the root cause of catastrophic accidents commonly cite human
error as a causal factor. These same experts often assert that a latent error in the design is the
causal source of that error. An example is a mismatch between the cognitive demands placed on
the user and the user’s actual cognitive capability.
Recent catastrophic accidents provide ample evidence that existing methods failed to reliably
mitigate cognitive-attributed errors in safety critical designs. The most widely employed are
rooted in methods developed in the last century. In response, global industry leaders and
organizations issued article and white paper ‘calls for change’ in these areas. In addition, new
technology that became available in the 1990s triggered an exponential growth in the base knowledge that
articulates the fundamental nature, attributes, capabilities and executive functioning of the very
different automatic (aka, subconscious) and conscious processes. Little of this new information
is widely known or currently applied. Consequently, industry tribal knowledge is broadly
incomplete and often erroneous, a hidden bias that contributes to latent design errors.
So, what is the path forward? What form does a new design process take? Any new method
should be purposely designed to explicitly identify and mitigate cognitive-attributed design
errors at the task phase (detect, decide, act) under all plausible situations. It should utilize the
latest available, peer-reviewed information on human cognition and apply equally well to the
design of an active human barrier or a safety critical task. From that frame, the presentation
provides an overview of one possible solution. Presented examples of automatic and conscious
processes should aid in understanding what this new information looks like and the tools and
expertise needed to apply it.
Keywords: human factors, barriers, cognitive ergonomics, situation awareness, tasks, process
safety, design process, emergency response
Preventing Cognitive-Attributed Errors in Safety Critical Design: A Path
Forward
Table of Contents
1 Introduction
1.1 Industry Insights and Recommendations
1.2 Terms and Definitions
2 Barrier Constructs and Models
2.1 Define Barriers as Tasks
2.2 Frame Tasks in the Form of a Detect, Decide, Act (DDA) Model
2.3 Integrate Situation Awareness (SA) into the DDA Model
2.4 Barrier Elements: Physical, Human and Organizational
3 Risk Assessment and Barrier Identification
3.1 Introduction
3.2 Proposed Methodology and Processes
3.3 Execution Considerations
4 Barrier Definition (Preliminary Design)
4.1 Introduction
4.2 Proposed Methodology and Process
5 Barrier Detailed Design and Engineering
5.1 Introduction
5.2 Detect Phase: SA-2 and SA-3 Design (Process C4 in Figure 10)
Appendix A – Task Specification and Performance Standards
Appendix B - Performance Influencing Factors
Appendix C – Cognitive Assessment Process
Appendix D - Human Cognition – Example Baseline Information
References
Figures
Figure 1 – A Multi-Person, Multi-Task Barrier
Figure 2 - Activity Phases in Active Barriers (CCPS 2018)
Figure 3 - Integration of SA Model into DDA Model
Figure 4 - Barrier / Task Elements
Figure 5 - Process Overview
Figure 6 - Risk Assessments and Barrier Identification
Figure 7 - Overview of Preliminary Design Process
Figure 8 - Time Relationship of Passive and Active Mitigation Barriers (Offshore Example)
Figure 9 – Information Defined in the Preliminary Design Phase (Task Specification)
Figure 10 - Overview of Detailed Engineering and Design Processes
Figure 11 - Process C4: SA-2, SA-3 (Detailed Design and Engineering)
Figure 12 - Cognitive Assessment Process
Figure 13 – Sub-Process Applies to Steps in Figure 12
Figure 14 - Overview - Cognitive Processes and Relative Response Times
Tables
Table 1 - Compare Active Human Barrier and Safety Critical Tasks
Table 2 - Comparison of Task Models
Table 3 - Originating Barrier Requirements: Single Task Barriers
Table 4 - Originating Barrier Requirements: Multi-Task Barriers
Table 5 - Suggested Task Analysis Team
Table 6 - Suggested Design Review Team – Preliminary Design Phase
Table 7 – Performance Shaping Factors (SINTEFF (CRIOP) 2011 para 5.2.6)
Table 8 – Performance Influencing Factors (EI 2020, Tables 5, 7, 8, 9)
Table 9 – Performance Shaping Factors (IEF/SINTEFF 2015 (Petro-HRA) Table 2.2 pp. 29-30)
Table 10 – Performance Influencing Factors (NUREG 2012, Table 2-2)
Table 11 - Example Cognitive Assessment Table: Mental Models
Table 12 - List of Example Cognitive Assessment Tables
Table 13 – Cognitive Assessment and Recommendations Review Team
Table 14 - Comparative Overview of Automatic and Conscious Processes
1 Introduction
For the first time in human history, it is now possible to take a comprehensive, science-driven,
cognitive-focused approach to the design of active human barriers and safety critical tasks, the
topic of this manuscript. Why is this important? Experts assessing the root cause of catastrophic
accidents commonly cite human error as a causal factor. Many of these experts also assert that a
latent error in the design is the causal source of that error. This occurs when there is a
mismatch between the cognitive demands of the design and the cognitive capabilities of the
human.
The Deepwater Horizon (DWH) accident triggered a global issuance of calls-for-change
whitepapers and articles in cognitive ergonomics (CIEHF 2016, Johnsen et al. 2017, OESI 2016,
IOGP 2012, and SPE 2014). Common themes include improvements to situation awareness and
reducing the unrealistic cognitive demands in the design and the effects of cognitive behaviours
and biases, e.g., confirmation bias. Recent accidents continue to provide evidence that the
existing design methods are inadequate. Some of the most widely employed are rooted in
methods developed in the past century. New technology, first available in the mid-1990s,
triggered an exponential growth in neuroscience research that provides an expanded and new
understanding of the nature, attributes, and executive functioning of the profoundly different
conscious (aka, System 2) and automatic processes (aka, System 1, unconscious, etc.). Little of
this new information is widely known or currently applied in the O&G and petrochemical
sectors. Consequently, industry tribal knowledge is broadly incomplete and often erroneous, a
hidden bias that contributes to latent design errors.
So, what is the path forward? What form does a new design process take? To start, a new
methodology should be designed to prevent cognitive mismatches in situation-specific activities
that take place within the barrier detect, decide, and act phases. It should consider the
recommendations in the above-referenced articles and white papers. It should use and apply the
best available science and practices currently used in other industries, and information available
from peer-reviewed and widely recognized sources. From that frame, this manuscript presents a
set of prototype processes and tools guided by these statements, i.e., a white paper approach.
The intent is to show: example processes and tools and the information they generate, a new
methodology that identifies and mitigates cognitive mismatches at the situation-based activity
level, and how these tools can be used in a capital project environment. Given the state of the
industry (and the many roadblocks that appear to be stifling progress), the author believes it may
be helpful to provide an improved starting point, i.e., one that addresses the primary deficiencies
in current practice. Additional supporting information is provided. A novel table summarizes and
contrasts the fundamentally different functioning, capabilities, limitations, and behaviours of the
automatic and conscious processes. A summary of the cognitive issues that are likely to occur in
all safety critical designs is also included. Finally, the manuscript highlights the skills and
knowledge needed to apply the more advanced cognitive-focused processes.
Missing Focus on Human Factors – Organizational and Cognitive Ergonomics – in the Safety
Management for the Petroleum Industry. (Johnsen et al 2017)
“Expertise on organizational ergonomics and cognitive ergonomics are missing from companies
and safety authorities and are poorly prioritized during development.”
“Incident investigations have revealed that cognitive and organizational ergonomics seldom are
mentioned and explored.”
“Latent errors are related to designers, high-level decision makers and managers, where the
adverse consequences may lie dormant within the system for a long time and only becoming
evident when combining with other factors to breach the system’s defenses.”
“The missing focus on cognitive and organizational ergonomics…may create weaknesses and
holes in defenses/barriers….These issues have not been properly addressed in new versions of
the Norsok S-002 to be published in 2017.”
SPE Technical Report, The Human Factor: Process Safety and Culture (SPE 2014)
“Incident investigations often identify deficiencies in the design or implementation of the
interface between people and technology as contributing to the loss of reliable human
performance. This is sometimes referred to as “design-induced human error”.
“The extensive research into the psychology of how irrationality and cognitive bias lead to poor
risk assessment and decision-making and the practice solutions to counter these biases should be
used more extensively to improve the training of people involved in safety-critical operations in
the E&P industry.”
Human Factors and Ergonomics in Offshore Drilling and Production: The Implications for
Drilling Safety (OESI 2016)
“In the offshore O&G environment, many of the current interfaces for daily and emergency tasks
have not been specifically designed to facilitate and support human performance.”
One incident…”revealed the cause of the incident was not related to poor decision-making, but
rather the absence of SA and poor mental models” “Studies have reported finding the loss of SA
can result from something as simple as inattention and is also a function of experience and
training”
“In a study of 332 offshore incidents…”more than 40% of drilling activity incidents were
associated with inadequate SA. A majority of those errors (67%) occurred at Level 1 SA…”
“Confusion occurs when a worker misinterprets the observed behavior of the operating system
in light of their mental model of the system.”
Abbreviations:
ALARP – As Low as Reasonably Practicable
DDA – Detect, Decide, Act
IPL – Independent Protection Layer. An active human barrier is an IPL if it is designated as
such in a LOPA or equivalent risk assessment process
LOPA – Layer of Protection Analysis
HAZOP – Hazard and Operability Study
HE – Human Element
HMI – Human Machine Interface
OE – Organizational Element
PE – Physical Element
PPE – Personal Protective Equipment
PST – Process Safety Time
RA – Risk assessment
“Although some OGP members require safety-critical human tasks to be specifically identified
and managed, the safety-critical nature of operator activities is not always recognized. It seems
that the required performance standard, or the consequences of individuals not performing tasks
to the required standard, is often poorly understood. There also seems to be an insufficient
understanding of the demands that safety-critical tasks can make on human performance, what is
needed to support the required level of performance, and the ways in which human performance
could fail in undertaking the tasks, or the inherent unreliability associated with the task.”
“…members should work towards adopting being able to satisfy themselves that safety-critical
human barriers will actually work and the risk of human unreliability in performing them is
effectively managed and reduced….members should work towards adopting practices to identify
and understand safety-critical human tasks. “
From HSE (Principle 8) Assessment Principles for Offshore Safety Cases (APOSC), 2006:
“Safety critical tasks should be analysed to demonstrate that task performance could be
delivered to the specified performance when required. This demonstration should draw upon
recognised good practice in human factors.”
“Human performance problems should be systematically evaluated. This should involve
evaluating the feasibility of tasks, identifying control measures and providing an input to the
design of procedures and personnel training, and of the interfaces between personnel and plant.
The depth of analysis should be appropriate to the severity of the consequences of failure of the
task.”
Active human barriers can be defined as one or more tasks. Depending on the approach taken in
a task analysis process, some barriers may have one person performing two or more tasks. Refer
to Figure 1. Other barriers (e.g., emergency response barriers) require several persons who must
perform these tasks in a manner that realizes and achieves the barrier safety function within the
PST.
[Figure 1 – A Multi-Person, Multi-Task Barrier: a barrier leader coordinating task group 1, 2 and 3 assignees]
Active human barriers and SCTs are similar in that they both perform safety critical functions
and rely on one or more humans to achieve the safety function. As indicated in Table 1, there are
some important differences. One difference is that the activation of an active human barrier is
unplanned and therefore may be a surprise, as compared to a planned and scheduled SCT.
Table 1 - Compare Active Human Barrier and Safety Critical Tasks
Columns: [1] Active human barrier – Preventive | [2] Active human barrier – Control/Recovery (Mitigation) | [3] Active human barrier – Emergency Response (Mitigation) | [4] Safety Critical Task
Occurrence: Unplanned | Unplanned | Unplanned | Planned
How many Human Elements (HE): 1 (typical) | 1* (typical) | 2 or more (typical) | Varies
Tasks needed to achieve the barrier function: 1 (typical) | 1* (typical) | Two or more (typical), with one or more tasks assigned to each HE | 1 or more per assigned HE
Active barrier?: Yes | Yes | Yes** | No
Workload demand: Assumed manageable within stated response time | Assumed manageable within stated response time | Situation and peak workloads may exceed HE capacity for periods of time | Assumed manageable
Specified response time (Process Safety Time or PST): Yes | Yes | Varies – target response times may be established (e.g., medical or recovery response; incident timeline, such as a ship avoidance (collision) barrier); the PST may be affected by a time constraint attributed to an external system or barrier (the barrier is dependent on a passive barrier with a specified endurance time, e.g., a firewall; or one or more barrier elements depend on an external, time/capacity limited support system, e.g., battery-backed power systems) | Generally, no
* The number of unique tasks from an HTA depends on how the task analysis team frames the task.
** Active only when scheduled.
2.2 Frame Tasks in the Form of a Detect, Decide, Act (DDA) Model
From CIEHF (2016, p 20) “Active Barriers must have detect-decide-act functionality – i.e., they
must have one or more elements that allow them to:
…Detect the condition that is expected to initiate performance of the barrier function...
...Decide what action needs to be taken, and;
…Take the necessary action.”
“Detect – decide – act functionality can be inherent in a single barrier element, or can involve a
combination of barrier elements working together (such as a sensor raising an alarm, a human
understanding the meaning of the alarm and knowing what action to take, and then the human
using a technical system to effect the action).”
Tasks are a compilation of cognitive and physical activities. The ability to define and assess
these activities requires that the task be expanded into a form that supports the definition and
assessment process. The model adopted in this manuscript is the Detect, Decide, Act (DDA)
model from the Center for Chemical Process Safety (CCPS), indicated in Figure 2.
[Figure 2 - Activity Phases in Active Barriers (CCPS 2018): Detect → Decide → Act (Execute)]
“SA has been acknowledged as the basis for good decision making within complex systems,
including the O&G industry where poor performance can lead to devastating results.” (OESI
2016)
Sharp end knowledge-based mistakes may be….”caused by bad HMI design, for example,
operator’s poor problem solving due to lack of sufficient support via HMI to sustain excellent
situation awareness.” (Johnsen et al 2017)
The referenced call-to-action documents affirm the need to improve the understanding and
application of situation awareness in operational and design practices. Suggestions on how this
should be achieved were limited. Situation awareness (models and their application) has been
successfully used in other high risk, high consequences industries for several decades. The
dominant and the most widely recognized and referenced model is Dr. Mica Endsley’s three-
stage SA model (Endsley 1995). Endsley’s model is adopted for this manuscript. It proposes
three stages of situation awareness:
Perception (SA-1) refers to the acquisition of information that is perceivable and
available to our senses.
Comprehension (SA-2) is the product of combining the SA-1 information with one’s
stored knowledge and experience to develop an understanding (mental picture) of what
the information means.
Projection (SA-3) is the product of using one’s expertise and understanding of how (and
how quickly) the current situation (SA-2) is changing over time, to predict or anticipate
how conditions may change in the future, near term.
Because time is the singular resource that often places the greatest demand on humans assigned
to perform barriers/tasks, it is important to recognize that time is an essential aspect of SA.
“The rate at which information changes is that part of SA…. that allows for the projection
of future situations.” (Endsley and Jones 2012, p. 19)
“A critical part of SA is often understanding how much time is available until some event
occurs or some action must be taken.” (Endsley and Jones 2012, p. 19)
Figure 3 below shows the adopted approach for integrating Endsley’s model into the DDA
model and thus into the barrier design process.
[Figure 3 - Integration of SA Model into DDA Model: Perceive (SA-1), Comprehend (SA-2) and Projection (SA-3) mapped onto the Detect, Decide and Act (Execute) phases]
The final construct needed to support the design process is to name and frame the barrier
elements that, together, achieve the barrier safety function or task goal.
Currently there is no globally accepted guidance document that defines, names and frames the
various barrier elements. As such, the terms adopted in this manuscript are Physical, Human and
Organizational, indicated in Figure 4.
Every barrier / task comprises:
Human element (HE): This is the person(s) assigned to perform one or more barrier
task/phases. The HE is verified to be available, fit-for-service, and meet all specified
competency requirements.
Organization Element (OE): Examples include, but are not limited to, staff and staffing plans,
organizational charts, procedures, training, competency assessments, competency management,
etc. (OE includes all components that are not human or physical in nature.)
Discussion
“….an early – and often problematic – focus only on safety critical elements (rather than on
tasks and activities) in the UK offshore safety regime meant that wider critical aspects of the
human element were often missed. A proper focus is needed on the totality of what people do,
not just on the performance of the technical systems.” (CIEHF 2016)
“...technical standards often lack design features necessary to incorporate human factors and
ergonomics considerations. “(SPE 2014)
Comment: Some country and regulatory standards (e.g., PSA 2013) and industry guidance documents (CCPS 2018, CIEHF 2016, SINTEFF 2016) use the terms 'Technical, Organization and Operational' for barrier and task elements. For this manuscript, the terms 'Physical, Human and Organizational' are used. They may be easier to remember, and the expected-versus-actual word meanings may be less confusing. The same cannot be said for the similar-sounding terms 'Organizational' and 'Operational'. IEC 61511, first published in 2004, used the similar-sounding terms 'verification' and 'validation'; not surprisingly, they are often misunderstood or swapped. Perhaps the more compelling reason for adopting the term 'human' is to offset the historical hardware-focused paradigm that remains prevalent in many industry documents. Given this perspective, and because the human element is the most challenging element to address in the barrier design process, it does not seem helpful to exclude 'human' as one of the elements.
From this section forward, the manuscript presents the prototype design processes indicated in Figure 5. These processes are amenable to the typical, stage-gated project execution model. They apply to active human barriers, though they could also be applied to the design of safety critical tasks. In the following sections, the design activities and supporting information are discussed.
"…process through training and preparation. It is also essential that any analysis session has an adequate task analysis as an input…." (CIEHF 2016)
“So while preventive barriers (those on the left-hand side) typically operate over a timescale that
can be measured in weeks, days and hours, mitigation barriers (those on the right-hand side)
typically have to operate in a timescale of hours and minutes. This can create pressure for
people to perform to extremely high standards in situations of both stress and time pressure.”
(CIEHF 2016)
As good practice, the information listed in Figure 6 should be provided by the processes indicated as A-3 to A-7.
With Step A-8, it may be helpful to provide the base barrier information for single task barriers in a format similar to Table 3. If this information is not the product of the risk assessment, then the question arises of who provides it (i.e., someone qualified, authorized, and accountable) and when. From the author's experience, errors in this information can be highly consequential; for example, it may later be found that the PST cannot be met, or that the barrier creates a new hazard. Both are conditions that make the barrier infeasible in its current form.
From an execution perspective, the team that participates in the RA and task analysis should include one or more knowledgeable operations specialists who have direct experience with the barrier type. A gap between Work-as-Imagined (WAI) and Work-as-Done (WAD) can be a contributor to barrier failure. Therefore, it is essential that knowledgeable and experienced persons participate in these processes to prevent early mistakes in barrier identification and definition.
Task Analysis Participants:
Review facilitator and scribe – Facilitator: plan, prepare, and facilitate the task analysis. Scribe: record decisions, action items, etc.
Operations – Senior operations specialist(s)
Technical disciplines – Facilities engineer (layout, mechanical systems, safety critical systems, etc.); others as needed
Process safety engineer – Knowledge of the facility's process safety design basis, risk and safety design studies; responsible for tracking the review action items
Table 5 - Suggested Task Analysis Team
"…involved in carrying out the function, and iii) when the function has been achieved." (CIEHF 2016)
From Figure 7, the design progresses backwards from the response action(s) needed to achieve
the barrier/task safety function and safe state.
[Figure 7 – Preliminary design process (continues from the RA / identification process, Figure 6). Steps shown include: B-3 Identify in-use, supporting, and in-place PEs required to achieve the specified safe state(s) / safety function; B-4 Identify all local SA-1 information (real-time feedback) to monitor the action response(s); B-5 Identify specialty skills for using the PE from Step B-3; B-9 Specify the minimum projection capability (SA-3) required to support decisions and actions; B-10 Identify SA-1 information required to meet SA-2 and SA-3 requirements and support decisions; B-11 Identify SA-1 information from inbound communication exchanges; B-15 Identify SA-1 information sources and access locations; B-16 Identify Performance Influencing Factors; B-17 Design review; then continue to detailed design and engineering (Figure 10).]
Define (specify) every action response required to achieve the barrier / task safety function and
safe state. A barrier or task may have several action responses or response steps. Actions may
range from:
Pressing an Emergency Shutdown pushbutton in the field or at a control room console,
Controlling a fire hose or foam monitor,
Updating an Incident Command Board by hand-marking and updating the information on the status board.
Note: The last item may seem simple but can be complex from a cognitive perspective. Performance of the step requires receiving information from different sources at scheduled or unscheduled times. The information may be provided over a phone or radio communication (a sustained vigilance task). With each conveyance, the barrier-critical information must be correctly understood, captured and accurately recorded. Hand-written information recorded on the incident command board must be legible and include the required SA-1 information. The information should also be recorded in the expected locations and use pre-determined terms. For further insight, see Taber (2010).
Barriers that employ two or more human elements require that they communicate with each
other. The barrier leader conveys status information and instructions to coordinate actions.
Others convey status feedback information or requests. This step defines all required response
action(s) that convey outbound information in a communication exchange. Each communication
should be uniquely identified, and the following information defined or specified:
Sender and intended receiver(s)
Message goal and purpose: convey instruction, coordinate actions, etc.
Message type: real-time (two-way) communication, email, voice communication using a
public address system (one-way), etc.
Sender location and environment, e.g., noisy environment, proximity to danger, etc.
Estimated message frequency, timing, and duration
Transmission form: voice, visual, conference call, text, etc.
Transmission systems (PE): phone, public address, video conferencing, etc.
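To suggest how each outbound exchange might be captured during design, the following is a minimal sketch of a structured record (the field names are hypothetical and simply mirror the items listed above; they are not drawn from any cited standard):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommunicationExchange:
    """One uniquely identified outbound communication from Step B-2 (illustrative structure only)."""
    exchange_id: str                # unique identifier for design traceability
    sender: str                     # e.g., barrier leader
    receivers: List[str]            # intended receiver(s)
    goal: str                       # convey instruction, coordinate actions, etc.
    message_type: str               # real-time (two-way), email, one-way PA announcement, etc.
    sender_location: str            # location and environment, e.g., noisy area, proximity to danger
    frequency_timing_duration: str  # estimated message frequency, timing, and duration
    transmission_form: str          # voice, visual, conference call, text, etc.
    transmission_pe: List[str] = field(default_factory=list)  # phone, public address, video conferencing, etc.

# Example record (fictitious values):
exchange = CommunicationExchange(
    exchange_id="CE-01",
    sender="Barrier leader (control room)",
    receivers=["Field operator"],
    goal="Coordinate actions",
    message_type="Real-time (two-way)",
    sender_location="Control room",
    frequency_timing_duration="On demand during the response; < 30 s each",
    transmission_form="Voice",
    transmission_pe=["Hand-held radio"],
)
```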
For each action response defined in Steps B-1 and B-2, identify all physical elements required to
perform and support the action and achieve the specified safe state. PEs to consider and identify
in this step include:
Direct-Use PE: Physical equipment and devices that are directly used to perform the
action response, e.g., a fire hose, hand-held radio, stretcher, HMI data entry display, or
Emergency Shutdown pushbutton.
Support PE: Physical equipment and devices required to protect or support the person
performing the action response, e.g., personal protective equipment, Scott air pack,
portable gas detector, hand-held radio, flashlight, etc.
In-Place PE: Physical features or space that must be in place to achieve the specified
safe state, e.g., a protected rally/muster area.
Step B-4 Identify Required Feedback (SA-1 Information) from PE Identified in Step B-3
Identify the real-time SA-1 feedback (if any) required by the person performing the action response. The feedback comes from the PE identified in Step B-3. Examples
include:
Direct Use PE: Information that provides real-time feedback on the effectiveness,
performance, or success of the response action.
Support PE: Information that provides real-time feedback on the operational state of
support PE, e.g., the remaining air in a Scott air pack.
In-Place PE: Information (if required) that identifies the operational state of the required
in-place PE, e.g., the status of a protected muster area or rally point.
Step B-5 Define Specialized Skills Required to Use PE Identified in Step B-3
Identify the specialty skills (if any) needed to correctly use, apply or monitor the PE identified in
Step B-3. These ‘skills’ include specialized training and knowledge that should be verified by a
competency assessment process.
A deficiency in an essential skill can contribute to degraded barrier/task performance or failure, or to an injury to the action responder or others.
Step B-6 defines every decision required to guide each barrier/task response action. Decision-making should be performed in a manner that is reliable, correct, appropriate, and early enough to consistently complete the action response and achieve the safe state within the specified response time (PST).
Observation: Once defined, decisions provide the basis for identifying the necessary SA-1 information, and
comprehension (SA-2) and projection capability (SA-3) requirements, i.e., essential inputs to the decision-making
process.
Discussion
“OGP’s Human factors Sub-Committee believes that improved understanding and management
of the cognitive issues that underpin the assessment of risk and safety-critical decision-making
could make a significant contribution to further reduction the potential for the occurrence of
incidents.” (IOGP 2012)
“The situations people find themselves in can also influence the quality of their decision-making.
Time pressure, poor information presentation, ambiguity of information and conflicting goals
can lead to poor decisions.” (SPE 2014)
Barrier decisions are often a primary contributor to barrier failure, and the most cognitively
demanding. Cognitive demand (workload) and the time needed to complete the decision-making
process can increase if:
The SA-1 input information changes rapidly
The barrier requires numerous and complex decisions
Goal conflicts exist (Sträter 2005, p. 51; Woods et al. 2010, p. 88).
A late decision may result in a failure to achieve the safe state within the specified barrier / task
response time.
The time needed to make decisions increases with the number of choices. (See Hick's Law: response time = b·log2(n + 1), where 'n' is the number of choices; the relationship applies to simple choice-type decisions. (Hicks 1952))
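As a rough illustration of the relationship cited above, the sketch below evaluates the Hick's-law response time for several choice counts (the coefficient b is purely illustrative; in practice it is estimated empirically for the task and population):

```python
import math

def hick_response_time(n_choices: int, b: float = 0.2) -> float:
    """Hick's law: RT = b * log2(n + 1); b (seconds) is an illustrative, task-specific constant."""
    return b * math.log2(n_choices + 1)

for n in (1, 2, 4, 8):
    print(f"{n} choice(s): ~{hick_response_time(n):.2f} s")
```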
Observation: In current practice, barrier / task decisions are often not fully identified, understood, or addressed in the barrier design process. Consider the following. With many active human barriers, the operating company chooses to insert a human into the barrier to perform a function that, as perceived, cannot be reliably performed by a fully automated barrier residing in a safety instrumented system. A common expectation is that the human has knowledge and judgement that will reduce the number of unintended barrier activations, i.e., nuisance trips. This expectation is often encompassed within an organizational practice or denoted by the terms 'Good Process Practice' or 'Well Control.' In these cases, the operator is expected to make production-versus-safety judgements. Expectations of this type are implied requirements that are often not documented or integrated into the barrier design process. Such implied requirements create a potential entry point for hidden 'design' errors that place the barrier/task at risk.
With the possible exception of emergency response barriers, active human barriers provide a degree of flexibility, i.e., the operator can choose when to complete the action response, provided it can be completed within the specified barrier / task response time. As will be shown, this offers flexibility but can also contribute to reduced barrier/task reliability or failure.
The following are plausible decisions that occur with active human barriers:
1) What is the required action response for this barrier?
2) Is the barrier / task activator signal valid?
3) Do I have enough information to act?
4) Do I initiate the barrier response now or wait?
Item 2 could be viewed as a production versus safety type decision, i.e., the approach the
operator uses to make this determination might not be specified or included (using clear
language) in operating procedures and training programs.
Item 4 introduces a special set of cognitive issues that may be unknown to barrier designers. Consider the situation: the barrier activator alarm is detected and the operator chooses to delay the action response for one of many possible reasons. This decision sets up the following possible scenarios. (The following discussion is an early venture into the cognitive information included in Appendix D.)
1) The need to remember a future action relies on Prospective Memory, a known human
weakness. The future action is stored in working memory (WM), which has a limited
store capacity. In addition, the information in WM may fade (is forgotten) if not
periodically refreshed. This can occur in as little as 20-30 seconds. (Endsley and Jones
2012, p. 33). A second concern pertains to time. Humans do not possess an internal
clock that accurately tracks time from the perspective of ‘clock time’. This creates the
need to ‘watch the clock’ to not lose track of time. Both concerns increase the likelihood
that the deferred action is forgotten or executed late, i.e., the barrier/task fails.
Consider: Perhaps this is a preventive type active human barrier that was identified in a LOPA with a risk reduction (RR) factor of 10 taken. To increase the likelihood that this RR can be approached or perhaps realized, consider adding features to improve reliability. For example, provide a timer that starts when the barrier activator occurs. The timer setpoint is set to the PST. Have the timer alarm when it is within x minutes of the PST and the action response has not started. This alarm alerts the operator to an incomplete action, an attempt to refocus the operator on the pending action. If the response action still has not occurred as the timer nears timeout, consider automatically initiating the action response. (A minimal sketch of this timer logic is given after the numbered list below.)
2) During the deferred action period, what happens if additional demands occur? Perhaps
other high (or higher) priority alarms activate, or a shutdown pre-alarm alerts the operator
of a pending process shutdown that will occur if prompt action is not taken. These new
events introduce one or more increasingly complex decisions, i.e., which issue to attend
to first? This places the original barrier at risk because attention/WM is a limited
resource and the time-pressured situation increases the likelihood that non-rational biases
and behaviours may influence or drive these decisions.
3) Another consequence of deferring the action in item 1 is that the operator may believe this act 'frees up' cognitive capability to address issues that may seem more immediate. In fact, the deferred action continues to consume WM, i.e., holding the pending action in working memory requires some level of attention to remind oneself of the future action. Perhaps the same is done for a second demand. The load effect is additive, and the WM capacity is limited. Another factor to consider is that WM capacity may be reduced in
response to stress, excessive workload, lack of sleep, fear, etc. As the cognitive load
begins to exceed one’s capability, the operator may revert to a behaviour referred to as
attention tunnelling, i.e., the person chooses to focus on one item and ignore others. The
worked item is not necessarily the highest priority. Humans are often driven by
undetected and non-rational biases and behaviours that may cause the person to work on
a lower priority task instead of a pending higher priority task that may be more complex.
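The deferred-action timer suggested in item 1 above could take roughly the following form (a minimal sketch only; the setpoint, pre-alarm margin, and automatic-initiation policy are assumptions to be decided by the design team, not values from any cited standard):

```python
def deferred_action_state(elapsed_s: float, pst_s: float,
                          prealarm_margin_s: float, action_started: bool) -> str:
    """Return the timer state for a deferred barrier action (illustrative logic only).

    elapsed_s         -- time since the barrier activator occurred
    pst_s             -- process safety time (the timer setpoint)
    prealarm_margin_s -- alert the operator this long before the PST expires
    action_started    -- True once the operator has begun the action response
    """
    if action_started:
        return "OK: action response in progress"
    if elapsed_s >= pst_s:
        return "TIMEOUT: consider automatically initiating the action response"
    if elapsed_s >= pst_s - prealarm_margin_s:
        return "PRE-ALARM: refocus the operator on the pending action"
    return "WAITING: deferred action still within the allowable time"

# Example: a 20-minute PST with a pre-alarm 5 minutes before timeout
print(deferred_action_state(elapsed_s=16 * 60, pst_s=20 * 60,
                            prealarm_margin_s=5 * 60, action_started=False))
```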
What are additional design questions to consider that can affect barrier/task reliability?
What are acceptable reasons to defer a barrier response action once the activator has
triggered?
What is an acceptable practice on how long the operator can wait to perform the response
action(s)? Is it OK to do so at the last minute as a matter of practice?
If multiple demands occur at the same time, what guidance and training is provided on
which priorities to address first? Does the training include drills under time pressure?
From CIEHF Table 1 (2016):
“Characteristics of good barrier human elements…. Does not require operator to make real-
time judgements that involve safety/performance trade-offs.”
“Characteristics of poor human barrier elements…. “Relies on complex judgement or decision-
making, especially when there is conflict between safety and performance”
Comment: The above and similar requirements are stated in many industry practice and guidance documents and standards. Given the earlier discussion, compliance with this requirement seems questionable in many cases. Decisions unique to emergency response and well control barriers are inherently complex. This type of mismatch represents a significant disconnect between standards and achievable practice, and a lost opportunity to provide the realistic guidance needed to improve the reliability of these barrier types.
Step B-7 Identify Specialty Skills (if any) Required to Make Each Decision
“Developing analytical and non-analytical reasoning skill has been shown to improve the
quality of decision-making, as has the use of experiential training methods.” (SPE 2014)
The barrier leader assigned to an emergency response barrier or other complex barrier type should demonstrate competency in recognition-primed decision (RPD) making. (For information, see Flin et al. 2008, Salas and Klein 2001.) If a knowledge-based decision is a possibility, this warrants an assessment to determine if and why this type of decision is needed, i.e., the barrier/task may be infeasible.
Skill- and rule-based decisions are not 'special' in the context of this step.
Specify the minimum understanding and comprehension required to correctly guide each
decision and action response. This begins by examining each decision and action response to
understand what must be comprehended and understood to make an informed decision and guide
actions.
Comprehension is achieved by correctly understanding the meaning of the SA-1 information and
its relationship to decisions and actions. The breadth and depth of a comprehension requirement
depends on the process and hazard type, the barrier safety function and safe state, and the nature
and source of the SA-1 information. The following examples may provide insight into the
thought-processes needed to reveal and confirm comprehension requirements.
Example 1: Facility: Process Plant. Barrier safety function: Activate (press) the process
safety shutdown pushbutton on activation of the High High (HH) level alarm on tank A. The
barrier is defined as a single task assigned to and performed by the Control Room Operator.
Understand the correct response action when the HH level alarm activates (i.e., the
barrier trigger alarm), and barrier response time (PST).
How long ago did the alarm activate, e.g., how much time remains to complete the action
response within the PST?
What is the priority of this barrier relative to other barriers and safety critical alarms?
Example 2: Facility: Offshore production platform. Barrier safety function: When the general
alarm activates, use the designated egress routes to safely and promptly transit to the assigned
emergency response stations (applies to assigned emergency responders) or to a primary or
secondary muster area or one of the designated alternates if the primary and secondary areas
cannot be reached (applies to non-essential personnel). The task goal for non-essential
personnel: When the general alarm activates, use the designated egress routes to safely and
promptly transit to the primary or secondary muster areas or one of the designated alternates if
the primary and secondary areas cannot be reached.
What is my response when a general alarm activates?
Where are the designated primary and secondary muster stations?
Given my current location, how long will it take to reach the primary or secondary muster
station?
What are the alternate routes and escape/evacuation options if the routes to the primary
and secondary muster stations are blocked? What are the other evacuation options and
the hazards for using them?
Is the selected route passable and safe, e.g., understanding the threats from debris, a route that is visually obstructed by smoke, or a nearby toxic gas warning beacon that is active (lit)?
Case Study: In the DWH accident, the muster alarm (muster barrier) was activated after the following occurred:
well blowout in progress, several explosions, widespread fires, injuries and fatalities, the loss of all emergency
lighting, and the destruction of building sections and escape routes. For insight into the mental state and the many
challenges presented to mustering personnel, see Skogdalen et al. (2011).
Discussion
Comprehension relies on having the requisite mental models (MM) that encompass the SA-1 information and the circumstances that are unique to the barrier. The required MM is the product of having the right experience (e.g., depth, duration, and applicability to the barrier) and knowledge
(i.e., procedural, technical, and executional). The product of combining the MM with the SA-1
information should provide the understanding and comprehension needed.
Mental Model (MM) refers to long-term memory structures and content. MM are “mechanisms
whereby humans are able to generate descriptions of system purpose and form, explanations of
system functioning and observed system states, and prediction of future states.” (Rouse 1985)
They include prototype representations (schemata) and associated action sequences (scripts).
(Endsley 2012 p. 21-23)
Similar to Step B-8, specify the required capabilities (if any) to anticipate / project what may
happen in the near future, given how the SA-1 information is changing. Because this capability
tends to be limited to those with increased expertise, this may affect personnel selection and
staffing. (This requirement is in addition to the knowledge and experience requirements identified in Step B-8.)
Example SA-3 requirements may include:
A capability to project how and how quickly conditions may escalate, e.g., a toxic or
flammable gas leak.
A capability to anticipate the knock-on effects that result when a barrier response action is taken.
Anticipate workload spikes that can occur during emergency operations.
Step B-10 Define SA-1 Information Required to Support each SA-2, SA-3 and Decisions
Define all SA-1 information needed to guide decisions and achieve the specified SA-2 and SA-3
requirements.
Define the receiving location and environment for each information item, e.g., noisy area, proximity to danger, etc. If the source of the message is not identified by Step B-2 (e.g., the information comes from an external source), identify and record the 'sender' information listed in Step B-2.
See Step B-2 for additional information.
Identify every external 'support system' required to maintain the operational state or performance
of barrier elements. This step also defines the SA-1 information needed to monitor the
performance and operational status of the support system.
Discussion
Active human barriers often depend on one or more external systems to maintain the operational
state or performance of one or more barrier physical elements. Common support systems may
include:
Emergency power and distribution systems
Battery-backed power systems (UPS)
Instrument air and distribution systems
Communication networks
Emergency lighting
HVAC systems
The barrier reliability is now affected by the reliability of the support system. As such, these
systems should be monitored and maintained at a level that equals or exceeds the level applied to
the barrier system.
Identify every external barrier that is required to achieve and/or maintain the defined barrier/task
safe state. This step also defines the SA-1 information needed to monitor the performance and
operational status of the external barrier (if any).
Discussion
The barrier/ task may rely on other barriers (e.g., an external passive barrier) to achieve or
maintain its specified safe state. Passive barriers are often designed to provide the safety
function for a defined duration, i.e., the specified endurance time. Consequently, the safe state of
the protected area (e.g., a muster/rally area) is maintained only as long as the passive barrier
provides its protective function. Refer to Figure 8. A firewall and fireproofing protect the muster
area (muster barrier) from a fire event for a period defined by their respective endurance times.
The endurance times of the passive barriers constrain the time available to complete the control/recovery barriers' safety functions.
[Timeline: the muster barrier is activated at about 10 minutes and the abandon barrier at about 45 minutes, on a 0 to 60 minute scale.]
Figure 8 - Time Relationship of Passive and Active Mitigation Barriers (Offshore Example)
The barrier reliability is now affected by the reliability of the external barrier on which it
depends. As such, the external barrier should be monitored and maintained at a level that equals
or exceeds the level applied to the barrier system.
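A simplified feasibility check of the kind implied by Figure 8 is sketched below (the values are illustrative, loosely following the offshore example; a real timing analysis would also account for sequencing and margins):

```python
def endurance_supports_active_barriers(endurance_min: float, active_barrier_psts_min: dict) -> bool:
    """Check that each dependent active barrier can complete within the passive barrier's endurance time."""
    feasible = True
    for name, pst in active_barrier_psts_min.items():
        if pst > endurance_min:
            print(f"Infeasible: '{name}' needs {pst} min but the endurance time is {endurance_min} min")
            feasible = False
    return feasible

# Illustrative values only, loosely following the Figure 8 offshore timeline
ok = endurance_supports_active_barriers(
    endurance_min=60,
    active_barrier_psts_min={"Activate muster barrier": 10, "Activate abandon barrier": 45},
)
print("Passive barrier endurance adequate:", ok)
```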
Step B- 14 Identify Non-Technical Skills (NTS)
Step B-14 defines the non-technical skills (NTS) required to achieve the barrier function within
the process safety time. This step applies to multi-person barriers only.
NTS include the following skill areas; define the task-specific NTS requirements for each barrier task.
Communication (See note below)
Teamwork
Leadership
Managing stress
Coping with fatigue
Note: For communications, the information conveyance aspects are defined in various steps in Figures 7
and 11. Situation awareness and decision-making are often identified as NTS. Their requirements are
defined in the various steps in Figures 7 and 11.
"Non-technical skills are the cognitive and social skills that complement workers' technical skills…. the cognitive, personal and social resource skills that complement technical skills, and contribute to safe and efficient task performance." (Flin et al. 2008, p. 1)
“If an incident occurs, the first minutes of the response are critical to escalation prevention and
to the successful conclusion of the event….Before personnel can go forward for formal
assessment in emergency management, they first require training in handling emergencies at the
scene and an appraisal of their capabilities under duress. Emergency management also requires
specific qualities and skills, which are essentially different from those demanded by daily
activities.” (OPITO 2014)
“Developing proper technical and nontechnical competencies are a critical part of assuring
operational safety. Both are necessary, but neither alone is sufficient… In the operational
safety context, key nontechnical competencies typically include situational awareness,
leadership, teamwork, communication, decision-making, risk awareness, etc.” (SPE 2014)
For additional information on NTS, see Safety at the Sharp End (Flin et al. 2008) and Introducing behavioural markers of non-technical skills in oil and gas operations (IOGP 2018).
Step B-15 Identify the Source and Access Locations for SA-1 Information
Step B-15 examines all SA-1 information requirements (Steps B-4, 10, 11, 12, and 13) and
specifies:
The source of each information item, e.g., technical device, signage, paint marking,
communicated information, etc. (This is essential input design/procurement information
for these devices and systems.)
The location(s) where the information is accessed. This requirement can affect the type, number and location of displays or signalling devices. The physical locations may also add new technical requirements, e.g., a new device may be needed, or a device may need to be certified for use in a hazardous area.
This step identifies the Performance Influencing Factors that can affect each barrier element.
Refer to Appendix B for further information.
This step compiles and records the information in the form of a Task Specification.
Figure 9 provides a high-level overview of the information developed during the preliminary
design process. For single task barriers, one task specification is developed. With multi-task
barriers, a task specification is developed for each task. The term Task Specification is used here
to differentiate this information from the additional information defined in a later design process,
i.e., Performance Standards. See Appendix A for a more complete discussion on Task
Specifications and Performance Standards.
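As an illustration only, the compiled Task Specification content could be held in a structured record along the following lines (the field names are hypothetical and map loosely to the preliminary design steps above; they are not drawn from IEC 61511 or any cited guidance):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskSpecification:
    """Hypothetical summary record of the preliminary-design outputs for one barrier task."""
    barrier_id: str
    safety_function: str                  # barrier/task safety function and safe state
    process_safety_time_min: float        # specified barrier/task response time (PST)
    action_responses: List[str]           # Steps B-1 and B-2
    physical_elements: List[str]          # Step B-3: direct-use, support, and in-place PEs
    decisions: List[str]                  # Step B-6
    sa1_information: List[str]            # Steps B-4, B-10 to B-13, and B-15
    sa2_requirements: List[str]           # comprehension requirements (Step B-8)
    sa3_requirements: List[str]           # projection requirements (Step B-9)
    specialty_skills: List[str]           # Steps B-5 and B-7
    non_technical_skills: List[str] = field(default_factory=list)             # Step B-14
    performance_influencing_factors: List[str] = field(default_factory=list)  # Step B-16
```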
Observation: As a product of the preliminary design phase, compare the information captured in this phase to the
information included in a Safety Instrumented Function (SIF) governed by the globally adopted standard IEC
61511. It requires a Safety Requirements Specification (SRS) for each SIF. The SIF is an active barrier that does
not rely on a human to perform any of the DDA phase activities. The nature of the information captured in the Task
Specification is analogous to the information captured in the SRS. Within IEC 61511, the SRS is the foundational document that supports and enables a full life-cycle approach to the SIFs developed under this standard. Currently, no equivalent standard exists to address active human barriers.
This step is the final design review of the Task Specifications developed for each barrier. For a
stage-gated project, the product of this step (and the updates that may result from this review)
provides the essential information needed to begin the detailed design and engineering process.
Step B-17 marks the final design review and approval process. Reviewed items are accepted,
changed, rejected, or deferred. As stated in Section 3.3, the contribution from the operations
specialist is essential to preventing a gap between the Work-as-Imagined (WAI) and the Work-
as-Done (WAD). A WAI-WAD mismatch can lead to barrier degradation or failure. Table 6
provides the suggested makeup of the design review team (See Figure 7, Step B-17).
[Figure 10 – Detailed design and engineering processes (extract): C2 Design Action Phase; C4 Design Detect Phase: SA-2 and SA-3; C6 Develop Performance Standards.]
5.2 Detect Phase: SA-2 and SA-3 Design (Process C4 in Figure 10)
The process presented in Figure 11 below confirms the SA-2 (comprehension) and SA-3
(projection) requirements are achievable, specifies any increased competency requirements and HMI support displays, and captures any new SA-1 requirements arising from these steps. Step C4-6 evaluates and mitigates the effects of Performance Influencing
Factors. Step C4-7 identifies, eliminates, or mitigates situation-based cognitive mismatches
using a new tool and processes in Appendix C. (These two steps also apply to Processes C2, C3
and C5, indicated in Figure 10.)
[Figure 11 – Process C4 decision flow (continues from Process C4 in Figure 10). Step C4-1a asks whether the requirement can be met without specialty displays or competencies; if yes, go to Step C4-3. If not, Step C4-1d asks whether it can be met with specialty HSI displays or tools; if yes, Step C4-2 specifies the HMI display requirements (input to display design); if no, the design is potentially infeasible (Step C4-1e).]
Figure 11- Process C4: SA-2, SA-3 (Detailed Design and Engineering)
Step C4-1d assesses whether the requirement can be met by adding a display aid. If so, specify the requirements for the display, e.g., display purpose, function, and performance requirements. If not, the SA-2 requirement is infeasible as currently designed (Step C4-1e). This requires a return to the preliminary design phase, or perhaps to the RA/barrier identification phase, to seek a different solution.
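Expressed schematically (a sketch only; in practice these are judgements made by the design team, not automated checks), the Step C4-1 flow described above might look like this:

```python
def assess_sa2_requirement(met_without_specialty_support: bool,
                           met_with_specialty_display_or_tool: bool) -> str:
    """Rough rendering of the Figure 11 decision flow for one SA-2 requirement."""
    if met_without_specialty_support:
        return "Go to Step C4-3 (assess the SA-3 requirements)"
    if met_with_specialty_display_or_tool:
        return "Step C4-2: specify the HMI display requirements (input to display design)"
    return ("Potentially infeasible design: return to the preliminary design phase "
            "or the RA/barrier identification phase to seek a different solution")

print(assess_sa2_requirement(met_without_specialty_support=False,
                             met_with_specialty_display_or_tool=True))
```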
Step C4-2 If Applicable, Specify and Design an HMI Display to Support a SA-2 Requirement
If Step C4-1d identifies that an aid type HMI display is required, this step defines the display
requirements to a level required as input to the display designer/implementer. This may include
developing a prototype display for an Operations review.
Steps C4-3, C4-3a to e, and Step C4-4 are similar to the activities noted above for Steps C4-1, C4-1a, and C4-2, with the difference being the focus on SA-3 requirements. Example differences:
The SA-3 requirement may be met with increased expertise (technical, process and/or
execution),
A support HMI display, if required, would differ in that its purpose is to provide guidance on potential future states or conditions based on the changing nature of the SA-1 information.
The note in Step C4-1 above also applies to this step.
Step C4-4 If Applicable, Specify and Design an HMI Display to Support a SA-3 Requirement
The activity is the same as Step C4-2, though the objective here is to design a display that
supports the SA-3 requirements. The form of the display may change given its different purpose,
i.e., show possible changes (near term) that may occur.
Step C4-5 Update the SA-1 Information from New HMI Displays (if any)
Update the design information to add new SA-1 information (if any) that results from adding a
new support SA-2 or SA-3 HMI display.
Step C4-6 evaluates the positive and negative effects of Performance Influencing Factors (PIFs).
See Appendix B for further information.
See Appendix C for the processes and new tools that apply to this step. To understand the type
of technical knowledge needed to employ these processes, see Appendix D.
“A human performance standard for barriers, or barrier elements, should have six
characteristics:
a) The human performance the barrier will deliver should be specific to the threat and the
situation in when the barrier function is needed…
b) It should be clear who is expected to be involved in the delivering the required
performance…
c) It should identify the level of competence of each of the individuals involved.
d) The expected timing of the performance of the function - both the initiation of the
performance and its time for completion – should be appropriate to the timescale of the
threat.
e) The standard for successful performance of the barrier should be defined…
f) It should document any expectations by those who approve the barrier about how
operations around the barrier will be conducted that are especially critical to performing
its function.”
Much of the above-identified information is defined in the preliminary and detailed design
activities (Figures 7 and 10, above).
Performance Standards
“…performance means the properties which a barrier element must possess in order to ensure
that the individual barriers and its function will be effective. It can include such aspects as
capacity, reliability, availability, effectiveness, ability to withstand loads, integrity, and
robustness and mobilization time.” (PSA, 2013)
“Performance standards “…rarely (if ever) specify the level of human performance that needs to
be achieved for the barrier function.” (CIEHF 2016)
The PSA quote suggests the scope and type of additional information to include in a barrier/task performance standard. Process C6 (Figure 10) is the intended process to develop this information. For brevity, this process is not included in this manuscript.
Tables 7 to 10 list the PIFs and PSFs included in human reliability analysis and task design guidance standards. A few observations can be made when the lists are compared:
They indicate areas of overlap, but many more areas of divergence.
Given the wide variations, the results produced by these documents may vary considerably.
PSFs:
Competency and training
Procedures
Human-system interface
Teamwork
Goal conflicts
Time of day
Time available
Work environment
Emergency response
Interventions
"These factors should be considered when they appear of relevance to the questions at hand. The performance shaping factors have been selected to represent common root causes found in incidents and accidents across various industries." (SINTEFF 2011, para 5.2.6)
Areas of safety improvement:
Control/display design
Equipment/tool design
Memory Aids
Training
Work design
Procedures
Supervision
Reducing distractions
Environment
Communications
Decision aids
Behaviour safety
PIFs:
Procedures
Communications
Clarity of signs
Competence
Staffing levels
System/equipment interface
PSFs
Time
Threat stress
Task complexity
Experience and training
Procedures and supporting documentation
Human-Machine Interface
Adequacy of Organization
Teamwork
Physical working environment
Table 9 – Performance Shaping Factors (IEF/SINTEFF 2015 (Petro-HRA) Table 2.2 pp. 29-30)
Category – PIFs:
Organization-based – Training program; Corrective action program; Other programs; Safety culture; Management activities (staffing: number, qualifications, team composition; scheduling: prioritization, frequency; workplace adequacy; resources: procedures, tools, necessary information)
Team-based – Communication; Direct supervision (leadership, team member); Team coordination; Team cohesion; Role awareness
Person-based – Attention (to task, to surroundings); Physical and psychological abilities (alertness, fatigue, impairment, sensory limits, physical attributes, other); Knowledge / experience; Skills; Familiarity with situation; Bias; Morale, motivation, attitude
Situation / stressor-based – External environment; Conditioning events; Task load; Time load; Other loads (non-task, passive information); Task complexity (cognitive, task execution); Stress; Perceived situation (severity, urgency); Perceived decision (responsibility; impact: personal, plant, society)
Machine-based – HMI (input, output); System responses
From the context of this manuscript, the PIFs/PSFs listed in these tables appear to fall into the
following categories:
1. Cognitive, e.g., attention, bias, task load, etc.
2. HMI Displays (a physical element)
3. Organizational, e.g., procedures, experience, skills, staffing, team coordination, etc.
4. Others, e.g., working environment, safety culture, etc.
Item 1 PIFs/PSFs are directly and thoroughly addressed by the cognitive-focused processes in
Appendix C. For Item 2, the design requirements for specialty HMI displays are defined by the
presented processes. All HMI displays are then assessed using the processes in Appendix C.
In Item 3, the presented processes specify requirements for the applicable organizational
elements. To realize these requirements, they may be implemented and delivered under an
overarching set of organizational (e.g., corporate) policies and standards. The full realization of
the barrier/task organizational requirements may depend on the efficacy and effectiveness of
those organizational policies and standards. It may be appropriate to examine them to confirm
they can deliver on the stated requirements.
The presented processes do not include a process to address the PIFs/PSFs listed in Item 4, e.g.,
working environment. A process is required to address these factors as they can affect the
performance and reliability of a barrier/task. One possible approach is the checklist approach
employed in SINTEF (2011), section 5.4.
This section provides a new tool and processes that are used to systematically identify cognitive-attributed design errors and then to identify solutions to eliminate or mitigate each error.
Current industry design standards and practices do not assess a safety critical function at the level of the situation-based activities that occur within each task phase. In practice, each phase presents the
user with a different range of cognitive challenges. These challenges can further vary with
changes in external conditions or the immediate state and capabilities of the individual
performing the activity. These unique and seemingly transient circumstances create design-
human mismatches of the type that can cause human error and, subsequently, barrier/task
degradation or failure.
“…most operator errors arise from a mismatch between the properties of the system as a whole
and the characteristic of human information processing. System designers have unwittingly
created a work situation in which many of the normally adaptive characteristics of human
cognition (its natural heuristics and biases) are transformed into dangerous liabilities.”
(Reason 1990, p. 238)
“There is often a lack of understanding of the nature or complexity of the tasks – and especially
the cognitive elements of those tasks – that need to be carried out for barriers to function as
intended.” (CIEHF 2016)
“During the discussion about cognitive and organizational ergonomics, it was found that the
operator companies had little expertise and lack of relevant knowledge.” (Johnsen et al. 2017)
“…most people in the industry lack awareness of the realities and limitations of human cognition
and the “tricks’ the brain uses to be able to function in the complex modern world.” (SPE 2014)
B. PROCESS OVERVIEW
Some of those charged with designing active human barriers and SCTs may be using a Human Reliability Assessment (HRA) process as a design tool.
“Safety suffers from the variety of methods and models used to assess human performance. For
example, operations is concerned about human error while design is aligning the system to
workload or situational awareness. This gap decouples safety assessment from design. As a
result, system design creates constraints for the Human working at the sharp end, which
eventually leads to errors. Accidents and incidents throughout all industries demonstrate the
safety relevance of this gap.” (Sträter 2005)
There is a clear need to develop processes and tools that can methodically and effectively identify and prevent cognitive-attributed errors in active human barriers and SCTs. What is
needed to develop and implement a process that can achieve this end? What form would it take?
What are the minimum essential features and functions? From the author’s perspective, the
process has at least three components.
1. As was done for physical ergonomics, the process begins with developing a baseline of
vetted information that articulates the known human cognitive capabilities and
limitations, biases and behaviours that can positively or negatively affect barrier/task
reliability, effectiveness and performance.
2. Identify processes and tools that can assess barrier/task elements and components at the
situation-based activities in the detect, decide and act phases. Using the information from
item 1, the process would assess and identify the cognitive mismatches within each
activity, and its most likely cause and context.
3. As a final step, identify the most appropriate and proven approaches that eliminate or
mitigate each cognitive mismatch. The solutions should be verified to be effective and
consistent with overarching design standards (e.g., company HMI display guidance
standards and conventions) and organizational policies (e.g., where possible avoid
competency requirements that limit the pool of personnel who could meet those
requirements).
The new tool and processes described in this appendix appear to align with the above framing.
Refer to Figures 12 and 13 below. These figures present the processes that appear to align with
Item 2 above. Table 11 (below) is the proposed tool to address Items 1 and 3.
Table 11 is one of several possible pages pertaining to long-term memories. This tool presents the possible cognitive issues that can contribute (positively or negatively) to a barrier/task enhancement or failure. Using the table (and others that address different issues), the selected object (Step A5-0.1 in Figure 13) is evaluated against the cognitive issues in the applicable tables. The process seeks to identify a cognitive mismatch (Step A5-0.2) and select solutions to mitigate the mismatch (Step A5-0.3). The last column provides possible solutions to eliminate or mitigate each mismatch. Examining the contents of Table 11 in greater detail:
Mismatch ID – This is the unique identifier for the identified cognitive mismatch.
Cognitive issue – Identifies the name of the issue and source reference
Potential Cognitive Mismatch – Describes the nature of the possible mismatches
attributed to this issue.
Applies to Phase – Identifies the barrier/task phase where this issue can occur.
Potential consequence – Identifies potential consequences if the mismatch is not
corrected
Possible Corrective Changes (PCC) – Identifies a range of possible changes to
physical, human and organizational elements that may be appropriate to eliminate,
minimize or mitigate the mismatch. (The list is not exhaustive so other options may also
be possible. This is considered in the presented processes.) To support design
transparency and traceability, the PCC ID is combined with the issue ID to create a
unique ID for the selected change.
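For projects that track these assessments electronically, a Table 11 row could be represented as a simple record (an illustrative structure only; the field names merely mirror the columns described above, and the example content is fictitious):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CognitiveMismatchEntry:
    """One row of a Table 11-style assessment aid (illustrative structure only)."""
    mismatch_id: str                        # unique identifier for the identified mismatch
    cognitive_issue: str                    # issue name and source reference
    potential_mismatch: str                 # nature of the possible mismatch
    applies_to_phase: List[str]             # Detect, Decide, and/or Act
    potential_consequence: str              # consequence if the mismatch is not corrected
    possible_corrective_changes: List[str]  # PCC options; PCC ID + issue ID form a unique change ID

# Example entry (fictitious content):
entry = CognitiveMismatchEntry(
    mismatch_id="LTM-03",
    cognitive_issue="Prospective memory limits (see Appendix D)",
    potential_mismatch="A deferred action response may be forgotten",
    applies_to_phase=["Decide", "Act"],
    potential_consequence="Action response not completed within the PST",
    possible_corrective_changes=[
        "PCC-1: add a PST countdown timer with a pre-alarm",
        "PCC-2: automatically initiate the response near timeout",
    ],
)
```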
The suggestion is to create a series of tables (similar to Table 11) that address the most common
and likely cognitive-attributed issues to evaluate and mitigate in the design process. Table 12 is
one possible listing of the recommended topics, each with its own set of tables.
1. Working Memory (WM), WM span / short term memory (STM)
2. Attention and Attention Management
3. Long Term Memories, e.g., Mental Models, memory call-up heuristics, issues with changing
memories (e.g., potential to drift towards unsafe behaviours) (Table 11)
4. Cue / SA-1 information sensing and detection
5. Decisions and decision-making
6. Time pressure / temporal monitoring and tracking
7. Non-rational behaviours, e.g., priming, task switch errors, ease, biases, e.g., confirmation
bias
8. Team skills, e.g., role awareness, supervision, communication, team cohesion, team
coordination
Table 11 is similar to the tables in NUREG 2012, Appendix A, with some important differences
that reflect their different purpose and use:
The column “Relevant PIF(s)” contains organizational elements that are separately
addressed in Figure 7 (Step B-16) and Figure 11 (Step C4-6). See Appendix B for further information on how PIFs are addressed in the presented process.
The NUREG tables do not include a 'Possible Corrective Changes' column.
The proposed processes and tools appear capable of meeting these objectives.
As a general process, each step in Figure 12 (excluding Step A5-6) performs the sub-process
activities indicated in Figure 13. Each row (unique mismatch) identifies the task phase to which
it applies. The following summarizes how the processes in Figures 12 and 13 are intended to
function.
Sub-process Step A5-0.1: Select Object for Evaluation
The process progresses in a stepwise approach that begins with selecting an object for
evaluation. The following are the suggested objects.
Step A5-1a – Action Phase: Assess each action response
Step A5-1b – Action Phase: Assess each communication exchange (sender perspective)
A review of the SA-2 and SA-3 requirements may assess the more complex aspects attributed to the LTM and MM, or the likely effectiveness of a proposed support display.
A review of the communication exchanges may assess the exchange duration and timing,
the effectiveness of the exchanged information, the potential to miss or misunderstand
conveyed information, and the information’s effectiveness in supporting coordination and
cohesion.
A review of SA-1 information and associated displays may assess the salience of the
presented information, working memory capacity and store duration demands, or the time
and effort needed to use and access the display and the information presented in the display.
In Step A5-0.4, capture and compile the records from Steps A5-0.1, 0.2 and 0.3 in preparation for the review and approval step, Step A5-6.
This process, indicated in Figure 12, is common to Steps A5-1a/b, 2, 3 and 4a-f. Upon
completing this process, the recommendations from the assessments are evaluated and selected,
rejected, or modified. The approved recommendations are then implemented into the requisite
detailed design and engineering documents and designs (i.e., physical elements), and
organizational elements. The suggested activities to perform in this step:
Review each recommendation to confirm acceptance, i.e., effective, consistent with
project standards, cost and schedule impact, ALARP assessment, etc.
When multiple recommendations apply to a common phase or element, review for mutual
compatibility.
Confirm the recommendation does not create a new hazard or cognitive challenge
If a recommendation is not accepted, identify an acceptable alternative.
Record the review decisions, action items, etc. to maintain design traceability.
D. Execution Considerations
Expertise Requirements
The assessment activities indicated in Figure 13 (assessment sub-process) require one or more
persons having the expertise, experience and execution skills to correctly perform the activities.
The person should have expert knowledge of the information in Appendix D, and have the
knowledge and experience needed to correctly use the assessment tables (e.g. Table 11). This
may be a challenge for organizations that have not used personnel with this skillset on capital
projects. (This skillset may be found within the cognitive systems engineering discipline and
some human factors programs.)
Table 13 provides the suggested team makeup to support the common review process indicated in Figure 12, Step A5-6. Use of a facilitator and scribe may be justified on larger projects that have many barriers/tasks to review.
Comment: The scope and nature of the presented processes have many broad similarities to the global standard IEC
61511. IEC 61511 is a complex standard that governs the design and life-cycle management of safety instrumented
functions. Some of the implementation challenges included new software tools (e.g., used to calculate SIF reliability), new documents (e.g., SIF Safety Requirements Specifications), new expertise requirements (e.g., functional safety experts), many new project steps and activities (e.g., verification and assessment processes), and knock-on/disruptive effects on other disciplines and the design/procurement processes. Over time, success did happen, and new execution models were developed. Though it took many years, these activities are now fully integrated into the capital project design environment. From the author's experience developing new corporate and project-level execution models on this scale, it does appear possible to integrate the presented processes into a major capital project. Similar to IEC 61511, the roll-out would likely occur over a period of years. Early efforts should be attempted on small and less complex projects to gain experience, benefit from lessons learned, and make changes that render the processes more efficient and amenable to project execution.
The relative response times of these very different cognitive processes, based on a single perception cycle, are (indicative values):
Autonomic process: very fast, e.g., 20 milliseconds (ms) (Edwards 2005)
Automatic process: fast, e.g., 70-150 ms (Sträter 2005, pp. 85, 128; Carter 2014, p. 121)
Conscious process: slow, e.g., 285 ms (Carter 2014, p. 121)
Figure 14 may help to explain what drives a potential human response when startled. The automatic and conscious processes produce very different responses to a given situation. The physical-world response depends on which of these very different processes controls the physical response at that moment. The startle response is the product of human evolution. It provides the quickest response, achieved by temporarily suppressing conscious processing, a process that is relatively slow. As such, the most likely response is one that is automatic, e.g., a habituated response. This may be beneficial in the natural world but is not necessarily the most appropriate response in the technological world. (The automatic response depends on what resides in long-term memory, i.e., the habituated response or skill.)
[Figure 14 – Human mind: automatic and conscious processing paths, cognitive response time, and action response time.]
From this figure, note the pre-priming information passed to the conscious processes. It serves up what the automatic processes expected, i.e., information that lies in long-term memories and mental models. Conscious processes begin with this pre-selected sensory and expectation information, a source of bias driven by one's mental models.
Humans have two cognitive processes that control our daily perceptions and actions, i.e., the automatic and conscious processes. Summarized in Table 14 below, each has profoundly different capabilities, limitations, behaviours, biases and quirks. (The term 'automatic' is commonly used in the research and academic world; others may know it by the terms subconscious, unconscious or System 1.) Most people incorrectly believe that our decisions and actions are realized by a conscious process. This is often not the case. Instead, humans place a high reliance on these automatic processes. Our understanding of the automatic processes increased rapidly starting in the mid-1990s, when functional magnetic resonance imaging systems provided a real-time look at the human brain under dynamic conditions. This added to our understanding and confirmed how automatic processes function and interact with conscious processes.
Observation: The human mind is not equipped to reliably and accurately track 'clock time', information that is essential to active human barrier/task performance. Humans are not reliable clocks.
Any need to continuously monitor any object, information source, etc., i.e., a sustained
vigilance task.
There are two types of attention capture to consider: internal preoccupation and external distractions.
Internal preoccupation (Reason 2008, pp. 33-45):
Excessive workload-induced tunnel vision (ignores information)
Intended intense focus (loses awareness of surroundings)
Problems at home (misdirected attention)
Fear, FFF activation (re-directed attention, loss of focus)
Working Memory
From Table 14, conscious processes are slow, stepwise and sequential. Memory errors can occur
in the Detect, Decide, or Act phase. This can lead to a wide range of human error and therefore
barrier/task failure scenarios. WM error types include:
Data from memory is forgotten or misremembered
Forget to remember a pending future task, i.e., prospective memory (Reason 1990, p.
107)
Lose track of time / poor time management.
Place losing - What step am I in? (Reason 2008, p. 33)
Lose track of task priorities and safety-critical objectives
Change Blindness
Change blindness is a potential source of human error that is rooted in attention management,
working memory and mental models. Attentional focus allows one to select specific sensory
information to perceive, while other critical information may go unnoticed. (Kahneman 2011, p
23-4) Sensory perception may also be inhibited if one’s attention is fully focused on a difficult
mental task. (Kahneman 2011, p. 23-4) In both cases, we can easily miss important sensory
information, i.e., we look but don’t see. (Kahneman 2011, p 23-4) Tracking a change in a
value or state requires remembering the initial-state information and, at some time in the future,
comparing it to future-state information, a non-trivial reliance on WM/STM.
Clearly, these do not reflect the proactive, rational behaviour expected from those charged with
performing safety critical functions. The design challenge is to seek methods that mitigate or
minimize this type of behaviour. Example biases and behaviours attributable to cognitive ease:
Confirmation bias: the tendency to validate one’s own understanding by seeking
confirming information, but not contrary information. (Kahneman, 2011 p. 80-1)
No effort is made to assess if essential information is missing.
If a maintenance action caused an earlier spurious alarm, one tends to automatically
assume the next alarm, under similar circumstances, has the same cause. (Kahneman,
2011 p. 74-5)
Confirmation bias was often cited in the white papers discussed in the introduction, i.e., one of
the biases that should be addressed in the safety design process.
Bounded Rationality / Keyhole Effect
The following are two different forms of bounded rationality / key-hole effect.
The first form occurs because SA-1 information is missing, i.e., the information needed to fully
understand what is happening is not available or is not in the form that adequately supports
understanding (SA-2) or the ability to anticipate what might happen next. This information gap
may contribute to a false or incorrect understanding. Example: alarms and indications from
flammable gas detectors are often misunderstood. It is not often appreciated that this approach does not
directly sense the unsafe condition, i.e., the consequence of gas ignition; the detection indication
is an inferred measurement. Further, the scale of the unsafe condition depends on the gas
amount (leak size, duration, concentration), where it is (enclosed or congested space, non-classified
area), and the effects of the ambient conditions (wind speed and direction). As a
consequence, active human barriers that rely on flammable gas detection are inherently
problematic, though seldom recognized as such.
Case Study: An incomplete understanding of gas detection alarms contributed to the DWH accident.
Critical links in the causal chain that enabled the second devastating explosion near engine room #3
were the failure of two active human barriers that had flammable gas alarm activators, i.e., on
confirmation of flammable gas manually trip HVAC systems (closes inlet air to non-classified areas) and
manually trip the gas turbine-driven generators located in the enclosed engine rooms (prevents generator
over-speed caused by uncontrolled gas ingress into the turbine’s combustion air supply).
The second form pertains to a person's initial framing of a decision or problem space that fails to
consider the actual scenario because their training and experience (e.g., their mental models) did
not include that possibility. We tend to limit our assessment to only those things that we 'know'
are possible, i.e., those that reside in our mental models. (Reason 1990 p. 38) "If I do not know about it and no
one told me about it, why would I consider it as a possibility?"
“Mental models are probably one of the single most important concepts in cognitive
engineering.” (CIEHF 2016)
References
Carter, R., Aldridge, S., Page, M., Parker, S., 2014, The Human Brain Book, 2nd Ed, DK Publishing, New York
CCPS, 2001, Layer of protection analysis simplified process risk assessment, New York, Center for Chemical Process Safety of
the American Institute of Chemical Engineers
CCPS, 2007, Human Factors Methods for Improving Performance in the Process Industries, John Wiley & Sons, Inc. 2007
CCPS, 2018, Bow ties in risk management, a concept book for process safety, New Jersey, John Wiley & Sons Inc., Center for
Chemical Process Safety of the American Institute of Chemical Engineers
CIEHF 2016, Human barriers in barrier management, a white paper by the Chartered Institute of Ergonomics and Human
Factors, 12/2016, CIEHF
Edwards, S.P., 2005, The amygdala: the body’s alarm circuit, Brainwork, May 2005, Dana foundation (dana.org)
EI (2020), Guidance on human factors safety critical task analysis, Energy Institute London, 2nd Ed, January 2020
Endsley, M. R., 1995. Toward a theory of situational awareness in dynamic systems, Human Factors, 37(1) pp 32-64
Endsley, M.R., Jones, D.G., 2012. Designing for situation awareness: An approach to user-centered design, 2nd Edition, CRC
Press
Flin, R., O’Connor P., Crichton, M., Slaven, G., Stewart, K., 1996. Emergency decision making in the offshore oil and gas
industry, Human Factors 38(2) 262-277
Flin, R., O’Connor P., Crichton, M., 2008. Safety at the Sharp End: A Guide to Non-Technical Skills, Ashgate Publishing
Hick, W.E., 1952, On the rate of gain of information, Quarterly Journal of Experimental Psychology, 4:1, 11-26
HSE, 2006, Assessment Principles for Offshore Safety Cases (APOSC), Health and Safety Executive, UK, March 2006
Hollnagel, E., (2014) Safety-I and Safety-II: The Past and Future of Safety Management, CRC Press, 2014
IEC 61511, 2017, Functional safety - Safety instrumented systems for the process industry sector - Part 1: Framework,
definitions, system, hardware and application programming requirements, Ed 2.1, International Electrotechnical
Commission, 2017
IFE/SINTEF, 2015, Petro-HRA Guideline, IFE/HR/F-2015/1640, December 2015
IOGP, 2012. Cognitive issues associated with process safety and environmental incidents, London: International Association of
Oil and Gas Producers, IOGP Report No 460, 7/2012
IOGP, 2018. Introducing behaviour markers of non-technical skills in oil and gas operations, International Association of Oil
and Gas Producers, IOGP Report No 503, 2018
Johnsen, SO, Kilskar, SS, Fossum, KR, (2017) Missing focus on human factors – organizational and cognitive ergonomics – in
the safety management for the petroleum industry, J. Risk and Reliability, V231(4) pp 400-410, Proc. IMechE Part O
Kahneman, Daniel, 2011, Thinking, Fast and Slow, Farrar, Straus and Giroux
Le Coze, Jean-Christophe, (2020) Safety Science Research: Evolution, Challenges and New Directions, Ed. J. Le Coze, CRC
Press, 2020
McLeod, R.W. (2015), Designing for Reliability: Human Factors Engineering in the Oil, Gas and Process Industries, Gulf
Publishing, 1st Ed., 2015
Mlodinow, Leonard, 2012, Subliminal: How Your Unconscious Mind Rules Your Behaviour, Vintage Books (Div. of Random
House Inc.), 1st Edition
NUREG 2012, Building a Psychological Foundation for Human Reliability Analysis, NUREG-2114 INL/EXT-11-23898, 2012
OESI, 2016, Human factors and ergonomics in offshore drilling and production: the implications for drilling safety, Ocean
Energy Safety Institute, 12/2016
Norsok, 2018, Working Environment, S-002, June 2018, 2nd Ed, Standards Norway
OPITO 2014, Major Emergency Management Initial Response Training, Revision 1, OPITO Standard Code 7228, OPITO,
March 13, 2014
PSA, 2013, Principles for barrier management in the petroleum industry, Petroleum Safety Authority Norway, January 29, 2013
Reason, J., (1990) Human Error, Cambridge: Cambridge University Press, 1990
Reason, James, (2008) The Human Contribution, Unsafe Acts, Accidents and Heroic Recoveries, Ashgate Publishing Ltd, 2008
Rouse, W. B., & Morris, N. M. (1985) On looking into the black box: Prospects and limits in the search for mental models,
Psychological Bulletin, 100, 349–363 (Report No. 85-2)
Salas, E., Klein, K. (2001) Linking Expertise and Naturalistic Decision Making, Lawrence Erlbaum Associates, 2001
Salmon, P.M., Stanton, N.A., Walker, G. H., Jenkins, D.P., (2009). Distributed situation awareness, theory measurement and
application to teamwork, Ashgate Publishing Co., England
Shepherd, Andrew, 2001. Hierarchical task analysis, CRC Press
Sheridan, Thomas, B., (2002) Automation: System Design and Research Issues, Wiley, 2002
Sklet, S., 2006. Safety barriers: definition, classification and performance, Journal of Loss Prevention in the Process Industries,
19 (2006), pp 494-506
SINTEF, 2011. CRIOP: A scenario method for crisis intervention and operability analysis, SINTEF Technology and Society,
Report SINTEF A4312, 2011-03-07
SINTEF, 2016, Report: Guidance for barrier management in the petroleum industry, SINTEF Technology and Society, Report
SINTEF A27623, 2016-09-23
Skogdalen, J.E., Khorsandi, J., Vinnem, J.E., (2011) Looking back and forward – evacuation, escape and rescue (EER) from the
Deepwater Horizon Rig, Deepwater Horizon Study Group Working Paper – January 2011
SPE (2014) The human factor; process safety and culture, SPE Technical Report, Society of Petroleum Engineers, March 2014
Smith, J.S., Hoffman, R.R, (2018), Cognitive Systems Engineering, CRC Press, 2018
Stanton, N.A., Salmon, P., Jenkins, D., Walker, G., (2010) Human Factors in the Design and Evaluation of Central Control Room
Operations, CRC Press, Taylor and Francis Group, 2010
Sträter, O., 2005. Cognition and safety: an integrated approach to systems design and assessment, Ashgate Publishing Ltd, 1st Ed
Sylvestre, Christian, (2017) Third Generation Safety: The Missing Piece, ISBN 978-0-648 1200-0-1, National Library of
Australia Cataloguing-in-Publication, 2017
Taber, Michael John, 2010. Human systems integration and situational awareness in microworlds: an examination of emergency
Response within the offshore command and control system, PhD Thesis, Dalhousie University, Halifax, Nova Scotia, December
2010
Weick, K. E., Sutcliffe, K. M., (2007) Managing the Unexpected: Resilient Performance in an Age of Uncertainty, John Wiley
and Sons, 2nd Ed, 2007.
Woods, D.D., Dekker, S., Cook, R., Johannsen, L., Sarter, N., (2010) Behind Human Error, Ashgate Publishing, 2nd Ed., 2010
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Standard operating procedures (SOPs) play a critical role in achieving safety and productivity of
daily operations in process industries. Incident investigations indicate that a majority of adverse
events during routine and non-routine operations are attributed to issues associated with SOPs. For
example, a recent investigation of fatalities points out the absence of formal procedure required to
remove plugging from a waste gas piping system and inadequate emergency procedures for hazard
notification and evacuation as major causes of the incident [1]. To ensure the safety of a complex
system such as chemical plants, there exist two viewpoints towards human operators’ use of SOPs:
Safety-I and Safety-II. First, the Safety-I perspective defines safety as the absence of undesired events
[2]. Thus, it looks for things that went wrong and seeks to minimize deviations from prescribed
tasks, which are assumed to be a cause of adverse incidents. Second, Safety-II views safety as the
presence of desirable and successful outcomes. Therefore, this standpoint highlights things that
went right (e.g., adaptive behavior, workaround), and considers performance variability of human
operators from SOPs to be inevitable and even necessary to maintain safe operations of the system
[3]. In favor of the Safety-I approach, conventional measures of SOP performance have long been
established and utilized to indicate the degree of safety in relation to procedural systems.
Nonetheless, no measures of human operators' SOP performance that adopt or support the Safety-II
framework exist to date. To address this gap, this paper identifies limitations of traditional and
dominant measures of SOP performance and then proposes a novel idea that harmonizes the two
views towards safety. The new measurement concept embraces not only the conventional measures
regarding the implementation of SOPs (i.e., following or not following a procedure), but also
incorporates human adaptive behaviors. To instantiate the new measures of SOP performance,
case studies using real-world examples in the chemical industries are presented. Following this,
implications for the proposed measure of SOP performance based on Safety-II viewpoint and
future research proposals to support the benefits of the measure are provided.
Keywords: Human performance, operating procedure, adaptation, safety measure
[1] U.S. Chemical Safety Board, "Investigation Report: Toxic Chemical Release at the DuPont La
Porte Chemical Facility," No.2015-01-I-TX, 2019, Available: https://www.csb.gov/dupont-la-
porte-facility-toxic-chemical-release-/.
[2] E. Hollnagel, Safety-I and Safety–II: The Past and Future of Safety Management. Surrey, UK:
Ashgate Publishing, 2014.
[3] M. Sujan, H. Huang, and J. Braithwaite, "Learning from incidents in health care: Critique from a
Safety-II perspective," Safety Science, vol. 99, pp. 115-121, 2017.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
In an effort to move beyond the "human error" explanation for safety incidents, we explored
issues with procedures from the worker's point of view by using an anonymous, holistic survey
developed from interviews with then currently employed operators. The current sample (N =
174; survey) included individuals employed in the process safety industry, primarily
from the Oil & Gas and Chemical industries. The survey was deployed over the course of
approximately four weeks. Twelve distinct constructs emerged from the survey (e.g., perceptions
of procedure quality, procedure deviation, attitudes toward the procedure change process, etc.).
Results indicated that perceptions of procedure quality were the focal variable in all analyses,
including positive relationships with attitudes toward the procedure change process and negative
relationships with procedure deviations and with both safety incidents and near-misses. Additionally,
we integrated the three elements of the Interactive Behavior Triad—person, task, and context—
into Dekker’s Model 2 of safety. We found support for both two and three-way interactions using
moderator regression analyses. These results further support a systems view and model of
procedure design, implementation, and change processes. We conclude that these elements are
important factors to consider when evaluating and developing procedure systems and provide
additional information beyond more simplistic “human error” explanations for safety incidents.
Keywords: Procedures, process safety, survey, human error
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Operators in the process industries work under extreme pressure in complex hazardous
environments where failures can have critical consequences, including loss of life. Thus, ensuring
operator safety is of utmost importance in this domain, particularly in stressful contexts.
Advances in Virtual Reality (VR) have enabled cost-effective, relatable, and remote training
that can potentially transform the future of operator training in complex environments. However,
consideration of operator states still remains a critical gap in ensuring that training is effective
for real-world emergency response operations. The objective of the
present study was to develop and evaluate the effectiveness of a stress-inducing multi-sensory
operator emergency response training scenario (e.g., firefighting). Fourteen adults were trained
under no-stress and stress-inducing VR training scenarios to assist in fire extinguishment. We
monitored participant gaze trajectories, physiological responses, body motions, and functional
brain connectivity during the training scenarios and during a post training assessment mission.
Preliminary findings indicate that the stressful training scenarios, confirmed using heart rate
variability metrics, resulted in reliably different neural patterns than the non-stress training
scenarios. We also found that the post-training assessment mission after the stress training
demonstrated greater neural efficiency. These initial findings suggest that training under pressure
is associated with the development of neurocognitive networks that may be resilient to the stress often
experienced by operators in real-world scenarios.
John Kang, Abedallah Al Kader, Ran Wei, Ranjana Mehta, and Anthony McDonald
Texas A&M University
College Station, TX 77843, USA
[email protected]
Abstract
Driver fatigue is a critical safety risk factor that contributes to tens of thousands of motor vehicle
accidents each year, resulting in injuries and deaths that cost society $109 billion annually. The
issue is particularly pervasive among shift workers, who are six times more likely to be involved in
a drowsy driving crash compared to the general population. Shift workers in the oil and gas extraction
(OGE) industry may be at a greater risk of fatigue-related motor vehicle crashes because of their
exposure to long hours awake with no breaks, monotonous road environments and all-night work
shifts. The goals in this study are to develop and validate a predictive fatigue technology and to
identify translational strategies through which this technology can be feasibly and effectively
employed by the OGE industry to reduce the number of fatigued workers driving on the road,
which will ultimately reduce fatigue-related motor vehicle crashes. Driving, physiological, and
performance based data were obtained from twenty OGE drivers, 12 day shift and 8 night shift
drivers, over the course of 3 work days (12 hour shifts) using vehicle kinematics, chest-based
heart rate monitor, and a tablet-based psychomotor vigilance test. In general, night shift drivers
exhibited greater physiological load and greater decrements in vigilance and alertness; however,
these trends were not consistent over the three days. Driving kinematics indicated that there was
no significant difference in braking behavior between day shift and night shift drivers. Only 1
severe braking event was detected from forward/braking acceleration data, and 4 close-to-severe
braking events were detected from generic longitudinal acceleration data. These events
were not associated with markedly different physiological responses. Additional data analysis is
underway that will further relate driver fatigue levels over multiple days to driving
performance. These findings will guide future efforts in developing fatigue prediction
algorithms to identify at-risk drivers such that effective fatigue management strategies (e.g.,
scheduling, rest guidelines) can be implemented.
Keywords: Shiftwork, fatigue, heart rate variability, alertness
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
The oil and gas extraction (OGE) industry continues to experience a fatality rate nearly seven times
higher than that for all U.S. workers. OGE workers are exposed to intensive shift patterns and long
work durations inherent in this environment. This leads to fatigue, thereby increasing risks of
accidents and injuries. In the absence of any regulatory guidelines, there is a critical need for the
development of comprehensive fatigue assessment practices specific to OGE operations that take
into consideration not only the various OGE-specific sources of fatigue, but also the barriers
associated with effective and feasible fatigue assessments in OGE work. In response to this need,
Shortz, Mehta, Peres, Benden, and Zheng (2019) developed the Fatigue Risk Assessment &
Management in high-risk Environments (FRAME) survey. Further, they provided evidence that
the FRAME survey content captures fatigue-related information specific to the OGE industry not
found in any one other measure of fatigue.
The present study expands on these efforts by examining the psychometric properties (i.e.,
reliability and validity) of the FRAME survey—a critical step before the survey can be
recommended for use in practice. A sample of 200 OGE and petrochemical refinery workers was
sought to participate in this study. Linkages between the FRAME survey and a number of fatigue-
related measures validated for use outside of the OGE industry will be examined. Once data
analysis is complete, the FRAME survey will be refined for implementation, and recommendations
for implementation will be provided.
Abstract
Lessons learned from past incidents are essential to enhancing process safety in the chemical industry
and should be considered a knowledge legacy, for both corporations and governments, that evolves over time.
Although a wealth of empirical knowledge has been accumulated from public incident databases
and incident investigation reports, learning is still limited due to the high expense of manual
content analysis and the lack of methodology to gain insights from past incidents.
Recently, a few attempts have been made to develop methods that enable automated content analysis of
incident reports using natural language processing (NLP) techniques; however, because a manually compiled
list of keywords is still needed, these methods are not intelligent or automated enough to extract information
that lies outside the pre-defined vocabulary. In this work, advanced NLP techniques for text mining are
employed to identify causal relations from incident reports based on unsupervised learning and co-occurrence
network algorithms. The proposed method is capable of extracting latent causal factors of the incident causes
described in the reports and indicates the potential of identifying root causes when more comprehensive
training text data is applied in future work.
Abstract
Failure of hazardous liquid (HL) pipelines is a potentially significant hazard to people,
property and the environment. One of the main causes of HL pipeline failures is corrosion. To
predict the cause and consequences of corrosion in HL pipelines, this article presents an artificial
neural network (ANN) using incident data collected by the Pipeline and Hazardous Materials Safety
Administration (PHMSA) of the US Department of Transportation for onshore
HL transmission pipelines in the US between 2010 and 2019. From this incident database, 70
attributes have been selected for their ability to predict corrosion. Using the selected attributes as
input, the ANN model is constructed and its hyperparameters optimized; it predicts the type of
corrosion, total cost of property damage, net material loss and type of incident (rupture/release)
with 60-90% accuracy. In order to establish the credibility of the developed ANN model, its
accuracy is compared against that of another machine learning model.
1 Introduction
Pipelines are one of the safest modes of transporting bulk energy and have failure rates
much lower than railroads or highway transportation [1]. However, failures do occur, and
sometimes with catastrophic consequences [2]. Although pipeline failures can never be
completely avoided, an appropriate and accurate risk analysis of pipeline incidents can result in
reasonable and effective risk management measures to reduce the overall risk of failure.
Based on their causes, pipeline incidents can be classified into five categories: corrosion,
mechanical, natural, operational error and third party [3, 4]. Corrosion can be further categorized
into internal corrosion, caused by the material being transported, and external corrosion, related to
the pipeline coating and cathodic protection. Incidents due to mechanical causes consist of
cracks and fractures unable to withstand the pipeline flow, and those due to natural causes result
from events such as floods, earthquakes, frost, etc. Incidents due to operational error are
caused by fluctuations in operating conditions (e.g., pressure), and third-party incidents
represent damage caused by an operation not carried out by the pipeline management, e.g.,
excavation. Among all the causes, corrosion is ranked as one of the most frequent failure
sources in oil and gas pipelines and is difficult to detect [5, 3, 6]. Hence, this article primarily focuses
on pipeline incidents related to corrosion.
The analysis of pipeline incidents can be broadly divided into two categories: data
analysis and causation analysis. Data analysis examines pipeline failure data, using several
databases such as PHMSA and CONCAWE, to derive injury, fatality and failure rates [5, 7].
However, data analysis alone does not provide a clear insight into pipeline incidents. For this
purpose, causation analysis methods are used, which employ techniques such as neural networks,
regression and Bayesian methods [8, 9, 10]. Among machine learning based
methods, there has been considerable development in predicting the cause of pipeline
incidents. Most recently, Shaik et al. [11] used parameters such as metal loss, weld
anomalies, wall thickness and pressure flow to predict the repair requirement of a pipeline.
However, these methods use only a few attributes to predict pipeline incidents, despite
the presence of hundreds of attributes. Hence, to overcome the limitations of both of these
approaches, this work proposes to first perform data analysis to select significant attributes from
the rich pipeline incident database and then to perform causation analysis to predict the cause and
consequences of corrosion using a machine learning method, the artificial neural network (ANN).
This article is organized as follows. First, details of the data used for the analysis and its
preprocessing are provided. Then the proposed methodology, consisting of ANN model
development and model testing, is illustrated in detail and demonstrated on corrosion incidents.
Finally, the major findings and conclusions of this study are summarized.
2 Data processing
In North America, the leading oil and gas pipeline (OGP) incident database is managed
by the Pipeline and Hazardous Materials Safety Administration (PHMSA) [12]. In this region,
pipeline operators are required by law to report to the PHMSA every event that involves an
undesired release to the environment and meets any of the following criteria [13]:
1. The incident involves a death or personal injury necessitating in-patient hospitalization
2. Estimated property damage including cost of substance lost is $50,000 or more
In this work, the data has been collected from the PHMSA database for onshore
hazardous liquid transmission pipelines in the US between 2010 and 2019. The collected
data contains 3592 pipeline incidents, and each incident has 606 attributes. One of the most
frequent causes of OGP failures in the last 10 years has been corrosion, responsible for 721 incidents in
the collected data, for which a prediction model is developed in this work.
In order to develop a prediction model for corrosion, 70 of the 606 attributes are first
selected based on reasoning about their relevance to pipeline failure. The selected attributes are of
two types: a) generic attributes relevant to failure (e.g., time, location and
area of incident), and b) attributes specific to corrosion (e.g., presence of corrosion inhibitors and
lining). Since some of the attributes are populated for only a few incidents, selected
attributes have been combined to increase their information density. For example, age
of pipe and age of tank have been combined into a single attribute, the age of the item involved. In this
way, the number of selected attributes has been reduced from 70 to 24.
To further process the data, numerical operations have been conducted on the attributes.
For example, the age of the item involved in the incident is calculated as the difference between the
year of the incident and the year of manufacture of the equipment. Additionally, numerical attributes
have been categorized into bins. For example, the age of the item involved ranges from
10 to 120 years and is therefore categorized into 12 equal bins: 10, 20, 30, 40, 50, 60, 70, 80, 90, 100,
110, 120. Further, only the most informative part of some attributes has been utilized for the
analysis; for example, the local time of the incident has been reduced to day or night.
Since most of the attributes (e.g., operator location, type of commodity released) are categorical,
the numerical inputs (i.e., age and diameter of pipe) have also been categorized, with a value range
allocated to each bin, to maintain consistency in the dataset.
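As an illustration of this preprocessing step, the following is a minimal sketch using pandas. The column names (PIPE_INSTALL_YEAR, TANK_INSTALL_YEAR, INCIDENT_YEAR, LOCAL_DATETIME) are hypothetical placeholders and do not necessarily match the field labels in the PHMSA export.

```python
import pandas as pd

# Hypothetical column names; the actual PHMSA export uses different field labels.
df = pd.read_csv("phmsa_hl_incidents_2010_2019.csv")

# Combine sparsely populated attributes into a single "age of item involved".
year_built = df["PIPE_INSTALL_YEAR"].fillna(df["TANK_INSTALL_YEAR"])
df["ITEM_AGE"] = df["INCIDENT_YEAR"] - year_built

# Bin the age into 12 equal-width categories (10, 20, ..., 120 years).
age_bins = list(range(0, 130, 10))
df["ITEM_AGE_BIN"] = pd.cut(df["ITEM_AGE"], bins=age_bins, labels=age_bins[1:])

# Keep only the most informative part of the local time: day vs. night.
hour = pd.to_datetime(df["LOCAL_DATETIME"]).dt.hour
df["TIME_OF_DAY"] = hour.between(6, 18).map({True: "Day", False: "Night"})
```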
The selected input attributes, their number of categories and the categories themselves are listed in
Table 1. Among the selected input attributes, the type of commodity released, area of incident,
depth of cover, subpart of system involved and item involved are selected to infer which parts of
the system are most likely to undergo an incident. Further, equipment specifications such as coating
type, diameter and wall thickness of the pipe are selected to give specific information about the
pipeline. Here, pipeline function specifies whether the pipeline transports the commodity from the
production site/well to a refinery or similar facility (gathering) or from the refinery to final use or a
port (trunkline/transmission). It also indicates whether the pipeline operates below 20 percent of the
specified minimum yield strength (<=20% SMYS) or above it (>20% SMYS).
Next, inspection-related attributes are selected to infer information about the condition
of the pipeline. Specifically, the internal inspection tool indicator represents whether the pipeline
configuration can accommodate internal inspection tools, and the operation complications indicator
represents the presence of operational factors which significantly complicate the execution of an
internal inspection tool run. The SCADA in place indicator and CPM in place indicator represent
the presence of a Supervisory Control and Data Acquisition (SCADA) based system and a
Computational Pipeline Monitoring (CPM) leak detection system, respectively, on the pipeline or
facility involved in the incident.
As condition monitoring attributes, prior damage is selected, which represents observable
damage to the coating or paint in the vicinity of the corrosion. Attributes such as corrosion
inhibitors, corrosion lining and cleaning/dewatering are selected to represent, respectively,
treatment of the commodity with corrosion inhibitors or biocides, the presence of an interior coating
or lining with protective coating, and routine utilization of cleaning/dewatering pigs (or other operations).
Table 1: Input attributes. Each row lists the attribute, its number of categories (in parentheses), and the categories.
Operator location (18): TX, GA, CA, WY, PA, OK, IL, KS, AK, CO, OH, MD, UT, HI, NJ, NY, MT, NH
Local time of incident (2): Day, Night
Type of commodity released (4): Crude oil; Refined and/or petroleum product (non-HVL); HVL or other flammable or toxic fluid; Carbon dioxide/biofuel/alternative fuel
Area of incident (3): Underground; Aboveground; Tank including attached appurtenances/transitional area
Depth of cover (in) (4): 50, 100, 150, >150
System subpart involved (5): Pipeline including valve sites; Terminal/tank farm equipment and piping; Pump/meter station equipment and piping; Breakout tank/storage vessel including attached appurtenances; Equipment and piping associated with belowground storage
Item involved (12): Pipe; Auxiliary piping (e.g., drain lines); Tank/vessel; Weld including heat affected zone; Valve; Relief line; Tubing; Meter/prover; Flange; Scraper/pig trap/sump/separator; Pump; Other
Part of pipe involved (3): Pipe body, Pipe seam, Others
Diameter of pipe (in) (5): 5, 10, 15, 20, >20
Pipe wall thickness (in) (5): 0.1, 0.2, 0.3, 0.4, >0.4
Pipeline function (4): >20% SMYS regulated trunkline/transmission; <=20% SMYS regulated trunkline/transmission; >20% SMYS regulated gathering; <=20% SMYS regulated gathering
Pipe coating type (11): Coal tar; Fusion bonded epoxy; Cold applied tape; Paint; Asphalt; Extruded polyethylene; Field applied epoxy; Polyolefin; Composite; Others; None
Age of item involved (years) (9): 10, 20, 30, 40, 50, 60, 70, 80, >80
Material involved (2): Carbon steel, Others
Internal inspection tools indicator (3): Yes, No, Null
Operation complications indicator (3): Yes, No, Null
SCADA in place indicator (3): Yes, No, Null
CPM in place indicator (3): Yes, No, Null
Age of cathodic protection (years) (5): 0, 10, 30, 50, 70
Prior damage (3): Yes, No, Null
Corrosion inhibitors (3): Yes, No, Null
Corrosion lining (3): Yes, No, Null
Cleaning dewatering (4): Yes, No, N/A (not mainline pipe), Null
Age of corrosion inspection (years) (7): 0, 1, 2, 3, 4, 5, >5
Age of hydrotest (years) (6): 0, 10, 20, 30, 40, >40
Direct inspection type (4): Yes, and an investigative dig was conducted at the point of the incident; Yes, but the point of the incident was not identified as a dig site; No; Null
As the output of the analysis, four attributes are selected. First, the model differentiates
between the causes of the incident, i.e., internal and external corrosion. Additionally, to specify the
type of incident, the incident is identified as release, rupture or other. To predict the consequence
of the incident, the model outputs the cost of property damage (in dollars) and the net loss of
commodity released (in barrels). To increase the computational efficiency of the model, the
consequences have been categorized in bins of powers of 10.
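A minimal sketch of this power-of-ten binning is shown below; the handling of the edge cases (values at or below the lowest decade fall in the first bin, values above the top decade are clipped into the last bin) is an assumption, since the paper does not state it.

```python
import numpy as np

def power_of_ten_bin(values, n_bins=5):
    """Bin positive consequence values by order of magnitude (powers of 10).

    Assumed convention: values of zero or below go to the lowest bin, and
    anything above the top decade is clipped into the highest bin.
    """
    values = np.asarray(values, dtype=float)
    exponents = np.floor(np.log10(np.clip(values, 1.0, None)))
    return np.clip(exponents, 0, n_bins - 1).astype(int)

# Example: damage costs of $800, $25,000 and $3.2 million fall in bins 2, 4 and 4 (clipped).
print(power_of_ten_bin([800, 25_000, 3_200_000]))
```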
3 Methodology
The model developed in this work is an input-output ANN model that captures the causal
dependencies and the contribution of the input attributes to pipeline failures. The model learns
the synergy among the underlying input attributes and their collective ability to affect the integrity
of the pipeline, utilizing the wealth of empirical knowledge accumulated in public incident
databases. The methodology followed to develop the ANN model is described as a flowsheet in
Figure 1.
Figure 1: Model development methodology
To develop the ANN model, the contributing factors or attributes are first selected from the
PHMSA pipeline failure database, as described in the data processing section. Then the entire
dataset is divided in a 2:1 ratio into training and testing data. The training data is used to obtain
the parameters of the ANN model. The structure of the ANN model developed in this work is
presented in Figure 2.
The network structure has an input layer, two hidden layers and an output layer. The inputs of the
model are the attributes listed in Table 1, which are connected to the hidden layers. The first hidden
layer has twenty nodes, and the second hidden layer has twenty-five nodes. The second hidden
layer is connected to the output layer, i.e., the attributes listed in Table 2. For each attribute listed
in Table 2, a separate ANN model is developed.
Figure 2: Structure of ANN model
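As a rough illustration of this architecture, the sketch below uses scikit-learn's MLPClassifier. The (20, 25) hidden-layer sizes and the 2:1 train/test split follow the text; the one-hot encoding of the categorical inputs, the iteration limit and the random seed are assumptions, since the paper does not specify them.

```python
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

def fit_corrosion_ann(X, y):
    """Fit one ANN per output attribute, as described in the text.

    X: DataFrame holding the 24 categorical input attributes of Table 1.
    y: Series holding one output attribute, e.g. internal vs. external corrosion.
    """
    # One-hot encode every categorical input column (an assumed preprocessing choice).
    encoder = ColumnTransformer(
        [("onehot", OneHotEncoder(handle_unknown="ignore"), list(X.columns))]
    )
    model = Pipeline([
        ("encode", encoder),
        ("ann", MLPClassifier(hidden_layer_sizes=(20, 25), max_iter=2000, random_state=0)),
    ])
    # 2:1 split between training and testing data, as stated in the methodology.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1 / 3, random_state=0)
    model.fit(X_train, y_train)
    return model, model.score(X_test, y_test)
```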
In an ANN model, each node is a processing element (also known as a neuron) that is
connected to other processing elements. Typically the neurons are arranged in layers, with the
output of one layer serving as the input to the next layer. A neuron may be connected to all or a
subset of the neurons in the subsequent layer, with these connections modeling the causation
structure of the pipeline failure. Weighted data signals entering a neuron simulate, and
consequently transfer, the information within the network. The input values to a processing
element are multiplied by connection weights that simulate the strengthening of neural
pathways in the causation structure of failure. It is through the adjustment of the connection
strengths, i.e., the weights, that learning is emulated in ANNs. This connection is shown in Figure 3,
and the adjustment of connection strength is explained below:
Step 1: For each input, the input value x_i is multiplied by its weight w_i, and all of the
products are summed to account for the contributions from all nodes in the input layer. A bias b
is then added to the sum:
z = sum_{i=1..n} (x_i * w_i) + b        (1)
where n is the total number of inputs.
Figure 3: Structure of ANN model
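Equation (1) can be written directly in a few lines of Python; the ReLU activation applied to z below is an assumption on our part, since the paper does not state which activation function is used.

```python
import numpy as np

def neuron_output(x, w, b):
    """Weighted sum of Eq. (1), z = sum_i x_i * w_i + b, followed by a ReLU activation."""
    z = np.dot(x, w) + b
    return max(z, 0.0)  # assumed activation; the paper does not specify one

# Example with three inputs.
print(neuron_output(x=[1.0, 0.0, 1.0], w=[0.4, -0.2, 0.1], b=0.05))  # 0.55
```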
Here, t is the number of training points in each category. The loss function is minimized to
obtain optimized weights and biases for each neuron.
Using the trained model, the output for each point in the testing data, i.e., one third of the total
data, is predicted. The model accuracy is calculated using Eq. 5 and reported in the results section.
In order to establish the credibility of the developed ANN model, its accuracy is compared
against that of another machine learning model, the support vector machine (SVM).
4 Results
The training performances of the four ANN models developed are compared using the learning-rate
parameter, which determines the rate of movement toward a minimum of the loss function
at each iteration. It can be observed in Figure 5 that the learning rates of the models with cause
and release type as outputs are good, while the learning rates of the models with net loss and total
cost as outputs are low, since these outputs have a higher number of categories. Since a lower
learning rate implies lower model accuracy, the accuracy of the models with net loss and total cost
as outputs is lower, as shown in Figure 6, than that of the models with cause and release type as outputs.
ANN model accuracy (%). Each row lists the output attribute, its number of categories (in parentheses), and the validation / testing accuracy.
Cause, internal vs. external corrosion (2): 97.40 / 94.54
Total cost in dollars, binned in powers of 10 (5): 89.80 / 60.50
Net loss in barrels, binned in powers of 10 (5): 95.76 / 74.79
Release type: release, rupture, others (3): 94.53 / 98.80
5 Conclusion
This article presents a new framework for causation analysis of hazardous liquid pipeline
incidents, focused on corrosion. The proposed technique first collects and processes incident data
from the PHMSA database: redundant attributes are eliminated, 70 attributes with high information
content are selected, and these are then reduced using process knowledge to increase information
density. A reasonably accurate prediction model is then developed to predict the cause and
consequences of corrosion.
The proposed ANN model is applied to the preprocessed incident data and validated with
90-95% accuracy. The model performance is tested on a separate dataset, which results in
approximately 95% accuracy for the predictive models of cause and release type and 75% accuracy
for the predictive model of net loss. This article shows the strength of the ANN method in predicting
the cause and consequences of pipeline incidents, and the approach can be extended to pipeline
incidents caused by other factors such as excavation, natural forces, etc.
6 Acknowledgments
The authors gratefully acknowledge financial support from the PHMSA-CAAP, the Texas
A&M Energy Institute and the Mary Kay O’Connor Process Safety Center.
References
[2] Y. Guo, X. Meng, D. Wang, T. Meng, S. Liu, and R. He, “Comprehensive risk
evaluation of long-distance oil and gas transportation pipelines using a fuzzy petri net model,”
Journal of Natural Gas Science and Engineering, vol. 33, pp. 18 – 29, 2016.
[13] A Guideline: Using or Creating Incident Databases for Natural Gas Transmission
Pipelines, vol. Volume 1: Project Management; Design and Construction; Environmental Issues;
GIS/Database Development; Innovative Projects and Emerging Issues; Operations and
Maintenance; Pipelining in Northern Environments; Standards and Regulations of
International Pipeline Conference, 09 2006.
[14] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20,
pp. 273–297, 1995.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Nanotechnology, being a comparatively new development in science and engineering,
poses many risks to industry. Apart from its health effects, scientists and engineers are
concerned about its explosion potential. A hazard index would be able to identify the hazard
level of nanoparticles and help establish proper controls for the risks associated with them. This study
creates a database of the properties of various nanoparticles and derives a hazard factor
to formulate the index. The hazard factor is based on properties such as explosion parameters, size,
shape, dispersibility, humidity, toxicity, flammability, reactivity, etc. The study also aims to
consider certain other characteristics, such as the level of available scientific knowledge, that may
impact the index, given that this relatively new field of technology carries more risk than
well-established ones. Based on the hazard factor, the research will use statistical analysis to check
the validity of the method and later compare the result with other existing indexes. Finally, the
indexes will be ranked to precisely identify the hazard level against their respective properties.
This index will be an effective indicator of a potential hazard that the engineered nanoparticles
may hold and alert the users to take preventive action to moderate the risk of the hazard.
Key Words: Nanoparticles, hazard, index
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Systems in process facilities are complex. The almost ‘endless’ variety of components,
the interaction among these components, the physical arrangement of the components in
the facility, and anticipating system behaviours can overwhelm employees joining the
industry. However, these employees need to evolve a sense of what industrial systems
are in order to be able to grasp the various system functionalities mentioned above.
Johan de Kleer1 and his colleagues termed this sense as ‘Mechanistic Mental Models’
and described it as “the common intuition of ‘simulating the machines in the mind’s
eye’.” One thing observed about millennials and the z-generation joining the workforce is
the ease with which they interact with items in the digital realm.
To address the concern above with understanding systems and take advantage of
millennials and z-generation tendencies when it comes to functioning in digital
environments, the authors and their associates developed a full-scale, 3D, highly
interactive virtual reality application titled DesignVR. DesignVR can be used to interact
with systems, as well as design and build systems, for desired industrial applications.
Experiments were conducted with students to accomplish just that.
To examine the effectiveness of DesignVR in creating proper mental models for
systems, the authors assessed mental models on the following four dimensions:
(1) System topology: the structure of the system
(2) Envisioning: inferring the functionality of system components
(3) Causal model: the ability to describe system and component functionality
1 de Kleer, J. & Brown, J. S. (1983). Assumptions and Ambiguities in Mechanistic Mental Models.
In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 155-190). Hillsdale, NJ: Lawrence
Erlbaum Associates Pubs.
(4) Simulation: the ability to conduct mental simulation for behaviour.
The utility of DesignVR as a platform for preparing the younger workforce for working with
systems in process industries is documented, discussed, and demonstrated.i
Keyword: Systems in process industries; virtual reality; mental models of systems.
i
DesignVR and the virtual reality hardware it operates with will be available for attendee
demonstrations and interactions.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Prasad Goteti
1. INTRODUCTION
The Plant Manager wants to know how safely the process plant is running in real time. It would be
helpful if the Manager had one number that indicated the current process risk in the plant; this is the
idea behind the Process Risk Index.
The intent of this paper is to introduce and explain the concept of the Process Risk Index.
An example using a Safety Instrumented Function (SIF) in a process application will be used to explain
this concept. The paper details how relevant Historian data is collected, analyzed and used to
compare the "healthiness" of protection layers with design data, such as the Layers Of Protection Analysis (LOPA)
and Safety Integrity Level (SIL) calculation of the SIF, in order to calculate the Process Risk Index.
Abbreviations: American Petroleum Institute (API); Computer Maintenance Management System (CMMS); Key
Performance Indicators (KPI); Loss Of Primary Containment (LOPC); International Electrotechnical
Commission (IEC); International Society of Automation (ISA); Independent Protection Layer (IPL);
Layers Of Protection Analysis (LOPA); Long Sample Time (LST); Occupational Safety and Health
Administration (OSHA); Probability of Failure on Demand (PFD); Process Risk Index (PRI); Process
Safety Event (PSE); Process Safety Indicators (PSI); Risk Reduction Factor (RRF); Safety Integrity
Level (SIL); Safety Instrumented Systems (SIS); Safety Life Cycle (SLC); Short Sample Time (SST);
Target Mitigated Event Likelihood (TMEL)
2. INTRODUCTION TO API RP 754
API RP 754 is titled “Process Safety Performance Indicators for the Refining and Petrochemical
Industries”, the second edition of which came out in April 2016.
The purpose of the Recommended Practice (RP) is to identify leading and lagging indicators in the
refining and petrochemical industries, whether for public reporting or for use at individual facilities,
including methods for the development of Key Performance Indicators (KPI). As a framework for
measuring activity, status or performance, the RP classifies Process Safety Indicators (PSI) into four
tiers of leading and lagging indicators. Tiers 1 and 2 are suitable for public reporting, while Tiers 3 and 4
are meant for internal use at individual sites.
Figure-1 (Ref 8)
A Process Safety Event (PSE) is defined in this RP as an unplanned or uncontrolled Loss Of Primary
Containment (LOPC) of any material, including non-toxic and non-flammable material (e.g., steam or
compressed air), from a process, or an undesired event or condition that, under slightly different
circumstances, could have resulted in an LOPC of a material.
Leading indicators give warning of a potential hazardous event in advance, while lagging indicators are based
on facts after the hazardous event. Tiers 1 and 2 generally contain more lagging than leading
indicators, while Tiers 3 and 4 contain more leading than lagging indicators.
Identifying key leading and lagging indicators for each tier and monitoring them on a continuous basis
could give an indication of Process Safety performance of a site. As an example, a major gas leak above
the tolerable limits set by the local jurisdiction would classify as a Tier 1, lagging indicator. While an
audit finding indicating that a proper PHA was not conducted would classify as a tier 4, leading
indicator.
3. IEC/ISA 61511 AND THE SAFETY LIFE CYCLE (SLC)
The intent of IEC/ISA 61511 is the management of functional safety. IEC/ISA 61511 details the
activities to be performed to meet the functional safety requirements in the form of a Safety Life Cycle
(SLC). The SLC covers the Analysis, Implementation and Operation phases to define, design,
implement, operate, and maintain an SIS. The end user can develop their own SLC based on the guidance
in IEC/ISA 61511, which is then documented in a Safety Plan.
1. Hazard and Risk Assessment – To determine the hazards and hazardous events in the
process, the initiating events leading to the hazardous event, the associated process risk, the
Risk Reduction Factor (RRF) required to reduce the risk below acceptable levels, and to
identify Independent Protection Layers (IPL) to achieve the necessary risk reduction based
on the corporate’s acceptable risk criteria.
2. Non-SIS Solutions Applied – Inherently Safer Design (ISD) would be the ideal choice for
any process plant to achieve process safety. However, this is not always possible due to the
hazardous materials and chemical processes used in the process industry. Key considerations
for ISD, as described by Trevor Kletz [ref. 7], are to minimize, substitute, moderate, and
simplify. The use of non-SIS layers such as pressure relief valves, rupture discs, etc.
(refer to Figure 1) is also an option that a project team considers to meet identified process safety
risks. SIS layers would usually be the last option to be considered.
3. Allocation of safety functions to protection layers – To identify Safety Instrumented
Functions (SIF) as one of the IPLs in step 1 and determine their Safety Integrity Levels (SIL)
based on the extent of Risk Reduction Factor (RRF) taken credit for (refer to “Necessary
Risk Reduction “ in Figure 3). Initial SIL verification calculations for each SIF are
sometimes generated as part of this step.
4. Safety Requirement Specification (SRS) – Generate a document or set of documents which
define the Functional and Integrity requirements of each identified SIF in the SIS.
Figure 3 – Risk Reduction by Independent Layers of Protection
1. SIS Design and Engineering – Design and Engineering of the SIS to meet the Functional
and Integrity requirements in the SRS. SIL verification calculations are generated based on
the instrumentation selected for each SIF. The RRF calculated for each safety instrumented
function (SIF) needs to be equal to or greater than the RRF values determined during the
Analysis phase. The RRF of the SIF represents a portion of the total risk reduction as
indicated in figure 3. SIL verification calculations are based on various parameters, like
failure rates of the instruments, Proof Test Intervals (PTI), and additional diagnostics on valves
(Partial Valve Stroke Testing or PVST).
2. SIS Installation, Commissioning and Validation – To install, commission, and validate
that the SIS meets the Functional and Integrity requirements in the SRS
1. SIS Operation and Maintenance – Operate and Maintain the system based on the
requirements in the SRS. Look for Key Performance Indicators (KPI) which will inform the
Operator of any SIF failures.
2. SIS Modification – Use an approved Management Of Change (MOC) procedure to manage
any changes to the SIS after installation and commissioning. The MOC process usually
begins with an “impact analysis” based on the proposed changes to ensure no negative
impact to the original design requirements of the SIS. Proposed changes need to be validated,
reviewed, approved, and communicated before the changes are incorporated. Requests for
modifications can come either from an operational change in the process or from the OSHA
PSM-specified HAZOP revalidation carried out once every 5 years.
3. SIS Decommissioning – De-commission the SIS when all process hazards no longer require
a safety function.
Figure 4 – Simplified representation of the Safety Life Cycle
4. PROCESS RISK INDEX CALCULATION
Hydrocarbon is fed to a pressure vessel (V-1). The upstream pressure to vessel V-1 is greater than 5
atmospheres, and the Maximum Allowable Working Pressure (MAWP) of vessel V-1 is 5 atmospheres.
The pressure in V-1 is controlled at 3 atmospheres by PIC-1 through the Basic Process Control System
(BPCS). When the pressure crosses the alarm limit due to failure of PIC-1, PZT-4275 senses this and
sends the signal to the Safety Instrumented System (SIS) logic solver; the interlock PSHH-1 (set at 3.75
atmospheres) then initiates shutdown of XZV-4275, which is a De-energized To Trip (DTT), fail-close
valve, i.e., open when the pressure is normal. The Pressure Safety Valve (PSV-1) lifts in the event
the pressure in V-1 reaches 4 atmospheres.
Figure-5
In this example, it is assumed the PHA study indicated that uncontrolled high pressure in V-1 can lead
to vessel rupture and release of hydrocarbons to the atmosphere, leading to a potential explosion. Refer
to Figure 6, which shows the PHA risk matrix that was used. The various scenarios based on the risk matrix
are detailed in Table 1.
Table-1
The Target Mitigated Event Likelihood (TMEL) for Severity 2 considered is 1E-05 per year.
Where :
Figure-6
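To make the LOPA arithmetic behind such a scenario concrete, the following is a minimal sketch with hypothetical numbers; only the TMEL of 1E-05 per year comes from the text, while the initiating event frequency and the PSV PFD below are illustrative assumptions and are not taken from Table 1 of this paper.

```python
# Hypothetical LOPA gap calculation for the high-pressure scenario in V-1.
TMEL = 1e-5          # target mitigated event likelihood, per year (from the text)
IEF = 0.1            # assumed initiating event frequency (failure of BPCS loop PIC-1), per year
PFD_IPLS = {"PSV-1": 1e-2}   # assumed PFD of the non-SIF IPL credited in the scenario

unmitigated = IEF
for pfd in PFD_IPLS.values():
    unmitigated *= pfd

required_rrf_sif = unmitigated / TMEL      # risk reduction the SIF must provide
required_pfd_sif = 1.0 / required_rrf_sif  # i.e., maximum allowable PFD of SIF-1
print(f"SIF-1 needs RRF >= {required_rrf_sif:.0f} (PFD <= {required_pfd_sif:.0e})")
# With these assumed numbers: RRF >= 100, PFD <= 1e-02, i.e., SIL 2 territory.
```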
4.3 USE OF KPIs DURING THE OPERATION PHASE OF THE SAFETY LIFE CYCLE
After the IPLs are designed and implemented, it is important to know how they are functioning.
Based on API RP 754, the KPIs that can be identified in our example would be (refer to Figures 1 and 5):
1. Release of PSV-1 to Flare (Tier 1 or 2 KPI) –This would mean both the BPCS and SIS loop
had failed to maintain the pressure in the vessel below dangerous levels. This would be a lagging
indicator and could be classified as Tier 1 or 2 by the individual site based on amount of gas
released to flare.
2. SIF-1 exercised (Tier 3) –This would mean that the Pressure in the vessel was not controlled by
the BPCS loop and reached a limit where SIF-1 had to shut the Hydrocarbon inlet line. This
would be a Tier 3, leading indicator as far as LOPC is concerned but a lagging indicator in terms
of Process Availability.
3. Audit findings (Tier 4) - If an Audit finding indicates that the SIF-1 field instruments are not
being Proof Tested as was considered during the SIL verification calculations, this will be
informed to the individual site management as a Tier 4 leading indicator.
4. SIF component, detected failure (Tier 4) – If the input transmitter of SIF-1 fails and is detected
or bypassed, the SIF is now running in a degraded mode. The component needs to be fixed and
restored so that SIF-1 can contribute to the risk reduction it was designed for. This would be a
Tier 4 leading indicator.
4.4 OPERATION PHASE OF SLC – PROCESS RISK INDEX (TIER 3 OR 4)
Process Risk Index (PRI) is a single number that indicates the process risk profile of a process unit in real
time (short term) or over a period of time (long term). PRI is based on hazardous event scenarios
of high severity (safety, commercial or environmental), i.e., those that have a base
"unacceptable" risk without any safeguards.
If PRI = 0%, the process risk is within the "acceptable" criteria of the operating company.
If PRI = 100%, the process risk is within the "unacceptable" criteria of the operating company.
Short Term (ST) PRI is for a period of one shift or one day. This is for the plant operations and
maintenance managers to get an idea of how their process plant is doing.
Long Term (LT) PRI is for a period of a few months or more. This is for senior management and
plant managers to know how the process plant has been doing over the long term.
4.5.1 Assumptions for Short Term (ST) PRI equations
1. "Safety" is the driver for this hazardous event (not commercial or environmental), so from
now on we will refer to it as the Safety Risk Index.
2. PFDactual of the SIF and non-SIF IPLs is the same as the PFD per design.
3. The SIF has 1oo1 input voting.
4. All other IPLs are working per design.
4.5.2 Variable which affects the Short Term (ST) Safety Risk Index
1. SIF “Time in Bypass” over the Short term period. This data is available in the Plant Historian
(Figure 7)
4.5.3 Equations for Short Term (ST) Safety Risk Index for ONE scenario
1. Designed ST Safety Risk = TMEL (for safety) x Safety Severity
(the assumption is that, with the designed safeguards, the TMEL has been met)
2. Actual ST Safety Risk = IEF x [(PFD of non-SIF IPL x SIF PFD) x (Time SIF NOT in
Bypass/SST) + (PFD of non-SIF IPL) x (Time SIF in Bypass/SST)] x Safety Severity
3. ST Safety Risk Index = [Log of (Designed Safety Risk / Actual Safety Risk) / Log of
(Designed Safety Risk)] * 100
where:
IEF = Initiating Event Frequency
SST = Short Sample Time
4.5.4 Example for Short Term (ST) Safety Risk Index for ONE scenario
In our example, if SIF-1 input (PZT-4275) is bypassed for 8 hours in a period of 24 Hours, the ST
Safety Risk Index calculation is per Table-2 :
Table-2
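A minimal sketch of the short-term calculation described in Sections 4.5.3 and 4.5.4 is shown below, using hypothetical values for the IEF, PFDs and severity in place of the Table-2 entries, which are not reproduced here.

```python
import math

def st_safety_risk_index(ief, pfd_non_sif, pfd_sif, hours_bypassed, sst_hours,
                         tmel, safety_severity):
    """Short-term (ST) Safety Risk Index for one scenario, per Section 4.5.3."""
    frac_bypassed = hours_bypassed / sst_hours
    actual = ief * (pfd_non_sif * pfd_sif * (1 - frac_bypassed)
                    + pfd_non_sif * frac_bypassed) * safety_severity
    designed = tmel * safety_severity          # designed ST safety risk
    return 100 * math.log(designed / actual) / math.log(designed)

# Hypothetical numbers: SIF-1 input bypassed 8 h out of a 24 h short sample time.
print(st_safety_risk_index(ief=0.1, pfd_non_sif=1e-2, pfd_sif=1e-2,
                           hours_bypassed=8, sst_hours=24,
                           tmel=1e-5, safety_severity=1.0))
```

With these assumed inputs the index evaluates to roughly 31%, and it falls to 0% when the SIF is never bypassed, consistent with the interpretation that 0% means the risk is within the acceptable criteria.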
4.5.5 Equations for Short Term (ST) Safety Risk Index for MULTIPLE scenarios (example – One
Plant)
where :
IEF = Initiating Event Frequency
SST = Short Sample Time
3. ST Safety Risk Index (Multiple) = [Log of (Designed Safety Risk (Multiple) / Actual Safety
Risk (Multiple)) / Log of (Designed Safety Risk (Multiple))] * 100
4. Worst actor of ST Safety Risk Index = Highest ST Safety Risk Index (ONE scenario)
For Short Term (ST) Safety Risk Index – Only “SIF Time in Bypass” used
For Long Term (LT) Safety Risk Index – All three parameters are used
Figure-7
4.5.6 Assumptions for Long Term (LT) PRI equations
1. For One scenario which has an “Unacceptable” Risk criteria without any safeguards
2. “Safety” is the driver for this hazardous event (not Commercial and Environment). So from now
on we will refer to it as Safety Risk Index.
3. PFDactual of SIF and non-SIF IPL may not be the same as PFD per design
4. The SIF IPL input has 1oo1 input voting
4.5.7 Variables which affect the Long Term (LT) Safety Risk Index
1. SIF demand rate. If this is greater than the assumed IEF, then SIF demand rate will be considered
in the “Actual LT Safety Risk” equation
2. SIF “Time in Bypass” over the Long Term period
3. IPL on-time testing. If this is different from what was considered during design, it will affect the PFDactual of the IPLs.
4. The above data is available in the Plant Historian and the Computerized Maintenance Management System (CMMS). (Figure 7)
4.5.8 Equations for Long Term (LT) Safety Risk Index for ONE scenario
1. Designed Long Term Safety Risk = TMEL (for safety) x Safety Severity
(the assumption here is that with the designed safeguards, the TMEL has been met)
2. Actual LT Safety Risk = SIF demands x [(PFDactual of non-SIF IPL x SIF PFDactual) x (Time
SIF NOT in Bypass/LST) + (PFD of non-SIF IPL) x (Time SIF in Bypass/LST )] x Safety
Severity
where :
SIF demands considered as Initiating Event Frequency if SIF demands > IEF
LST = Large Sample Time
PFDactual (for SIF and IPL) varies based on “Real test intervals” vs “Design Test intervals”
3. LT Safety Risk Index = [Log (Designed Safety Risk / Actual Safety Risk) / Log (Designed Safety Risk)] x 100
4.5.9 Example for Long Term (LT) Safety Risk Index for ONE scenario
In our example, if SIF-1 input (PZT-4275) is bypassed for a total of 2 months in a period of one year, the LT Safety Risk Index calculation is per Table-3:
Table-3
4.5.10 Equations for Long Term (LT) Safety Risk Index for MULTIPLE scenarios (example – One Plant)
Where:
SIF demands considered as Initiating Event Frequency if SIF demands > IEF
LST = Large Sample Time
PFDactual (for SIF and IPL) varies based on “Real test intervals” vs “Design Test intervals”
3. LT Safety Risk Index (Multiple) = [Log (Designed Safety Risk (Multiple) / Actual Safety Risk (Multiple)) / Log (Designed Safety Risk (Multiple))] x 100
4. Worst actor of LT Safety Risk Index = Highest LT Safety Risk Index (ONE scenario)
5. PROCESS SAFETY RISK INDEX
Depending on who the KPI is for, the Process Safety Risk Index is understood and used in a different
manner.
1. Process Plant Safety Risk Index (Long Term) = LT Safety Risk Index (Multiple)
2. Worst actor for Process Plant Safety Risk Index (Long Term) = Scenario with Highest
LT Safety Risk Index
This will give Senior management at the corporate level an insight into how the plant has been running based on its Long Term safety track record.
The Long Term Safety Risk Index will help the Plant / Operations Manager to reanalyze risk and take appropriate action on some of the worst actors which are driving the Safety Risk Index up.
1. Process Plant Safety Risk Index (Short Term) = ST Safety Risk Index (Multiple)
2. Worst actor for Process Plant Safety Risk Index (Short Term) = Scenario with Highest
ST Safety Risk Index
The Short Term Safety Risk Index will help the Plant / Operations Manager to decide on maintenance priorities on a shift or daily basis.
6. CURRENT LIMITATIONS AND FUTURE ENHANCEMENTS
The current approach:
1. Considers only the Safety Risk Index, as it assumes Safety as the Risk driver
2. Considers only 1oo1 input voting for a SIF
3. Considers only SIF status (bypassed, on-time test, demands)
4. Considers all SIF demands as "real" demands and not spurious
Future enhancements will include:
1. Process Risk Index for scenarios which are driven by Commercial or Environmental severity, identified during the HAZOP stage
2. Non-SIF IPL status (assuming their status is digitally available)
3. SIFs with MooN input voting, where M>=N and N>1
4. Real failure rates of SIF instruments based on application and collected data
7. CONCLUSION
The Process Risk Index, both Short Term and Long Term, can provide valuable information to Corporate and Plant Managers and Engineers. Based on these Risk Indices, corporate and plant teams can monitor and improve the Process Safety solutions currently being used in a plant. This reduces the probability of process incidents and increases plant reliability.
8. REFERENCES
1. http://en.wikipedia.org/wiki/IEC_61511#Scope
2. http://en.wikipedia.org/wiki/Natural-gas_processing
3. IEC 61511 / ISA 61511
4. IEC 61508
5. US DOT Fact Sheet, 12/1/11
6. Layer of Protection Analysis: Simplified Process Risk Assessment, AIChE CCPS, 2001
7. OGP Process Safety – Recommended Practice on KPIs – Report # 456, Nov 2011
8. White paper by aeSolutions - Justifying IEC 61511 Spend
9. Presentation by the author at the Purdue Process Safety and Assurance Center Steering Committee (P2SAC) meeting, Dec 5, 2019
10. U.S. provisional patent application 63/038766, Operations Safety Advisor
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Sinijoy P J*, Dr. M. Bhasi**, Dr. V. R. Renjith***
* Research Scholar, School of Engineering, Cochin University of Science and Technology, Kerala, India
** Professor, School of Management Studies, Cochin University of Science and Technology, Kerala, India
*** Professor, School of Engineering, Cochin University of Science and Technology, Kerala, India
[email protected]
Abstract
Industrial cyber security threats are continuously evolving in complexity, and cyber security has ranked among the top global risks for years. Most firms are not fully prepared for system intrusions and attacks. An Intrusion Detection System (IDS) protects critical infrastructure from attackers, and a robust IDS can protect process industries from cyber-attacks. Evasion techniques used by attackers make detection difficult, and complex interconnected systems demand robust cyber security techniques. The learning-from-experience strategy using case-based reasoning methodologies and the utilization of machine learning are investigated. Detection methods such as Anomaly-based IDS (AIDS), Signature-based IDS (SIDS), Host-based IDS and Network-based IDS are discussed.
The paper discusses Cyber Physical Systems and comprehensively reviews various recent works on IDS. The paper also examines evasion techniques used by attackers, along with the advantages and disadvantages of existing systems. The drawbacks of conventional Intrusion Detection Systems are overcome with the advent of real-time and Artificial Intelligence based intelligent monitoring systems.
It is vital to develop effective real-time attack monitoring and threat mitigation mechanisms. This paper also proposes future research ideas on intrusion detection technology with extensive application of Machine Learning.
Keywords: CPS, IDS
1. Introduction
Dependency on technology, and the associated cyber security threats, are increasing rapidly. A chemical plant is an industrial process plant that processes chemicals on a large scale. Attackers target process-based industries such as the chemical industry because the consequences in these sectors are very high.
The first part of the paper describes Cyber Physical Systems (CPS), which combine hardware and software components. CPS is a concept that focuses on bridging the cyber and physical worlds. The biggest threat to a CPS involves the targeted controller and the various processes controlled by it. An intruder hacking into such systems and changing the value of any critical parameter, such as temperature or pressure in an operational unit, can cause heavy damage.
Section 2 describes cyber physical system security and intrusion detection systems, and Section 3 introduces related work in CPS. Conclusions of the study and references are presented in the last sections.
2.Cyber Physical System Security and Intrusion Detection Systems
2.1 Cyber Physical System Risk Equation
Risk = Threat x Vulnerability x Consequence
2.2 Common Attacks in CPS
Fig. 2 describes common attacks and their sub-types in CPS.
A) Control hijacking: the intruder takes complete control of the system.
B) Malware attacks affect the normal functioning of the system.
C) Code injection exploits the vulnerabilities of the system by systematic injection of code that changes the execution of the program.
D) Denial of Service (DoS) attacks disable the normal services provided by the system. i) A Permanent DoS attack occurs when the intruder exploits unpatched vulnerabilities in order to install modified firmware and damage the system. ii) A Distributed DoS attack is based on a model where several systems send requests to the targeted system and occupy its resources, making the targeted system unable to serve its purpose.
E) Man-in-the-Middle is an active type of attack that occurs when an intruder intervenes between communicating entities, trying to intercept the packets.
F) Spoofing: i) IP Spoofing is aimed at using another IP address to pass the security system. ii) GPS Spoofing is based on broadcasting an incorrect signal of higher strength than that received from the satellite in order to deceive the victim.
2.3 Two major types of Threat Mitigation Schemes.
A. Intrusion Detection System [IDS]
B. Rule Based and Machine Learning Algorithm based Framework for Threat Detection.
2.3.1 Intrusion Detection System [IDS]
An Intrusion Detection System (IDS) is a device or software application that monitors a network or process systems for malicious activity or policy violations. An IDS can be classified by where detection takes place (network or host) or by the detection method employed (signature-based or anomaly-based).
In terms of the way a threat is detected, modern IDS can be divided into three subgroups:
Anomaly-based detection identifies behavioural patterns which differ from the patterns of normal system functioning;
Signature-based detection requires a repository of threat models, kept up to date, which is used to identify threats;
Specification-based detection uses specifications of the system as a whole, as well as of its components and interfaces, to detect suspicious activities.
Figure 3
Fig. 3 shows the classification of CPS Intrusion Detection based on two major criteria:
1. Detection technique: this criterion defines which misbehaviours of a physical component the IDS considers in order to detect an intrusion.
2. Audit material: this criterion defines how the IDS collects data for analysis.
3. Related Work in CPS
3.1 Multi-Layer Data-Driven Cyber-Attack Detection System for Industrial Control Systems Based on Network, System, and Process Data [1]
Data from a real-time ICS test bed is used to demonstrate the proposed detection system. Five attacks, including man-in-the-middle (MITM), denial of service (DoS), data exfiltration, data tampering, and false data injection, are carried out to simulate the consequences of cyber-attack and to generate data for building data-driven detection models. Four classical classification models based on network data and host system data are studied, including k-nearest neighbour (KNN), decision tree, bootstrap aggregating (Bagging), and random forest, to provide a secondary line of defence for cyber-attack detection in the event that the intrusion prevention layer fails. Intrusion detection results suggest that KNN, Bagging, and random forest have low missed alarm and false alarm rates for MITM and DoS attacks, providing accurate and reliable detection of these cyber-attacks. Cyber-attacks that may not be detectable by monitoring network and host system data, such as command tampering and false data injection attacks by an insider, are monitored for by traditional process monitoring protocols.
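As an illustration of how such a secondary, data-driven detection layer could be built, the sketch below trains the four classifier families named above on a labelled feature matrix of network and host statistics. It is a minimal example using scikit-learn; the data split and hyper-parameters are assumptions for illustration and are not taken from the reviewed work.

# Minimal sketch of a data-driven secondary detection layer of the kind described
# above, assuming a labelled feature matrix X (network/host statistics) and labels
# y (normal vs. attack class). scikit-learn is used here only for illustration.
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

def train_detectors(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                              random_state=0)
    models = {
        "knn": KNeighborsClassifier(n_neighbors=5),
        "decision_tree": DecisionTreeClassifier(random_state=0),
        "bagging": BaggingClassifier(random_state=0),
        "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        # Missed alarm and false alarm behaviour can be read from per-class recall.
        print(name, classification_report(y_te, model.predict(X_te)))
    return models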
A cyber-attack detection system utilizing a defense-in-depth concept improves overall cyber-security by combining signature-based and anomaly-based analysis of network, host, and process data. Attack scenarios (reconnaissance, DoS attacks, and a data tampering attack) have been conducted to demonstrate the possibility of cyber-attacks and to generate data for studying IDS development.
Figure 6: Cyber Attack Detection System
Figure 6 shows the structure of the proposed cyber-attack detection system with a defence-in-depth
concept. The first defence layer is the traditional intrusion prevention layer, including firewalls,
data diodes, and gateways, which are already widely applied in the industry. However, there are situations in which attackers could bypass this defence line. The second defence layer consists of
data-driven models for cyber-attack detection based on network traffic and system data, including
the classification model indicated by M1 and big data analytics models indicated by M2. The
classification models are based on supervised learning techniques, which can only detect attacks
with behaviours similar to known attacks. Unsupervised big data analytics-based models will
provide additional flexibility for intrusion detection; this is an area of ongoing research. M1 and
M2 provide early detection of attackers when the attacks cause behaviour deviation from normal
operation. If the secondary layer fails to detect malicious activities, the last defence line monitors process data and uses empirical models, indicated by M3, to detect abnormal operation potentially due to a cyber-attack. A model with residual thresholding detection was implemented in the M3
defence layer. The detection results of M1 and M3 using data generated from the physical test bed
show that the proposed cyber-attack detection system has a high detection accuracy and a wide
attack coverage. This multi-layer detection system improves the robustness of overall intrusion
detection and is sensitive to both known and zero-day exploits.
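A schematic sketch of the residual-thresholding idea behind the M3 layer is given below: an empirical model is fitted on attack-free process data, and an alarm is raised when the prediction residual for new data exceeds a threshold. The linear model and the 3-sigma threshold are illustrative assumptions, not the reviewed implementation.

# Schematic sketch of residual-thresholding detection (the M3 idea): an empirical
# model predicts one process variable from correlated measurements, and an alarm
# is raised when the residual exceeds a threshold tuned on attack-free data.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_residual_detector(X_normal, y_normal):
    model = LinearRegression().fit(X_normal, y_normal)
    residuals = y_normal - model.predict(X_normal)
    threshold = 3.0 * residuals.std()      # illustrative 3-sigma limit
    return model, threshold

def detect(model, threshold, X_new, y_new):
    residual = np.abs(y_new - model.predict(X_new))
    return residual > threshold            # True -> abnormal operation flag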
A multi-layer, data-driven cyber-attack detection system was developed to enhance ICS cyber security by providing wider attack detection coverage through the defence-in-depth concept. In order to
detect unknown attacks using network and host system data, the unsupervised big data analytics
models in M2 will be studied to further enhance the second defence line.
3.2 Deep Learning based Efficient Anomaly Detection for Securing Process Control Systems
against Injection Attacks[2]
Modern Industrial Control Systems (ICS) represent a wide variety of networked infrastructure connected to the physical world. Depending on the application, these control systems are termed Process Control Systems (PCS), Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS) or Cyber Physical Systems (CPS). ICS are designed for reliability, but security, especially against cyber threats, is also a critical need. In particular, an intruder can inject false data to disrupt the system operation.
Anomaly-based detection approaches are used to detect attacks that feature the injection of spurious measurement data, and they have proven to be effective. In this paper, an injection attack detection system that uses deep learning algorithms, such as stacked auto-encoders and deep belief networks, tailored to identify different types of injection attacks is explained.
A model plant is used to obtain data such as sensor and actuator measurements, and specific attacks were injected into the data. The injected attacks vary in behaviour for training and testing of the proposed scheme.
Data cleaning detects and corrects corrupt or inaccurate records in the collected data. Data
transformation includes tasks such as smoothing, normalization, aggregation and generalization of
acquired data. Normalization is the key task in data transformation which scales the data to a
specified range. Min-Max normalization and z-score normalization techniques are most commonly
used normalization techniques. Data reduction is usually done when acquired data is too big to
handle or work with. Feature extraction is a key step in deep learning applications such as pattern recognition and image processing. The features derived from raw data are intended to be more informative and non-redundant, facilitating the subsequent learning and generalization steps and, in some cases, leading to better human interpretation. Sometimes feature extraction is also considered as a data reduction mechanism, as discussed in pre-processing.
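For illustration, the two normalization techniques mentioned above can be written as the following small sketch, applied column-wise to a sensor/actuator data matrix; the target range is an assumption.

# Small sketch of the two normalization schemes mentioned above (min-max and
# z-score), applied column-wise to a data matrix X of shape (samples, features).
import numpy as np

def min_max_normalize(X, lo=0.0, hi=1.0):
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return lo + (X - x_min) * (hi - lo) / (x_max - x_min)

def z_score_normalize(X):
    return (X - X.mean(axis=0)) / X.std(axis=0)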
Statistical features are those which are defined and calculated through statistical analysis.
Statistical analysis theory is the frequently used method of data feature extraction in the time
domain. Mathematical methods are applied on the raw or pre-processed data to obtain the
meaningful information. Mathematical features are also the most commonly extracted features on
both time series and time independent transformations. Several mathematical functions from
transform theory can be used to translate the signal into a different domain. List of mathematical
features include derivatives, probability and stochastic processes, estimation theory, numerical methods, etc. Deep learning is a machine learning technique, inspired by the human brain, that combines both supervised and unsupervised techniques. Common deep learning architectures include Stacked Auto-Encoders (SAE), Deep Belief Networks (DBN) and Convolutional Neural Networks (CNN); CNNs are used in image processing applications and require huge datasets and long training times.
In this paper, two different deep learning techniques, SAE and DBN, were used for complex feature extraction, and the classification was later done with SVM and SMR. Different techniques have different detection accuracies for different injection attacks. In order to achieve the best detection accuracy, a hierarchical architecture with a ranking approach can be used. The detection accuracies depend on the type of dataset, the extracted features and the network architecture along with its configuration parameters. Stacked Auto-Encoders (SAE) are based on the concept of stacking multiple auto-encoders together. Deep Belief Networks (DBN) are formed by stacking Restricted Boltzmann Machines (RBM). An RBM is a generative stochastic network which can learn a probability distribution over its set of inputs. Expertise in these parameters is necessary to identify which configuration suits the individual application well.
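The sketch below illustrates the SAE-plus-classifier idea in broad strokes: an auto-encoder is trained to reconstruct the normalized measurements, its bottleneck output is used as a compressed feature vector, and a conventional SVM labels the samples. It collapses the greedy layer-wise stacking into a single auto-encoder with two encoding layers, and the layer sizes and training settings are arbitrary illustrative choices rather than the configuration used in the reviewed paper.

# Rough sketch of auto-encoder feature extraction followed by SVM classification.
import tensorflow as tf
from sklearn.svm import SVC

def sae_svm(X_train, y_train, X_test, code_dim=8):
    n_features = X_train.shape[1]
    inputs = tf.keras.Input(shape=(n_features,))
    encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)
    code = tf.keras.layers.Dense(code_dim, activation="relu")(encoded)
    decoded = tf.keras.layers.Dense(32, activation="relu")(code)
    outputs = tf.keras.layers.Dense(n_features, activation="linear")(decoded)

    # Train the auto-encoder to reproduce its own (normalized) inputs.
    autoencoder = tf.keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(X_train, X_train, epochs=50, batch_size=32, verbose=0)

    # Use the bottleneck layer as the feature extractor for a standard classifier.
    encoder = tf.keras.Model(inputs, code)
    clf = SVC(kernel="rbf").fit(encoder.predict(X_train), y_train)
    return clf.predict(encoder.predict(X_test))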
3.3 Online Monitoring of a Cyber Physical System against Control Aware Cyber Attacks[3]
There have been an increasing number of malware attacks on industrial control systems, such as Stuxnet in 2010, the Maroochy Shire sewage attack in 2000, the water filtering plant in Pennsylvania in 2006 and the Davis-Besse power plant in Oak Harbor, Ohio in 2003. Increasing vulnerabilities in cyber physical systems have made information security an immediate concern and created the need to detect and control the spread of such malware. Information security methods like authentication and integrity are inadequate for securing these control systems. Attacks on a control system can result in tremendous costs to an organization in rebuild and recovery activities.
A Cyber-Physical System (CPS) is an integration of computation with physical processes. Control systems automate tasks once performed by humans by sensing environmental conditions, executing programmed logic and then actuating physical equipment to perform a desired task. Control systems are made up of sensors along with computational and communication capabilities.
Data received by the actuator causes the necessary action(s) on the physical system. Sensors measure the physical system states and transmit them to the distributed controllers. A control action is a reactive process, and failure of any non-redundant sensor or actuator can cause irreparable damage to the system under control.
Statistical techniques like the Sequential Probability Ratio Test (SPRT) are useful in malware detection. In a cyber-physical system like SCADA, data is collected in the form of bug reports and system status logs. These data can provide vital historic information for understanding system behaviour and its trends. However, these files are huge and difficult to inspect manually.
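For readers unfamiliar with SPRT, the following is a generic sketch of Wald's sequential probability ratio test applied to a stream of binary indicators (for example, "suspicious entry present in a status log"); the probabilities and error rates are hypothetical, and the reviewed work's exact formulation is not reproduced here.

# Illustrative sketch of Wald's SPRT deciding between a nominal probability p0 of
# a suspicious event and a compromised probability p1, with error rates alpha/beta.
import math

def sprt(observations, p0=0.01, p1=0.1, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)   # accept "anomalous" above this bound
    lower = math.log(beta / (1 - alpha))   # accept "normal" below this bound
    llr = 0.0
    for i, x in enumerate(observations):
        # Accumulate the log-likelihood ratio for each binary observation.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "anomalous", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(observations) - 1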
The paper focuses on using computational geometric techniques for understanding the controller
profile to detect anomalous behaviour.
3.4 Detection of Cyber-Attacks with Zone Dividing and PCA[4]
In 2010, an epoch-making malware, Stuxnet, was discovered. It was a virus targeting centrifuge controllers in an Iranian nuclear fuel facility. After its discovery, subspecies have been developed. Although Stuxnet had a specific target, indiscriminate attacks can be committed by such malware. When control systems are intruded, not only their dysfunction but also serious accidents such as explosions or spills of dangerous substances might occur. Industrial control systems (ICS) require highly reliable security and safety services with urgent priority.
In information networks, security measures are taken frequently. Databases of anti-virus software are updated every day, and various security patches are sent from product developers almost every day. However, in control networks anti-virus software is often not utilized and security patches are not applied, because they increase computation load and change link libraries, which might make controllers stop or run in ill conditions.
Therefore, the protection of control networks is much weaker than that of information networks.
Even in information networks, successful cyber-attacks are reported frequently. The relationship between cyber-attacks and security measures is a cat-and-mouse game. In order to assure the safety of ICS against cyber-attacks, the relationships between safety and cyber-security must be considered, and the characteristics of the plant must be taken advantage of to develop security measures. PCA can be applied to any kind of plant if normal operation data are available, and many abnormality detection systems can be constructed for real industrial plants. It is still difficult to identify a cyber-attack as the cause of an abnormal situation. Such detection is very important, especially because concealment is included in cyber-attack procedures. The combination of zone division and automatic abnormality detection using PCA can be an effective security measure.
In this paper, a design method for the control network configuration to improve security and safety is proposed. The network is divided into plural zones. If the security of each zone is set independently, the possibility of intrusion into the whole area becomes low. How to divide the network and how to detect abnormality are discussed. Examples of the application of zone division and PCA were illustrated, and it was shown that the system could detect the relationship changes caused by concealment.
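A minimal sketch of PCA-based abnormality detection of the kind reviewed above is given below: a PCA model is fitted on normal-operation data from one zone, and new samples are flagged when their reconstruction error (the Q or SPE statistic) exceeds a limit derived from the normal data. The number of components and the percentile limit are illustrative assumptions.

# Minimal sketch of PCA-based abnormality detection for one zone's process data.
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_detector(X_normal, n_components=3, quantile=99.0):
    pca = PCA(n_components=n_components).fit(X_normal)
    recon = pca.inverse_transform(pca.transform(X_normal))
    spe = np.sum((X_normal - recon) ** 2, axis=1)    # squared prediction error
    return pca, np.percentile(spe, quantile)         # empirical control limit

def is_abnormal(pca, limit, X_new):
    recon = pca.inverse_transform(pca.transform(X_new))
    return np.sum((X_new - recon) ** 2, axis=1) > limit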
4. Conclusion
The growing number of security incidents in ICS facilities is mainly due to a combination of
technological and organizational weaknesses. In the past, ICS facilities were separated from public networks and used proprietary software architectures and communication protocols. Built on the
“security by obscurity” paradigm, the systems were less vulnerable to attacks leveraging ICT.
Although keeping a segment of communication proprietary, ICS vendors nowadays increasingly
use IP-based communication protocols and commercial off-the-shelf software. Also, it is standard
to deploy remote connection mechanisms to ease the management during off-duty hours, and
achieve nearly-unmanned operation. The stakeholders seldom enforce strong security policies.
User credentials are often shared among users to ease day-to-day operations, seldom updated (and not always revoked), resulting in a lack of accountability. For these reasons, ICS facilities have
become increasingly vulnerable to internal and external cyber-attacks. Although companies
reluctantly disclose incidents, there are several published cases where safety and security of ICS
were seriously endangered.
Many machining monitoring systems based on artificial intelligence (AI) process models may be used for optimising, predicting or controlling machining processes. AI has advantages when compared to traditional mathematical modelling and statistical analysis.
5. References
[1] Fan Zhang, Hansaka Angel Dias Edirisinghe Kodituwakku, J. Wesley Hines, Jamie Coble, 2019, Multi-Layer Data-Driven Cyber-Attack Detection System for Industrial Control Systems Based on Network, System, and Process Data, IEEE Transactions on Industrial Informatics.
[2] Sasanka Potluri, Christian Diedrich, 2019, Deep Learning based Efficient Anomaly Detection for Securing Process Control Systems against Injection Attacks, 15th International Conference on Automation Science and Engineering.
The influence of the velocity field on the stretch factor and on the
characteristic length of wrinkling of turbulent premixed flames
Abstract
We investigate the effect of the turbulent velocity field on the reaction rate of turbulent premixed
flames within the laminar flamelet regime. In the Bray-Moss-Libby (BML) combustion modeling,
two parameters account for the effects of turbulence on the flame, namely the stretch factor and
the characteristic length of wrinkling. However, difficulties in modeling flame stretch mean that the stretch factor is usually assumed constant and equal to unity, which may lead to an inaccurate representation of the flame behavior. Also, the length scale of wrinkling is often calculated as a function of the fluctuating velocity via empirical correlations that depend on adjustable constants.
As a first investigation line, we propose an expression for calculating the stretch factor
dynamically, based on the divergence of velocity. In a second line we propose a hybrid reaction
rate model that incorporates the well-known fractal approach into the BML model, by means of
the characteristic length of wrinkling. The initial quasi-laminar regime is considered in the early
stages of flame propagation, and the transition from laminar to turbulent is based on the turbulent
Reynolds number of the flow. The modeling is carried out with an in-house 3D compressible Navier-Stokes solver for premixed methane-air and propane-air flames in three partially obstructed geometries.
Keywords: Stretch factor, flame wrinkling, turbulent premixed flames, velocity field, fractal
approach, BML model
1 Introduction
2 Methodology
The BML formulation for the mean reaction burning rate in the flamelet regime is considered [2]

\bar{w}_c = \rho_R \, u_L^0 \, I_0 \, \frac{g \, \bar{c}\,(1-\bar{c})}{|\hat{\sigma}_y| \, \hat{L}_y}

where the first two terms are the reactants density and the unstretched laminar flamelet speed, respectively. The stretch factor is represented by I_0, \bar{c} is the Reynolds-averaged reaction progress variable, g and |\hat{\sigma}_y| are model constants, and \hat{L}_y is the characteristic length of wrinkling, given as

\hat{L}_y = c_L \, l_L \, f\!\left( \frac{u'}{u_L^0} \right)

where c_L is a model constant, l_L is the laminar flamelet thickness and the function f relates the velocity fluctuations u' and the unstretched laminar flamelet speed u_L^0 via the empirical correlation [7]

f\!\left( \frac{u'}{u_L^0} \right) = \left[ \left( 1 + \frac{c_{w1}}{u'/u_L^0} \right) \left( 1 - \exp\!\left[ -\frac{1}{1 + c_{w2}\, u'/u_L^0} \right] \right) \right]^{-1}
where V is the volume of the hexahedral uniform computational cell and c_LAM is a constant. The blending function from laminar burning to turbulent burning is given by [1]
We propose an expression for the stretch factor that is no longer constant, but instead it is
calculated based on the influence of the velocity field on the flame surface. We follow the
reasoning line that the divergence of velocity contributes to flame stretching the same way the
divergence affects the flow, by representing points of both outward and inward fluxes on the
surface of the flame.
We call it a dynamic stretch factor, and it is given by a simple algebraic expression that normalizes the local divergence of velocity by its maximum value on the flame surface at the previous time step

I_0 = \frac{|\nabla \cdot \vec{v}|}{\max |\nabla \cdot \vec{v}|}
Therefore, in this first investigation pathway, the mean reaction rate is calculated by the following expression

\bar{w}_c = \rho_R \, u_L^0 \left( \frac{|\nabla \cdot \vec{v}|}{\max |\nabla \cdot \vec{v}|} \right) \frac{g \, \bar{c}\,(1-\bar{c})}{|\hat{\sigma}_y| \, c_L \, l_L \, f\!\left( u'/u_L^0 \right)}

where the function f considers an empirical correlation [7]. With the exception of the c_L constant, the values assigned for all model constants are presented in Table 1.
Table 1: Model constants considered in the dynamic stretch factor proposed formulation.
In this formulation, the constants presented in Table 1 are not changed and the assigned
values are typical values found in literature [1], [7], [8].
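As a sketch of how the dynamic stretch factor could be evaluated on a uniform grid (under our assumptions, not the in-house solver code), the divergence of the velocity field is computed with finite differences and normalized by its maximum magnitude from the previous time step:

# Sketch of the dynamic stretch factor I_0 on a uniform Cartesian grid; u, v, w
# are 3-D arrays of velocity components and max_div_prev is max|div(v)| on the
# flame surface at the previous time step. Variable names are ours.
import numpy as np

def dynamic_stretch_factor(u, v, w, dx, dy, dz, max_div_prev):
    div = (np.gradient(u, dx, axis=0)
           + np.gradient(v, dy, axis=1)
           + np.gradient(w, dz, axis=2))
    io = np.abs(div) / max_div_prev        # I_0 field per the expression above
    return io, np.abs(div).max()           # new maximum, used at the next step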
Following the works of [9] and [10], we calculate the length scale of wrinkling by replacing the function f with the classical fractal concept, which describes the flame surface as a fractal [11]

f\!\left( \frac{u'}{u_L^0} \right) = \left( \frac{L_{Turb}}{l_G} \right)^{D_f - 2}
The flame surface is wrinkled by rotating turbulent eddies of different length scales,
ranging from the inner cut-off to the outer cut-off, powered by the fractal dimension 𝐷𝑓. This
study assumes the outer cut-off to be equal to the integral length scale of wrinkling 𝐿𝑡𝑢𝑟𝑏 and the
inner cut-off to be equal to the Gibson length scale, which is considered to be the smallest scale
having a turnover velocity sufficient to wrinkle the flame front [12]

L_{Turb} = \frac{k^{3/2}}{\epsilon} \qquad l_G = \frac{(u_L^0)^3}{\epsilon}

where k is the turbulent kinetic energy and \epsilon is the turbulent kinetic energy dissipation rate.
The proposed expression for the mean reaction rate in this approach is given by

\bar{w}_c = \rho_R \, u_L^0 \, \frac{g \, \tilde{c}\,(1-\tilde{c})}{|\tilde{\sigma}_y| \, c_L \, l_L \left( L_{Turb}/l_G \right)^{D_f - 2}}

where the stretch factor is omitted because it is considered constant and equal to unity.
The constants applied in this approach can be seen in Table 2. The constant c_LAM was tuned to the value of 0.09, and the turbulence threshold Reynolds number is taken as the transition Reynolds number for internal turbulent flows.
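The fractal wrinkling factor and the resulting hybrid reaction rate can be written as the following short sketch, assuming k and epsilon are available from the turbulence model and D_f is the fractal dimension; variable names are ours.

# Sketch of the fractal wrinkling factor and hybrid BML reaction rate above.
def fractal_wrinkling(k, eps, u_l0, d_f):
    l_turb = k ** 1.5 / eps        # outer cut-off: integral length scale
    l_gibson = u_l0 ** 3 / eps     # inner cut-off: Gibson length scale
    return (l_turb / l_gibson) ** (d_f - 2.0)

def hybrid_reaction_rate(rho_r, u_l0, g, c_tilde, sigma_y, c_l, l_l, k, eps, d_f):
    # Mean reaction rate of the hybrid closure; the stretch factor is unity here.
    return (rho_r * u_l0 * g * c_tilde * (1.0 - c_tilde)
            / (sigma_y * c_l * l_l * fractal_wrinkling(k, eps, u_l0, d_f)))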
In the present study, we investigated two approaches for the mean reaction rate in three
different geometries that consist of combustion chambers partially obstructed by solids (Figure 1).
Chambers (a) and (b) initially contain a stoichiometric mixture of propane and air, whereas chamber (c) is filled with methane and air at stoichiometric proportions prior to ignition.
The internal dimensions of chambers (a) and (b) are 50 x 50 x 250 mm, whereas chamber (c) has internal dimensions of 150 x 150 x 500 mm. The cubic obstacles in chambers (a) and (b) have a cross-section of 12 x 12 mm and are positioned 100 mm away from the ignition point (in red). Chamber (b) contains two baffle stations located at 50 and 80 mm from the ignition point. Each baffle measures 3 x 4 x 50 mm, and the baffles are positioned 5 mm apart. Chamber (c) contains three rectangular obstacles (75 x 10 x 150 mm) positioned 100 mm apart from each other and from the ignition point [13], [14], [15].
Assigned values for c_L can be checked in Table 3.
Table 3: Assigned values of the c_L constant.
                          BML     Dynamic I_o     BML hybrid
Combustion chamber (a)    2.0     2.0             3.5
Combustion chamber (b)    1.0     1.0             9.0
Combustion chamber (c)    2.5     2.5             10.0
3 Results
Simulation results applying the BML formulation with a constant stretch factor are
presented in Figure 2. These results also consider the empirical correlation for calculating the
length scale of wrinkling. They were used as a benchmark for assessing any other results obtained
in the BML modification that are presented further.
Figure 2: Flame position at different time steps with the classical BML formulation, applying
constant Io and an empirical correlation [1] for calculating the length scale of wrinkling. Flame
position (a) in chamber Fig.1a; (b) in chamber Fig.1b; and (c) in chamber Fig.1c.
Flame position at different time steps into the chambers applying the two proposed
approaches for BML modification are presented on Figure 3. Simulation results considering the
proposed BML hybrid approach can be observed in Figures 3a, 3c, and 3e, whereas flame positions
obtained by the proposed dynamic stretch factor are shown in Figures 3b, 3d, and 3f. It can be
noted that in all three geometries, the BML hybrid simulations show a well-defined flame contour and a clear separation between reactants (progress variable equal to 0) and products (progress variable equal to 1). On the other hand, in the dynamic stretch factor simulation results, conversion into products is not fully complete, especially in areas where there is more resistance to the flow (Figures 3d and 3f). This behavior can be related to a higher divergence field that contributes to higher rates of flame stretching.
Figure 3: Flame position at different time steps in the partially obstructed combustion chambers.
(a), (c), (e) BML hybrid simulations; (b), (d), (f) dynamic stretch factor (Io) simulations.
Comparison of flame position in chambers (a), (b) and (c) throughout time can be observed
in Figures 4a, 4c and 4e, respectively. Flame speed in chamber (a) is plotted against time in Figure
4b, whereas the flame speed in chambers (b) and (c) is plotted against the axial distance from the
ignition point, and are shown in Figures 4d and 4f, respectively. The results are compared either
with experimental data or LES simulation from literature [13], [14], [15].
In the graphs, curves identified by "BML" refer to the model considering a constant stretch factor, with the function f calculated by the aforementioned empirical correlation [7]. It can be observed that the insertion of the dynamic stretch factor for calculating the mean reaction rate, without changing any other model parameter, acted to decrease flame propagation via flame stretching, which contributed to a slight improvement in agreement with literature data. However, flame position and speed profiles were barely changed, and some discrepancy from literature benchmarks still remains.
Such discrepancy is diminished with the BML hybrid approach. A significant improvement can be noted in both flame position and flame speed profiles in Figures 4a, 4b (chamber a), and 4e, 4f (chamber c). Also, it is important to bear in mind that this approach considered the laminar-to-turbulent transition Reynolds number as that of internal turbulent flows.
None of the BML formulations was able to predict the final flame acceleration in chamber (b), as can be seen in Figures 4c and 4d. These results, when compared to the images in Figures 3c and 3d, may be related to a pronounced re-laminarization effect (reduction in speed and turbulence levels) decreasing the flame speed [13].
Figure 4: Comparison of flame position ((a), (c), (e)) and flame speed ((b), (d), (f)) in chambers (a), (b) and (c) with experimental data or LES results from the literature [13], [14], [15].
4 Conclusions
We have introduced two alternative approaches for modification of the BML reaction rate model. The first one considers a dynamic stretch factor that is calculated taking into account the effect of the divergence of velocity on flame stretching. The second formulation calculates the characteristic length of wrinkling using the fractal concept, in which the surface of the flame is wrinkled by the length scales of turbulence, ranging from the Gibson length scale to the integral length scale. Within this approach, the laminar-to-turbulent transition Reynolds number is taken as the transition Re for internal turbulent flows. This formulation showed a significant improvement in predicting flame position and flame speed in two out of three geometries tested.
Future work will focus on combining the two proposed approaches for calculating the BML mean
reaction rate as well as running simulations for large scale geometries.
5 References
1. Vianna, S. S., & Cant, R. S. (2014). Initial phase modelling in numerical explosion applied to process safety. Process Safety and Environmental Protection, 92(6), 590-597.
2. Bray, K. N. C., Libby, P. A., & Moss, J. B. (1984). Flamelet crossing frequencies and mean
reaction rates in premixed turbulent combustion. Combustion Science and Technology,
41(3-4), 143-172.
3. Bray, K. N. C. (1990). Studies of the turbulent burning velocity. Proceedings of the Royal
Society of London. Series A: Mathematical and Physical Sciences, 431(1882), 315-335.
4. Chakraborty, N., Alwazzan, D., Klein, M., & Cant, R. S. (2019). On the validity of
Damköhler's first hypothesis in turbulent Bunsen burner flames: A computational analysis.
Proceedings of the Combustion Institute, 37(2), 2231-2239.
5. Ferreira, T. D., Santos, R. G., & Vianna, S. S. (2019). A coupled finite volume method and
Gilbert–Johnson–Keerthi distance algorithm for computational fluid dynamics modelling.
Computer Methods in Applied Mechanics and Engineering, 352, 417-436.
6. Ferreira, T. D., & Vianna, S. S. (2019). The Gilbert Johnson Keerthi distance algorithm
coupled with computational fluid dynamics applied to gas explosion simulation. Process
Safety and Environmental Protection, 130, 209-220.
7. Abu-Orf, G. M., & Cant, R. S. (2000). A turbulent reaction rate model for premixed
turbulent combustion in spark-ignition engines. Combustion and flame, 122(3), 233-252.
8. Chang, N. W., Shy, S. S., Yang, S. I., & Yang, T. S. (2001). Spatially resolved flamelet
statistics for reaction rate modeling using premixed methane-air flames in a near-
homogeneous turbulence. Combustion and flame, 127(1-2), 1880-1894.
9. Lindstedt, R. P., & Vaos, E. M. (1999). Modeling of premixed turbulent flames with second
moment methods. Combustion and flame, 116(4), 461-485.
10. Aluri, N. K., Muppala, S. R., & Dinkelacker, F. (2006). Substantiating a fractal-based
algebraic reaction closure of premixed turbulent combustion for high pressure and the
Lewis number effects. Combustion and Flame, 145(4), 663-674.
11. Gouldin, F. C. (1987). An application of fractals to modeling premixed turbulent flames.
Combustion and flame, 68(3), 249-266.
12. Peters, N. Turbulent Combustion. Cambridge University Press, 2000.
13. Ibrahim, S. S., Gubba, S. R., Masri, A. R., & Malalasekera, W. (2009). Calculations of
explosion deflagrating flames using a dynamic flame surface density model. Journal of
Loss Prevention in the Process Industries, 22(3), 258-264.
14. Gubba, S. R., Ibrahim, S. S., Malalasekera, W., & Masri, A. R. (2011). Measurements and
LES calculations of turbulent premixed flame propagation past repeated obstacles.
Combustion and Flame, 158(12), 2465-2481.
15. Patel, S. N. D. H., Jarvis, S., Ibrahim, S. S., & Hargrave, G. K. (2002). An experimental
and numerical investigation of premixed flame deflagration in a semiconfined explosion
chamber. Proceedings of the Combustion Institute, 29(2), 1849-1854.
23rd Annual Process Safety International Symposium
Abstract
Every LNG project that is proposed to be built, expanded or significantly modified needs to meet
the siting requirements of the applicable regulations. While the requirements and methodology can
vary among different countries – for example, U.S. regulations follow a prescriptive approach,
while the European standard requires a risk assessment to be performed – a common siting
requirement is the safety of the public outside the project’s fenceline in the event of accidents
within the LNG plant.
In order to quantify the hazard footprints for potential accident scenarios (such as flammable vapor
dispersion, pool and jet fires, vapor cloud explosions, etc.), computational tools must be used.
Given the multiplicity of tools available, ranging from empirical models to 3D computational fluid
dynamics packages, the agency reviewing the project application may not have the methodologies
or protocols necessary to determine the suitability of a given model to a certain scenario, its
accuracy and any other setup requirements. For this reason, a Model Evaluation Protocol (MEP)
was developed in 2007 to allow computational tools for vapor dispersion modeling to be reviewed.
This protocol was then successfully applied to two software packages (Phast and FLACS), which
were found acceptable for LNG vapor dispersion modeling under US federal regulations.
The 2007 MEP, however, is very limited in scope: in fact, it only addresses vapor dispersion
modeling and, more specifically, only from atmospheric releases (e.g., vapors from a liquid spill
onto the ground). This means that there are currently no established protocols to evaluate models
to simulate hazards such as the flammable or toxic dispersion of a vapor cloud from a pressurized
release (e.g., a pipe breach), the overpressures generated by a vapor cloud explosion, etc.
Blue Engineering and Consulting and the Gas Technology Institute are collaborating on a DOT-
PHMSA sponsored research project to develop a new set of Model Evaluation Protocols that will allow the review of modeling tools for each of the above-referenced hazards. The new MEPs will
greatly increase the confidence of authorities as well as the public, by defining which models may
be used and under which limitations, and what validation factors need to be applied depending on
the type of hazards being evaluated. This paper describes the framework of the new MEPs.
1 Background
The safe siting of LNG facilities requires the quantification of the consequences to people and
property from a loss of containment and release of hazardous materials (e.g., flammable and/or
toxic materials). In order to quantify the hazard footprints for potential accidental release scenarios
(such as flammable vapor dispersion, pool and jet fires, vapor cloud explosions, etc.),
computational tools must be used.
Current U.S. federal regulations by the U.S. Department of Transportation, Pipeline and Hazardous
Materials Safety Administration (PHMSA) contained in 49 CFR 193 include a list of models
required to perform consequence modeling for LNG facility siting: these models include
DEGADIS and FEM3A for flammable vapor dispersion distances and LNGFIRE3 for thermal
radiation distances from each LNG container and LNG transfer system. Unfortunately, these
models have significant limitations that restrict their applicability to only a fraction of flammable
dispersion and pool fire scenarios typically involved in a siting study. Furthermore, these models
are unable to model other types of hazards (e.g., overpressures) and hazard scenarios (e.g., flashing
and jetting releases, jet fires) currently required by PHMSA’s guidance
(https://www.phmsa.dot.gov/pipeline/liquified-natural-gas/lng-plant-requirements-frequently-
asked-questions/h1). Therefore, “new” models (that is, models not explicitly listed in 49 CFR
193) need to be used to perform these calculations.
Given the multiplicity of tools available for hazard modeling, ranging from empirical models to
3D computational fluid dynamics packages, concerns may arise regarding the suitability of a given
model to a certain scenario, its accuracy and any other setup requirements; furthermore, the
regulatory body reviewing the work may not have the expertise necessary to make such
determination. These concerns led to the 2007 development of a model evaluation protocol (MEP)
for LNG vapor dispersion [1]; two different dispersion models (Phast v.6.6-6.7 and FLACS v.9.1)
were subsequently reviewed according to the 2007 MEP and approved by PHMSA in 2011 [2],
[3] for use in vapor dispersion modeling.
However, the scope of the 2007 MEP is quite limited relative to the current regulatory requirements
for LNG facility siting: in fact, it only applies to vapor dispersion hazards; additionally, the only
data sets included in the validation database represent LNG spills onto water or low-momentum,
ground-level gas releases. Therefore, in 2019 PHMSA sponsored a research project to develop a
set of model evaluation protocols that would allow the evaluation of models for the calculation of
the different types of hazards and hazard scenarios associated with the operation of LNG facilities.
This research project is conducted by Blue Engineering and Consulting Company (BLUE), in
collaboration with the Gas Technology Institute (GTI).
2 Scope
The flow chart shown in Figure 2-1 describes the potential outcomes following a loss of
containment from a pipe or vessel, and can be used to clarify the scope of the current project:
The flow chart shown above does not consider every possible hazard scenario – for example, the
same release may create both flammable and toxic hazards. However, it highlights the main hazard
categories that need to be included in an LNG facility siting analysis. The items shown in green
identify steps of the analysis which require modeling. The scope of this project is to develop a set
of MEPs so that models used to perform each of these types of calculations may be evaluated, with
two exceptions:
A guidance document on the evaluation of models for LNG fires was released by Sandia
National Laboratories [4] and is considered by PHMSA as an MEP for pool and jet fires;
Cryogenic and asphyxiation hazards are typically associated with flammable and/or toxic
releases and modeling of these hazards is usually performed by the same models used to
perform flammable or toxic dispersion calculations. Therefore, MEPs for cryogenic or
asphyxiation hazard modeling are not required.
Note that the initiating event (loss of containment) does not represent a hazard per se; however,
defining the source term for any scenario requires modeling appropriate to the nature of the release;
therefore, a model evaluation protocol is necessary for that step as well. In summary, the scope of
the current project is to develop MEPs to evaluate models for:
1. Source term.
2. Flammable dispersion (including non-LNG materials and flashing and jetting releases).
3. Toxic dispersion.
4. Overpressures from vapor cloud explosions.
5. Fireballs and overpressures from BLEVEs.
The broad scope of this project allowed the development of a general model evaluation
methodology, based on an extensive review of the published literature including previous model
validation and evaluation efforts. The common methodology allows for an easier development of
the individual MEPs and facilitates both the application and evaluation processes, particularly for
models capable of calculating different hazard types. Additionally, a consistent methodology is
expected to make the entire model evaluation process easier for stakeholders to understand. Of
course, the methodology recognizes that certain tasks are inherently hazard-specific and can be
adjusted as needed to account for these differences.
For each hazard type described above, the overall model evaluation protocol consists of the
following:
1. Statement of scope
2. Evaluator qualifications
3. Model description questionnaire
4. Scientific assessment
5. User-oriented assessment
6. Model Verification
7. Model Validation, which includes:
a. Model Validation Database (MVD), with:
i. Experimental data necessary for modeling
ii. Key variables and physical comparison parameters
iii. Statistical performance measures
iv. Acceptability criteria
b. Sensitivity cases
c. Uncertainty quantification
d. Qualitative performance assessment
e. Quantitative performance assessment
When a model is submitted for review and the model evaluation tasks are completed, a Model
Evaluation Report (MER) is prepared, which includes:
A brief discussion of each subtask is provided below, as guidance for the development of hazard
specific documents, which will be the purpose of the remainder of this project.
3.1 Statement of scope
The MEP needs to clearly state the scope of the evaluation for which it is providing guidance. This
includes:
The hazard being evaluated: It is important for the specific hazard to be described,
particularly because many modern models can calculate several different types of hazards;
each hazard should be evaluated separately, and the model proponent may choose to seek
approval only for some modeling capabilities.
The types of models that may be evaluated: The MEPs developed during the current
project are intended to be applicable to any type of model.
The objectives of the evaluation: In general, a model evaluation can serve multiple
purposes, including regulatory approval, performance ranking, model improvement, etc.
The main objective of these MEPs is to provide (or deny) regulatory approval and to
establish conditions (if any) for such approval.
3.2 Evaluator qualifications
Most of the existing protocols specify that the reviewer should be a third party, in order for the
review to be objective and independent. The downside of this requirement is that a truly
independent reviewer is unlikely to have the same knowledge and expertise as the model
developer, and this may affect the model evaluation. Nonetheless, it is believed that allowing the
model developer to perform the entire evaluation would inevitably raise objectivity concerns.
Therefore, the following requirements apply to the individual (or group) performing a model
evaluation under this protocol:
The evaluator may not be associated with the model developer or with any of the model
distributors. Prior association is acceptable, provided that there is no current or foreseeable
collaboration that may raise concerns of objectivity;
The evaluator must have recognized expertise with the physics of the hazard being
considered;
The evaluator must have recognized expertise with consequence modeling, and specifically
with the same type of model (e.g., box, Gaussian, semi-empirical, CFD, etc.) as the one
being evaluated. Direct experience with the model being evaluated is not required,
however, would be beneficial.
A summary of the evaluator’s credentials will be included in the model evaluation report for
transparency.
3.3 Model description questionnaire
The first step in the evaluation of a specific model is to become familiar with the model itself. Therefore, the evaluation will begin with a Model Description, which should include information such as:
In order to facilitate the submittal and to ensure consistency of information across models, a
questionnaire will be included with each MEP; the questionnaire follows the general structure of
the one provided with the existing MEP for LNG dispersion [5], modified as needed based on the
specific hazards being evaluated. The model description should be prepared by the model
developer or a third-party with deep knowledge of the model.
3.4 Scientific assessment
The purpose of the scientific assessment is to ensure that if the model is able to correctly predict a
given scenario, it does so for the “right” reasons. The scientific assessment therefore includes
several tasks, such as [6], [7]:
Review and assess the scientific basis of a model (that is, the governing equations being
used to replicate the physical phenomena)
Describe the model’s capabilities and limitations, and any special features, relative to the
physical phenomena for which approval is being sought
Identify potential areas for improvement.
The information necessary to perform the scientific assessment should be obtained from the
questionnaire and from other relevant documentation, including technical references provided by
the model developer as well as peer-reviewed literature. The model evaluator will need to have in-
depth understanding of both physical phenomena and modeling principles involved, in order to
provide an independent assessment.
3.5 User-oriented assessment
The purpose of the user-oriented assessment is to evaluate the usability of the model. Therefore, it addresses the following issues:
The information necessary to perform the user-oriented assessment should be obtained from the questionnaire and other relevant documentation. Direct experience with the model by the evaluator could be valuable, as it would allow 'user' feedback to be included.
3.6 Verification
Verification is the process of ensuring that the implementation of a model is consistent with its
theoretical basis. The purpose of verification is to demonstrate that the coding of equations,
algorithms and databases in the model is correct. Potential procedures for code verification
include:
Verification can be complicated by confidentiality issues for proprietary models. As such, most
of the existing model evaluation protocols assign the responsibility for model verification to model
developers; the same approach is followed in this project: the model evaluator should rely upon
evidence of verification provided by the model developer and review it to perform the assessment.
3.7 Validation
Validation is the process of comparing model predictions to experimental data for scenarios that
test the physics that the model is intended to predict. As discussed in existing protocols [8],
“validation” in the true sense of the word cannot be accomplished by simply comparing a model
against a finite number of scenarios; what can be accomplished is an “evaluation” which
establishes enough confidence in a model to expect that it will perform in an acceptable manner
when applied to similar scenarios. However, for consistency with the majority of published
literature, the comparison of a model with experimental data in these MEPs will be called
“validation” and will be understood to implicitly refer to the data sets included in the validation
database, and not to a theoretical, unachievable absolute validation.
The purpose of a model validation database is to provide a set of scenarios to be simulated with a
computational model, in such a manner that qualitative and/or quantitative comparisons may be
made between model predictions and actual observations. In general, the scenarios in the database
will consist of experimental data sets; however, real-world scenarios (e.g., accidents) can also be
included provided that there is sufficient information to set up a simulation and evaluate the
modeling results.
The following criteria for the selection of relevant data were provided by Karaca [7]:
Test cases should represent scenarios as close to realistic as possible. The definition of
experiment (strength of the source term, flammable cloud volume, dimensions of the test
area, etc.) relative to expected “real life” scenarios should be considered in this assessment.
Test description should be sufficiently detailed. The goal is for modelers to be able to set
up initial and boundary conditions in their simulations as close as possible to the
experiment.
Sufficient meteorological data should be available and obtained from sensors in/near the
area of interest. Location and height of sensors should be provided. Time resolution of
wind and temperature data should be enough to estimate turbulence and stability
parameters. Humidity and precipitation should be provided.
Measurements should be adequate to reliably describe the hazard. Data needs to be on a
sufficiently fine grid and sufficiently high time resolution. Data should be provided at
different averaging times to allow comparison with different models.
Measurements should be of a sufficient quantity to be statistically representative.
Data processing applied to raw data must be documented.
Uncertainty of all measured and derived quantities must be provided, together with a
description of the method used to define such uncertainty.
Recent work by Skjold et al. [9] introduced an interesting approach for selecting data sets to be
included in the MVD: each potential experimental data set is reviewed and scored according to
several parameters (e.g., relevance to industry practice; experimental scale; repeatability; quality
of measurements; availability of experimental data; etc.). An average score is then calculated for
each data set. Only sets with an average score above a predetermined threshold are included in
the validation database. This approach therefore provides a clear explanation for the selection of
certain data sets over others, instead of “cherry picking” scenarios that may favor one model type
over another; given the importance of transparency in a regulatory environment, this approach is
followed in the development of model validation databases for the current project.
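The selection logic can be illustrated with a small sketch: each candidate data set carries a score per criterion, and only sets whose average score exceeds a predetermined threshold are retained. The criterion names and the threshold value are placeholders, not those of the actual MVD.

# Toy sketch of the scoring-based data set selection described above.
def select_datasets(candidates, threshold=3.0):
    """candidates: dict mapping data-set name -> dict of criterion scores."""
    selected = []
    for name, scores in candidates.items():
        average = sum(scores.values()) / len(scores)
        if average >= threshold:               # keep only well-scored data sets
            selected.append((name, average))
    return selected

# Hypothetical usage with placeholder criteria and scores:
# select_datasets({"Test A": {"relevance": 4, "scale": 3, "quality": 5},
#                  "Test B": {"relevance": 2, "scale": 2, "quality": 3}})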
The key variables for model validation depend on the hazard being considered. For example, for
flammable and toxic dispersion, gas concentration in air is certainly an important variable,
however other variables such as temperature may also be used; for vapor cloud explosions,
pressure is likely the variable of most interest, however, other variables such as temperature or gas
velocity may provide useful information; and so on. Ultimately, the data available from the
experimental data sets determines which measured or calculated variables can be used.
Once the key variables have been identified, the physical comparison parameters (PCPs) can be
determined. Once again, the choice depends on the type of variable being considered as well as
the available data. For example, the physical comparison parameters previously defined for dense
gas dispersion scenarios are as follows [10]:
1. Point-wise concentration
2. Maximum arc-wise concentration
3. Cloud width
4. Predicted distance to the measured maximum arc-wise concentration
5. Distance to the LFL concentration
6. Predicted concentration at the measured distance to the LFL
The selection of PCPs for flammable or toxic dispersion model validation can certainly start from
the list above; however, the same PCPs would not be relevant for overpressures or other hazards.
Assessment of model performance relative to the experimental observations in the evaluation database can be performed on a qualitative and/or quantitative basis. Qualitative model evaluation consists of comparing
predicted and experimental plots of the relevant variables. A qualitative evaluation can be a useful
first step in model evaluation as it provides a general indication of the ability of a model to predict
a particular scenario.
A quantitative evaluation is necessary for a more detailed and objective assessment of a model’s
performance. This is typically done by defining a set of statistical performance measures (SPMs)
that compare predicted and observed physical comparison parameters; a set is needed because each
individual measure has its advantages and disadvantages [11]. The selection of SPMs should also be consistent with previous
work in order to gain experience with which values of SPMs represent a well performing model.
Each SPM should be associated with a range of values which indicate acceptable model
performance. Defining acceptable ranges for the SPMs is quite difficult, because there are no
theoretical “targets” that can be used for guidance. Instead, the definition of “acceptable” relies to
a certain extent on previous experience with model evaluations in a particular area. For example,
what could be considered an acceptable bias for a model simulating a complex phenomenon, such
as deflagration to detonation transition (DDT), may be considered unacceptable for a model
simulating a better understood phenomenon, such as the dispersion of an unobstructed jet release.
As discussed before, there has been vast experience accumulated in the field of dispersion
modelling, therefore, acceptability criteria for statistical performance measures for dispersion
models can be considered fairly well-established. However, model evaluation experience for other
hazards is very limited, therefore, establishing acceptability criteria in those cases will require
careful consideration. Additionally, any acceptability range should be periodically re-evaluated
as additional experience is gathered and newer models are evaluated.
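As an illustration of the quantitative step, the sketch below computes several SPMs of the kind defined by Chang and Hanna [11] (fractional bias, normalized mean square error, geometric mean bias, geometric variance, and the factor-of-two fraction) for paired observed and predicted values of a physical comparison parameter. The observed/predicted values and the acceptability thresholds are placeholders, not the criteria adopted by any particular protocol.

```python
# Minimal sketch of quantitative model evaluation with statistical
# performance measures (SPMs) in the style of Chang and Hanna [11].
import numpy as np

def performance_measures(observed, predicted):
    co = np.asarray(observed, dtype=float)
    cp = np.asarray(predicted, dtype=float)
    fb = 2.0 * (co.mean() - cp.mean()) / (co.mean() + cp.mean())      # fractional bias
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())          # normalized mean square error
    mg = np.exp(np.mean(np.log(co)) - np.mean(np.log(cp)))            # geometric mean bias
    vg = np.exp(np.mean((np.log(co) - np.log(cp)) ** 2))              # geometric variance
    fac2 = np.mean((cp / co >= 0.5) & (cp / co <= 2.0))               # factor-of-two fraction
    return {"FB": fb, "NMSE": nmse, "MG": mg, "VG": vg, "FAC2": fac2}

# Illustrative acceptability ranges (placeholders only)
ACCEPTABLE = {"FB": lambda x: abs(x) <= 0.4,
              "MG": lambda x: 0.67 <= x <= 1.5,
              "FAC2": lambda x: x >= 0.5}

obs = [1.2, 0.8, 2.5, 3.1, 0.6]   # e.g. measured maximum arc-wise concentrations (illustrative)
pred = [1.0, 1.1, 2.0, 3.5, 0.5]  # corresponding model predictions (illustrative)

for name, value in performance_measures(obs, pred).items():
    note = f"  acceptable={ACCEPTABLE[name](value)}" if name in ACCEPTABLE else ""
    print(f"{name}: {value:.3f}{note}")
```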
It is important to remember that model validation is only one of the tasks involved in the model
evaluation, even though it is often the most “recognizable”. Therefore, meeting all (or failing to
meet some) of the SPM acceptability criteria should not automatically qualify (or disqualify) a
model for use.
3.7.4 Sensitivity analysis
The purpose of a sensitivity analysis is to evaluate the changes in model predictions due to
variations in the input parameters. To do this, the model is run multiple times for the same
scenario, changing one input parameter at a time over a specified range, and the change in
modeling results is reported – for example, as a tabulated percentage of the base case, or as error bars
on a scatter plot.
The guidance provided in existing model evaluation protocols is rather generic, given the many
factors that can affect a model's predictions and the fact that these factors are likely to differ for each
type of model and hazard being considered. An example of requirements for sensitivity analysis is
included in PHMSA’s Advisory Bulletin ADB-10-07 [12] for the approval of vapor dispersion
models. The parameters to be varied and the variability ranges should be consistent with
experimental uncertainties in the respective validation datasets, and will be clearly indicated in the
MVD.
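A minimal one-at-a-time sensitivity analysis of this kind is sketched below. The "model" and the parameter ranges are placeholders; in practice the ranges would come from the experimental uncertainties documented in the MVD.

```python
# Minimal sketch of a one-at-a-time sensitivity analysis: re-run the model for
# the base case with each input varied in turn, and tabulate the change as a
# percentage of the base case. The model and ranges below are illustrative.
def model(wind_speed, release_rate, surface_roughness):
    """Placeholder consequence model returning a distance-like quantity."""
    return 100.0 * release_rate ** 0.5 / (wind_speed ** 0.3 * (1 + surface_roughness))

base = {"wind_speed": 2.0, "release_rate": 5.0, "surface_roughness": 0.1}
ranges = {  # assumed variation ranges, e.g. from measurement uncertainty
    "wind_speed": (1.5, 2.5),
    "release_rate": (4.0, 6.0),
    "surface_roughness": (0.05, 0.2),
}

base_result = model(**base)
print(f"Base case result: {base_result:.1f}")
for param, (low, high) in ranges.items():
    for value in (low, high):
        result = model(**dict(base, **{param: value}))
        change = 100.0 * (result - base_result) / base_result
        print(f"{param} = {value}: {result:.1f} ({change:+.1f}% of base case)")
```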
Uncertainty quantification (UQ) characterizes sources of uncertainties in a model and propagates their
effects to computed quantities of interest. The application of uncertainty quantification in this protocol
is to establish credible bounds of predictability on computed quantities of interest and to assess model
sufficiency based on computed variances. The protocol presents a well-defined workflow that may be
used to assess uncertainties in simulation results for LNG hazard modeling. The workflow builds on
well-established, peer-reviewed, guidelines such as those outlined in the American Society of
Mechanical Engineers (ASME) Standard for Verification and Validation in Computational Fluid
Dynamics and Heat Transfer [13] and the NASA monograph on simulation credibility and uncertainty
quantification [14].
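As a simple illustration of forward uncertainty propagation within such a workflow, the sketch below samples uncertain inputs from assumed distributions, runs a placeholder model for each sample, and reports credible bounds and the variance of the computed quantity of interest. The input distributions and the model are illustrative assumptions; a full workflow following ASME V&V 20 [13] would also account for numerical and model-form uncertainty.

```python
# Minimal Monte Carlo sketch of forward uncertainty propagation to a
# computed quantity of interest. All numbers and the model are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def model(release_rate, wind_speed):
    """Placeholder model for a quantity of interest (e.g. a hazard distance)."""
    return 40.0 * np.sqrt(release_rate) / wind_speed ** 0.4

n = 10_000
release_rate = rng.normal(5.0, 0.5, n)   # assumed input uncertainty (kg/s)
wind_speed = rng.normal(2.0, 0.3, n)     # assumed input uncertainty (m/s)

qoi = model(release_rate, wind_speed)
lo, med, hi = np.percentile(qoi, [2.5, 50, 97.5])
print(f"Median: {med:.1f} m, 95% credible interval: [{lo:.1f}, {hi:.1f}] m")
print(f"Variance of the quantity of interest: {qoi.var():.2f}")
```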
The Model Evaluation Report (MER) represents the final product of a model evaluation, and the
only public part of the evaluation. The objectives of the MER are to:
Provide regulators with the information they need to determine whether to approve or reject
a model, and to set any conditions or limitations on its use
Convince all stakeholders that the model review was conducted in an objective and
independent manner
Therefore, the MER needs to include sufficient information to allow the reader (for example, staff
from a regulatory agency, model developer and users, or any other stakeholder) to understand the
review process and the results.
The main section of the MER will consist of a summary of the model evaluation, which will
address each section of the MEP, as described previously. In general, the MER will include the
following for each evaluation task:
Note that each evaluation task typically includes several subtasks, each with specific requirements.
For example, the scientific assessment evaluates the physical models, governing equations and the
numerical methods included in the model; the physical models are then broken down into several
submodels (dense gas dispersion, atmospheric boundary layer profiles, etc.). The model’s
performance should therefore be evaluated at the “submodel” level (e.g., comparing different
turbulence closure models, if available), to provide detailed information on the model’s
capabilities and limitations.
In order to ensure that approved models are used in a manner expected to result in accurate
predictions, the MER will include a section highlighting “best practices”. These may include:
Grid sizes
Time steps
Sub-models and property databases (e.g., turbulence closure, combustion, etc.)
Boundary conditions (e.g., boundary layer profiles)
Initial conditions or source terms
Deviations from best practices guidance when submitting simulations for regulatory approval, for
example, may “void” the model approval and require the user to provide additional evidence that
the model’s predictions should be considered acceptable.
Even though model improvement is not a stated purpose of the model evaluation protocols
developed during the current project, any observations made during the model evaluation, which
could lead to an improvement in the capabilities of a model to accurately predict the consequences
of a hazard scenario, should be identified and explained in the MER. However, the model
developer will not be required to act on these recommendations as a condition for the approval of
future versions of the model.
References
[1] M. J. Ivings, S. F. Jagger, C. J. Lea, and D. M. Webber, “Evaluating vapor dispersion models
for safety analysis of LNG facilities,” Health and Safety Laboratory, MSU/2007/04, 2007.
[2] Pipeline and Hazardous Materials Safety Administration, Final Decision on petition for
approval of PHAST-UDM version 6.6 and 6.7. 2011.
[3] Pipeline and Hazardous Materials Safety Administration, Final Decision on petition for
approval of FLACS (version 9.1 release 2). 2011.
[4] A. Luketa, “Guidance on the Evaluation of Models for LNG fires,” SAND2019-4426, Apr.
2019.
[6] N. J. Duijm and B. Carissimo, “Evaluating methodologies for dense gas dispersion models,”
in Environmental Science, vol. 19, 2001.
[7] D. Karaca, “Model Evaluation Protocol,” European Cooperation in Science and Technology,
COST ES1006, Apr. 2015.
[9] T. Skjold et al., “A Matter of Life and Death: Validating, Qualifying and Documenting
Models for Simulating Flow-Related Accident Scenarios in the Process Industry,” Chemical
engineering transactions, vol. 31, 2013.
[10] J. R. Stewart, S. Coldrick, C. J. Lea, and S. E. Gant, “Validation Database for Evaluating
Vapor Dispersion Models for Safety Analysis of LNG Facilities. Guide to the LNG Model
Validation Database Version 12,” Fire Protection Research Foundation, Sep. 2016.
[11] J. C. Chang and S. R. Hanna, “Air quality model performance evaluation,” Meteorol Atmos
Phys, vol. 87, no. 1–3, Sep. 2004, doi: 10.1007/s00703-003-0070-7.
[12] Pipeline and Hazardous Materials Safety Administration, Liquefied Natural Gas Facilities:
Obtaining Approval of Alternative Vapor-Gas Dispersion Models. 2010.
[13] American Society of Mechanical Engineers, “Standard for Verification and Validation in
Computational Fluid Dynamics and Heat Transfer,” American Society of Mechanical
Engineers, Standard VV 20, 2009.
Beirut: How Does Ammonium Nitrate Behave When Exposed to Fire, and How Strong
and Damaging is its Explosion?
Charline Fouchier, Hans Pasman, Sunhwa Park, Noor Quddus, Delphine Laboureur
Von Karman Institute for Fluid Dynamics, Sint Genesius Rode
TEES Mary Kay O'Connor Process Safety Center, Texas A & M University System
The Beirut explosion of the 4th of August 2020 is one more accident to be added to the
long list of tragedies caused by ammonium nitrate. While many investigations have
been conducted to better understand the behavior of the molecule, it is still unclear
how ammonium nitrate can detonate in an unconfined environment while heated
by fire.
A brief summary of the state of the art on ammonium nitrate is given, followed by an
analysis of the Beirut accident, with a proposed scenario that could have led to the
explosion.
Finally, methods to estimate the explosion energy, based on the blast arrival time,
the damage to buildings, and the crater dimensions, are applied to the Beirut accident
and compared.
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
In 2009, the UK Health and Safety Executive (HSE) published a review of serious incidents
involving ignition of flammable mists of high-flashpoint fluids, i.e. fluids whose vapours cannot
be ignited and sustain a flame at normal room temperature (e.g. kerosene, diesel, lubrication oils
and hydraulic oils). The review identified 37 incidents which together were responsible for 29
fatalities. In response to the findings, HSE and a consortium of other regulatory and industrial
sponsors funded a Joint Industry Project (JIP) on the subject, which ran from 2011 to 2015. The
work included a detailed literature review and a series of experiments at Cardiff University on a
mist release configuration consisting of a downwards-pointing spray from a 1 mm diameter
circular orifice. Test pressures ranged from 1.7 bar to 130 bar and three fluids were tested: Jet A1
(kerosene), a light fuel oil and a hydraulic oil. Computational Fluid Dynamics (CFD) simulations
were also performed, and results were compared to existing hazardous area classification
guidelines. The work was used to devise a preliminary classification scheme for mist flammability,
based on a fluid’s flashpoint and ease-of-atomization.
Several important questions remained unanswered following the first JIP relating to the effect of
the orifice shape, size and release configuration, and the ignition characteristics of other common
fluids, notably diesel. In 2018, HSE launched a follow-on JIP (currently ongoing) which aims to
address these issues. The work started with an updated review of flammable mist incidents,
published in 2019. Experiments on diesel have started in 2020 at Cardiff University and further,
larger-scale experiments are planned for 2020-2021 at the HSE Science and Research Centre,
Buxton.
This paper and presentation at the MKOPSC International Symposium 2020 provide an overview
of the work led by HSE on flammable mists over the last decade, and a summary of the preliminary
results from the ongoing experiments.
Combustible liquids are typically classified by their flashpoint temperature and are often regarded
as being relatively non-hazardous if they are handled at temperatures well below their flashpoint
(EI, 2015). However, if a high flashpoint liquid is atomised to produce a mist of fine droplets, it
can be ignited below its flashpoint and produce a fire or explosion. These flammable mist hazards
are mainly a concern for leaks from pressurized systems (e.g. pumps, pipework, valves) as a result
of corrosion, mechanical damage, cracks, seal failures or loosening of screwed fittings (Eckhoff,
1995). Flammable mists can also be produced by condensation of vapour, such as that produced
by an overheated bearing in a marine diesel engine (Freeston et al., 1956).
In Europe, there are regulations controlling flammable atmospheres, namely the ATEX
‘Workplace’ Directive (1999/92/EC)1 across the EU and the Dangerous Substances and Explosive
Atmospheres Regulations (DSEAR)2 in Great Britain. DSEAR was introduced to implement the
ATEX Directive and has been retained within the UK following its departure from the EU. These
regulations require the identification of any zones where a flammable atmosphere could form,
either as part of normal operations or in the event of a reasonably-foreseeable equipment failure.
Within such zones, all ignition sources must be controlled by using appropriate equipment rated
for use within hazardous areas. Both ATEX and DSEAR cover flammable atmospheres produced
by gases, dusts and/or mists.
There is established guidance available on the extent of hazardous areas for flammable gases (e.g.
BSI, 2016; EI, 2015), but relatively few guidance documents or standards are available to assess
equivalent hazardous areas for flammable mists, or to help select safe equipment for use in
flammable mist atmospheres. The relevant British and European standard, BS EN 60079-10-1
(BSI, 2016) contains limited guidance in Annex G, but this is qualitative rather than quantitative.
The Energy Institute model code of safe practice EI15 (EI, 2015) provides guidance on area
classification for installations handling flammable fluids, which includes tabulated hazard
distances for higher flashpoint fluids leaking at different pressures through various specified hole
sizes. However, the document acknowledges that “there is little knowledge on the formation of
flammable mists and the appropriate extents of associated hazardous areas”. The release conditions
given in EI15 are also tailored towards relatively large-scale equipment, with hole sizes ranging
from 1 mm to 10 mm.
Many areas of concern exist in plant rooms and other enclosed areas. Here, the lack of definitive
guidance often leads to the whole plant room being considered as a hazardous area. Many
assessments assume that leaks do not form a mist at lower pressures (perhaps below 5 or 10 bar
gauge), and it is not clear that all potential mist hazards are fully recognised.
Common items of industrial equipment may have the potential to produce oil mists. For example,
hydraulic equipment, lubricating oil systems and delivery lines for high-flashpoint fuels (diesel,
1 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:31999L0092 (accessed 10 September 2020).
2 https://www.hse.gov.uk/fireandexplosion/dsear.htm (accessed 10 September 2020).
kerosene etc.) could create aerosols if they failed under pressurised conditions. Such equipment is
in widespread use, but the creation and ignition of flammable mists does not seem to be a frequent
occurrence. The apparent lack of mist explosion events suggests that there are often mitigating
factors preventing flammable mists being created or ignited. Understanding these factors could
allow more accurate assessment of when control measures are unnecessary and when they are
essential.
It should be noted that the perception of oil mists seems quite different on board ships. Following
several incidents involving loss of life, the Safety of Life At Sea (SOLAS) regulations require
unattended engine crankcases to be fitted with oil mist detectors and automatic shutdown (IMO,
1974). The International Maritime Organisation (IMO) notes that the majority of engine room
fires are the result of oil mist formation and has guidance on the fitting of oil mist detectors in
these more open areas.
In 2009, the UK Government’s safety regulator, the Health and Safety Executive (HSE), published
a review of serious incidents involving ignition of flammable mists of high-flashpoint fluids
(Santon, 2009). HSE also recently worked in collaboration with the French national laboratory
INERIS3 and the Université de Lorraine in France (Lees et al., 2019) to produce a systematic study
of three European national incident databases: the UK’s Offshore Hydrocarbon Release Database,
the French ARIA database and the German ZEMA databases. These 2009 and 2019 incident
reviews both showed that while oil mist explosions were relatively infrequent events, they happen
sufficiently often that the possibility of one occurring should not be ignored. For example, the
latter study showed that over a 30 month period from 2016 to 2018, there was approximately one
incident per month that involved fluid mists or sprays on offshore oil and gas installations
operating on the UK continental shelf. Oil mist explosions have led to deaths, injuries and
significant property loss.
Following the review of mist incidents in 2009, HSE set up a joint industry project to help improve
our understanding of flammable mists. The four-year project started in December 2011 and was
jointly sponsored by 16 industry and regulatory partners (HSE, ONR, RIVM, GE, Siemens,
EDF/British Energy, RWE, Maersk Oil, Statoil, BP, ConocoPhillips, Nexen, Syngenta, Aero
Engine Controls, Atkins, Frazer Nash and the Energy Institute).
Literature review
The first stage of the project was an extensive literature review that examined three fundamental
issues: mist flammability, mist generation and mitigation measures (Gant et al., 2012; Gant, 2013).
Data on mist flammability were reviewed that included measurements of the Lower Explosive
Limit (LEL), Minimum Ignition Energy (MIE), Minimum Igniting Current (MIC), Maximum
Experimental Safe Gap (MESG) and Minimum Hot Surface Ignition Temperature (MHSIT). One
of the significant findings was that the LEL for mists could fall to as low as 10% of the LEL for
the vapour of the same substance (i.e. to around 5 g/m3). Mists were found generally to be more
3 Institut National de l'Environnement Industriel et des Risques, http://www.ineris.fr (accessed 10 September 2020).
difficult to ignite than the equivalent vapour, due to the energy needed to vaporise droplets prior
to ignition.
Regarding releases from pressurized systems, the literature review noted that one of the challenges
in conducting tests with mists (as compared to gases and dusts) is the complexity introduced by
droplet breakup and agglomeration, impact with surfaces and evaporation. Correlations for
primary atomisation, secondary droplet breakup, and droplet impingement were reviewed. Much
of the historical work on sprays was found to be motivated by the development of internal
combustion engines and gas turbines. The study noted that there was uncertainty in applying
correlations developed for those applications to the very different scales and operating pressures
typical of industrial equipment requiring hazardous area classification (e.g. pumps and pressurized
pipework).
Release classification
Following the literature review, a method to classify releases was developed to group together
similar fluids and release scenarios (Burrell and Gant, 2017). This was deemed necessary because
of the large number of high-flashpoint fluids in use across a range of industries, which would make
detailed case-by-case assessments impractical. The classification method was based on two factors
that were considered to be significant for flammable mist formation from leaks of fluids under
pressure, namely the flashpoint of the fluid and the propensity for releases to atomize into droplets.
The chosen atomization criteria were calculated based on the Ohnesorge number, 𝑂ℎ, which is a
dimensionless parameter that depends on the fluid viscosity (𝜇), density (𝜌) and surface tension
(𝜎) and the equivalent diameter of the leak (𝐷):
𝑂ℎ = 𝜇 / √(𝜌𝐷𝜎)     (1)
Ohnesorge (1936) provided an empirical correlation for the atomization breakup regime that depends on the
Reynolds number of the fluid released through the orifice, 𝑅𝑒.
To characterise the propensity of a given release to atomize, it was proposed to use the ratio of 𝑂ℎ
to the critical Ohnesorge number for atomization, 𝑂ℎ𝑐, derived from this correlation. A greater value of
this ratio (𝑂ℎ/𝑂ℎ𝑐) was taken to indicate a greater propensity for
the release to atomize into small, more easily ignitable droplets.
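As a simple numerical illustration of Eq. (1), the sketch below computes the Ohnesorge number for a 1 mm hole using rough, kerosene-like fluid properties. The property values are assumptions for illustration only; they are not the property data used by Burrell and Gant (2017), and no critical-Ohnesorge correlation is evaluated here.

```python
# Minimal sketch of the Ohnesorge number in Eq. (1): Oh = mu / sqrt(rho * D * sigma).
# Fluid properties below are illustrative placeholders in SI units.
import math

def ohnesorge(mu, rho, sigma, d):
    """Oh = mu / sqrt(rho * d * sigma)."""
    return mu / math.sqrt(rho * d * sigma)

mu = 1.5e-3      # dynamic viscosity, Pa.s (illustrative, kerosene-like)
rho = 800.0      # density, kg/m3 (illustrative)
sigma = 0.026    # surface tension, N/m (illustrative)
d = 1.0e-3       # equivalent leak diameter, m (1 mm hole)

print(f"Oh = {ohnesorge(mu, rho, sigma, d):.4f}")
```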
Figure 1 shows the flashpoint plotted against this ratio for a range of different fluids released
through a 1 mm diameter hole at 10 bar gauge pressure. These values were chosen to represent a
credible accident scenario and it is one of the conditions considered in EI15. It is also readily
achievable in experimental studies. The fluids assessed were chosen because they were of
particular interest to the organisations sponsoring the JIP. The vertical and horizontal error bars in
Figure 1 show the range in flashpoint and (𝑂ℎ/𝑂ℎ𝑐 ) values resulting from the range in fluid
properties, which were taken from various data sources in the literature (see Burrell and Gant,
2017 for details). For biodiesel, the effect of changing the release temperature (which alters the
fluid properties) is also shown in the Figure.
Figure 1. Mist classification by flashpoint and atomisation criteria for a 10 bar release through a
1 mm hole
The fluids appeared to fall naturally into three groups in Figure 1. To generalise this classification
system to all fluids, the figure was split into quadrants defining four “Release Classes” (see Figure
2), which can be summarised as:
Release Class I: More volatile fluids that are more prone to atomisation, such as many
commercial fuels.
Release Class II: More volatile fluids that are less prone to atomisation, such as viscous
fuel oils at ambient temperatures.
Release Class III: Less volatile fluids that are also less prone to atomisation, such as many
lubricants and hydraulic fluids at cool (near ambient) temperatures.
Release Class IV: Less volatile fluids that are more prone to atomisation, such as many
lubricants and hydraulic fluids at high temperatures that may arise during use.
The specific values used to bound the four Release Classes (i.e. flashpoint of 125 ºC and Ohnesorge
ratio of 2) were selected based on the best judgement at the time. As and when new evidence
becomes available it is possible (and even likely) that these bounds may need to be revised.
Figure 2. Mist classification diagram overlaid with the four release classes
Experimental studies
To investigate the flammability of mists, experiments were conducted at Cardiff University’s Gas
Turbine Research Centre (GTRC) using three exemplar fluids: one each from the Release Classes
I, II and III (Mouzakitis et al., 2017a; 2017b). These experiments were designed to produce a mist
from a pressurised leak through a small hole. The aim was to determine whether the mists produced
by the three fluids could be ignited and, if so, to use a spark igniter to map the extent of the
flammable cloud. Following this, the aim was to measure droplet size distributions and
concentrations at the edge of the flammable cloud, to investigate the LEL.
The tests were all conducted using a 1 mm diameter, smooth-bore, cylindrical plain orifice with
length-to-diameter ratio of 2. The releases were all directed downwards to prevent asymmetric
effects, and the experiments were conducted within a 1.2 m square, 2.5 m tall test chamber (shown
in Figure 3). This configuration provided a good starting point using a simple arrangement that is
well defined and readily repeatable. The 1 mm hole size is the smallest hole tabulated in the EI15
model code of safe practice and it therefore allowed for direct comparison to the existing guidance.
The ignition tests all used a 1 Joule electric spark (Chentronic’s Smartspark4). Prior to the start of
experimental work, there was considerable discussion within the project’s Steering Committee
regarding the ignition source. The intention was for the source to represent a credible upper limit
for most situations where area classification would be considered. While most commonly
occurring electrical sparks are significantly lower in energy, the consensus view was that 1 J
represented a reasonable upper limit. Situations with the potential for higher-energy ignition
4 https://www.chentronics.com/products/smart-spark (accessed 10 September 2020).
sources (or even naked flames) may exist in a few cases, but these were considered to be
sufficiently unusual that they would be outside the scope of normal guidance.
During a series of releases, the igniter was placed at a set axial distance from the release point and
then tracked across a radius of the jet to locate points just inside and outside of the ignitable
envelope. Similar radial tracks were tested at several different distances along the axis of the
expanding jet.
Droplet sizes and concentrations were measured using a non-intrusive laser Phase Doppler
Anemometer (PDA) system (Dantec Dynamics coherent Innova 70-5 Series argon-ion laser with
BSA P60 flow and particle processor5). PDA measurements were made following the ignition tests
at locations inside and outside of the ignitable envelope at the same locations as those tested for
ignition.
The three exemplar fluids were:
For Release Class I: Kerosene (Jet A1), low viscosity, flashpoint = 38 °C.
For Release Class II: Light Fuel Oil (LFO), higher viscosity, flashpoint = 81 °C.
For Release Class III: Hydraulic oil, higher viscosity, flashpoint = 223 °C.
5 https://www.dantecdynamics.com/ (accessed 10 September 2020).
Tests were also carried out with the same LFO heated to 70 °C. This temperature was still below
the flashpoint, but the increase in temperature changed the physical properties sufficiently that the
heated LFO was moved from Release Class II into Release Class I.
For each fluid, tests were first carried out at release pressures of 5, 10, 15 and 20 bar gauge.
Following these test pressures, further tests were carried out for kerosene and the hydraulic oil at
lower and higher pressures (respectively) to determine the limiting pressures where the mist could
be ignited.
The tests described above were all carried out for a “free jet” configuration, with the unobstructed
spray directed downwards from the top of the enclosure. Following these tests, a further set of
results were obtained for LFO and hydraulic oil with a flat mild steel impingement plate located
at either 150 mm or 400 mm below the orifice. In both cases, the igniter was located 25 mm above
the plate. The aim of these impingement tests was to see whether mists that could not be ignited in
the free jet configuration could be ignited after they had impinged at high-velocity onto a solid
surface and broken up to produce a finer mist.
Experimental results
Figure 4 shows some example photos of the ignition tests. The tests showed significant differences
between the three fluids from the different Release Classes (see Table 1). In the free jet tests,
kerosene was found to ignite at all of the test pressures from 5 to 20 bar. The pressure was then
reduced in steps of 1 bar to the lowest test pressure possible on the apparatus (1.7 bar gauge) and
the kerosene mist ignited in each case. The hydraulic oil showed the opposite behaviour and could
not be ignited at any of the pressures between 5 and 20 bar. The pressure was then increased in
stages up to 130 bar and the hydraulic oil mist still did not ignite fully; there were
occasional localised flashes near the spark igniter, but the flame did not propagate through the mist.
LFO did not ignite at ambient temperature across the range of pressures from 5 to 20 bar, but when
it was heated to 70 °C it ignited at all of the pressures.
In the impingement tests, the hydraulic oil again did not ignite at any of the pressures. The LFO at
ambient temperature did not ignite at a pressure of 15 bar but did ignite at the higher pressure of
20 bar. When the LFO was heated to 70 °C, it behaved in the impingement tests as it had done
previously in the free jet tests and ignited at all of the pressures from 5 to 20 bar.
In addition to these ignition test results, it was noted that there were significant differences in the
visible appearance of releases with the different fluids. At one extreme, the hydraulic oil released
at lower pressures remained largely concentrated in a dense, almost unbroken core of liquid with
very few small droplets being formed. At the other extreme, a significant proportion of kerosene
was well atomised even at very low pressures.
Figure 4. Ignition tests with kerosene (left) and hydraulic oil (right)
Table 1. Ignition test results.
Spray geometry | Fluid (Flashpoint) | Temperature | Release pressure (barg) | Ignition
Free spray | Kerosene (FP = 38 ºC) | Ambient | 1.7, 2, 3, 4, 5, 10, 15, 20 | At all pressures
Free spray | Hydraulic oil (FP = 223 ºC) | Ambient | 5, 10, 15, 20, 30, 70, 110, 130 | No, but some “flashes” at highest pressures
Free spray | Light fuel oil (FP = 81 ºC) | Ambient | 5, 10, 15, 20 | No
Free spray | Light fuel oil (FP = 81 ºC) | 70 °C | 5, 10, 15, 20 | At all pressures
Impinging | Hydraulic oil (FP = 223 ºC) | Ambient | 5, 10, 15, 20 | No
Impinging | Light fuel oil (FP = 81 ºC) | Ambient | 15, 20 | At 20 barg only
Impinging | Light fuel oil (FP = 81 ºC) | 70 °C | 5, 10, 15, 20 | At all pressures
Figure 5 shows an example map of the ignition test results through the kerosene free jet release at
5 bar showing both ignition locations and positions where PDA droplet size and concentration
measurements were made. The PDA system worked from direct measurements of individual
droplets, collecting many thousands of measurements to obtain a statistical analysis of the aerosol
within a small measuring volume. The lack of small droplets in the releases with hydraulic oil and
LFO meant that the measurements were of low quality for those fluids. Some good PDA
measurements were obtained, but those only corresponded to areas where ignitions were certain.
There was little or no good quality data from locations outside the ignition envelope or even on
the borderline.
In the kerosene tests, the minimum calculated concentration from the PDA measurements for a
successful ignition point was 3 g/m3. However, the average concentration of the ignition positions
near the edge of the flammable envelope was 69 g/m3. The average concentration of the
unsuccessful ignition points from the PDA measurements was 21 g/m3. In comparison, the LEL
for kerosene vapour is approximately 48 g/m3 (Zabetakis, 1965).
Figure 5. Cross-section through the 5 bar kerosene mist showing the igniter positions as either
red dots (where ignition was successful) or green crosses (where ignition was attempted but the
mist did not ignite). The locations where PDA droplet size and concentration measurements were
made are shown as black circles. The release point is at zero on the axial and radial axes.
CFD modelling
Alongside the experimental studies, a set of CFD simulations was performed using the ANSYS
CFX-15 software6 (Coldrick and Gant, 2017). The model used an Eulerian-Lagrangian approach
in which the GTRC spray booth was represented by a fixed computational mesh and the spray was
represented by individual computational particles (representing a statistical sample of droplets)
that were tracked through the flow. Computational particles were released at the orifice location
and droplets were allowed to break apart under aerodynamic forces. The model accounted for the
transfer of mass, momentum and energy between the droplets and the surrounding air. Tests were
performed to ensure that the results were insensitive to the grid cell size and particle count. For
most of the simulations, a grid of 1.3 million nodes was used with 10,000 particles.
6 http://www.ansys.com (accessed 10 September 2020).
At the orifice, the primary breakup of the liquid into droplets was defined in the model by
specifying the initial spray cone angle and initial droplet size. Seven different cone angle models
and nine different droplet size models were tested. Two secondary breakup models were also tested
to account for aerodynamic forces on droplets. Details of these models are given in the report by
Coldrick and Gant (2017). Sensitivity tests with different combinations of models were undertaken
and results compared to the data from the GTRC experiments on kerosene at 20 bar. The best
performing combination of models was then used to model all of the other tests (i.e. the kerosene
tests at different pressures and the hydraulic oil and LFO free jets for pressures between 5 and 20
bar). The best performing primary breakup model was found to be the DNV Phase III JIP Rosin-
Rammler correlation (DNV, 2006), which gave predictions within a factor of 2 for the measured
droplet concentration and droplet diameter for the kerosene releases. Predictions for the hydraulic
oil and LFO were in worse agreement with the measurement data. The main issue there was that
the CFD model assumed that the release atomized whereas in the experiments only a small fraction
of the liquid was actually atomised. The model therefore predicted flammable concentrations to
occur when in practice the mist could not be ignited. Examples of the CFD results are given in
Figure 6.
Figure 6. CFD predictions of droplet concentration and size for the kerosene release at 20 bar.
Coloured circles show measured values with the same colour scale as the contours. Crosses
indicate where ignition occurred (a black cross) or did not occur (a white cross). Ignitions were
not attempted on the centreline and only droplet diameter and concentration were measured there
(a white cross on the centreline is used to identify solely the measurement location). Note that
the scales chosen are not the maximum levels: concentrations in excess of 50 g/m3 are shown in
the left-hand figure as red.
The kerosene CFD model was subsequently used to predict the extent of the flammable mist cloud
for Category C fluids in Table C4 of the EI15 code of safe practice (EI, 2015). These EI15 values
were originally determined using the consequence modelling software DNV-GL Phast7 for spray
releases directed horizontally in a 2 m/s wind, where the wind was blowing in the same direction
as the release (i.e. co-flowing). The hazard range was defined in EI15 as the distance to the LEL,
which was assumed to be a droplet concentration of 43 g/m3. EI15 presents results for four different
hole sizes of 1 mm, 2 mm, 5 mm and 10 mm and four pressures of 5 bar, 10 bar, 50 bar and 100
bar. The same set of conditions was modelled using CFD, although the pressures were modelled
as gauge pressure whereas the EI15 values are for absolute pressure, i.e. the CFD results were for
a 1 bar higher pressure in each case. The configuration of the CFD model was the same as that
described earlier in the model validation study, with a vertical downwards spray in nil wind, using
the DNV Phase III JIP primary droplet breakup model.
The results comparison (Table 2 and Figure 7) showed that the CFD model gave somewhat larger
hazard distances than those given in EI15, particularly for lower pressure releases. The EI15
distances all increase with pressure, but the CFD results exhibit more complex behaviour. This
was likely due to the EI15 hazard distance assuming a horizontal release, whereas the CFD value
was for a vertically-downwards release.
Table 2. Predicted hazard distances from CFD model compared to EI15 values for Category C
fluids
The literature review of mists by Gant (2013) showed that the LEL in quiescent mists could be
lower than the 43 g/m3 value assumed by EI15, by as much as a factor of 10 (i.e. approximately 5
g/m3). These lower concentration ignitions were observed in experiments with a strong ignition
source at the base of a quiescent mist cloud. Given this finding and the longer hazard distances
produced by the CFD model for downwards-directed releases, the results suggested that hazardous
distances could extend over a spherical volume with a radius around the release point similar to
that given in EI15, but with the hazardous zone extending over a greater distance downwards in a
cylindrical region below the release point (potentially, to the floor).
7 http://www.dnvgl.com/phast-and-safeti (accessed 10 September 2020).
Figure 7. Comparison of CFD predictions to the guidance on hazard distances for mists produced
by Category C fluids from Table C4 of EI15 (EI, 2015). Symbols are coloured according to the
orifice diameter as shown in the key. Symbol shapes indicate the release pressure as follows:
5 bar, 10 bar, 50 bar, 100 bar.
Additions to guidance
Based on the findings of the MISTS project, some tentative new guidance was developed (see
Figure 8 and Bettis et al., 2017). Whilst the MISTS experimental and modelling results confirmed
that the EI15 guidance was broadly appropriate, the new results identified differences between
fluids that fell within the broad class of EI15 Category C fluids. Where the MISTS experimental
findings clearly showed that particular releases did not produce ignitable mists, the new guidance
reflected the absence of a flammable zone. In the case of Release Class I, the ignition of kerosene
at lower pressures than the lowest pressure of 5 bar in the relevant EI15 table was also highlighted.
Figure 8. Tentative mists hazardous area classification guidance produced by the MISTS project
MISTS2 project
Following the end of the MISTS project, HSE led a workshop involving other regulatory agencies,
industry groups and consultancies to discuss the findings and possible future work. Based on
information gathered in that consultation exercise, HSE proposed a second project under its
“Shared Research” programme and invited other organisations to contribute time and funding to
increase the amount of work that could be undertaken. The new MISTS2 project began in 2018
and it was planned to finish at the end of 2020. However, due to the COVID-19 pandemic, the
project is now likely to extend into 2021. In addition to HSE, the project is being supported by
Shell, Électricité de France (EDF), the Office for Nuclear Regulation (ONR), the Energy Institute
and INERIS. The scope of work for this ongoing MISTS2 project is described below.
Diesel fuel
The somewhat unexpected ignitions of kerosene at very low pressures in the MISTS project raised
questions about diesel. It is very widely used and has similar fluid properties to kerosene with a
flashpoint around 20 °C higher. Understanding the potential for diesel to create flammable mists,
particularly at low operating pressures, is a priority task in the MISTS2 project.
The diesel tests are using the same experimental test procedures as those used in the previous
MISTS programme, to allow like-for-like comparison of results. Two different diesel fuels are
being tested: the first is an ‘ultra-low sulphur’ diesel that is typical of the UK vehicle ‘pump’ diesel
(available from petrol stations, or US gas stations), which is largely composed of mineral-oil
derived fuel, and the second fuel is a 100% biodiesel. The biodiesel has a flashpoint of 145 °C,
significantly higher than the 58 °C flashpoint of the standard ‘pump’ diesel blend.
At the time of writing (September 2020), the GTRC test rig had been redesigned and rebuilt to
allow duplicate testing in a more robust and safer test environment (see Figure 9), and the
ignition tests have been completed. The ‘pump’ diesel was found to ignite at all the pre-defined
pressures of 5, 10, 15 and 20 bar gauge. A test at a lower pressure of 3 bar gauge did not ignite.
The biodiesel could be ignited at a release pressure of 20 bar gauge but did not ignite at the lower
test pressures of 5 to 15 bar gauge. Work is currently underway to visualise the spray and measure
the droplet sizes and concentrations.
Hole shape
All of the experimental work to date has used a 1 mm diameter drilled circular orifice with a length-
to-diameter ratio of two. In practice, accidental releases of fluids will involve a variety of situations
where the leak path has a more complex geometry. Examples might include:
Holes created by corrosion, where leaks are likely to have very short path lengths
through thinned material, with rough edges;
Loosened screwed fittings, where the leak is along the threads;
Cracked pipes or fittings, where the leak is through a relatively long and narrow
opening;
Damaged or missing seals and gaskets, where the leak is through an arc of the fitting.
To better understand whether the range of possible release paths will alter the likelihood of a
flammable mist being created, a series of tests will be carried out with more complex orifices.
Additive manufacturing (i.e. 3D printing) will be used to create orifices with different geometries.
For each geometry, a range of small size variations will be produced and tested to select ones that
closely match the discharge rates of the circular nozzle. In this way, differences in the mists will
only be due to changes in the leak shape rather than flow rate.
Ignitable extent
The MISTS experiments were conducted in a relatively small-scale test chamber that did not
provide data on the maximum extent of the flammable cloud on the flow centreline. In the kerosene
tests, the mist could be ignited all the way to the floor of the chamber. Since the maximum extent
of the flammable cloud is such an important parameter for hazardous area classification, it is
proposed in the MISTS2 project to duplicate the GTRC releases in a much larger indoor facility at
the HSE Science and Research Centre in Buxton, England (see Figure 10).
The pressurised releases in the HSE Burn Hall will be directed vertically downwards from a boom
offset from a 10 m high scaffold. To minimise differences between the MISTS releases and the
MISTS2 trials, the same GTRC orifices will be used. The ignition trials will also use the same
spark igniter, which GTRC have agreed to loan to HSE for these tests. It is currently proposed to
use diesel for these experiments. The HSE test rig will allow the igniter to be placed on the
centreline, or slightly offset from it if there is a dense liquid stream in the centre. Ignition locations
will extend out to axial distances (below the orifice) in excess of 8 metres, which is well beyond
the flammable cloud extent predicted by current models. It is anticipated that these tests will
provide evidence to support current guidance and future predictive modelling.
Figure 10. The indoor Burn Hall at HSE Science and Research Centre
Summary
In 2009, the UK Health and Safety Executive (HSE) published a review of serious incidents
involving the ignition of flammable mists of high-flashpoint fluids, which identified 37 incidents
that together were responsible for 29 fatalities. In response to the findings, HSE and a
consortium of other regulatory and industrial sponsors funded the MISTS Joint Industry Project,
which ran from 2011 to 2015. The project involved a detailed literature review and a series of
experiments at Cardiff University on a mist release configuration consisting of a downwards-
pointing spray from a 1 mm diameter circular orifice. Test pressures ranged from 1.7 bar to 130 bar
and three fluids were tested: kerosene, a light fuel oil and a hydraulic oil. CFD simulations were
also performed, and results were compared to existing hazardous area classification guidelines.
One of the notable results from the experimental work was that mists of kerosene (with a flashpoint
of 38 ºC) could be ignited with release pressures as low as 1.7 bar. The findings from the MISTS
project were used to develop a tentative classification scheme for mist flammability, based on the
fluid’s flashpoint and ease-of-atomization.
Several important questions remained unanswered following the MISTS project, relating to the
effect of the orifice shape, size and release configuration, and the ignition characteristics of other
common fluids, notably diesel. In 2018, HSE launched a follow-on project, MISTS2, which is
currently ongoing. This new project is conducting tests on diesel, on different orifice shapes and
taking measurements of the maximum extent of the flammable mist. Preliminary results have
shown that standard ‘pump’ diesel can be ignited at pressures from 5 to 20 bar gauge, but not at 3
bar gauge. Tests with a higher flashpoint 100% bio-diesel found that it can be ignited at 20 bar
gauge but not at lower pressures. Further work is ongoing at GTRC Cardiff University and at the
HSE Science and Research Centre in Buxton.
The flammability of mists is a complex subject and there are many unknowns that need to be
addressed to develop proportionate, reliable and scientifically-robust hazardous area classification
guidance. Compared to the decades of research on flammable gases, the work on mists is still at
an early stage. HSE is keen to collaborate with other organisations that share an interest in this
topic going forward.
Acknowledgements
The authors would like to thank the organisations responsible for supporting the work described
here on flammable mists, including: HSE, ONR, RIVM, GE, Siemens, EDF/British Energy, RWE,
Maersk Oil, Statoil, Shell, BP, ConocoPhillips, Nexen, Syngenta, Aero Engine Controls, Atkins,
Frazer Nash, the Energy Institute and INERIS.
The contributions of HSE staff to producing this conference paper were funded by HSE. The
contents, including any opinions and/or conclusions expressed, are those of the authors alone and
do not necessarily reflect HSE policy.
References
BSI (2016), “Explosive atmospheres Part 10-1: Classification of areas — Explosive gas
atmospheres (incorporating corrigendum December 2016)”, BS EN 60079-10-1:2015, British
Standards Institute, https://shop.bsigroup.com/ProductDetail?pid=000000000030353978,
(accessed 10 September 2020).
Bettis R., Burrell G., Gant S., Coldrick S., Mouzakitis K. and Giles A. (2017) “Area
classification for oil mists - final report of a Joint Industry Project”, HSE Research Report
RR1107, http://www.hse.gov.uk/research/rrhtm/rr1107.htm (accessed 10 September 2020).
Burrell G. and Gant S. (2017) “Liquid classification for explosive oil mists”, HSE Research
Report RR1108, http://www.hse.gov.uk/research/rrhtm/rr1108.htm (accessed 10 September
2020).
Coldrick S. and Gant S. (2017) “CFD modelling of explosive oil mists”, HSE Research Report
RR1111, http://www.hse.gov.uk/research/rrhtm/rr1111.htm (accessed 10 September 2020).
DNV (2006) “Phast droplet size theory document”, Det Norske Veritas (DNV) Software,
London, UK.
Eckhoff R.K. (1995) “Generation, ignition, combustion and explosion of sprays and mists
of flammable liquids in air: a literature survey”. Offshore Technology Report - OTN
95 260, Health and Safety Executive, Bootle, UK.
EI (2015) “Model Code of Safe Practice – Part 15: Area Classification Code for Installations
Handling Flammable Fluids (EI15)”, 4th Edition, Energy Institute (EI),
https://publishing.energyinst.org/topics/asset-integrity/ei-model-code-of-safe-practice-part-15-
area-classification-for-installations-handling-flammable-fluids (accessed 10 September 2020).
Gant S., Bettis R., Santon S., Buckland I., Bowen P. and Kay P. (2012) “Generation of
Flammable Mists from High Flashpoint Fluids: Literature Review”, IChemE Hazards XXIII
conference, 12-15 November 2012, https://www.icheme.org/media/9045/xxiii-paper-43.pdf
(accessed 10 September 2020).
Gant S. (2013) “Generation of Flammable Mists from High Flashpoint Fluids: Literature
Review”, HSE Research Report RR980, http://www.hse.gov.uk/research/rrhtm/rr980.htm
(accessed 10 September 2020).
IMO (2020) “International Convention for the Safety of Life at Sea (SOLAS), 1974”, last
amended 2020, International Maritime Organisation,
http://www.imo.org/en/About/Conventions/ListOfConventions/Pages/International-Convention-
for-the-Safety-of-Life-at-Sea-(SOLAS),-1974.aspx (accessed 10 September 2020).
Lees P., Gant S., Bettis R., Vignes A., Lacome J.-M. and Dufaud O. (2019) “Review of recent
incidents involving flammable mists”. IChemE Hazards 29, 22-24 May 2019,
https://www.icheme.org/media/12613/hazards-29-paper-31-review-of-recent-incidents-
involving-flammable-mists.pdf (accessed 10 September 2020).
Mouzakitis K., Giles A., Morris S. and Bowen P. (2017a) “Experimental Investigation of Oil
Mist Explosion Hazards (Phase 1)”, HSE Research Report RR1109,
http://www.hse.gov.uk/research/rrhtm/rr1109.htm (accessed 10 September 2020).
Mouzakitis K., Giles A., Morris S. and Bowen P. (2017b) “Experimental Investigation of Oil
Mist Explosion Hazards (Phase 2)”, HSE Research Report RR1110,
http://www.hse.gov.uk/research/rrhtm/rr1110.htm (accessed 10 September 2020).
Ohnesorge W.V. (1936) “Die Bildung von Tropfen an Düsen und die Auflösung flüssiger
Strahlen”. Z. angew. Math. Mech., 16: 355–358, http://dx.doi.org/10.1002/zamm.19360160611
(accessed 10 September 2020).
Santon R.C. (2009) “Mist Fires and Explosions – An Incident Survey”, IChemE Hazards XXI
Conference, Manchester, UK, https://www.icheme.org/media/9551/xxi-paper-054.pdf (accessed
10 September 2020).
Zabetakis M.G. (1965) “Flammability characteristics of combustible gases and vapors”, Bulletin
627, Bureau of Mines, U.S. Government Printing Office,
https://apps.dtic.mil/dtic/tr/fulltext/u2/701576.pdf (accessed 10 September 2020).
23rd Annual Process Safety International Symposium
October 20-21, 2020 | College Station, Texas
Abstract
Dust dispersion during powder handling and processing is of great concern for both workers’
health and explosion risk. Dust emission locations in industry can vary during handling and
processing, while dust concentration sensing would require the installation of additional
equipment at every location prone to dust generation. A method of using a digital camera or
photograph to measure the dust concentration based on two target intensity values has been
developed at Purdue University. The method is based on the relationship between the
suspended dust concentration and the extinction coefficient. Calibrated equations have been developed
for cornstarch, grain dust, and sawdust. This method does not require any training and can be
integrated with security system cameras and/or other independent imaging sources.
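The general idea behind such an intensity-based measurement can be sketched as follows: the attenuation between two target intensities (with and without dust along the optical path) yields an extinction coefficient via the Bouguer-Lambert-Beer relation, which a calibrated relation then maps to a suspended dust concentration. The sketch below is a generic illustration with assumed numbers; it is not the calibrated correlation developed at Purdue for cornstarch, grain dust, or sawdust.

```python
# Generic illustration of an extinction-based dust concentration estimate.
# All numbers (intensities, path length, specific extinction) are assumed.
import math

def extinction_coefficient(i_dust, i_clear, path_length_m):
    """Extinction coefficient (1/m) from target intensities with and without dust."""
    return -math.log(i_dust / i_clear) / path_length_m

def dust_concentration(k_ext, specific_extinction_m2_per_g):
    """Concentration (g/m3), assuming a linear calibration C = K / k_specific."""
    return k_ext / specific_extinction_m2_per_g

i_clear = 200.0      # target pixel intensity with no dust (illustrative)
i_dust = 120.0       # target pixel intensity through the dust cloud (illustrative)
L = 0.5              # optical path length through the cloud, m (illustrative)
k_specific = 0.005   # assumed mass-specific extinction, m2/g (illustrative)

K = extinction_coefficient(i_dust, i_clear, L)
print(f"Extinction coefficient: {K:.3f} 1/m")
print(f"Estimated dust concentration: {dust_concentration(K, k_specific):.1f} g/m3")
```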
Elaine S. Oran*
Department of Aerospace Engineering
Texas A&M University
College Station, TX 77843
[email protected]
Abstract
Unwanted explosions, especially those that evolve into deflagrations or detonations, often have
devastating consequences. They take lives, destroy homes and places of work, and many leave
behind severe and persisting health and economic problems. The creation of the new shock and
detonation tube facility at TAMU, HBT, is based on the principle that the more we understand
about the dynamics of explosion events and the controlling physics and chemistry of the fuels
and other energetic materials, the better we can develop ways to avoid the event or at least to
mitigate the damage. The HBT facility will consist of a large cylindrical channel (200 m x 20 m)
equipped with an evolving suite of accompanying diagnostics. The channel will be made of thick
steel so that it can withstand the strongest explosions, deflagrations, and detonations. The facility
will be used to examine explosion properties of materials ranging from hydrogen or natural gas
through to heavier hydrocarbons typical of petrochemicals, all in a range of initial conditions. It
will also be used to study multiphase effects, ranging from dispersed small reactive or inert
particles to larger-scale rubble. Another possible use for this facility is the study of ignition and
flame acceleration properties of materials typically present in woodland fires and even materials
processing by shock and detonation waves. This presentation will review the most recent state of
development of HBT and present a plan for dealing with safety and noise issues.
Abstract
Incidental release of flammable gases and liquids can lead to the formation of flammable vapor
clouds. When their concentrations are above the lower flammable limit (LFL), or ½ LFL for
conservative evaluation, fires and explosions can result with the presence of an ignition source.
The objective of this work is to develop highly efficient consequence models to accurately predict
the downwind maximum distance, minimum distance, and maximum vapor cloud width within the
flammable limit. In this study, a novel quantitative property-consequence relationship (QPCR)
model is proposed and constructed for the first time to accurately predict flammable dispersion
consequences in a machine learning and data-driven manner. A flammable dispersion database
consisting of 450 leak scenarios involving 41 flammable chemicals was constructed using PHAST
simulations. A state-of-the-art machine learning regression method, the extreme gradient boosting
algorithm, was implemented to develop the models. The coefficient of determination (R2) and root-
mean-square error (RMSE) were calculated for statistical assessment and the developed QPCR
models achieved satisfactory predictive capabilities. All the developed models have high accuracy,
with the overall RMSE of three models being 0.0811, 0.0741, and 0.0964, respectively. The
developed QPCR models can be used to obtain instant flammable dispersion estimations for novel
flammable chemicals and mixtures at much lower computational costs.
Incidental release of flammable materials may result in the formation of flammable vapor
clouds. For areas in which flammable gas concentrations are above the lower flammability limit
(LFL), fires and explosions can take place in the presence of an ignition source, which can be
highly hazardous to the process plant and nearby communities. The deadly hydrocarbon vapor
cloud explosion that occurred at the BP Texas City refinery fifteen years ago caused fifteen fatalities
and injured 180 others, which truly showed the disastrous consequences of flammable dispersion and
explosion (Holmstrom et al., 2006). Flammable dispersion consequence analysis plays a major
role in the prevention and mitigation of fire and explosion incidents. In emergency response
planning, it is also necessary to conduct consequence analysis for large-scale flammable chemical
leaks. When assessing the consequences of flammable dispersion, the areas under the ½ LFL and
LFL are critical criterion for determining the safe areas that have been identified in various
flammable dispersion research works (Birch et al., 1989). Webber (2002) investigated the
possibility of reducing the ½ LFL threshold, finding that the criterion should not be reduced and
that the ½ LFL criterion should apply to instantaneous concentrations.
The consequences of flammable dispersion can be predicted using empirical, computational
fluid dynamics (CFD) or integrated models. Typical empirical models for gas dispersion include
the Pasquill-Gifford model and Britter-McQuaid model, which allow rapid predictions with
acceptable accuracy (McQuaid, 1982). However, the Pasquill-Gifford or Gaussian dispersion
models apply only to neutrally buoyant dispersions of gases. Furthermore, the Britter-McQuaid
model of dense gas dispersion is unable to account for the effects of parameters such as release
height, ground roughness, and wind speed profiles (Crowl and Louvar, 2019). CFD models, such
as ANSYS Fluent, CFX, and FLACS, are capable of capturing the influence of surface roughness,
but are time-consuming and come at significant computational costs (Li et al., 2020; Middha et
al., 2010). This makes them particularly cumbersome in emergencies in which instant estimation
should be available. Integral models such as HEGADIS, NCAR, and DRIFT, which take advantage
of both empirical and CFD methods, have been widely used as a result of their higher prediction
accuracies and lower computational costs (Gant et al., 2018). However, these models are limited
to free-field dispersion with no obstructions and are generally not applicable to situations involving
complex geometries (Dasgotra et al., 2018).
The consequence modeling package PHAST (Process Hazard Assessment Software Tool) is
a popular consequence analysis and risk assessment tool that integrates dispersion models to
examine the progress of potential incidents from initial release to far-field dispersion, including
the modeling of rainout and subsequent vaporization (Witlox et al., 2014). PHAST dispersion
simulation results have been widely validated against various experimental results, including both
buoyant and heavy gases, showing satisfactory agreement between simulated and experimental
results (Witlox et al., 2014; Witlox et al., 2018).
Machine learning regression methods have shown significant capabilities for use in data
mining and big data analysis in recent years (Jiao et al., 2020a; Shen et al., 2020). There have been
increasing applications of machine learning algorithms for hazardous material ratings (Yuan et al.,
2020; Jiao et al., 2020b), fire and explosion-related property prediction (Cao et al. 2018; Jiao et
al., 2019a; Yuan et al., 2019), and consequence analysis (Sun et al., 2019; Jiao et al., 2020c).
Regarding current machine learning implementations in gas dispersion modeling, Wang et al. (2015)
used PHAST simulation results to validate a neural network-based real-time estimation of
chlorine dispersion using gas detector data, illustrating the practicality of PHAST for validating
proposed dispersion models. Gwak and Rho (2019) compared three different machine learning
techniques in predicting CO2 dispersion in a lab environment. However, these works only
examined the techniques using one or two chemicals (chlorine, carbon dioxide, or sulfur dioxide)
under limited leaking conditions, which is not sufficient to construct a broadly applicable model
for real-world applications.
One of the major challenges to be addressed is the development of a rapid, universally
applicable prediction model that links the leak conditions and specific properties of flammable
chemicals to the dispersion distance data. However, this relationship is highly non-linear and the
underlying interaction mechanism remains unknown. Machine learning-based quantitative
structure-property relationship (QSPR) analysis is widely used for fire and explosion related
property predictions, including flammability limits, autoignition temperature, and flash point, and
shows higher accuracy and reliability compared with other prediction methods. QSPR can reveal
mathematical relationships between structural attributes and the property of interest at the
quantum chemistry level, which serves to bridge the gap between micro-scale quantum structure
descriptors and relatively macro-scale properties. Furthermore, machine learning algorithms can
also handle the high non-linearity between input features and output variables, which makes
machine learning-based quantitative prediction models suitable for linking leak conditions and
chemical properties to dispersion consequences.
In order to develop a robust predictive tool for fast flammable dispersion consequence
analysis covering a wide range of flammable chemicals and leak conditions, machine learning-based
quantitative property-consequence relationship (QPCR) models should be developed to better
assist consequence analysis, risk assessment, and emergency response planning. In this study,
PHAST flammable dispersion simulations of 450 different leak scenarios were conducted
involving 41 flammable chemicals commonly present in the chemical and oil and gas industries.
Three key flammable dispersion parameters, namely the maximum downwind distance, minimum
downwind distance, and maximum vapor cloud width within the LFL and ½ LFL criteria, were
obtained from the simulations to construct a comprehensive database with nearly 60,000 data
points. A state-of-the-art machine learning technique, gradient boosting regression (GBR), was
implemented using the Xgboost package in R to correlate the property descriptors with the
designated dispersion distances and construct the flammable dispersion QPCR models.
2. Methodology
2.1 Database
Database compilation is the first step in big data analysis. In this study, the flammable dispersion
consequence database was generated using PHAST simulations. The leak condition parameters
consist of several components: source conditions (release material, location, quantity, etc.), weather
conditions (wind speed, atmospheric temperature, humidity, etc.), and leak conditions (leak size).
Source conditions were determined based on specific petroleum process operating conditions. In
total, 450 leak scenarios of 41 flammable chemicals under both the ½ LFL and LFL criteria were
simulated to construct a database with 19,579 valid flammable dispersion scenarios, since some
scenarios did not result in the generation of a flammable cloud. Each simulation result contains
three key flammable dispersion parameters: maximum downwind distance, minimum downwind
distance, and maximum vapor cloud width. Of the 19,579 scenarios employed in the QPCR model
development, 75% of the data points (14,684 scenarios) were randomly selected as the training set
and the remaining 25% (4,895 scenarios) were grouped into the test set to validate the accuracy of
the model. Using a single data source for model development ensures the model's prediction
consistency and accuracy.
In order to visualize the performance of the developed QPCR models, two types of
performance evaluation plots are shown in Fig. 1. The plot of QPCR predicted values vs. actual
values is shown in Fig. 1a, and the prediction residual plot is shown in Fig. 1b, with test set
statistical values of R²=0.9838 and RMSE=0.1556. All data points are evenly distributed along the
diagonal baseline of Fig. 1a, which indicates that the predicted distance is very close to the actual
distance. Furthermore, the majority of data residuals shown in Fig. 1b are within ±0.25 and very
close to the zero baseline, which shows that the developed QPCR method provides a good estimate
of the flammable dispersion maximum downwind distance.
Fig. 1. Maximum Distance QPCR Model Performance Plot: (a) Predicted Value vs. PHAST
Simulation Value, (b) Residual Plot
3.2 Minimum Distance
The plot of QPCR prediction values vs. actual values is shown in Fig. 2a, with all data points
evenly distributed along the diagonal baseline. Since the minimum distance values are relatively
lower than those of the maximum distance, the prediction residual plot shown in Fig. 2b shows
that the minimum distance prediction errors are more concentrated around the zero baseline
compared with the maximum distance model. The test set statistical values of R²=0.9837 and
RMSE=0.1466 also indicate that the minimum distance model performs slightly better than
the maximum distance model.
Fig. 2. Minimum Distance QPCR Model Performance Plot: (a) Predicted Value vs. PHAST
Simulation Value, (b) Residual Plot
3.3 Maximum Width
The plot of QPCR predicted values vs. actual values is shown in Fig. 3a. Since the maximum
width data is the least normally distributed according to the histogram, the test set data points
are more sparsely distributed along the diagonal baseline compared to the maximum and minimum
distance models. The prediction residual plot in Fig. 3b shows that more points are located between
0.25 to 0.50 and -0.25 to -0.50 compared to the previous two models. However, the test set
statistical values of R2=0.9869 and RMSE=0.1973 demonstrate that the prediction of the maximum
width QPCR model still has satisfactory accuracy.
Fig. 3. Maximum Width QPCR Model Performance Plot: (a) Predicted Value vs. PHAST
Simulation Value, (b) Residual Plot
The statistical assessment values of the training set, test set, and overall dataset for the three QPCR
models are summarized in Table 1, and the model prediction results can be found in the Supplementary
Table associated with this paper. All three models have very high accuracy in predicting flammable
dispersion downwind distances, with the overall dataset R² being higher than 0.995. The fact that the
training set R² for each of the three models is higher than 0.999 illustrates the power of gradient
boosting in capturing small details within the dataset when training machine learning prediction models.
The independent test set validation also confirmed the superior performance of the flammable
dispersion QPCR models, with the test set RMSE lower than 0.2.
Table 1. Statistical assessment of the training set, test set, and overall dataset for the three QPCR models
4. Conclusions
In this study, a database was constructed with nearly 20,000 dispersion simulations of 41
flammable chemicals using PHAST, and the consequence analysis results were used to construct
and validate the QPCR models. The GBR method was employed to provide a reliable prediction
for flammable dispersion downwind distances. The GBR-based QPCR models showed
significantly high accuracy for prediction of dispersion downwind distances, with the test set
RMSE for maximum distance, minimum distance, and maximum width having values of 0.1556,
0.1466, and 0.1873, respectively. These prediction models illustrate the power of QPCR and
machine learning for assisting with consequence analysis and emergency response planning with
much higher efficiency and accuracy.
Unlike empirical methods, the GBR-based QPCR models do not provide explicit equations for
intuitive application, which is one of the shortcomings of the QPCR method. However, there is an
increasing trend in the availability and implementation of digital tools in consequence analysis and
emergency response planning. Thus, this problem can be overcome by developing a built-in
software package that provides instant predictions of hazardous areas with very high accuracy and
reliability. The models also showed potential for joint application with QSPR models of the LFL
and vapor density, so that they can be expanded to novel chemicals without measured properties.
Furthermore, the database must be further expanded to include the detailed influences of weather
and environmental conditions, including weather categories, wind speeds, and ground temperatures.
Additionally, methods to quantify the influence of obstacles must be included so as to allow more
universal applications of the QPCR models.
References
Birch, A. D., Brown, D. R., Fairweather, M., Hargrave, G. K., 1989. An experimental study of a
turbulent natural gas jet in a cross-flow. Combust Sci. Technol., 66(4-6), 217-232.
Breiman, L., 1997. Arcing the edge (Technical Report 486). Statistics Department. University of
California at Berkeley, Berkeley, CA.
Crowl, D. A., Louvar, J. F., 2019. Chemical process safety: fundamentals with applications.
Pearson Education.
Dasgotra, A., Teja, G. V., Sharma, A., Mishra, K. B., 2018. CFD modeling of large-scale
flammable cloud dispersion using FLACS. J. Loss Prev. Process Ind., 56, 531-536.
Friedman, J. H., 2002. Stochastic gradient boosting. Comput. Stat. Data Anal., 38(4), 367-378.
Gant, S., Weil, J., Delle Monache, L., McKenna, B., Garcia, M. M., Tickle, G., Tucker, H.,
Stewart, J., Kelsey, A., McGillivray, A., Batt, R., Witlox, H., Wardman, M., 2018. Dense
gas dispersion model development and testing for the Jack Rabbit II phase 1 chlorine release
experiments. Atmos. Environ., 192, 218-240.
Gwak, K. M., Rho, Y. J., 2019, April. Experimental machine learning study on CO2 gas
dispersion. In 2019 IEEE 9th Symposium on Computer Applications & Industrial Electronics
(ISCAIE) (pp. 358-363). IEEE.
Holmstrom, D., Altamirano, F., Banks, J., Joseph, G., Kaszniak, M., Mackenzie, C., Shroff, R.,
Cohen, H., Wallace, S., 2006. CSB investigation of the explosions and fire at the BP Texas City
refinery on March 23, 2005. Process Saf. Prog., 25(4), 345-349.
Jiao, Z., Escobar-Hernandez, H., Parker, T., Wang, Q., 2019a. Review of recent developments of
quantitative structure-property relationship models on fire and explosion related properties,
Process Saf. Environ. Prot., 129, 280-290.
Jiao, Z., Yuan, S., Zhang, Z., Wang, Q., 2019b. Machine learning prediction of hydrocarbon
mixture lower flammability limits using quantitative structure‐property relationship models.
Process Saf. Prog., e12103.
Jiao, Z., Sun, Y., Hong, Y., Parker, T., Hu, P., Mannan, M. S., Wang, Q., 2020a. Development of
Flammable Dispersion Quantitative Property-Consequence Relationship (QPCR) Models Using
Extreme Gradient Boosting. Industrial & Engineering Chemistry Research.
Jiao, Z., Ji, C., Yuan, S., Zhang, Z., Wang, Q., 2020b. Development of Machine Learning Based
Prediction Models for Hazardous Properties of Chemical Mixtures. Journal of Loss Prevention in
the Process Industries, 104226.
Jiao, Z., Yuan, S., Zhang, Z., Ji, C., Wang, Q., 2020c, March. Machine Learning Based
Quantitative Structure-Property Relationship Prediction of Lower Flammability Limit. In 2020
Spring Meeting & 16th Global Congress on Process Safety. AIChE.
Shen, R., Jiao, Z., Parker, T., Sun, Y., Wang, Q., 2020. Recent application of Computational
Fluid Dynamics (CFD) in process safety and loss prevention: A review. Journal of Loss
Prevention in the Process Industries, 104252.
Li, X., Abbassi, R., Chen, G., Wang, Q., 2020. Modeling and analysis of flammable gas
dispersion and deflagration from offshore platform blowout. Ocean Eng., 201, 107146.
McQuaid, J., 1982. Future directions of dense-gas dispersion research. J. Hazard Mater., 6(1-2),
231-247.
Middha, P., Hansen, O. R., Grune, J., Kotchourko, A., 2010. CFD calculations of gas leak
dispersion and subsequent gas explosions: validation against ignited impinging hydrogen jet
experiments. J. Hazard Mater., 179(1-3), 84-94.
Nielsen, D., 2016. Tree boosting with xgboost-why does xgboost win "every" machine learning
competition? (Master's thesis, NTNU).
Sun, Y., Wang, J., Zhu, W., Yuan, S., Hong, Y., Mannan, M. S., Wilhite, B., 2019. Development
of consequent models for three categories of fire through artificial neural networks. Ind. Eng.
Chem. Res., 59(1), 464-474.
Wang, B., Chen, B., Zhao, J., 2015. The real-time estimation of hazardous gas dispersion by the
integration of gas detectors, neural network and gas dispersion models. J. Hazard Mater., 300,
433-442.
Webber, D., 2002. On defining a safety criterion for flammable clouds. Health and Safety
Laboratory.
Witlox, H. W., Fernández, M., Harper, M., Oke, A., Stene, J., Xu, Y., 2018. Verification and
validation of Phast consequence models for accidental releases of toxic or flammable chemicals
to the atmosphere. J. Loss Prev. Process Ind., 55, 457-470.
Witlox, H. W., Harper, M., Oke, A., Stene, J., 2014. Validation of discharge and atmospheric
dispersion for unpressurised and pressurised carbon dioxide releases. Process Saf. Environ. Prot.,
92(1), 3-16.
Yuan, S., Jiao, Z., Quddus, N., Kwon, J. S.-I., Mashuga, C. V., 2019. Developing quantitative
structure–property relationship models to predict the upper flammability limit using machine
learning. Ind. Eng. Chem. Res., 58(8), 3531-3537.
clustering algorithms
Abstract
When the shipping fuel selection problem is on the table, safety factors should always be the top
priority. Currently, liquid flammability classification relies mainly on flash point, and the risk
criteria are largely dependent on the two-dimensional matrix of consequence and probability.
However, liquefied marine fuel combustion has its own unique characteristics, making it less
consistent with the common classification standards.
This paper aims to provide a more reasonable criterion for classifying flammable liquids in
compression ignition engines for further application to the safety evaluation of promising marine fuel
options. Besides the widely recognized liquid flammability characteristics, this study also identifies
contributors to in-cylinder flame propagation and liquid aerosol formation. Two unsupervised
machine learning clustering algorithms, k-means and spectral clustering, are then employed to find
the specific patterns of the three safety features in the collected database of liquid organic
compounds. To consider both cluster cohesion and separation, the global mean silhouette value is
used to find the optimal number of clusters and to evaluate the clustering performance of the
proposed models. The results show that spectral clustering outperforms the k-means clustering
algorithm in classifying the risk ratings of liquid flammability, flame propagation and aerosol
formation. Moreover, principal component analysis and star coordinate diagrams are presented to
visualize the high-dimensional data in two-dimensional graphs. Finally, the overall liquid safety
performance is evaluated by a novel rating system, the liquid in-cylinder combustion risk index
(LICRI), with weight values determined by the information entropy approach.
Keywords: Marine fuel safety; In-cylinder flame propagation; Liquid aerosolization; Spectral
clustering algorithm; High dimensional data visualization
1. Introduction
Safety is the top priority in the selection of promising marine fuels. Flammability and explosion hazards, which
have been well studied, are the major safety concerns of the tank-to-propeller (TTP) process
aboard ships. The inherent flammable properties of liquids include flash point, autoignition point,
upper/lower flammability limits and boiling point. Currently, the liquid combustion level is commonly
determined by flash point. NFPA 704 (National Fire Protection Association, 2017), the widely recognized
liquid flammability classification standard, categorizes liquids into five classes, and Figure 1 shows the
NFPA 704 standard and the flammability ratings of the promising marine fuels according to the NFPA fire
diamond.
Figure 1. NFPA 704 liquid flammability rating standard (1a); NFPA diamond-based
flammability classification for promising marine fuels (1b)
Can we conclude that LNG, LPG and liquefied hydrogen share the most hazardous flammability feature?
Maybe not. Many concerns have been raised about the hazards of high-flash-point liquids because of the
aerosolization or atomization phenomenon; the study published by Bowen and Shirvill (Bowen and Shirvill,
1994) highlighted that the liquid aerosolization hazard could be minimized by adopting the minimum
practicable pressure for operating systems. However, the aerosolization effect of fuel oils was still
underestimated, and it became the root cause of many incidents in the shipping industry (Kohlbrand, 1991;
Santon, 2009). Therefore, the flash-point-driven liquid flammability standard may be too simple to classify
the safety level of marine fuel options, especially when considering the common combustion scenario of
marine fuels.
Most ships employ a 2-stroke diesel compression-ignition (CI) engine as their main power source
(Klett et al., 2017). Unlike the Otto cycle, the diesel internal combustion engine uses a higher compression
ratio, 15 to 20, to ignite the marine fuel (Sivaganesan and Chandrasekaran, 2016). The flame in the cylinder
of the CI engine initially propagates as laminar and later becomes turbulent. In addition, heavy fuel oil is
commonly heated to bring its viscosity below 20 cSt in order to achieve proper aerosolization (MAN Diesel
& Turbo, 2014). As early as 1955, Eichhorn (Eichhorn, 1955) first showed that aerosols could lead to an
explosion and also pointed out that liquid aerosol flammability, with its fuzzy boundaries, is completely
different from vapor flammability limits. Moreover, we experimentally confirmed that n-dodecane in the
aerosol state can be ignited below its flash point. As shown in Fig. 2, there is a pressure rise when the
equivalence ratio decreases to 0.1, although neither n-octane nor n-dodecane vapor should be ignitable,
since the LFLs of n-octane and n-dodecane vapor are 0.57% and 0.54%, respectively, illustrating that the
liquid aerosol has a wider flammability range than the bulk liquid (Yuan et al., 2019).
Figure 2. Pmax of n-octane aerosol and n-dodecane aerosol explosions for different
equivalence ratios (Yuan et al., 2019)
However, liquid aerosolization and flame propagation, which make bulk liquids more hazardous with
respect to combustion and explosion in the cylinder of a CI engine, have not been widely recognized in
industry or academia. It is therefore necessary to take the liquid flammability, flame propagation and liquid
aerosolization effects into consideration to determine the safety level of liquefied fuel options in CI engines.
To fill the gaps in categorizing promising shipping fuels from the perspective of chemical safety and process
systems engineering, the study of TTP process safety is carried out in two steps: the first step is to find
optimal models for the identified contributors to liquid aerosol formation, and the next step is to adopt
clustering and classification approaches via unsupervised machine learning (ML) algorithms to classify the
safety level of promising marine fuels.
In this study, the major inherent properties governing liquid flammability, flame propagation and aerosol
formation are identified first; two ML clustering algorithms are then executed to classify the collected
database into different groups, and a new flammability rating, the liquid in-cylinder combustion risk index
(LICRI), is calculated to show the overall liquid safety preference, which can be applied as a reasonable
reference when considering the marine fuel selection issue in the TTP process.
The field of fluid flammability characteristics has been well studied, while few works have focused on
identifying the inherent properties governing liquid in-cylinder flame propagation and liquid aerosolization.
Thus, it is critical to identify the leading factors for liquid fuel aerosolization and flame propagation so that
a reasonable liquefied fuel safety criterion for CI engines can be established accordingly.
This work adopts AIT, FP and the flammability range (FR), i.e., the range between the lower and upper
flammability limits, as the contributors to the liquid flammability matrix. Since the liquid in-cylinder flame
combines features of both premixed laminar and turbulent flames, the theoretical models of these two
flames are analyzed to identify the significant parameters.
The well known "Two zones" model proposed by Mallard & Le Chatelier (Mallard and Le Chatelier, 1883)
is:

$$S_l = \sqrt{\alpha \dot{\omega} \left( \frac{T_b - T_i}{T_i - T_u} \right)} \tag{1}$$

$$\alpha = \frac{k}{\rho C_p} \tag{2}$$
where α is the thermal diffusivity, ω̇ is the reaction rate, k is the thermal conductivity, C_p is the specific heat
capacity and ρ is the density. The relationship between laminar and turbulent flame speed presented by
Peters (Peters, 2000), shown in Equation 3, has been widely accepted and has shown good performance.
$$S_T = S_l + \mu' \left\{ -\frac{a_4 b_3^2}{2 b_1} Da + \left[ \left( \frac{a_4 b_3^2}{2 b_1} Da \right)^2 + a_4 b_3^2\, Da \right]^{1/2} \right\}, \quad \text{where } Da = \frac{S_l\, l}{\mu'\, \delta_L} \tag{3}$$
As the above equation shows, μ′ is the turbulence intensity; b_1, b_3 and a_4 are the turbulence modeling
constants with values of 2.0, 1.0 and 0.78, respectively; Da is the Damköhler number; l is the turbulence
integral length scale; and δ_L, which denotes the flame thickness, is a function of the heat capacity, thermal
conductivity, density and laminar flame speed. Hence, both the laminar and turbulent flame equations
indicate that the governing variables of flame propagation for CI engines are the heat capacity (HC), liquid
density (LD) and liquid thermal conductivity (LTC), and these three variables construct our liquid
in-cylinder flame propagation matrix.
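To make the calculation sequence of Eqs. (1)-(3) concrete, the short sketch below evaluates the Mallard & Le Chatelier estimate and the Peters correlation; all numerical inputs are hypothetical placeholders chosen only for illustration, not values from this work.

```python
# Illustrative evaluation of Eqs. (1)-(3). Every numerical input below is a
# placeholder chosen only to show the calculation order; none are values from
# this work.
import math

b1, b3, a4 = 2.0, 1.0, 0.78        # turbulence modeling constants quoted above

def laminar_flame_speed(alpha, omega_dot, T_b, T_i, T_u):
    """Mallard & Le Chatelier two-zone estimate, Eq. (1)."""
    return math.sqrt(alpha * omega_dot * (T_b - T_i) / (T_i - T_u))

def turbulent_flame_speed(S_l, u_prime, l_int, delta_L):
    """Peters correlation, Eq. (3), with Da = S_l * l_int / (u_prime * delta_L)."""
    Da = S_l * l_int / (u_prime * delta_L)
    c = a4 * b3**2 / (2.0 * b1) * Da
    return S_l + u_prime * (-c + math.sqrt(c**2 + a4 * b3**2 * Da))

S_l = laminar_flame_speed(alpha=2e-5, omega_dot=8e3, T_b=2200.0, T_i=1300.0, T_u=300.0)
S_T = turbulent_flame_speed(S_l, u_prime=2.0, l_int=0.01, delta_L=1e-4)
print(f"S_l = {S_l:.2f} m/s, S_T = {S_T:.2f} m/s")
```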
Many studies (Ballal and Lefebvre, 1979; Danis et al., 1988; Kiran Krishna et al., 2003; Polymeropoulos,
1984; Yuan et al., 2020, 2019) have pointed out that the key parameter determining liquid aerosolization is
the droplet size. Among all the theoretical mean diameters of aerosols, the Sauter mean diameter (SMD) is
the most commonly applied for heat transfer, combustion and dispersion modelling (K. Krishna et al., 2003).
In addition, the diesel engine fuel injector can be regarded as an electrospray type of aerosol generator. Most
studies conducted on pressure atomizers have focused on the type of injector used in compression ignition
engines (Lefebvre and McDonell, 2017). Two SMD formulae proposed by Harmon (Harmon, 1955) and
Elkotb (Elkotb, 1982) for plain-orifice pressure atomizers are adopted, with their variables defined as follows:
In Harmon's formula, d_o is the discharge orifice diameter, μ_L is the liquid kinematic viscosity, σ is the
surface tension, U_L is the liquid flow rate and ρ_L is the liquid density. In Elkotb's formula, σ is the surface
tension, ν_L is the liquid dynamic viscosity, ρ_L is the liquid density, ΔP_L is the liquid pressure differential
and ρ_A is the air density.
Liquid dynamic viscosity (LDV) and surface tension (ST), as inherent properties of fuels, are the determinant
parameters for the droplet size of liquid aerosols based on the above two formulae. For most practical fuels,
any change in dynamic viscosity is accompanied by a change in volatility, and Ballal and Lefebvre (Ballal
and Lefebvre, 1979) also indicated that the quenching distance depends on fuel volatility. Besides dynamic
viscosity, liquid vapor pressure (LVP) is an evident index of the volatility level of liquids. Therefore, the
identified contributors to liquid aerosolization are surface tension, liquid dynamic viscosity and liquid vapor
pressure.
$$\mathrm{LICRI} = \sum_{i=1,\; j \in \{1,2,\dots,n\}}^{3} \frac{W_i}{C_{i,j}} \tag{6}$$
As shown in the above equation, the weight values W_i for the three safety matrices should be determined
and normalized before implementing the ML clustering algorithms. The range of the LICRI is between 0 and
1; as the cluster numbers C_{i,j} increase, the LICRI value of a substance decreases, indicating a higher risk
of liquid in-cylinder combustion. In this study, the DIPPR 801 database ("DIPPR Project 801 - Full Version -
Physical Constants - Knovel," n.d.) is preprocessed to collect 703 valid organic compounds in the liquid state
under specific temperatures, with values for the nine-dimensional data, i.e., AIT, FP, FR, HC, LD, LTC, LVP,
ST and LDV.
Therefore, the LICRI can be updated after the weight vectors are determined; please refer to the supporting
information for calculation details.
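As a minimal sketch of this step, the snippet below computes entropy-based weights with the standard information-entropy weighting scheme (cf. Li et al., 2011) and then scores one substance with the reconstructed LICRI form of Eq. (6); the input matrix and the example cluster ratings are hypothetical placeholders, not the paper's data.

```python
# A minimal sketch of the information-entropy weighting step and the LICRI
# score, assuming the standard entropy weight method (cf. Li et al., 2011) and
# the reconstructed form LICRI = sum_i W_i / C_ij from Eq. (6). The data matrix
# and example cluster ratings are hypothetical placeholders.
import numpy as np

def entropy_weights(X):
    """X: (n_samples, n_criteria) non-negative matrix -> normalized weight vector."""
    P = X / X.sum(axis=0)                               # proportion of each sample per criterion
    P = np.where(P == 0, 1e-12, P)                      # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy of each criterion
    d = 1.0 - e                                         # degree of diversification
    return d / d.sum()

rng = np.random.default_rng(0)
criteria_data = rng.random((703, 3))                    # stand-in for one safety matrix
W = entropy_weights(criteria_data)

C = np.array([2, 4, 2])                                 # e.g., cluster ratings of one substance
licri = float((W / C).sum())
print("weights:", W, "LICRI:", licri)
```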
$$\min_{C_1, \dots, C_K} \left\{ \sum_{k=1}^{K} \frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^{p} \left( x_{ij} - x_{i'j} \right)^2 \right\} \tag{11}$$
where C_1, …, C_K denote clusters 1 to K, |C_k| is the number of samples in the kth cluster, p is the number of
predictors, and Σ_{j=1}^{p}(x_{ij} − x_{i'j})² represents the squared Euclidean distance between two observations
in the kth cluster. The study applies the Python package scikit-learn (Pedregosa et al., 2011) to run the k-means
clustering algorithm, and silhouette analysis (scikit-learn, 2017) is adopted to determine the number of
clusters.
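A minimal sketch of this procedure, assuming min-max normalized inputs and a placeholder data matrix, is shown below; it scans the 4- to 7-cluster k-means models and reports the global mean silhouette value for each.

```python
# A minimal sketch, using scikit-learn as cited above, of running k-means on a
# min-max normalized safety matrix and comparing the 4- to 7-cluster models by
# their global mean silhouette value; the input array is a placeholder.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import MinMaxScaler

X_raw = np.random.default_rng(1).random((703, 3))   # stand-in for one 3-feature safety matrix
X = MinMaxScaler().fit_transform(X_raw)             # normalize each feature to [0, 1]

for k in range(4, 8):                               # 4- to 7-cluster models, as in Table 1
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "clusters -> global mean silhouette =", round(silhouette_score(X, labels), 3))
```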
Step 1: Split the LICRI database into three data sets and normalize each data set;
Step 2: Construct the similarity graph with an adjacency matrix using the normal k-nearest-neighbor
approach, setting the number of neighbors to 15 and the sigma value to 1;
Step 3: Compute the normalized graph Laplacian L and its first k eigenvectors v_1, …, v_k;
Step 4: Set V ∈ ℝ^{n×k} as the matrix containing the vectors v_1, …, v_k and form the matrix U ∈ ℝ^{n×k}
by normalizing the matrix V:

$$u_{ij} = \frac{v_{ij}}{\sqrt{\sum_k v_{ik}^2}} \tag{12}$$

Step 5: Let y_i ∈ ℝ^k, i ∈ {1, 2, …, n}, be the vector corresponding to the i-th row of matrix U;
Step 6: Cluster the points (y_i)_{i=1,…,n} with the k-means algorithm into clusters C_1, …, C_k, as described
in the previous section.
The spectral clustering algorithm is implemented with the Matlab Statistics and Machine Learning Toolbox
and the fast and efficient spectral clustering package (Ingo, 2020). In contrast to the k-means algorithm,
which assumes convex cluster shapes, spectral clustering tends to be useful for hard non-convex problems
(Hocking et al., 2011).
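For readers working in Python, a rough counterpart of this workflow (not the authors' Matlab implementation) can be sketched with scikit-learn's SpectralClustering, using a 15-nearest-neighbor similarity graph as in Step 2 above; the input data below is a placeholder.

```python
# A rough Python counterpart (not the authors' Matlab implementation) using
# scikit-learn's SpectralClustering with a 15-nearest-neighbor similarity
# graph, as described in Step 2 above; the input data is a placeholder.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import MinMaxScaler

X = MinMaxScaler().fit_transform(np.random.default_rng(2).random((703, 3)))

sc = SpectralClustering(n_clusters=4,
                        affinity="nearest_neighbors",
                        n_neighbors=15,
                        assign_labels="kmeans",
                        random_state=0)
labels = sc.fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```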
Basically, the SC system is a curvilinear coordinate system. By defining the origin as a 2D point
O_n(x, y) = (o_x, o_y) and a series of n 2D vectors A_n = ⟨a⃗_1, a⃗_2, …, a⃗_i, …, a⃗_n⟩, the axes can be established
and mapped to the Cartesian coordinates (Kandogan, 2000). The data points D_j from a high-dimensional
dataset D are converted to data points D_j′ in the established 2D Cartesian coordinates by the sum of the
unit vectors u⃗_i = (u_{xi}, u_{yi}) along each coordinate, and the relationship between the original and converted
data points is shown below:

$$D_j'(x, y) = \left[ o_x + \sum_{i=1}^{n} u_{xi} \cdot (d_{ji} - min_i),\; o_y + \sum_{i=1}^{n} u_{yi} \cdot (d_{ji} - min_i) \right] \tag{13}$$

where $D_j = (d_{j0}, d_{j1}, \dots, d_{ji}, \dots, d_{jn})$ and $|\vec{u}_i| = \dfrac{|\vec{a}_i|}{max_i - min_i}$.
Moreover, the cluster projection diagram of any two response variables can also be applied to find the
optimal clustering model. The star coordinates and cluster projection diagrams are integrated with the
spectral clustering algorithm to visualize the clustered data of the LICRI database. The silhouette plot,
which has been widely applied to show the optimal number of clusters for unsupervised algorithms, is
employed to find the better clustering models for the three safety features between the k-means and spectral
clustering algorithms.
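The star-coordinates mapping of Eq. (13) is straightforward to implement; the sketch below assumes equally spaced axis directions (an assumption, since the axis layout is not specified here) and uses a placeholder data matrix for the nine descriptors.

```python
# A small sketch of the star-coordinates mapping in Eq. (13). The axis
# directions are assumed to be equally spaced around the origin (the layout is
# not specified here), and the data matrix is a placeholder for the nine
# descriptors (AIT, FP, FR, HC, LD, LTC, LVP, ST, LDV).
import numpy as np

def star_coordinates(D, origin=(0.0, 0.0)):
    n_samples, n_dims = D.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_dims, endpoint=False)
    axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit axis vectors a_i
    mins, maxs = D.min(axis=0), D.max(axis=0)
    U = axes / (maxs - mins)[:, None]          # |u_i| = |a_i| / (max_i - min_i)
    shifted = D - mins                         # (d_ji - min_i)
    return np.asarray(origin) + shifted @ U    # Eq. (13)

D = np.random.default_rng(3).random((703, 9))
xy = star_coordinates(D)
print(xy.shape)                                # (703, 2)
```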
3.5 Cluster validation criterion
In order to evaluate which algorithm has the better clustering performance, cluster validation criteria are
introduced in this work. Focusing on measuring the fit of a clustering structure itself, the study employs
internal validation indices that consider both cluster cohesion and cluster separation. Three common
internal measures of cluster validation were surveyed: the Dunn index (Dunn, 1974), the Davies-Bouldin
index (Davies and Bouldin, 1979) and the Silhouette index (Rousseeuw, 1987). All three indices have been
presented as robust strategies to predict the optimal clustering partitions. This work utilizes the Silhouette
index as the cluster validation criterion as a result of its interpretation and validation of consistency within
clusters of the liquid in-cylinder combustion database. The silhouette validation criterion uses a concise
graphical representation, the silhouette plot, to display how well each data point has been clustered.
Similar to the Dunn index, the higher the silhouette index, the better the clustering performance. For a
data point j in cluster C_j, the mean distance between j and the other data points within the same cluster is
defined as:
$$a(j) = \frac{1}{|C_j| - 1} \sum_{k \in C_j,\, k \neq j} d(j, k) \tag{14}$$

where d(j, k) is the distance between two data points j and k within the cluster C_j. The factor 1/(|C_j| − 1)
is used because the distance d(j, j) is excluded from the sum.
Then the distance of j to the points in some cluster C_l (C_l ≠ C_j) other than C_j is defined as:

$$b(j) = \min_{l \neq j} \frac{1}{|C_l|} \sum_{k \in C_l} d(j, k) \tag{15}$$
Next, the silhouette value of a data point j can be expressed as (Rousseeuw, 1987):

$$s(j) = \frac{b(j) - a(j)}{\max\{a(j),\, b(j)\}} \tag{16}$$

From the above expression, one can see that the range of s(j) is between -1 and 1: a value close to 1 indicates
that the data point has been assigned to the correct cluster, whereas a value close to -1 indicates that the data
point has been assigned to the wrong cluster. This study applies the mean silhouette value to show the
performance for a given cluster C_l, which is denoted as s̄_l:

$$\bar{s}_l = \frac{1}{|C_l|} \sum_{j \in C_l} s(j) \tag{17}$$

Finally, the overall performance of a specific model can be evaluated by the global silhouette index, i.e., the
mean of the average silhouette values over all clusters, with the number of clusters L:

$$\bar{S} = \frac{1}{L} \sum_{l=1}^{L} \bar{s}_l \tag{18}$$
This study employs silhouette analysis to study the separation distance between the final clusters of the
k-means and spectral clustering algorithms. The silhouette value is also adopted to determine the optimal
number of clusters for the LICRI database. The performance of the unsupervised clustering models is
evaluated through the visualization of the clustered data and the silhouette plot.
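Equations (14)-(18) can be evaluated directly from per-point silhouette values; the short sketch below does so with scikit-learn's silhouette_samples on placeholder data and labels.

```python
# A short sketch of Eqs. (14)-(18): per-point silhouette values s(j) are
# averaged within each cluster and then across clusters to give the global
# mean silhouette; the data and cluster labels below are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

X = np.random.default_rng(4).random((703, 3))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

s = silhouette_samples(X, labels)                                   # s(j), Eq. (16)
per_cluster = [s[labels == c].mean() for c in np.unique(labels)]    # mean s per cluster, Eq. (17)
global_mean = float(np.mean(per_cluster))                           # global index, Eq. (18)
print("per-cluster means:", per_cluster, "global mean:", round(global_mean, 3))
```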
Figure 7. Three-dimensional labelled scatter plot (7a) and the corresponding star
coordinate plot (7b)
As shown in Figure 7a, little information can be obtained, since the cluster data distribution in space
cannot be reflected by the 3-dimensional labelled scatter plot. The 3-dimensional labelled scatter plot is
therefore transformed using the theory of star coordinates (Kandogan, 2000), as illustrated in Figure 7b.
Clearly, this model performs well in clustering the liquid flammability indicators, i.e., FP, AIT and FR:
only a limited number of misclassified points are located in cluster 1 and cluster 2, while cluster 3 and
cluster 4 show excellent clustering behavior. Compared with the k-means clustering model with 2 principal
components, the spectral clustering model increases the average silhouette value from 0.209 to 0.426, and
the clustering effect in the labelled scatter plot improves considerably as well.
Figure 8. Optimal silhouette plot (8a) and optimal cluster star coordinate plot (8b) for
flame propagation
As illustrated in Figure 8, the silhouette plot and the optimal cluster star coordinate plot show fairly good
results in clustering HC, LD and LTC: the silhouette coefficient value is 0.479, and only a few outliers in
the star coordinate plot cross the boundaries of their clusters.
To summarize, the spectral clustering models outperform the k-means clustering models with two principal
components for each liquid combustion safety matrix, as shown in Table 1. The silhouette plots of the spectral
clustering models also show better performance than those of the k-means clustering models, consistent with
the global average silhouette values. The optimal cluster models are determined by the highest silhouette
coefficient value, and the 4-cluster models are selected as the optimal clustering models for the three liquid
in-cylinder combustion safety features: liquid flammability, flame propagation and aerosol formation.
Table 1. Average silhouette coefficient value of three liquid combustion safety matrices for
two clustering models
                                          Liquid Flammability             Flame Propagation               Aerosol Formation
                                          4-cl    5-cl    6-cl    7-cl    4-cl    5-cl    6-cl    7-cl    4-cl    5-cl    6-cl    7-cl
K-means clustering models with 2 PCAs     0.209   0.167   0.160   0.128   0.243   0.181   0.153   0.142   0.225   0.219   0.156   0.145
3-dimensional spectral clustering models  0.426   0.342   0.358   0.340   0.479   0.432   0.455   0.453   0.436   0.379   0.291   0.301
(4-cl = 4-cluster model, 5-cl = 5-cluster model, etc.)
By employing the optimal clustering models, the risk ratings for liquid flammability, flame propagation and
aerosol formation of each collected liquid are generated by the Matlab codes with the help of the calculated
information entropy values. The complete clustered data and the weight value calculation can be found in
the supporting information. The following table shows example liquids with NFPA 704 flammability levels
3 and 4 (National Fire Protection Association, 2017) but different ratings based on our proposed clustering
models.
Table 2. NFPA flammable and highly flammable liquids with different liquid in-cylinder
combustion ratings
Substance         NFPA Flammability   Liquid Flammability   Flame Propagation   Liquid Aerosolization   LICRI Value
Methanol          3                   2                     4                   2                       0.280
Ethanol           3                   4                     2                   3                       0.165
Methoxy acetone   3                   4                     4                   2                       0.103
As shown in Table 2, methanol and ethanol share the same level in the NFPA 704 standard, but the
proposed model indicates that methanol is a less risky marine fuel in terms of the overall liquid in-cylinder
combustion risk value than ethanol, although methanol has a high-risk rating for flame propagation.
Based on the LICRI values of the extracted substances in Table 2, the safety preferences can be ranked as
o-Ethyl aniline, methanol, di-(2-Chloroethoxy) methane, ethanol and methoxy acetone. In the same way,
more promising fuels can be evaluated from the LICRI values of the spectral clustering model.
5. Conclusion
In this study, a novel liquid combustion safety criterion for compression ignition engines is established
with acceptable clustering outputs, filling gaps in categorizing promising marine fuels from the perspective
of chemical safety and life cycle assessment. This work confirms that graph-theory-based spectral clustering
performs better than the k-means clustering algorithm on the non-convex liquid in-cylinder combustion
database. The 4-cluster models are finalized as the optimal ones for liquid flammability, flame propagation
and liquid aerosolization. The liquid organic compound database, comprising 703 substances, is clustered
into four groups for each of the three safety matrices, where a low overall rating indicates a high-level
hazard. The k-means algorithm integrates PCA to automatically optimize the weight values of each principal
component, whereas the spectral clustering algorithm employs the star coordinate plots only to reduce the
high-dimensional data to two dimensions in the visualization stage. The star coordinate plots provide a
great way to visualize high-dimensional data sets and can overcome the most obvious disadvantage of PCA,
the lack of interpretability.
Compared with the flash-point-dominated NFPA flammability standard, this criterion gives more
information on marine fuel combustion in CI engines. Also, the global mean silhouette value is reliable for
finding the optimal number of clusters in this work, and it can serve as a robust reference to quantitatively
evaluate the goodness of the clustering algorithms. Although the clustered results show good performance
for cluster 1 and cluster 4, the results still need to be improved regarding the boundary determination of
cluster 2 and cluster 3. Nevertheless, the unsupervised clustering models with information-entropy-determined
weight values give a more objective way to evaluate the risk associated with liquid in-cylinder combustion,
since they completely avoid human judgement in building the safety matrices. Future work may either
improve the clustering results by adopting other clustering techniques, such as hierarchical clustering and
density-based spatial clustering of applications with noise, or expand this work to the sustainability study
of promising marine fuel options, so that greener and safer marine fuel solutions can be found to meet the
long-term strategy of the International Maritime Organization.
Reference
Ballal, D.R., Lefebvre, A.H., 1979. Ignition and flame quenching of flowing heterogeneous fuel-air
mixtures. Combust. Flame. https://doi.org/10.1016/0010-2180(79)90019-1
Bowen, P.J., Shirvill, L.C., 1994. Combustion hazards posed by the pressurized atomization of high-
flashpoint liquids. J. Loss Prev. Process Ind. https://doi.org/10.1016/0950-4230(94)80071-5
Bürk, I., 2012. Spectral Clustering. University of Stuttgart. https://doi.org/10.1201/9781315373515-8
Danis, A.M., Namer, I., Cernansky, N.P., 1988. Droplet size and equivalence ratio effects on spark
ignition of monodisperse N-heptane and methanol sprays. Combust. Flame.
https://doi.org/10.1016/0010-2180(88)90074-0
Davies, D.L., Bouldin, D.W., 1979. A Cluster Separation Measure. IEEE Trans. Pattern Anal. Mach.
Intell. https://doi.org/10.1109/TPAMI.1979.4766909
DIPPR Project 801 - Full Version - Physical Constants - Knovel [WWW Document], n.d. URL
https://app.knovel.com/web/view/itable/show.v/rcid:kpDIPPRPF7/cid:kt00CZDUQI/viewerType:itb
le//root_slug:physical-constants/url_slug:physical-constants?filter=table&b-toc-
cid=kpDIPPRPF7&b-toc-root-slug=&b-toc-url-slug=physical-constants&b-toc-title=DIPPR%25
(accessed 3.28.20).
Dunn, J.C., 1974. Well-separated clusters and optimal fuzzy partitions. J. Cybern.
https://doi.org/10.1080/01969727408546059
Eichhorn, J., 1955. Careful! Mist can explode. Pet. Refin. 34, 194–196.
Elkotb, M.M., 1982. Fuel atomization for spray modelling. Prog. Energy Combust. Sci. 8, 61–91.
https://doi.org/10.1016/0360-1285(82)90009-0
Harmon, D.B., 1955. Drop sizes from low speed jets. J. Franklin Inst. https://doi.org/10.1016/0016-
0032(55)90098-3
Hocking, T.D., Joulin, A., Bach, F., Vert, J.-P., 2011. Clusterpath: An Algorithm for Clustering
using Convex Fusion Penalties, 1–8.
Ingo, 2020. Fast and efficient spectral clustering,. Retrieved May 9, 2020. [WWW Document]. MATLAB
Cent. File Exch. URL https://www.mathworks.com/matlabcentral/fileexchange/34412-fast-and-
efficient-spectral-clustering
Kandogan, E., 2000. Star coordinates: A multi-dimensional visualization technique with uniform
treatment of dimensions. Proc. IEEE Inf. Vis. Symp. Late Break. Hot Top.
https://doi.org/10.1.1.4.8909
Klett, D.E., Afify, E.M., Srinivasan, K.K., Jacobs, T.J., 2017. Internal combustion engines, in: Energy
Conversion, Second Edition. https://doi.org/10.1201/9781315374192
Kohlbrand, H.T., 1991. Case history of a deflagration involving an organic solvent/oxygen system below
its flash point. Plant/operations Prog. https://doi.org/10.1002/prsb.720100110
Krishna, K., Kim, T.K., Kihm, K.D., Rogers, W.J., Mannan, M.S., 2003. Predictive correlations for
leaking heat transfer fluid aerosols in air. J. Loss Prev. Process Ind. https://doi.org/10.1016/S0950-
4230(02)00091-8
Krishna, Kiran, Rogers, W.J., Mannan, M.S., 2003. The use of aerosol formation, flammability, and
explosion information for heat-transfer fluid selection, in: Journal of Hazardous Materials.
https://doi.org/10.1016/S0304-3894(03)00273-5
Lefebvre, A.H., McDonell, V.G., 2017. Atomization and sprays, Atomization and Sprays.
https://doi.org/10.1201/9781315120911
Li, X., Wang, K., Liuz, L., Xin, J., Yang, H., Gao, C., 2011. Application of the entropy weight and
TOPSIS method in safety evaluation of coal mines, in: Procedia Engineering.
https://doi.org/10.1016/j.proeng.2011.11.2410
Likas, A., Vlassis, N., J. Verbeek, J., 2003. The global k-means clustering algorithm. Pattern Recognit.
https://doi.org/10.1016/S0031-3203(02)00060-2
Luxburg, U. von, 2006. A tutorial on Spectral Clustering. https://doi.org/10.1103/PhysRevE.80.056117
Mallard, E., Le Chatelier, H.L., 1883. Thermal Model for Flame Propagation, in: Annals of Mines.
https://doi.org/10.1029/90JD02231
MAN Diesel & Turbo, 2014. Guidelines for Operation on Fuels with less than 0.1% Sulphur. Serv. Lett.
SL2014-593 1–24.
National Fire Protection Association, 2017. NFPA 704, Standard System for the Identification of the
Hazards of Materials for Emergency Response.
Ng, A.Y., Jordan, M.I., Weiss, Y., 2002. On spectral clustering: Analysis and an algorithm, in: Advances
in Neural Information Processing Systems.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M.,
Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M.,
Perrot, M., Duchesnay, É., 2011. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res.
Peters, N., 2000. Turbulent Combustion, Turbulent Combustion.
https://doi.org/10.1017/cbo9780511612701
Polymeropoulos, C.E., 1984. Flame Propagation in Aerosols of Fuel Droplets, Fuel Vapor and Air.
Combust. Sci. Technol. https://doi.org/10.1080/00102208408923807
Rousseeuw, P.J., 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis.
J. Comput. Appl. Math. https://doi.org/10.1016/0377-0427(87)90125-7
Santon, R.C., 2009. Mist fires and explosions - An incident survey, in: Institution of Chemical Engineers
Symposium Series.
scikit-learn, 2017. Selecting the number of clusters with silhouette analysis on KMeans clustering [WWW
Document]. Scikit-learn.
Sivaganesan, S., Chandrasekaran, M., 2016. Impact of various compression ratio on the compression
ignition engine with diesel and mahua biodiesel. Int. J. ChemTech Res.
Yuan, S., Ji, C., Monhollen, A., Kwon, J.S.I., Mashuga, C., 2019. Experimental and thermodynamic study
of aerosol explosions in a 36 L apparatus. Fuel. https://doi.org/10.1016/j.fuel.2019.02.078
Yuan, S., Zhang, Z., Sun, Y., Kwon, J.S.I., Mashuga, C. V., 2020. Liquid flammability ratings predicted
by machine learning considering aerosolization. J. Hazard. Mater. 386, 121640.
https://doi.org/10.1016/j.jhazmat.2019.121640
BakerRisk has performed vented deflagration testing of congested enclosures over a range of
configurations, congestion levels and fuels. This paper provides a comparison of the measured
flame jetting distances to predictions made using standard methods commonly used to calculate
the associated hazard zone. These methods include the National Fire Protection Association
Standard on Explosion Protection by Deflagration Venting (NFPA 68, [2]), the British Standard’s
Gas Explosion Venting Protective Systems (EN 14994, [3]) and computational fluid dynamics
(CFD) analysis.
Nine test series were carried out using BakerRisk’s Deflagration Load Generator (DLG) test rig.
The DLG is 48-feet wide × 24-feet deep × 12-feet tall, yielding a total volume of 13,800 ft3
(391 m3), and is enclosed by three solid steel walls, a roof, and floor. The rig vents through one
of the long walls (i.e., 48-foot × 12-foot). The venting face was sealed with a 6-mil (0.15 mm)
thick plastic vapor barrier for these tests to allow for the formation of the desired fuel-air mixture
throughout the rig. Both slightly hyper-stoichiometric propane and lean hydrogen mixtures have
been tested in the DLG. Congestion was provided by an array of vertical cylinders. A range of
congestion levels and fill fractions were tested. DLG testing was performed with and without vent
panels present.
Flame jetting distances from the venting face of the DLG were measured using high-speed video.
Flame jetting distances were predicted using the Fireball Dimensions calculation from NFPA 68
and the Flame Effects calculation from EN 14994. Blind (i.e., pre-test) simulations were also
performed using the FLACS CFD code [1]. The flame jetting distance in the CFD simulation was
taken as the distance from the DLG vent to where the gas temperature dropped below a specified
value; the predicted distance for the fuel concentration to drop below half the lower flammability
limit (LFL) was also evaluated as a check on the predicted jetting distance.
Keywords: Fireball, Vented Deflagration, Testing, CFD, NFPA 68, EN 14994, Blast Effects
Introduction
The tests described in this paper were performed in three separate test programs [4, 5 and 6],
referred to herein as test programs 1, 2 and 3 (T1, T2, T3). Nine separate test series were
performed, as summarized in Table 1. Each test series in T1 (e.g., T1-A) consisted of three tests
and each series in T2 and T3 consisted of two tests, except for T3-B which consisted of one test.
The primary objectives of T1 and T2 were to (1) characterize vented deflagration blast loads and
(2) compare these loads to those based on standard prediction methods. The primary objective of
T3 was to gather vented deflagration data for validation of numerical models [7]. A comparison
of the measured flame jetting distances was made with predictions based on the correlations
provided in NFPA 68 and EN 14994, and to values calculated using the FLACS CFD code.
Table 1. Combined Test Matrix
Test Program | Test Series | Fuel | Fuel Concentration | Flammable Volume | Congested Volume | Obstacle to Enclosure Surface Area Ratio (Ar) | Vent Parameters
A 6 mil plastic
100%
B 100% 0.32 20-gauge
1
C 50% steel panels
D 25% 0.10
Propane 4.33%
A 0.39
2 B 50% 0.27
100% 6 mil plastic
C 0.15
A
3 100% 0.38
B01 Hydrogen 20%
Background
A vapor cloud explosion (VCE) is classified as a deflagration if the flame propagates through the
unburned fuel-air mixture at a burning velocity less than the speed of sound. The confinement and
congestion associated with the volume encompassed by a flammable cloud affect the flame speed,
which governs the resulting blast load. Confinement refers to solid surfaces that prevent free
expansion of the expanding gas in one or more dimensions (e.g., solid walls, roof, etc.).
Congestion refers to obstacles in the path of the flame that generate turbulence (e.g., the vertical
cylinders in the rig used in these tests). Turbulence increases both the combustion rate per unit
surface area as well as the flame surface area.
During a deflagration, the unburned portion of the flammable cloud is pushed ahead of the flame
as the product gases expand. Cloud expansion is a function of the flame speed and the initial
flammable cloud volume. Increasing flame speed decreases the amount of expansion prior to
consuming the flammable cloud. Increasing the initial flammable cloud volume proportionally
increases the expanded volume. Expansion occurs in all directions for an unconfined deflagration,
whereas confinement limits free expansion to the unconfined direction(s). During a vented
deflagration, flame jetting occurs until the fireball consumes the flammable mixture that has
expanded through the vent and/or the vented volume is depressurized.
Test Rig Configuration
The DLG test rig is an enclosure with three solid walls, a roof, and floor, measuring 48-feet wide,
24-feet deep, and 12-feet tall [5]. Venting was allowed through one of the long walls (i.e., 48-foot
x 12-foot). The venting face of the rig was sealed with a 6-mil thick plastic vapor barrier, which
released (i.e., tore open) at approximately 0.1 psig; the vapor barrier allowed for the formation of
a fuel-air mixture inside the test rig. For T1-C, the plastic vapor barrier was installed halfway
between the rear of the test rig and the venting surface. For all other tests, the vapor barrier was
installed on the rig face, such that the flammable cloud filled the entire enclosure volume. For T1-
B and T1-C, steel vent panels (20 gauge, 2 lbm/ft2) were installed over the plastic vapor barrier
using Fabco® Vent-All explosion relief fasteners to provide a 0.3 psig vent release pressure; vent
panel restraint devices were not utilized in these tests. Figure 1 shows photos of the three venting
configurations utilized: (1) vapor barrier installed on the external venting face of the rig, (2) steel
panels installed on the external face of the rig, and (3) vapor barrier installed at the halfway point
in the rig.
The target propane concentration for these tests corresponds to the peak of the laminar burning
velocity (LBV) curve for propane-air mixtures (4.33% propane). The peak of the LBV curve
constitutes a worst-case mixture, but also corresponds to the region with least change in LBV
versus concentration. Minimum and maximum concentration thresholds were established based
on a 1% variation in LBV from the peak value. The target hydrogen concentration was 20% since
a worst-case hydrogen-air mixture would be expected to result in a deflagration-to-detonation
transition (DDT) inside the DLG; a DDT was observed at a higher concentration [6]. Ten sample
points were distributed throughout the DLG test rig to allow the uniformity of the fuel-air mixture
to be monitored. Each sample point indicated a fuel-air concentration within the tolerance
thresholds prior to ignition. The flammable mixture was ignited at the center of the rear wall,
opposite the venting surface.
Congestion inside the rig was provided by an array of vertical cylinders (2.375-inch and 2-inch
outer diameter) that occupied the internal volume of the rig. For T1 and T3, the 2.375-inch outer
diameter cylinders were located at the front of the rig (first two rows) to minimize plastic
deformation of the cylinders due to repeated loading; T2 only utilized the 2-inch outer diameter
cylinders. The congestion patterns used for T1, T2 and T3 are shown in Figure 2, Figure
3 and Figure 4, respectively. The resulting obstacle-to-enclosure surface area ratios (Ar), which
provide a relative congestion measure, are provided in Table 1.
High speed (HS) cameras recording at 1000 frames per second (fps) and high definition (HD)
cameras recording at 30 fps were located approximately 200 feet away from the DLG,
perpendicular to the venting face. The HS camera recordings were used to evaluate flame jetting
distance. However, for test T3-B, it should be noted that hydrogen-air flames are nearly
impossible to see since they burn with a very pale blue flame and do not produce soot particles, so
the resulting flame length data for this test is approximate. It is recognized that a thermocouple
array or radiative heat flux gauges could have provided a more quantitative measure, but flame
jetting measurements were not the focus of these test programs. Pressure transducers were also
fielded internal and external to the test rig.
Figure 1. Test Rig Venting Configurations
Figure 7. FLACS Analysis Temperature Plots (500 K - 1000 K) for T1-D (top and side views)
Results and Discussion
The maximum flame jetting distance from the venting face of the rig was determined for each test
using HS video. Figure 8 shows still frames from the T1-D HS video at four different times during
flame jetting. For test T1-C, only the back half of the rig contained a flammable mixture, so
12 feet (i.e., half the rig depth) was added to the observed flame jetting distance.
Table 2 and Figure 9 provide the average flame jetting distance for each test series along with the
predicted values (i.e., from NFPA 68, EN 14994 and based on the FLACS analysis). Table 3
shows the ratio of the predicted to measured flame lengths for each prediction method. As noted
previously:
The EN 14994 predictions are an extrapolation, since the test rig volume (391 m3) exceeds
the valid range for this correlation by a factor of 8,
The predictions based on the distance at which the flammable gas concentration drops to
LFL/2 in the FLACS analysis should underpredict flame length, and
The test data for the hydrogen test (T3-B) has significant uncertainty.
The NFPA 68 hazard distance predictions are significantly longer than the measured flame jetting
distance, with the ratio between the predicted and measured values averaging 2.3 and reaching 2.9
for one test. This is not unexpected, since the “hazard distance” to personnel, the quantity
predicted by the NFPA 68 correlation, should extend beyond the actual flame jetting distance (i.e.,
there is a hazard due to thermal radiation which extends beyond the actual flame). In addition, the
NFPA 68 correlation is intended to be conservative. The predictions made using the EN 14994
correlation are slightly more conservative.
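As a rough consistency check on the tabulated predictions (not a substitute for the standards themselves), the NFPA 68 fireball axial hazard distance correlation is commonly quoted in the form D = 3.1·V^0.402, with V in m³ and D in m; evaluating that assumed form for the 391 m³ DLG reproduces the 112 ft value shown in Table 2.

```python
# Back-of-the-envelope check of the tabulated NFPA 68 value, assuming the
# commonly quoted fireball axial distance correlation D = 3.1 * V**0.402
# (V in m^3, D in m); consult NFPA 68 itself for the exact equations, units
# and limits of applicability.
V = 391.0                       # DLG enclosure volume, m^3
D_m = 3.1 * V ** 0.402          # assumed NFPA 68 axial fireball distance, m
print(round(D_m, 1), "m =", round(D_m / 0.3048), "ft")   # ~34.1 m ~ 112 ft
```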
Table 2. Flame Jetting Distances for Tests and Predictive Methods
All values are flame jetting distances in ft.
Test ID   Test Data   NFPA 68   EN 14994   FLACS (Temperature)   FLACS (Fuel Conc.)
T1-A      52          112       120        171                   28
T1-B      42          112       120        170                   24
T1-C      36          85        95         153                   16
T1-D      43          112       120        67                    28
T2-A      51          112       120        232                   32
T2-B      48          112       120        142                   32
T2-C      51          112       120        116                   28
T3-A      67          112       120
T3-B      38          112       120
The predicted flame travel distances from FLACS using a 1000 K temperature criterion were
conservative compared to the observed flame jetting distances. Adjusting the temperature definition
by +/- 100 K resulted in an approximate change of +/- 10 feet (+/- 20%) in the predicted flame
length for most cases, but a change of up to 27 feet (52%) was observed in one case. The FLACS
simulations show hot product gases traveling upwards and outwards away from the combustion
region. In many cases, product gases with a temperature >1000 K propagated beyond the reported
distance above the height of the test rig, where they would not pose a hazard to personnel on the
ground.
As expected, the flame jetting distance predicted based on the FLACS analysis using a fuel
concentration criterion (i.e., LFL/2) significantly underpredicted the measured values in all cases.
Figure 9. Flame Jetting Distance for Tests and Predictive Methods
Conclusions
The NFPA 68 hazard distance predictions are between approximately 2 and 3 times longer than
the measured flame jetting distance. However, the “hazard distance” to personnel predicted by the
NFPA 68 correlation should extend beyond the actual flame jetting distance, since the hazard
extends beyond this distance, and since the NFPA 68 correlation is intended to be conservative.
The predictions made using the EN 14994 correlation are slightly more conservative, but represent
an extrapolation beyond the upper limit for volume for this correlation.
The flame jetting distances predicted based on the FLACS analysis using a temperature criterion
were greater than the measured values for all tests, and generally conservative compared to the
NFPA 68 and EN 14994 predictions. The temperature criterion defined for this work does not
account for thermal dosage; future work could consider thermal dose and the associated vulnerability
to define the hazard zone.
The following observations were made based on test data:
- The presence of unrestrained vent panels (vs. only a vapor barrier) reduced flame jetting
  distance, since the panels act as a physical barrier to the flame.
- Flame jetting distance increases with flammable cloud volume (i.e., the fraction of the
  enclosure filled with a flammable gas mixture).
- Flame jetting distance generally increases with congested volume (i.e., the fraction of the
  enclosure filled with congestion). This is likely due to flame deceleration in the
  uncongested portion of the enclosure.
Acknowledgments
The T1 test series was performed under the sponsorship of the Explosion Research Cooperative
(ERC), an ongoing joint industry research program organized by BakerRisk. The ERC is
comprised of companies primarily engaged in the petrochemical and chemical industries with a
strong commitment to process safety and has continuously supported VCE testing since 1998. The
T1 tests were carried out with support from Darren Malik, Brad Horn, Emiliano Vivanco, Marty
Goodrich, Randall Bloomquist, Hans Sunaryanto, Seth Johnson, Sean Howell, and Mattias Turner.
The T2 test series was performed as a BakerRisk Internal Research (IR) project and supported by
Darren Malik, Brad Horn, Jack Beadle, Corbin McCollough, Nolan Payne, Greyson Thompson,
and David Tighe. The T3 test series was carried out with the financial support of Daewoo
Engineering and Construction. The FLACS simulations and data analyses were carried out by
Emiliano Vivanco, Oscar Rodriguez, Carley Hockett and Jarrod Kassuba. The contributions of
these organizations and individuals are all gratefully acknowledged.
References
Abstract
Accidental explosions in industrial settings often cause devastating losses to personnel and
industry. Many instances, such as in coal mines, are attributed to the accumulation of dangerous
amounts of methane with trace amounts of heavier hydrocarbon gases, which mix with air and
create conditions for flame ignition and subsequent deflagration-to-detonation transition (DDT).
This paper discusses the conditions under which flames in channel geometries can accelerate to
detonation and the effects of trace amounts of impurities. DDT was investigated for the addition
of ethane and propane into a methane-air mixture at various geometry scales with constant
blockage ratio and channel configuration. Results of small-scale simulations of DDT in channels
containing methane and air were compared with existing experimental data. We found that the
location where DDT occurred, 𝐿𝐷𝐷𝑇, decreased slightly as the percentage of impurity changed.
The variation was, in fact, of the same order as the stochasticity (uncertainty due to turbulence and
turbulence-shock interactions) in the simulations. The detonation cell size, however, decreased with
an increased amount of impurities, thus resulting in a more robust detonation wave.
Keywords: Explosions, deflagration-to-detonation transition, numerical simulation, reactive
flows
Introduction
Accidental industrial explosions are low-probability, high-impact events that cause devastating
losses. These accidents are a safety concern to various industries including oil and gas, mining,
and fuel refining and transportation [1–4]. For the mining industry in particular, the conditions
for such explosions are created in confined regions of underground mines such as abandoned and
sealed sections of the mine. Natural gas can accumulate in these sealed sections mixing with air
to create an explosive gas mixture.
When detonation occurs in accidental explosions in coal mines or fuel storage facilities, the
destructive potential of the explosion increases enormously. If DDT occurs in the system, the
energy release rate drastically increases and the resulting detonation wave can travel at several
kilometers per second. The explosion frequently follows the same mechanism: ignition, flame
propagation and acceleration, then transition to detonation. The route to detonation is the
response of a reactive gas to smaller explosions created during deflagration and to the formation
of hotspots [5,6]. Goodwin et al. [7] and Xiao et al. [8] showed that DDT can also occur through
shock focusing on the flame front or the unburnt mixture ahead of the flame. It is important to
know the distance it takes for such a DDT process to occur so that protective seals can be designed
accordingly [9]. A worst-case scenario would occur if DDT happened at the seal, when pressure
reaches its maximum.
Most work focusing on such explosions has modeled the explosive mixture as pure methane-air
[10–12]. While the primary hydrocarbon in natural gas is methane, often trace amounts of
impurities such as propane and ethane are included. These heavy hydrocarbons are often 0-20%
of natural gas by volume. This work aims to understand the influence of these impurities on the
DDT process and ultimately how it affects the run-up distance 𝐿𝐷𝐷𝑇.
Numerical Model
The numerical simulations solve the two-dimensional (2D) fully-compressible reactive Navier-
Stokes equations for conservation of mass, momentum, energy and species. The reactants are
perfectly mixed and are assumed to behave as an ideal gas.
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) = 0$$

$$\frac{\partial (\rho \vec{u})}{\partial t} + \nabla \cdot (\rho \vec{u}\vec{u}) + \nabla p = \nabla \cdot \tilde{\tau}$$

$$\frac{\partial (\rho E)}{\partial t} + \nabla \cdot \big((\rho E + p)\vec{u}\big) = \nabla \cdot (\vec{u} \cdot \tilde{\tau}) + \nabla \cdot (\kappa \nabla T) - \rho q \dot{\omega}$$

$$\frac{\partial (\rho Y)}{\partial t} + \nabla \cdot (\rho Y \vec{u}) = \nabla \cdot (\rho D \nabla Y) + \rho \dot{\omega}$$

$$p = \frac{\rho R T}{M}$$

$$\tilde{\tau} = \rho \nu \left( \nabla \vec{u} + (\nabla \vec{u})^{T} - \frac{2}{3}(\nabla \cdot \vec{u})\, I \right)$$

$$E = \frac{p}{(\gamma - 1)\rho} + \frac{1}{2}(\vec{u} \cdot \vec{u})$$
where 𝜌 is the density, 𝑡 is the time, 𝑝 is the pressure, 𝑢⃗ is the velocity vector, 𝑇 is the
temperature, 𝐸 is the specific total energy, 𝑌 is the mass fraction, 𝑞 is the chemical energy
release, 𝜔̇ is the chemical reaction rate, 𝜅 is the thermal conductivity, 𝐷 is the mass diffusivity, 𝑅
is the universal gas constant, 𝑀 is the molecular weight, 𝜈 is the kinematic viscosity, 𝜏̃ is the
viscous stress tensor, 𝐼 is the unit tensor, and 𝛾 is the specific heat ratio.
Combustion and the conversion of fuel to product is modeled using a calibrated single-step
chemical-diffusion model (CDM) where the reaction rate (𝜔̇ ) is defined as
$$\dot{\omega} = \frac{dY}{dt} = -A \rho Y \exp\!\left(-\frac{E_a}{RT}\right)$$
where 𝑌 is the fuel mass fraction, 𝑡 is time, 𝐴 is the pre-exponential factor, 𝜌 is the fluid density,
𝐸𝑎 is the activation energy, 𝑅 is the universal gas constant, and 𝑇 is the fluid temperature.
The diffusion properties of the mixture are temperature dependent and defined as

$$\kappa = \kappa_0 T^{0.7}/\rho, \qquad D = D_0 T^{0.7}/\rho, \qquad \mu = \mu_0 T^{0.7}/\rho$$
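As a minimal illustration of the source terms above, the following Python sketch evaluates the single-step reaction rate and the T^0.7 transport scalings; the numerical parameter values are placeholders, not the calibrated CDM constants used by the authors.

```python
import numpy as np

R_UNIVERSAL = 8.314  # J/(mol K)

def reaction_rate(Y, T, rho, A=1.0e9, Ea=1.2e5):
    """Fuel consumption rate dY/dt = -A * rho * Y * exp(-Ea / (R T)).
    A and Ea here are placeholder values, not calibrated CDM parameters."""
    return -A * rho * Y * np.exp(-Ea / (R_UNIVERSAL * T))

def transport_coefficients(T, rho, kappa0=2.0e-5, D0=2.0e-5, mu0=2.0e-5):
    """Temperature-dependent transport properties, scaling as T^0.7 / rho."""
    scale = T**0.7 / rho
    return kappa0 * scale, D0 * scale, mu0 * scale

# Example: rate and transport properties in a hot pocket of unburned gas.
print(reaction_rate(Y=1.0, T=1500.0, rho=1.0))
print(transport_coefficients(T=1500.0, rho=1.0))
```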
Chemical-Diffusion Model
The Arrhenius Equation in the previous section describes the conversion of fuel into product as
part of the chemical-diffusion model (CDM). To generate these CDMs, a genetic algorithm and
optimization approach is used to find the optimal values for the model parameters (𝛾, 𝐴, 𝐸𝑎, 𝑞, 𝜅0,
𝑀𝑤) such that the calculated flame and detonation parameters (𝑇𝑏, 𝑇𝑐𝑣, 𝑆𝑙, 𝑥𝑓𝑡, 𝐷𝐶𝐽, 𝑥𝑑) match their
specified values [13]. This model has been extensively tested in laminar and turbulent flames,
detonations, and DDT [10,14,15].
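The calibration can be sketched as a global optimization over the CDM parameters. The sketch below uses SciPy's differential evolution (an evolutionary optimizer standing in for the genetic algorithm of [13]) with a crude placeholder in place of the real 1D flame and detonation calculations, so it only illustrates the structure of the procedure, not the authors' actual tool.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Target flame quantities (illustrative values): laminar speed (m/s), flame thickness (m).
TARGETS = {"S_l": 0.38, "x_l": 4.0e-4}

def evaluate(params):
    """Placeholder stand-in for the flame/detonation solver evaluated per candidate."""
    A, Ea = params
    S_l = 3.8e-6 * np.sqrt(A) * np.exp(-Ea / 8.0e5)
    x_l = 40.0 / np.sqrt(A)
    return {"S_l": S_l, "x_l": x_l}

def loss(params):
    computed = evaluate(params)
    return sum(((computed[k] - v) / v) ** 2 for k, v in TARGETS.items())

result = differential_evolution(loss, bounds=[(1e8, 1e12), (1e5, 5e5)], seed=0)
print("best-fit A, Ea:", result.x, "loss:", result.fun)
```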
Problem Setup
The computational domain is a long channel with regularly spaced obstacles (𝐿 = 𝑑) with a
blockage ratio 𝑏𝑟 = 0.3. This configuration is consistent with DDT experiments [16–18] and is
typical for various industrial settings and mines. Previous work has shown effects of blockage
ratio, obstacle type and obstacle placement on 𝐿𝐷𝐷𝑇 [14,19,20]. The configurations investigated
were 𝑑 = 17.4𝑐𝑚, 𝑑 = 52𝑐𝑚, and 𝑑 = 1𝑚. All geometry ratios were kept constant for each gas
mixture.
DDT Process
The deflagration-to-detonation process observed in the simulations is shown in the figure
below and has been previously described in [10,12,23]. The process begins as the initial flame
propagates at the laminar flame speed 𝑆𝑙 through the unburned mixture. The hot products accelerate
the background flow, increasing the apparent flame speed. As the flame propagates over the
obstacles, the flame surface becomes distorted, increasing the flame surface area. This increase in
flame area is the main cause of the increased burning rate, which further accelerates the flame.
The fast flame then begins to generate acoustic waves, which eventually coalesce into a
shock. As these shocks reflect from the obstacles, the shock-flame interaction (Rayleigh-Taylor
instability) causes more distortion of the flame front, increasing its velocity. This shock-flame
complex propagates at a velocity below 𝐷𝐶𝐽 in a decoupled state. The shock continues
to gain strength until its reflection from an obstacle increases the temperature to the ignition
temperature, thus forming a hotspot from which DDT occurs. The resulting detonation wave
propagates over the obstacles, occasionally extinguishing due to rarefaction over an obstacle,
only to subsequently detonate again.
Figure 4: DDT Process
Conclusions
Numerical simulation of DDT in methane-filled channels with trace percentages (0%-8%) of
propane or ethane showed little effect on the run-up distance to detonation (𝐿𝐷𝐷𝑇). The
variance in 𝐿𝐷𝐷𝑇 is on the order of the stochasticity of the simulations (variance due to
turbulence and turbulence-shock interactions). Hydrodynamics scaled linearly with larger
channels, but the chemical models did not scale with larger channel diameter. Increased heavy
hydrocarbon content slightly increased the laminar flame speed (𝑆𝑙) and adiabatic burning
temperature (𝑇𝑏), which are the primary drivers in the DDT process. Increased heavy hydrocarbon
content decreased the half-reaction thickness (𝑥𝑑), which suggests a smaller detonation cell size and thus
a more robust detonation.
Acknowledgments
This study was sponsored by the Alpha Foundation for the Improvement of Mine Safety and
Health, Inc. (ALPHA FOUNDATION), through Grant No. AFC215-20. The views, opinions and
recommendations expressed herein are solely those of the authors and do not imply any
endorsement by the ALPHA FOUNDATION, its Directors and staff.
Computing resources were provided by the University of Maryland supercomputing center,
Texas A&M High Performance Research Computing, and the Department of Defense High
Performance Computing Modernization Program.
References
[1] Johnson, D. M. “The Potential for Vapour Cloud Explosions - Lessons from the Buncefield
Accident.” Journal of Loss Prevention in the Process Industries, Vol. 23, No. 6, 2010, pp. 921–
927. https://doi.org/10.1016/j.jlp.2010.06.011.
[2] Johnson, D. M., and Tam, V. H. Y. “Why DDT Is the Only Way to Explain Some Vapor
Cloud Explosions.” Process Safety Progress, Vol. 36, No. 3, 2017, pp. 292–300.
https://doi.org/10.1002/prs.11874.
[3] Taveau, J. “The Buncefield Explosion: Were the Resulting Overpressures Really
Unforeseeable?” Process Safety Progress, Vol. 31, No. 1, 2012, pp. 55–71.
https://doi.org/10.1002/prs.10468.
[4] McMahon, G. W., Britt, J. R., and Walker, R. E. “Methane Explosion Modeling in the Sago
Mine.” Mining Engineering, Vol. 62, 2010, pp. 51–62.
[5] Oran, E. S., and Gamezo, V. N. “Origins of the Deflagration-to-Detonation Transition in
Gas-Phase Combustion.” Combustion and Flame, Vol. 148, Nos. 1-2, 2007, pp. 4–47.
https://doi.org/10.1016/j.combustflame.2006.07.010.
[6] Oran, E. S. “Understanding Explosions - from Catastrophic Accidents to Creation of the
Universe.” Proceedings of the Combustion Institute, Vol. 35, No. 1, 2015, pp. 1–35.
https://doi.org/10.1016/j.proci.2014.08.019.
[7] Goodwin, G. B., and Oran, E. S. “Premixed Flame Stability and Transition to Detonation in a
Supersonic Combustor.” Combustion and Flame, Vol. 197, 2018, pp. 145–160.
https://doi.org/10.1016/j.combustflame.2018.07.008.
[8] Xiao, H., and Oran, E. S. “Shock Focusing and Detonation Initiation at a Flame Front.”
Combustion and Flame, Vol. 203, 2019, pp. 397–406.
https://doi.org/10.1016/j.combustflame.2019.02.012.
[9] Zipf, R. K., Brune, J. F., and Thimons, E. D. “Progress Toward Improved Engineering of
Seals and Sealed Areas of Coal Mines.” SME Annual Meeting and Exhibit and CMA’s 111th
National Western Mining Conference 2009, Vol. 2, 2009, pp. 641–651.
[10] Kessler, D. A., Oran, E. S., and Kaplan, C. R. “The Coupled Multiscale Multiphysics
Method (Cm3) for Rarefied Gas Flows.” 48th AIAA Aerospace Sciences Meeting Including the
New Horizons Forum and Aerospace Exposition, No. January, 2010.
https://doi.org/10.2514/6.2010-823.
[11] Zheng, W., Kaplan, C. R., Houim, R. W., and Oran, E. S. “Flame Acceleration and
Transition to Detonation: Effects of a Composition Gradient in a Mixture of Methane and Air.”
Proceedings of the Combustion Institute, Vol. 37, No. 3, 2019, pp. 3521–3528.
https://doi.org/10.1016/J.PROCI.2018.07.118.
[12] Gamezo, V. N., Bachman, C. L., and Oran, E. S. “Effects of Scale of Flame Acceleration
and DDT in Obstructed Channels.” 2020, pp. 1–11. https://doi.org/10.2514/6.2020-
0443.
[13] Kaplan, C. R., Özgen, A., and Oran, E. S. “Chemical-Diffusive Models for Flame
Acceleration and Transition-to-Detonation: Genetic Algorithm and Optimisation Procedure.”
Combustion Theory and Modelling, Vol. 23, No. 1, 2019, pp. 67–86.
https://doi.org/10.1080/13647830.2018.1481228.
[14] Gamezo, V. N., Ogawa, T., and Oran, E. S. “Numerical Simulations of Flame Propagation
and DDT in Obstructed Channels Filled with Hydrogen-Air Mixture.” Proceedings of the
Combustion Institute, Vol. 31 II, 2007, pp. 2463–2471.
https://doi.org/10.1016/j.proci.2006.07.220.
[15] Goodwin, G. B., Houim, R. W., and Oran, E. S. “Shock transition to detonation in channels
with obstacles.” Proceedings of the Combustion Institute, Vol. 36, No. 2, 2017, pp. 2717–2724.
https://doi.org/10.1016/j.proci.2016.06.160.
[16] Teodorczyk, A. “Fast Deflagrations and Detonations in Obstacle-Filled Channels.” Journal
of Power Technologies, Vol. 79, 1995, pp. 1–34.
[17] Zipf, R. K., Gamezo, V. N., Mohamed, K. M., Oran, E. S., and Kessler, D. A.
“Deflagration-to-Detonation Transition in Natural Gas-Air Mixtures.” Combustion and Flame,
Vol. 161, No. 8, 2014, pp. 2165–2176. https://doi.org/10.1016/j.combustflame.2014.02.002.
[18] Ciccarelli, G., and Dorofeev, S. “Flame Acceleration and Transition to Detonation in
Ducts.” Progress in Energy and Combustion Science, Vol. 34, No. 4, 2008, pp. 499–550.
[19] Goodwin, G. B., Houim, R. W., and Oran, E. S. “Effect of Decreasing Blockage Ratio on
DDT in Small Channels with Obstacles.” Combustion and Flame, Vol. 173, 2016, pp. 16–26.
https://doi.org/10.1016/j.combustflame.2016.07.029.
[20] Xiao, H., and Oran, E. S. “Flame Acceleration and Deflagration-to-Detonation Transition in
Hydrogen-Air Mixture in a Channel with an Array of Obstacles of Different Shapes.”
Combustion and Flame, Vol. 220, 2020, pp. 378–393.
https://doi.org/10.1016/j.combustflame.2020.07.013.
[21] Taylor, E. M., Wu, M., and Martín, M. P. “Optimization of Nonlinear Error for Weighted
Essentially Non-Oscillatory Methods in Direct Numerical Simulations of Compressible
Turbulence.” Journal of Computational Physics, Vol. 223, 2007, pp. 384–397. https://doi.org/10.1016/j.jcp.2006.09.010.
[22] Toro, E. F. Riemann Solvers and Numerical Methods for Fluid Dynamics: A Practical Introduction. Springer.
[23] Gamezo, V. N., Ogawa, T., and Oran, E. S. “Flame acceleration and DDT in channels with
obstacles: Effect of obstacle spacing.” Combustion and Flame, Vol. 155, Nos. 1-2, 2008, pp.
302–315. https://doi.org/10.1016/j.combustflame.2008.06.004.
The response of “slender columnar objects” (e.g., scaffold poles, fence posts, lamp posts, etc.) to
the loads imposed by a vapor cloud explosion (VCE) has been recently identified as an indicator
of whether a VCE was a deflagration or a detonation. It has been suggested that a slender columnar
object deformed such that it has “continuous curvature” rather than a “hinge” is indicative of a
detonation. The purpose of the work described in this paper was to examine the response of simple
poles to the blast and drag loading from VCEs involving disc-shaped clouds in order to determine
the validity of pole response as a detonation indicator.
The blast and drag loading from VCEs involving disc-shaped clouds was evaluated using
BakerRisk’s Blast Wave Target Interaction (BWTI™) computational fluid dynamics (CFD) code.
Cloud diameter to height (D/H) ratios of 50, 100 and 200 were evaluated; cloud dimensions were
selected to preserve cloud volume. A hemispherical cloud was also evaluated (D/H = 2). The
largest D/H ratio is representative of that involved in the Buncefield and Caribbean Petroleum
accidental VCEs. CFD analyses were performed for flame speeds of Mach 0.7 (fast deflagration),
1.0 (very fast deflagration), and 5.2 (detonation). Blast and drag loads were determined for
selected locations both within and external to the flammable cloud. As part of this work,
dimensionless positive phase peak drag pressure and drag impulse curves were developed for each
cloud geometry and flame speed assessed.
The LS-DYNA finite element analysis (FEA) code was used to evaluate the response of a simple
pole geometry to the predicted blast pressure and drag load (i.e., density and velocity) histories. A
vertical pole fixed into the ground (i.e., a cantilevered beam) was considered in this analysis. A
pole outer diameter of 0.5 inches (1.3 cm) was employed in all cases, with the pole wall thickness
selected to ensure a reasonable level of response to the predicted blast and drag loading (i.e., the
thickness evaluated varied with location, cloud geometry and flame speed). The pole height
evaluated was fixed based on the tallest disc-shaped cloud evaluated (i.e., D/H = 50). Both the
deformation of the pole during the blast and drag loading and the relative velocity of the pole and
gas were considered in the analysis. A simplified treatment of the reflection of the blast wave
from the pole face was used.
The results of this work indicate that the presence of continuous curvature in a simple pole is not
necessarily indicative of a detonation. Continuous curvature can be obtained with a deflagration,
and may or may not be present with a detonation, depending on cloud geometry and pole location.
In fact, the results of this work indicate that the presence of continuous curvature in a simple pole
is more likely in a deflagration than in a detonation, although it may occur with either.
Keywords: VCE, CFD, FEA, detonation, blast, drag, detonation indicator, pole damage
Abstract
Flammability limits (FL), including lower flammable limit (LFL) and upper flammable limit
(UFL), are crucial for fire and explosion hazards assessment and consequence analysis. In this
study, using an extended FL database of chemical mixtures, quantitative structure-property
relationship (QSPR) models have been established using the gradient boosting (GB) machine learning
algorithm. A feature-importance-based descriptor screening method is also implemented for the first
time to determine the optimal set of descriptors for model development. The results show that all
developed models have significantly higher accuracy than other published models, with test
set RMSEs for the LFL and UFL models of 0.058 and 0.129, respectively. All the developed QSPR
models can be used to obtain reliable chemical mixture FL estimation and provide useful guidance
in fire and explosion hazard assessment and consequence analysis.
2.1 Database
In this study, multiple experimental results were obtained from the literature to construct the mixture
FL database. The extended data set contains 271 mixture LFL data points for 15 chemicals and
138 mixture UFL data points for 12 chemicals, which is the largest database among all published mixture
FL QSPR papers.35-37 The database covers various chemicals, including saturated
hydrocarbons (methane, ethane, propane), unsaturated hydrocarbons (acetylene, ethylene,
propylene, 1-butene, butadiene), hydrocarbon isomers (n-butane and isobutane), an ether (dimethyl
ether), an ester (methyl formate), a halogenated compound (1,1-difluoroethane), and inorganic
compounds (ammonia and carbon monoxide), which ensures its broad-spectrum applicability.
The detailed composition and experimental data can be found in the Supporting Information.
Subsequently, 75% of the data points (203 in the LFL database, 103 in the UFL database)
were randomly selected for the training set, and the remaining 25% (68 in the LFL
database, 35 in the UFL database) were grouped into the test set to validate the accuracy of the models.
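A minimal sketch of this 75/25 random split is shown below; the descriptor matrix here is a dummy array with the same number of rows as the LFL database, since the actual descriptor table is not reproduced in this paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder descriptor matrix and LFL values with 271 rows, as in the LFL database.
rng = np.random.default_rng(42)
X = rng.uniform(size=(271, 10))
y = rng.uniform(size=271)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)
print(len(X_train), len(X_test))  # 203 training points, 68 test points
```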
Gradient boosting (GB) is a machine learning algorithm in the form of an ensemble of weak
prediction models, typically decision trees. It builds the model in a stage-wise fashion as other
boosting methods do, and it generalizes them by allowing optimization of an arbitrary
differentiable loss function.33 The idea of gradient boosting originated from the observation by
Leo Breiman that boosting can be interpreted as an optimization algorithm on a suitable cost
function.40 Explicit regression gradient boosting algorithms were subsequently developed by
Jerome H. Friedman.41 A simplified gradient boosting diagram is shown in Figure 2. Instead of
fitting the data hard, the boosting tree method learns slowly, which is accomplished by fitting the
predictors only to the updated residuals from the previous tree. The main algorithms of gradient
boosting regression are shown below:42
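The cited algorithms are not reproduced here; as an illustrative sketch of the stage-wise residual-fitting idea just described (not the exact formulation of reference 42), the following Python code fits each shallow regression tree to the residuals of the current ensemble and adds its prediction with a small learning rate.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Stage-wise boosting for squared-error loss: each tree fits the residuals."""
    prediction = np.full(len(y), y.mean())   # initial constant model
    trees = []
    for _ in range(n_trees):
        residuals = y - prediction            # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return y.mean(), trees

def gradient_boost_predict(model, X, learning_rate=0.1):
    base, trees = model
    return base + learning_rate * sum(tree.predict(X) for tree in trees)

# Tiny synthetic check that the ensemble fits a nonlinear target.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 1]
model = gradient_boost_fit(X, y)
print(np.mean((gradient_boost_predict(model, X) - y) ** 2))
```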
The descriptor importance plot of the mixture LFL QSPR model is shown in Figure 4. The
descriptor importance matrix and description of selected descriptors are also shown in Table 1.
The gain of a descriptor is the average training loss reduction gained when using a feature for
splitting. It can be seen from both plot and table that the AMID_C is the most important descriptor
in gradient-boosting-based mixture LFL QSPR model development. The total number of descriptors
used in final model development is usually 4-6 in order to ensure accuracy without
overfitting.4-8 In this study, the screening threshold of the LFL model is set to 0.01, which results
in AMID_C, Mor06p, ATS0p, TASA, and ATS0v being selected as the subset for final model development.
Based on the parameter setting that reaches the lowest test RMSE, the eta, max_depth, and
nround of the QSPR model are set to 0.2, 10, and 400, respectively.
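A hedged sketch of this two-stage procedure (screen descriptors by gain importance, then refit on the selected subset) is shown below using XGBoost with the reported LFL settings (eta = 0.2, max_depth = 10, 400 rounds, gain threshold 0.01). The descriptor matrix is synthetic stand-in data; in the study it would hold Mordred descriptors such as AMID_C, Mor06p, ATS0p, TASA, and ATS0v.

```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Synthetic descriptor table standing in for the Mordred descriptor matrix.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.uniform(size=(271, 20)),
                 columns=[f"desc_{i}" for i in range(20)])
y = 2.0 * X["desc_0"] + 0.5 * X["desc_1"] ** 2 + 0.05 * rng.normal(size=271)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=42)

# Stage 1: fit on all descriptors and keep those with gain-based importance > 0.01.
screen = xgb.XGBRegressor(learning_rate=0.2, max_depth=10, n_estimators=400,
                          importance_type="gain").fit(X_train, y_train)
importance = pd.Series(screen.feature_importances_, index=X.columns)
selected = importance[importance > 0.01].index.tolist()

# Stage 2: refit the final QSPR model on the screened descriptor subset.
final = xgb.XGBRegressor(learning_rate=0.2, max_depth=10,
                         n_estimators=400).fit(X_train[selected], y_train)
pred = final.predict(X_test[selected])
print("selected:", selected)
print("test RMSE:", float(np.sqrt(np.mean((pred - y_test.to_numpy()) ** 2))))
```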
In order to visualize the performance of the developed QSPR models, two types of
performance evaluation plots are shown in Figure 6. The plot of QSPR prediction values vs.
experimental values is shown in Figure 6a and the prediction residual plot is shown in Figure 6b
with the test set statistical values as R2=0.986 and RMSE=0.058. All data points are evenly
distributed along the diagonal baseline of Fig. 6a which indicates that the predicted mixture LFL
is very close to the experimental value. Furthermore, all of the data residuals shown in Fig. 6b are
less than 0.2 and also close to the zero baseline, which proves that the developed QSPR method
provides a good estimate of mixture LFL.
Figure 6. Mixture LFL model performance plot: (a) Predicted value vs. experimental value, (b)
Residual plot
The Williams plot of the mixture LFL model is shown in Figure 7. The applicability domain is the
area within ±3 standardized deviations and a standard leverage ℎ∗ of 0.103. We can see that all data
points are located inside the applicability domain. Therefore, the developed model is confirmed to
be capable of predicting chemical mixture LFL within the corresponding applicability
domain.
Figure 7. Mixture LFL model Williams plot
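For readers unfamiliar with Williams plots, the sketch below shows one common way to compute the plotted quantities (leverage and standardized residuals) and a warning leverage h*; the data are random placeholders, and the exact h* definition used by the authors may differ from the 3(p + 1)/n rule assumed here.

```python
import numpy as np

def williams_domain(X_train, X_points, residuals):
    """Leverage h_i = x_i^T (X^T X)^-1 x_i, warning leverage h* = 3(p + 1)/n,
    and a +/-3 standardized-residual band (common Williams plot conventions)."""
    X = np.asarray(X_train, dtype=float)
    core = np.linalg.pinv(X.T @ X)
    Q = np.asarray(X_points, dtype=float)
    leverage = np.einsum("ij,jk,ik->i", Q, core, Q)
    h_star = 3.0 * (X.shape[1] + 1) / X.shape[0]
    std_resid = residuals / residuals.std(ddof=1)
    inside = (leverage < h_star) & (np.abs(std_resid) < 3.0)
    return leverage, h_star, inside

# Placeholder data: 203 training points with 5 descriptors, small residuals.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(203, 5))
residuals = rng.normal(scale=0.05, size=203)
leverage, h_star, inside = williams_domain(X_train, X_train, residuals)
print(h_star, inside.all())
```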
3.2 UFL Mixture
The descriptor importance plot of the mixture UFL model is shown in Figure 9. The descriptor
importance matrix and description of selected descriptors are also shown in Table 1. It can be seen
from the plot that the ten most important descriptors are clustered into three groups. The first two
groups contain most of the regression gain among all calculated descriptors, and Mor21p is
the most important descriptor in the UFL QSPR model development. In this study, the
descriptors of the first two clusters are selected as the subset for final model development.
Based on the parameter setting that can reach the lowest test RMSE, the eta, max_depth, and
nround of the QSPR model are set to 0.4, 10, and 200, respectively.
The plot of predicted UFL values vs. experimental UFL values is shown in Figure 11a. Since
the UFL data naturally have a wider spread than LFL data, the test set data points are more sparsely
distributed along the diagonal baseline compared to the mixture LFL models. The prediction
residual plot in Figure 11b also shows the same trend. However, the test set statistical values of
R2=0.935 and RMSE=0.129 demonstrate that the UFL QSPR model still has satisfying accuracy.
Figure 11. Mixture UFL model performance plot: (a) Predicted value vs. experimental value, (b)
Residual plot
The Williams plot of the mixture UFL model is shown in Figure 12. The applicability domain is
defined as the area within ±3 standardized deviations with a standard leverage ℎ∗ of 0.175. It can be
seen from the figure that all data points are well contained within the applicability domain, which
indicates the developed QSPR model’s reliability in predicting chemical mixture UFL within
the corresponding applicability domain.
Figure 12. Mixture UFL model Williams plot
4. Conclusions
In this study, the largest database of mixture FL is constructed for the development and
validation of the QSPR prediction model. The gradient boosting machine learning method is
employed for the first time in a QSPR study to provide a novel descriptor screening and
regression method that improves the prediction accuracy of QSPR analysis. The GB-based mixture
FL QSPR models showed significantly higher accuracy and reliability, with test set RMSEs for
mixture LFL and mixture UFL of 0.058 and 0.129, respectively. The performance of the developed QSPR
models illustrates the power of gradient boosting descriptor selection and regression in assisting
hazardous property prediction. The statistical assessment results also prove that the models
developed in this study can be reliably applied to flammable chemical mixture hazard assessment and
novel chemical design with much higher efficiency and accuracy.
Machine-learning-based QSPR models do not have an explicit equation for intuitive
application like MLR-based QSPR models, which is one of the shortcomings of GB-based QSPR
models. However, this problem can be overcome by developing a software package with an
interactive user interface; since the whole development process is conducted with open-source
programming, free distribution with high expandability can easily be achieved in the future.
In addition, the database used in this study still needs to be further expanded to include as much
chemical mixture FL data as possible to improve its applicability, and the descriptor
calculation capability also needs to be improved to expand the descriptors available for screening. In
future QSPR model development, it is recommended that GB-based methods be primarily
considered for effective and accurate prediction.
References
(1) Vidal, M.; Rogers, W. J.; Holste, J. C.; Mannan, M. S. A Review of Estimation Methods for
Flash Points and Flammability Limits. Process Saf. Prog. 2004, 23 (1), 47–55.
(2) High, M. S.; Danner, R. P. Prediction of Upper Flammability Limit by a Group Contribution
Method. Ind. Eng. Chem. Res. 1987, 26 (7), 1395–1399.
(3) Zhao, F.; Rogers, W. J.; Mannan, M. S. Experimental Measurement and Numerical Analysis
of Binary Hydrocarbon Mixture Flammability Limits. Process Saf. Environ. Prot. 2009, 87 (2),
94–104.
(4) Wang, B.; Park, H.; Xu, K.; Wang, Q. Prediction of Lower Flammability Limits of Blended
Gases Based on Quantitative Structure–Property Relationship. J. Therm. Anal.
Calorim. 2018, 132 (2), 1125–1130.
(5) Wang, B.; Xu, K.; Wang, Q. Prediction of Upper Flammability Limits for Fuel Mixtures
Using Quantitative Structure–Property Relationship Models. Chem. Eng.
Commun. 2018, 206 (2), 247–253.
(6) Pan, Y.; Ji, X.; Ding, L.; Jiang, J. Prediction of Lower Flammability Limits for Binary
Hydrocarbon Gases by Quantitative Structure-Property Relationship
Approach. Molecules 2019, 24 (4), 748.
(7) Shen, S.; Ji, X.; Pan, Y.; Qi, R.; Jiang, J. A New Method for Predicting the Upper
Flammability Limits of Fuel Mixtures. J. Loss Prev. Process Ind. 2020, 64, 104074.
(8) Jiao, Z.; Yuan, S.; Zhang, Z.; Wang, Q. Machine Learning Prediction of Hydrocarbon
Mixture Lower Flammability Limits Using Quantitative Structure‐Property Relationship
Models. Process Saf. Prog. 2019.
(9) Kondo, S.; Takizawa, K.; Takahashi, A.; Tokuhashi, K. Extended Le Chatelier's Formula for
Carbon Dioxide Dilution Effect on Flammability Limits. J. Hazard. Mater. 2006, 138 (1), 1-8.
(10) Mashuga, C. V.; Crowl, D. A. Derivation of Le Chatelier's Mixing Rule for Flammable
Limits. Process Saf. Prog. 2000, 19 (2), 112–117.
(11) Jiao, Z.; Escobar-Hernandez, H. U.; Parker, T.; Wang, Q. Review of Recent Developments
of Quantitative Structure-Property Relationship Models on Fire and Explosion-Related
Properties. Process Saf. Environ. Prot. 2019, 129, 280–290.
(12) Quintero, F. A.; Patel, S. J.; Muñoz, F.; Mannan, M. S. Review of Existing QSAR/QSPR
Models Developed for Properties Used in Hazardous Chemicals Classification System. Ind. Eng.
Chem. Res. 2012, 51 (49), 16101–16115.
(13) Wang, B.; Zhou, L.; Xu, K.; Wang, Q. Prediction of Minimum Ignition Energy from
Molecular Structure Using Quantitative Structure–Property Relationship (QSPR) Models. Ind.
Eng. Chem. Res. 2016, 56 (1), 47–51.
(14) Lu, Y.; Ng, D.; Mannan, M. S. Prediction of the Reactivity Hazards for Organic Peroxides
Using the QSPR Approach. Ind. Eng. Chem. Res. 2011, 50 (3), 1515–1522.
(15) Zhou, L.; Wang, B.; Jiang, J.; Pan, Y.; Wang, Q. Quantitative Structure-Property
Relationship (QSPR) Study for Predicting Gas-Liquid Critical Temperatures of Organic
Compounds. Thermochim. Acta 2017, 655, 112–116.
(16) Knotts, T. A.; Wilding, W. V.; Oscarson, J. L.; Rowley, R. L. Use of the DIPPR Database
for Development of QSPR Correlations: Surface Tension. J. Chem. Eng. Data 2001, 46 (5),
1007–1012.
(18) Wang, B.; Zhou, L.; Liu, X.; Xu, K.; Wang, Q. Prediction of Superheat Limit Temperatures
for Fuel Mixtures Using Quantitative Structure-Property Relationship Model. J. Loss Prev.
Process Ind. 2020, 64, 104087.
(19) Zhou, L.; Wang, B.; Jiang, J.; Pan, Y.; Wang, Q. Predicting the Gas-Liquid Critical
Temperature of Binary Mixtures Based on the Quantitative Structure Property
Relationship. Chemom. Intell. Lab. Syst. 2017, 167, 190–195.
(20) Cao, W.; Pan, Y.; Liu, Y.; Jiang, J. A Novel Method for Predicting the Flash Points of
Binary Mixtures from Molecular Structures. Saf. Sci. 2020, 126, 104680.
(21) Moriwaki, H.; Tian, Y.-S.; Kawashita, N.; Takagi, T. Mordred: a Molecular Descriptor
Calculator. J. Cheminformatics 2018, 10 (1).
(22) Oliphant, T. E. Python for Scientific Computing. Comput. Sci Eng. 2007, 9 (3), 10–20.
(23) Turney, J. M.; Simmonett, A. C.; Parrish, R. M.; Hohenstein, E. G.; Evangelista, F. A.;
Fermann, J. T.; Mintz, B. J.; Burns, L. A.; Wilke, J. J.; Abrams, M. L.; Russ, N. J.; Leininger, M.
L.; Janssen, C. L.; Seidl, E. T.; Allen, W. D.; Schaefer, H. F.; King, R. A.; Valeev, E. F.; Sherrill,
C. D.; Crawford, T. D. Psi4: An Open-Source Ab Initio Electronic Structure Program. Wiley
Interdiscip. Rev. Comput. Mol. Sci. 2011, 2 (4), 556–565.
(24) Tosco, P.; Stiefl, N.; Landrum, G. Bringing the MMFF Force Field to the RDKit:
Implementation and Validation. J. Cheminformatics 2014, 6 (1).
(25) Zhao, X.; Pan, Y.; Jiang, J.; Xu, S.; Jiang, J.; Ding, L. Thermal Hazard of Ionic Liquids:
Modeling Thermal Decomposition Temperatures of Imidazolium Ionic Liquids via QSPR
Method. Ind. Eng. Chem. Res. 2017, 56, 4185−4195.
(26) Wang, B.; Yi, H.; Xu, K.; Wang, Q. Prediction of the Self-Accelerating Decomposition
Temperature of Organic Peroxides Using QSPR Models. J. Therm. Anal. Calorim. 2016, 128 (1),
399–406.
(27) Yuan, S.; Jiao, Z.; Quddus, N.; Kwon, J. S.-I.; Mashuga, C. V. Developing Quantitative
Structure–Property Relationship Models To Predict the Upper Flammability Limit Using
Machine Learning. Ind. Eng. Chem. Res. 2019, 58 (8), 3531–3537.
(28) Chang, C.-C.; Lin, C.-J. LIBSVM. ACM Trans. Intell. Syst. Technol. 2011, 2 (3), 1–27.
(29) Yuan, S.; Zhang, Z.; Sun, Y.; Kwon, J. S.-I.; Mashuga, C. V. Liquid Flammability Ratings
Predicted by Machine Learning Considering Aerosolization. J. Hazard. Mater. 2020, 386,
121640.
(30) Zheng, Z.; Lu, P.; Lantz, B. Commercial Truck Crash Injury Severity Analysis Using
Gradient Boosting Data Mining Model. J. Safety Res. 2018, 65, 115–124.
(31) Zeng, M.; Yuan, S.; Huang, D.; Cheng, Z. Accelerated Design of Catalytic Water-Cleaning
Nanomotors via Machine Learning. ACS Appl. Mater. Interfaces 2019, 11 (43), 40099–40106.
(32) Sun, Y.; Wang, J.; Zhu, W.; Yuan, S.; Hong, Y.; Mannan, M. S.; Wilhite, B. Development
of Consequent Models for Three Categories of Fire through Artificial Neural Networks. Ind.
Eng. Chem. Res. 2019, 59 (1), 464–474.
(33) Nielsen, D. Tree Boosting with XGBoost: Why Does XGBoost Win “Every” Machine Learning
Competition? Master Thesis, Norwegian University of Science and Technology, 2016.
(34) Chen, T.; Guestrin, C. XGBoost. Proceedings of the 22nd ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining - KDD 16 2016.
(35) Kondo, S.; Takizawa, K.; Takahashi, A.; Tokuhashi, K.; Sekiya, A. Flammability Limits of
Isobutane and Its Mixtures with Various Gases. J. Hazard. Mater. 2007, 148 (3), 640–647.
(36) Kondo, S.; Takizawa, K.; Takahashi, A.; Tokuhashi, K.; Sekiya, A. A Study on
Flammability Limits of Fuel Mixtures. J. Hazard. Mater. 2008, 155 (3), 440–448.
(37) Tang, R. J. Theoretical Prediction of Lower Explosive Limit and Researches on Explosion
Suppression Rules for Binary Hydrocarbon Gas Mixtures. Master Thesis, Nanjing Tech
University, June 2017.
(39) Ajmani, S.; Rogers, S. C.; Barley, M. H.; Livingstone, D. J. Application of QSPR to
Mixtures. J. Chem. Inf. Model. 2006, 46(5), 2043-2055.
(40) Breiman, L. Arcing the Edge (Technical Report 486). Statistics Department. University of
California at Berkeley, Berkeley, CA, 1997.
(41) Friedman, J. H. Stochastic Gradient Boosting. Comput. Stat. Data Anal. 2002, 38 (4), 367–
378.
(42) James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An introduction to statistical learning: with
applications in R; Springer: New York, 2017.
Speaker Bios
Day 1 Track 1
Risk/Consequence Analysis & Design Aspects
Risk Assessment I
Importance of Process Safety Time in Design
Concept
Shanmuga Prasad Kolappan is a driven Process Safety
Engineer with considerable experience in the field
of safety, risk and loss prevention engineering. He has
9 years of experience spanning process and
commissioning, risk consulting, and loss prevention. He
has also had exposure to safety studies such as HAZOP,
SIL, QRA, and active/passive fire protection, as well as
to safety software such as PHA Pro, PHAWorks, PHAST,
SAFETI, exSILentia, Pipenet, Detect3D and BowTieXP.
Shanmuga is continuously learning and keeping up with
recent trends in the field of safety.
Shanmuga Prasad Kolappan
TechnipFMC
Abdulaziz Alajlan
Saudi Aramco
Nitin Roy
California State University
The use of Bayesian Networks in Functional Safety
Paul Gruhn is a Global Functional Safety Consultant with
aeSolutions in Houston, Texas. Paul is an ISA (International
Society of Automation) Life Fellow, a 30 year member and
co-chair of the ISA 84 standard committee (on safety
instrumented systems), the developer and instructor of ISA
courses on safety systems, the author of two ISA textbooks,
and the developer of the first commercial safety system
modeling software. Paul has a B.S. degree in Mechanical
Engineering from Illinois Institute of Technology, is a
licensed Professional Engineer (PE) in Texas, a member of
the Control Systems Engineering PE exam committee, and
both a Certified Functional Safety Expert (CFSE) and an ISA
84 Safety Instrumented Systems Expert. Paul was the 2019
ISA President.
Paul Gruhn
aeSolutions
My Vision of Future Instrumental Protective
Systems
Greg Hall is a Principal Electrical Engineer with Eastman
Chemical Company with 39 years of experience at Texas
Operations in Longview, Texas. Greg is the IPS
(Instrument Protective Systems) Design engineer,
chairman of the Texas Operations IPS Committee,
member of the Eastman Corporate IPS Governance
Council, and received an Electrical Engineering degree
from the University of Texas at Austin.
Greg Hall
Eastman Chemical Company
Relief Systems
Overlooked Reverse Flow Scenarios
Gabriel Andrade is a lead process engineer for Siemens
Energy and has worked in process safety since 2011.
Gabriel started his career at Chemtech, a Siemens
engineering company in Brazil, after obtaining his
chemical engineering Bachelor's degree from the
Universidade Federal do Rio de Janeiro in 2010. Passion
and dedication brought him to the Siemens process
safety group in Houston, where he has happily lived
with his wife since 2014.
Christopher Ng
Siemens Process & Safety Consulting
Derek Wood
Siemens Process & Safety Consulting
Failure Under Pressure: Proper Use of Pressure
Relief Device Failure Rate Data based on Device
Type and Service
Todd W. Drennen, P.E., is a senior engineer for Baker
Engineering and Risk Consultants, Inc. (BakerRisk). He has
more than 15 years of experience in process safety,
including pressure-relief system design and analysis,
simulation of complex process upset scenarios, process
hazard analysis (PHA), layers of protection analysis (LOPA),
fault tree analysis (FTA), and process safety management
(PSM) compliance auditing. He has a BS in chemical
engineering from Drexel University and is a licensed
professional engineer in Illinois and Delaware.
Todd W. Drennen
BakerRisk
Additional Engineering and Documentation to
Reduce Pressure Relief Mitigation Cost
Gabriel Andrade is a lead process engineer for Siemens
Energy and has worked in process safety since 2011.
Gabriel started his career at Chemtech, a Siemens
engineering company in Brazil, after obtaining his
chemical engineering Bachelor's degree from the
Universidade Federal do Rio de Janeiro in 2010. Passion
and dedication brought him to the Siemens process
safety group in Houston, where he has happily lived
with his wife since 2014.
Gabriel Martiniano Ribeiro de Andrade
Siemens Process & Safety Consulting
Additional Engineering and Documentation to
Reduce Pressure Relief Mitigation Cost
Kartik Maniar is a principal process engineer for
Siemens Energy and has worked in the field of process
safety since 2006. He has led and completed various
refinery-wide pressure relief and flare studies. Recent
work has included relief studies based on dynamic
simulation as well as flare load minimization using non-
normal devices and instrumentation credit.
Kartik Maniar
Siemens Process & Safety Consulting
Day 1 Track 2
Human Factors-People In Action
Training/ Engagement
Virtual Reality Process Safety in Counterfactual
Thinking
Kianna Arthur is a second-year PhD student in the
Social-Personality Psychology program. She works with
Dr. Rachel Smallman in examining the functionality of
counterfactual thoughts (i.e., "If only...") in health-
related contexts. This includes both the generation and
subsequent consequences of counterfactuals
(motivation, behavioral intentions, risk perception, etc.).
Kianna also works with Next Generation Advanced
Procedures and RIHM Lab with Dr. Camille Peres.
Kianna Arthur
Texas A&M University
Human Performance/Decision Making I
Is Attentional Shift the Problem (or something else)
with Hazard Statement Compliance? An
Experimental Investigation Using Eye-Tracking
Technology
Dr. Camille Peres is an Associate Professor
with Environmental and Occupational Health at Texas
A&M University as well as the assistant director of
Human Systems Engineering with the Mary Kay
O’Connor Process Safety Center. Her expertise is
Human Factors and she does research regarding:
procedures; Human Robotic Interaction in disasters; and
S. Camille Peres
team performance in Emergency Operations.
Texas A&M University
Risk Management entails decision making: Does
design decision making in complex situations come
down to somebody’s gut feeling?
Dr. Hans J. Pasman studied chemical technology at
Delft University of Technology, the Netherlands. Ph.D.
in 1964. He worked for Shell before moving to the
research organization TNO. He has investigated
numerous process industry accidents, worked on a
variety of topics and managed units of TNO Defense
research. 1980-90s Chairman NATO group on
Explosives, OECD group on Unstable Substances,
Chairman European Working Group on Risk Analysis,
Chairman European Working Party on Loss Prevention.
Dr. Pasman is a co-founder of the European Process
Safety Centre, and in the late 1990s he coordinated industrial
safety research at TNO. He was a Professor of Chemical
Risk Management at Delft University for nearly 10 years.
He is also a member of the former Dutch Hazardous
Substances Council and, since 2008, Research Professor
at the Mary Kay O’Connor Process Safety Center in the
Chemical Engineering department of Texas A&M
University.
Hans J. Pasman
Mary Kay O’Connor Process Safety Center
Joseph W. Hendricks
Texas A&M University
Practical Writing Tips To Prevent Human Error
When Following Procedures
Dr. Philippart specializes in managing operational risks
associated with human performance. Her career began
at NASA’s Kennedy Space Center, where she applied
her mechanical and industrial engineering degrees to
develop and improve spaceflight equipment and
processes. Since 2006, she has dedicated herself primarily to
enhancing deep-water drilling process safety and risk
management in the petroleum industry. Dr. Philippart
has also developed and taught courses for NASA and
Embry-Riddle Aeronautical University, and has enjoyed
working for The Walt Disney Company.
Monica Philippart
Ergonomic Human Factors Solution
Joseph W. Hendricks
Texas A&M University
Day 1 Track 3
Managing Operations and Maintenance
Modeling and Asset Integrity
RBI Study using Advanced Consequence Assessment
for Topside Equipment on Offshore Platforms
Chetan has more than 10 years of global experience in
the Oil and Gas Industry working for Engineering
Contracting Companies, major international operators
and specialist consultancies for various onshore and
offshore projects in Asia, Middle East, UK & Europe and
Eurasia. He has been instrumental in developing and
modifying Hazard Management tools used in Risk
Calculations (Consequence Modelling, Risk Calculations,
SIL Calculations, etc.) and has extensive experience of
using various software packages.
Chetan Birajdar
Monaco Engineering Solutions
Indicators of an Immature Mechanical Integrity
Program
Mr. Derek Yelinek is the Risk Based Inspection Lead for
Siemens Process & Safety Consulting business located
in Houston, TX. Mr. Yelinek has over 10 years of
experience in the development, implementation, and
management of Mechanical Integrity programs with a
focus on Inspection Data Management Systems (IDMS),
Risk-Based Inspection (RBI), and procedure and work-
process development. His experience ranges in
consultant/services as well as the user/owner side,
across the oil & gas, chemical, and mining industries.
Derek is API 570, 571, and 580 certified and holds a B.S.
in Chemical Engineering from Western Michigan
University.
Derek Yelinek
Siemens Process & Safety Consulting
Missing Biography
Michael Marshall
Tratus group
Guidance to Improve the Effectiveness of Process
Safety Management Systems in Operating Facilities
Syeda Zohra Halim is currently employed as a
Postdoctoral Research Associate at the Mary Kay
O’Connor Process Safety Center (MKOPSC) and as a
Lecturer of Chemical Engineering at the Texas A&M
University. She oversees several industry- and federally
funded projects ongoing at the MKOPSC, generates
proposals for new ones, and mentors graduate students
in process safety-related dissertation projects.
Zohra completed her PhD in Chemical Engineering in
Spring 2019 with the Mary Kay O’Connor Process Safety
Center at Texas A&M University. Her research focused
on developing a model for assessing cumulative risk
arising from impaired barriers in offshore oil and gas
facilities.
Syeda Zohra Halim
Mary Kay O’Connor Process Safety Center
Suresh G
Bharat Petroleum Corporation
Risk Mitigation
Development of Resilient LNG Facilities
Onder Akinci has a PhD degree in Civil Engineering and
more than 20 years of R&D, Civil/Structural/Architectural
Engineering and Project Management experience. He had
leadership roles with major EPC, consulting and LNG
project development companies previously. He is a
registered Professional Engineer in the state of Maryland.
Dr. Akinci has extensive non-linear analysis, structural
design, PFP optimization and facility upgrade experience.
His areas of expertise include design of structures for fire
and blast, earthquakes, hurricanes, dropped object and
impact loads. He worked on several onshore and offshore
Oil & Gas projects, and supported all phases from concept
development to construction.
Onder Akinci
Daros Consulting
Development of Risk Mitigation Programs using a
Quantitative-Risk-Based Approach
Dr. Rafael Callejas-Tovar is a Senior Engineer working in
the BakerRisk® Houston office as part of the Process
Safety Group. His work is focused on quantitative risk
analysis, consequence modeling, and computational fluid
dynamics simulations. He received his PhD degree in
Chemical Engineering from Texas A&M University. Rafael
has over 8 years of industry and consulting experience in
the U.S. with a focus on consequence and quantitative risk
analysis for chemical plants, refineries, transportation of
hazardous materials, and offshore oil & gas facilities.
Rafael Callejas-Tovar
BakerRisk
Incorporating Mitigation safeguards with LOPA
Ed Marszal is President and CEO of Kenexis. He has over
25 years of experience in risk analysis and technical
safety engineering of process industry plants, including
design of Safety Instrumented Systems and Fire and
Gas Systems. Ed is an ISA Fellow and former Director of
the ISA Safety Division and 20 year veteran of the ISA
84 standards committee for safety instrumented
systems. He is also the author of the “Safety Integrity
Level Selection” and “Security PHA Review” textbooks
from ISA.
Edward Marszal
Kenexis
Consequence Analysis: Gas Release
Hole Size Matters
Jeff Marx is a Senior Engineer with Quest Consultants in
Norman, Oklahoma, USA, and a registered professional
engineer in the state of Oklahoma. He earned his
Bachelor’s degree in mechanical engineering from the
University of Oklahoma and a Master’s degree in
Mechanical Engineering from Georgia Tech. In his 27 years
at Quest, Jeff’s primary responsibilities have been in
consequence and risk analysis studies for the
petrochemical industry. This work includes facility siting,
building siting studies per API RP 752/753, and
quantitative risk analysis (QRA) studies for various
corporate and regulatory entities. His work has involved
all aspects of the petrochemical system, including
pipelines, gas plants, refineries, LPG terminals, and
chemical plants. Much of this work has been in
the LNG industry, including siting for LNG plants (using 49
CFR 193, NFPA 59A, CSA Z276, EN 1473, and other
standards), and serving as a member of the Canadian Standards
Association’s Z276 committee, the LNG standard for
Canada. Jeff is also responsible for several portions of the
CANARY by Quest consequence analysis software, and has
helped to develop, maintain, and apply the CANARY+ risk
analysis toolset used at Quest.
Jeffrey D. Marx
Quest Consultants Inc.
How Can I Effectively Place My Gas Detectors
Mr. Brumbaugh is a process safety engineer with a 13
year background in process modeling, holding degrees
in chemical engineering and computer science from
Texas Tech University. He has worked in the process
safety industry for over 7 years, performing models of
gas dispersion, vapor cloud explosions, pool and jet
fires, and other hazards in a wide range of software
packages including computational fluid dynamics (CFD).
He has also participated in numerous PHA studies;
conducted process simulations in VMG, HYSYS, and
CFD; and developed numerical-methods-based models for
various types of projects.
Jesse Brumbaugh
aeSolutions
Consequence Assessment Considerations for Toxic
Natural Gas Dispersion Modeling
Nair is a Process Safety leader with expertise in
Technical Safety engineering and safety management. A
Chartered Engineer with global experience in
stewarding process safety performance and governance
in hazardous industries and industry peer groups. Nair,
in his current role as Senior Process Risk Engineer, leads
technological risk management at Permian operations
and projects for Chevron Corporation. Nair, who holds an MSc
(Eng.) in Process Safety and Loss Prevention (the
University of Sheffield, UK, 2004), is pursuing his PhD in
dispersion modeling at the University of Warwick.
SreeRaj Nair
Chevron
Joseph W. Hendricks
Texas A&M University
Fatigue and Stress
Operator Performance Under Stress: A Neurocentric
Virtual Reality Training Approach
Ranjana Mehta is Associate Professor in the Department of
Industrial and Systems Engineering at Texas A&M
University. She is also a graduate faculty with the Texas
A&M Institute for Neuroscience at Texas A&M University,
director of the NeuroErgonomics Laboratory, co-director of
the Texas A&M Ergonomics Center, and a faculty fellow
with the Center for Remote Health Technologies and
Systems, the Center for Population Health and Aging, and
Mary Kay O’Connor Process Safety Center. The
NeuroErgonomics Lab examines the mind-motor-machine
nexus to understand, quantify, and predict human
performance when interacting with emerging technologies
(unmanned, collaborative, and wearable systems) in safety-
critical extreme environments (e.g., emergency response,
space exploration, oil and gas).
Ranjana Mehta
Texas A&M University
Towards a Predictive Fatigue Technology for Oil and
Gas Drivers
John Kang is a Ph.D. student in Industrial & Systems
Engineering at Texas A&M University and has received
a BS in Industrial & Systems Engineering from Georgia
Tech. His research interests are physiological wearables
to quantify fatigue and understanding decision making
under fatigue or stress in a high-risk environment.
John Kang
Texas A&M University
Validation of the Fatigue Risk Assessment and
Management in High-Risk Environments (FRAME)
Survey
Stefan V. Dumlao is a doctoral student studying
industrial-organizational psychology at Texas A&M
University, College Station. His primary research
interests are employee reactions to wearable monitors
and occupational safety.
Stefan V. Dumlao
Texas A&M University
Day 2 Track IV
Research and Next Generation
Next Generation Process Safety I
Identifying Contributing Factors of Pipeline Incident
from PHMSA Database on NLP and Text Mining
Techniques
Guanyang Liu is a PhD student in Chemical Engineering
with research interests in reaction engineering, process
safety, and AI applications in the process industry.
Guanyang Liu
MKOPSC
Causation Analysis of Pipeline Incidents using
Artificial Neural Network (ANN)
Pallavi Kumari is a fourth-year Ph.D. student working
with Dr. Joseph Kwon. Her research focuses on root
cause and consequence analysis of rare events in
chemical process industries using statistical data
analysis methods, process modelling and process
control techniques. She received her Bachelor’s from IIT
Kanpur and worked in Reliance Industries Limited, India.
Pallavi Kumari
MKOPSC
Development of Hazard Factor for Engineered
Particles
Nabila Nazneen is currently pursuing her Master's in
Safety Engineering at the Mary Kay O'Connor Process
Safety Center. Her background is in Chemical
Engineering. She attained an MBA degree right after
her bachelor's degree and worked in the ready-made
garments sector for two years before coming to the
Process Safety Center. Her research interest is in
nanoparticle hazards.
Nabila Nazneen
MKOPSC
Next Generation Process Safety II
Can a Virtual Reality Application Better Prepare
Millennials and the Z-Generation for Working with
Systems in the Process Industry?
Nir Keren is an associate professor of occupational
safety and a graduate faculty member at the Virtual
Reality Application Center at Iowa State University.
Keren is also the director of the Occupational Safety
Program of the NIOSH Heartland Education and
Research Center and the Director of the VirtuTrace
Laboratory for Applied Decision Making Research in
Virtual Reality.
Nir Keren
Iowa State University
Process Safety Risk Index Calculation Based on
Historian Data
Prasad Goteti is a Principal Project Engineer at
Honeywell Process Solutions (HPS), Houston Texas USA,
in the Safety Engineering Center of Excellence. He is
responsible for providing process safety solutions to
customers, working on Safety Engineering at the
proposal and detailed engineering stage for Safety
Instrumented System (SIS) projects, which includes
Emergency Shutdown Systems (ESD), Burner
Management Systems (BMS) and Fire and Gas Systems
(FGS). He is also an Instructor for the TUV Rheinland
Germany, approved Functional Safety Training course
conducted by HPS Automation College. Prasad holds a
degree in Instrumentation from Birla Institute of
Technology and Science (BITS), Pilani, India, is a
Professional Engineer (P.Eng.) with the Association of
Professional Engineers and Geoscientists of Alberta
(APEGA), Canada, a Certified Functional Safety Expert
(CFSE), a TUV Functional Safety Expert (TUV Rheinland,
Germany), an Advisory Board member of the Purdue
Process Safety and Assurance Center (P2SAC) at Purdue
University, and a member of WG 7 of ISA TR 84.00.07 and
WG 9 of ISA TR 84.00.09 committees.
Prasad Goteti
Honeywell Process Solutions
A Brief review of Intrusion Detection System in
Process plants and advancement of Machine
Learning in Process Security
Missing Biography
Sinijoy P J
Cochin University of Science and Technology
Day 2 Track V
Explosions
Explosion Modeling
The Influence of the Velocity Field on the Stretch
Factor and on the Characteristic Length of Wrinkling
of Turbulent Premixed Flames
Tássia Lins da Silva Quaresma is a chemical engineer
with experience in Risk Analysis and numerical
combustion modeling with computational fluid
dynamics. Tássia has contributed to several risk analysis
methodologies of oil&gas and mining industrial plants.
Currently, she has been studying combustion models
for turbulent premixed flames focusing on the
turbulent-flame interaction in order to predict flame
speed and its consequences.
Tássia L. S. Quaresma
University of Campinas
Towards a Comprehensive Model Evaluation
Protocol for LNG Hazard Analyses
Dr. Filippo Gavelli is a mechanical engineer who
specializes in the analysis of heat transfer and fluid flow
phenomena, including multiphase flows and cryogenic
fluids. He has 18 years of engineering consulting
experience and over 25 years of experience in
computational fluid dynamics (CFD) modeling, using
several research and commercial codes. He applies his
expertise to modeling the consequences of hazardous
releases and performing risk assessments for Liquefied
Natural Gas (LNG) facilities. Dr. Gavelli has over 16
years of experience with modeling hazard scenarios
including vapor cloud dispersion, pool and jet fires and
vapor cloud explosions; his experience includes more
than 50 LNG installations worldwide, including onshore,
offshore and floating facilities for LNG import, export,
peakshaving, truck loading and bunkering. He has
been a member of the NFPA 59A committee for over 13
years and a frequent contributor to LNG safety-related
conferences and expert panels.
Filippo Gavelli
Blue Engineering and Consulting
Beirut: How Ammonium Nitrate Behaves When Exposed to
Fire and How Strong and Damaging Its Explosion Is
Charline Fouchier is a postdoctoral researcher at the
von Karman Institute in Belgium. She completed her
Ph.D. degree this year on the Investigation of the
Pollutant Dispersion Driven by a Condensed-Phase
Explosion in a Complex Environment. She has a master’s
degree in industrial safety from the École nationale supérieure des
Mines d'Alès (France), a master's degree in Environments and Urban Risks
from the École nationale supérieure des Mines de Saint-Étienne (France)
and a post-graduate Research Master in Fluid Dynamics from the von
Karman Institute (Belgium), during which she won the Excellence in
Experimental Research Award for her work on blast propagation in an
urban environment.
Charline Fouchier
von Karman Institute for Fluid Dynamics
Explosion Phenomena I
Flammable Mist Hazards Involving High-Flashpoint
Fluids
Simon Gant is a Principal Scientist in the Fluid Dynamics
Team at HSE’s Science and Research Centre in Buxton,
UK, where he undertakes work on incident
investigations, research, development of guidance and
standards, model reviews and consultancy. He obtained
a master’s degree in mechanical engineering from
Leeds University in 1997 and a PhD in computational
fluid dynamics from Manchester University in 2002. His
current work is mainly focused on the Jack Rabbit II
chlorine release trials, hydrogen energy demonstration
projects, carbon capture and storage, and flammable mists.
Simon Gant
UK Health and Safety Executive
Missing Biography
Yumeng Zhao
Purdue University
The HBT - A Large-Scale Facility for Study of
Detonations and Explosions
Elaine S. Oran is TEES Eminent Professor in the
Department of Aerospace Engineering at Texas A&M
University. Previously she was the A. James Clark
Distinguished Professor and the Glenn L. Martin
Institute Professor at the University of Maryland. For
many years before that, she was the Senior Scientist for
Reactive Flow Physics at the US Naval Research
Laboratory in Washington, DC. She received an A.B. in
chemistry and physics from Bryn Mawr College and
both a M.Ph. in Physics and a Ph.D. in Engineering and
Applied Science from Yale University. She is a Member
of the National Academy of Engineering, an Honorary
Fellow of the AIAA, and a Fellow of the American
Academy of Arts and Sciences. Her recent research interests include
chemically reactive flows, turbulence, numerical analysis,
high-performance computing, shocks and shock interactions, and rarefied
gases, with applications to combustion, propulsion, and all sorts of
explosions.
Elaine S. Oran
Texas A&M University
Explosion Phenomena II
Development of Flammable Dispersion Quantitative
Property-Consequence Relationship Models Using
Machine Learning
Zeren Jiao joined the Mary Kay O'Connor Process Safety Center in
September 2016 and obtained his M.S. degree in chemical engineering in
2018 under the supervision of Dr. Sam Mannan. He is currently a Ph.D.
student in the MKOPSC. His research focuses on implementing machine
learning and big data analysis in process safety.
Zeren Jiao
MKOPSC
An Unsupervised Model to Predict the Liquid In-
Cylinder Combustion Risk Ratings of Marine Fuels
Chenxi Ji is currently a research assistant at the Mary Kay O'Connor
Process Safety Center and the Gas & Fuel Research Center of Texas A&M
University. He is motivated to apply his process systems engineering
and chemical process safety expertise to the shipping and oil & gas
industries, seeking to make the oil & gas industry faster, greener,
safer and more cost-effective.
Chenxi Ji
MKOPSC
Fireball and Flame Venting Comparisons
Peter Diakow is a Senior Consultant with the Blast
Effects group at BakerRisk, with a master’s degree in
Mechanical Engineering from Queen's University in
Canada. Mr. Diakow has over a decade of experience in
experimental testing and research with a focus on vapor
cloud explosions, vented deflagrations, and
deflagration to detonation transition (DDT). At
BakerRisk, Mr. Diakow is also involved with Facility Siting Studies
(FSS), Quantitative Risk Assessments (QRA), Incident Investigations and
Dust Hazard Analyses (DHA).
Peter A. Diakow
BakerRisk
Missing Biography
J. Kelly Thomas
BakerRisk
Machine Learning Based Quantitative Prediction
Models for Chemical Mixture Flammability Limits
Zeren Jiao joined the Mary Kay O'Connor Process Safety Center in
September 2016 and obtained his M.S. degree in chemical engineering in
2018 under the supervision of Dr. Sam Mannan. He is currently a Ph.D.
student in the MKOPSC. His research focuses on implementing machine
learning and big data analysis in process safety.