
ICAO SHELL Model

Contents
 1 Description
 2 Liveware
 3 Liveware-Liveware
 4 Liveware-Software
 5 Liveware-Hardware
 6 Liveware-Environment
 7 Related Articles

Description
ICAO SHELL Model, as described in ICAO Doc 9859, Safety Management Manual, is a
conceptual tool used to analyse the interaction of multiple system components. It also refers to
the framework proposed in ICAO Circular 216-AN/131.
The concept (the name being derived from the initial letters of its components, Software,
Hardware, Environment, Liveware) was first developed by Edwards in 1972, with a modified
diagram to illustrate the model developed by Hawkins in 1975.
One practical diagram to illustrate this conceptual model uses blocks to represent the different
components of Human Factors. This building block diagram does not cover the interfaces which
are outside Human Factors (hardware-hardware; hardware-environment; software-hardware)
and is only intended as a basic aid to understanding Human Factors:
 Software - the rules, procedures, written documents etc., which are part of the standard
operating procedures.
 Hardware - the Air Traffic Control suites, their configuration, controls and surfaces,
displays and functional systems.
 Environment - the situation in which the L-H-S system must function, the social and
economic climate as well as the natural environment.
 Liveware - the human beings - the controller with other controllers, flight crews,
engineers and maintenance personnel, management and administration people - within
the system.
According to the SHELL Model, a mismatch between the Liveware and the other four
components contributes to human error. Thus, these interactions must be assessed and
considered in all sectors of the aviation system.
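To make the block structure concrete, the following minimal sketch encodes the four components and the four Liveware-centred interfaces. The component and interface names follow the model; the Mismatch record and the example finding are invented purely for illustration:

```python
# Illustrative sketch only: a minimal encoding of the SHELL structure.
# Component names come from the model; everything else is invented.
from dataclasses import dataclass
from enum import Enum

class Component(Enum):
    SOFTWARE = "Software"        # rules, procedures, documentation
    HARDWARE = "Hardware"        # equipment, controls, displays
    ENVIRONMENT = "Environment"  # physical, social and economic setting
    LIVEWARE = "Liveware"        # the human operator(s)

# The model considers only the four interfaces centred on Liveware.
INTERFACES = [
    (Component.LIVEWARE, Component.LIVEWARE),     # L-L: people and people
    (Component.LIVEWARE, Component.SOFTWARE),     # L-S: people and procedures
    (Component.LIVEWARE, Component.HARDWARE),     # L-H: people and machines
    (Component.LIVEWARE, Component.ENVIRONMENT),  # L-E: people and setting
]

@dataclass
class Mismatch:
    interface: tuple   # which L-x interface the finding sits on
    description: str   # what does not fit the human component

# Example: logging a finding against the L-H interface.
finding = Mismatch(
    interface=(Component.LIVEWARE, Component.HARDWARE),
    description="Press-to-talk switch obstructs view of the display",
)
print(f"{finding.interface[0].value}-{finding.interface[1].value}: "
      f"{finding.description}")
```

Recording findings against a specific L-x interface in this way is one simple means of applying the model during an occurrence review.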
Liveware

The critical focus of the model is the human participant, or liveware, the most critical as well as
the most flexible component in the system. The edges of this block are not simple and straight,
and so the other components of the system must be carefully matched to them if stress in the
system and eventual breakdown are to be avoided.
However, of all the dimensions in the model, this is the one which is least predictable and most
susceptible to the effects of internal (hunger, fatigue, motivation, etc.) and external
(temperature, light, noise, workload, etc.) changes.
Human Error is often seen as the negative consequence of the liveware dimension in this
interactive system. Two simplistic alternatives are sometimes proposed for addressing error:
either there is no point in trying to remove errors from human performance because they are
independent of training, or humans are error-prone systems and should therefore be removed
from decision making in risky situations and replaced by computer-controlled devices. Neither
of these alternatives is particularly helpful in managing errors.

Liveware-Liveware
(The interface between people and other people)

This is the interface between people. In this interface, we are concerned with leadership, co-
operation, teamwork and personality interactions. It includes programmes such as Crew
Resource Management (CRM), its ATC equivalent Team Resource Management (TRM), Line
Oriented Flight Training (LOFT), etc.
Liveware-Software
(The interface between people and software)

Software is the collective term which refers to all the laws, rules, regulations, orders, standard
operating procedures, customs and conventions and the normal way in which things are done.
Increasingly, software also refers to the computer-based programmes developed to operate the
automated systems.
To achieve a safe, effective operation between the liveware and software, it is important to
ensure that the software, particularly if it concerns rules and procedures, is capable of being
implemented. Attention also needs to be paid to phraseologies which are error-prone,
confusing or overly complex. More intangible are difficulties in symbology and the conceptual
design of systems.

Liveware-Hardware
(The interface between people and hardware)

Another interactive component of the SHELL model is the interface between liveware and
hardware. This interface is the one most commonly considered when speaking of human-
machine systems: design of seats to fit the sitting characteristics of the human body, of displays
to match the sensory and information processing characteristics of the user, of controls with
proper movement, coding and location.
Hardware, for example in Air Traffic Control, refers to the physical features within the
controlling environment, especially those relating to the work stations. As an example, the
press-to-talk switch is a hardware component which interfaces with liveware. The switch will
have been designed to meet a number of expectations, including the expectation that when it
is pressed the controller has a live line on which to talk. Similarly, switches should have been
positioned in locations that can be easily accessed by controllers in various situations, and the
manipulation of equipment should not impede the reading of displayed information or other
devices which might need to be used at the same time.
Liveware-Environment
(The interface between people and the environment)

The liveware-environment interface refers to those interactions which may be outside the
direct control of humans, namely the physical environment - temperature, weather, etc. -
within which aircraft operate. Much of the human factors development in this area has been
concerned with designing ways in which people or equipment can be protected, developing
protective systems against light, noise and radiation. The appropriate matching of
liveware-environment interactions involves a wide array of disparate disciplines, from
environmental studies, physiology and psychology through to physics and engineering.

Related Articles
 Pilot Equipment Interface
 Controller Position Design
 Human Factors Analysis and Classification System (HFACS)
 Heinrich Pyramid
 James Reason HF Model
 PEAR Model
Pilot Equipment Interface
Contents
 1 Description
 2 Typical Scenarios
 3 A&I Examples
 4 Contributory Factors
 5 Solutions
 6 Further Reading

Description
Level busts are often the result of a breakdown of the pilot-equipment interface; that is to say,
the incorrect handling or interpretation of aircraft equipment by the pilot. There are usually
two elements to this (see the sketch after this list):
 The pilot makes an incorrect setting or performs an inappropriate action on the
equipment; and,
 The error is not noticed or not corrected by other flight-crew members.
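The two elements act as serial defences: a level bust requires both the initial mis-setting and a failed cross-check. A minimal sketch of that structure, using invented probabilities purely for illustration:

```python
# Illustrative sketch only: the two-element structure above treated as
# independent, serial defences. The probabilities are invented numbers,
# not derived from any study.
p_error = 0.01       # assumed chance the pilot mis-sets the equipment
p_not_caught = 0.10  # assumed chance other crew fail to notice/correct it

p_level_bust = p_error * p_not_caught
print(f"P(level bust) = {p_level_bust:.4f}")  # 0.0010 under these assumptions
```

Under this reading, strengthening either element (better setting procedures, or better monitoring; see Solutions below) reduces the product.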

Typical Scenarios
 Altimeter Setting Procedures. The pilots set an incorrect or inappropriate pressure
setting on the altimeter barometric sub-scale;
 Use of the Altitude Alerter. The pilots inadvertently select the wrong altitude or flight
level on the altitude alerter;
 Use of the Autopilot.
o The pilot enters an incorrect target altitude on the Flight Guidance System and
fails to confirm the entered target on the Primary Flight Display and/or the
Navigation Display;
o The pilot inadvertently arms a selected mode or selects an incorrect mode;
o The pilots become pre-occupied with the automatic systems resulting in loss of
situational awareness;
See also Aircraft Technical Equipment.

A&I Examples
 DH8A/DH8C, en-route, northern Canada, 2011 (On 7 February 2011 two Air Inuit DHC8s
came into head-to-head conflict en route over the eastern shoreline of Hudson Bay in
non-radar Class A airspace when one of them deviated from its cleared level towards
the other which had been assigned the level 1000 feet below. The subsequent
investigation found that an inappropriate FD mode had been used to maintain the
assigned level of the deviating aircraft and noted deficiencies at the Operator in both
TCAS pilot training and aircraft defect reporting as well as a variation in altitude alerting
systems fitted to aircraft in the DHC8 fleet.)
 DH8D / B772, vicinity Sydney Australia, 2016 (On 9 December 2016, a Bombardier
DHC8-400 departing Sydney lost prescribed separation against an inbound Boeing 777-
200 after its crew failed to ensure that the aircraft levelled as cleared at 5,000 feet and
this was exceeded by 600 feet. The Investigation found that the First Officer, as Pilot
Flying, had disconnected the autopilot prior to routinely changing the selected airspeed
because it tended to disconnect when this was done with altitude capture mode active
but had then failed to re-engage it. The Captain's lack of effective monitoring was
attributed to distraction as he sought to visually acquire the conflicting traffic.)
 DH8D, vicinity Exeter UK, 2010 (On 11 September 2010, a DHC8-400 being operated by
Flybe on a scheduled passenger flight from Bergerac France to Exeter failed to level as
cleared during the approach at destination in day VMC and continued a premature
descent without the awareness of either pilot due to distraction following a minor
system malfunction until an EGPWS ‘PULL UP’ Hard Warning occurred following which a
recovery climb was initiated. There were no abrupt manoeuvres and no injuries to any
of the 53 occupants.)

Contributory Factors
 Pilot Workload;
 Complacency;
 Interruption or Distraction;
 Emergency or Abnormal Situation.

Solutions
 The existence and application of effective Standard Operating Procedures covering
cross-checking and monitoring procedures.
 Improved Crew Resource Management (CRM) in order to ensure that any error made in
setting or operating aircraft equipment is corrected before being implemented.

Further Reading
EUROCONTROL Level Bust Toolkit
 Level Bust Briefing Note Ops 4 - Aircraft Technical Equipment;
 Level Bust Briefing Note Ops 5 - ACAS.

Controller Position Design



Description
The design of an air traffic controller's working position (CWP) is critical to the safe and efficient
operation of a control room. Factors that affect CWP design include the following:
 Controller comfort, including:
o Seat design;
o General lighting;
o Noise;
o Heating and ventilation;
 Ergonomic arrangement of CWP with respect to:
o Equipment;
o Other staff within the control room;
o Windows (where an outside view is required);
 Equipment design, including:
o Software philosophy and design;
o Readability of radar screens;
o Instrument lighting and readability;
o Ease of use of controls;
o Efficiency of communications equipment; and many other factors.

The development or modification of controller working positions is one of the most visible and
critical activities in the upgrade of an ATM system. It is also one of the most difficult. For
controllers, the CWP is both their working environment and the tool through which they
exercise their professional skills. Consequently, changes to the CWP are a matter of
considerable significance, potential sensitivity and an area in which acceptance of a system
upgrade can be won or lost.
Successful development and introduction of a controller working position involves the
integration of operational, technical and human factors expertise as well as good management
and effective communication. It is also a long process and increasingly stringent regulatory and
safety standards are generating new requirements in terms of traceability.
CWP development, testing and acceptance have been a source of difficulties for R&D concept
studies, as well as a major contributor to delays in the introduction of major system upgrades
in Europe and elsewhere.
The Core Requirements for ATM Working Positions (CoRe) project was a project of the EATM
Human Resources Management programme, carried out at the EUROCONTROL Experimental
Centre and completed in December 2002. The purpose of the CoRe project was to consolidate
and disseminate good practice on the requirements capture, design, and evaluation of ATM
working positions for European ATM.
Further Reading
EUROCONTROL
 Controller Working Position Brochure;
 HF31 CoRe Project: An Overview of the Project Activity;
 HF43 CoRe Project: Baseline Exemplary Style Guide;
 HF19 Font Requirements for Next Generation ATM Systems;
 HF20 Methodology for Selecting Fonts for Next Generation ATM Systems;
 Design for Humans, Steven Shorrock, Safeguard January/February 2018, Feb 2018.
Human Factors Analysis and Classification System (HFACS)
Contents
 1 Definition
 2 The HFACS Framework
 3 HFACS Level 1: Unsafe Acts
 4 HFACS Level 2: Preconditions for Unsafe Acts
 5 HFACS Level 3: Unsafe Supervision
 6 HFACS Level 4: Organisational Influences
 7 Use of HFACS
 8 Application of HFACS
 9 HFACS Taxonomy
 10 Related Articles
 11 Further Reading

Definition
The Human Factors Analysis and Classification System (HFACS) was developed by Dr Scott
Shappell and Dr Doug Wiegmann. It is a broad human error framework that was originally used
by the US Navy to investigate and analyse human factors aspects of aviation. HFACS is
heavily based upon James Reason's Swiss cheese model (Reason 1990). The HFACS framework
provides a tool to assist in the investigation process and target training and prevention efforts.
Investigators are able to systematically identify active and latent failures within an organisation
that culminated in an accident. The goal of HFACS is not to attribute blame; it is to understand
the underlying causal factors that lead to an accident.

The HFACS Framework


The HFACS framework (Figure 1) describes human error at each of four levels of failure:
1. Unsafe acts of operators (e.g., aircrew),
2. Preconditions for unsafe acts,
3. Unsafe supervision, and
4. Organisational influences.
Within each level of HFACS, causal categories were developed that identify the active and latent
failures that occur. In theory, at least one failure will occur at each level leading to an adverse
event. If at any time leading up to the adverse event, one of the failures is corrected, the
adverse event will be prevented.
Figure 1: The HFACS framework
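The chain logic described above (an uncorrected failure at every level, and any single correction breaking the chain) can be sketched directly. The level names follow HFACS; the findings are invented for the example:

```python
# Illustrative sketch only: the "failure at every level" logic described
# above. Level names follow HFACS; the example findings are invented.
HFACS_LEVELS = [
    "Organisational influences",
    "Unsafe supervision",
    "Preconditions for unsafe acts",
    "Unsafe acts",
]

def adverse_event_possible(uncorrected: dict) -> bool:
    """An adverse event requires an uncorrected failure at all four levels;
    correcting the failure at any one level breaks the chain."""
    return all(uncorrected.get(level) for level in HFACS_LEVELS)

findings = {
    "Organisational influences": ["inadequate training budget"],
    "Unsafe supervision": ["known defect not corrected"],
    "Preconditions for unsafe acts": ["crew fatigue"],
    "Unsafe acts": [],  # the active error was caught and corrected
}
print(adverse_event_possible(findings))  # False: the chain is broken
```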

HFACS Level 1: Unsafe Acts


The Unsafe Acts level is divided into two categories - errors and violations - and these two
categories are then divided into subcategories. Errors are unintentional behaviors, while
violations are a willful disregard of the rules and regulations.
Errors
 Skill-Based Errors: Errors which occur in the operator’s execution of a routine, highly practiced
task relating to procedure, training or proficiency and result in an unsafe situation (e.g., fail to
prioritise attention, checklist error, negative habit).
 Decision Errors: Errors which occur when the behaviors or actions of the operators proceed as
intended yet the chosen plan proves inadequate to achieve the desired end-state and results in
an unsafe situation (e.g., exceeded ability, rule-based error, inappropriate procedure).
 Perceptual Errors: Errors which occur when an operator's sensory input is degraded and a
decision is made based upon faulty information.
Violations
 Routine Violations: Violations which are a habitual action on the part of the operator and are
tolerated by the governing authority.
 Exceptional Violations: Violations which are an isolated departure from authority, neither typical
of the individual nor condoned by management.

HFACS Level 2: Preconditions for Unsafe Acts


The Preconditions for Unsafe Acts level is divided into three categories:
 environmental factors,
 condition of operators, and
 personnel factors.
These three categories are further divided into subcategories. Environmental factors refer to
the physical and technological factors that affect the practices, conditions and actions of
individuals and result in human error or an unsafe situation. Condition of operators refers to the
adverse mental state, adverse physiological state, and physical/mental limitations factors that
affect practices, conditions or actions of individuals and result in human error or an unsafe
situation. Personnel factors refer to the crew resource management and personal readiness
factors that affect practices, conditions or actions of individuals, and result in human error or an
unsafe situation.
Environmental Factors
 Physical Environment: Refers to factors that include both the operational setting (e.g., weather,
altitude, terrain) and the ambient environment (e.g., heat, vibration, lighting, toxins).
 Technological Environment: Refers to factors that include a variety of design and automation
issues including the design of equipment and controls, display/interface characteristics, checklist
layouts, task factors and automation.
Condition of Operators
 Adverse Mental State: Refers to factors that include those mental conditions that affect
performance (e.g., stress, mental fatigue, motivation).
 Adverse Physiological State: Refers to factors that include those medical or physiological
conditions that affect performance (e.g., medical illness, physical fatigue, hypoxia).
 Physical/Mental Limitations: Refers to the circumstance when an operator lacks the physical or
mental capabilities to cope with a situation, and this affects performance (e.g., visual limitations,
insufficient reaction time).
Personnel Factors
 Crew Resource Management: Refers to factors that include communication, coordination,
planning, and teamwork issues.
 Personal Readiness: Refers to off-duty activities required to perform optimally on the job such as
adhering to crew rest requirements, alcohol restrictions, and other off-duty mandates.

HFACS Level 3: Unsafe Supervision


The Unsafe Supervision level is divided into four categories.
 Inadequate Supervision: The role of any supervisor is to provide their staff with the opportunity
to succeed, and they must provide guidance, training, leadership, oversight, or incentives to
ensure the task is performed safely and efficiently.
 Planned Inappropriate Operations: Refers to those operations that can be acceptable and different
during emergencies, but unacceptable during normal operation (e.g., risk management, crew
pairing, operational tempo).
 Failed to Correct a Known Problem: Refers to those instances when deficiencies are known to the
supervisor, yet are allowed to continue unabated (e.g., failure to report unsafe tendencies, to
initiate corrective action, or to correct a safety hazard).
 Supervisory Violation: Refers to those instances when existing rules and regulations are willfully
disregarded by supervisors (e.g., enforcement of rules and regulations, authorized unnecessary
hazard, inadequate documentation).

HFACS Level 4: Organisational Influences


The Organisational Influences level is divided into three categories.
 Resource Management: Refers to the organisational-level decision-making regarding the
allocation and maintenance of organisational assets (e.g., human resources, monetary/budget
resources, equipment/facility resources).
 Organisational Climate: Refers to the working atmosphere within the organisation (e.g.,
structure, policies, culture).
 Operational Process: Refers to organisational decisions and rules that govern the everyday
activities within an organisation (e.g., operations, procedures, oversight).

Use of HFACS
By using the HFACS framework for accident investigation, organisations are able to identify the
breakdowns within the entire system that allowed an accident to occur. HFACS can also be used
proactively by analyzing historical events to identify recurring trends in human performance
and system deficiencies. Both of these methods will allow organisations to identify weak areas
and implement targeted, data-driven interventions that will ultimately reduce accident and
injury rates.
HFACS provides a structure to review and analyze historical accident and safety data. By
breaking down the human contribution to performance, it enables the analyst to identify the
underlying factors that are associated with an unsafe act. The HFACS framework may also be
useful as a tool for guiding future accident investigations in the field and for developing better
accident databases, both of which would improve the overall quality and accessibility of human
factors accident data. Common trends within an organisation can be derived from comparisons
of psychological origins of the unsafe acts, or from the latent conditions that allowed these acts
within the organisation. Identifying those common trends supports the identification and
prioritization of where intervention is needed within an organisation. By using HFACS, an
organisation can identify where hazards have arisen historically and implement procedures to
prevent these hazards which will result in improved human performance and decreased
accident and injury rates. For example, the US Navy was experiencing a high percentage of
aviation accidents associated with human performance issues. Using the HFACS framework, the
Navy was able to identify that nearly one-third of all accidents were associated with routine
violations. Once this trend was identified, the Navy was able to implement interventions that
not only reduced the percentage of accidents associated with violations, but sustained this
reduction over time.
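In the spirit of the Navy example, proactive trend analysis amounts to tallying the categories assigned to classified occurrences. A minimal sketch, with invented records and labels:

```python
# Illustrative sketch only: tallying HFACS Level 1 categories across a set
# of classified occurrences to surface trends, in the spirit of the Navy
# example above. The records are invented.
from collections import Counter

occurrences = [
    {"id": 1, "unsafe_act": "routine violation"},
    {"id": 2, "unsafe_act": "skill-based error"},
    {"id": 3, "unsafe_act": "routine violation"},
    {"id": 4, "unsafe_act": "decision error"},
    {"id": 5, "unsafe_act": "routine violation"},
]

counts = Counter(rec["unsafe_act"] for rec in occurrences)
total = len(occurrences)
for category, n in counts.most_common():
    print(f"{category}: {n}/{total} ({n / total:.0%})")
# "routine violation: 3/5 (60%)" would flag that category for intervention.
```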
Application of HFACS
While the first use of the HFACS framework occurred in the US Navy where it originated, the
system has spread to a variety of industries and organizations (e.g. mining, construction, rail
and healthcare). Over the years, the application reached civil and general aviation.
Organizations such as the Federal Aviation Administration (FAA) and National Aeronautics and
Space Administration have explored the use of HFACS as a complement to pre-existing systems.

HFACS Taxonomy
The HFACS taxonomy sets out the four levels within Reason's model; its categories and
subcategories are as described under HFACS Levels 1 to 4 above.

Related Articles
Human Performance Modelling
Generic Error-Modelling System (GEMS)
Heinrich Pyramid
ICAO SHELL Model
James Reason HF Model
LMQ HF Model
PEAR Model

Further Reading
 Scott A. Shappell (Feb 2000), “The Human Factors Analysis and Classification System–HFACS”
DOT/FAA/AM-00/7.
 "The Human Factors Analysis and Classification System (HFACS)," Approach, July - August 2004.
 Human Factors Analysis and Classification System–Maintenance Extension (HFACS-ME) Review
of Select NTSB Maintenance Mishaps: An Update by John K. Schmidt, Don Lawson and Robert
Figlock. Aeromedical Division, Naval Safety Center, and School of Aviation Safety, Naval
Postgraduate School, U.S. Navy, 2003.
 Reason, J. (1990). “Human Error”. Cambridge University Press.
 Ford, C., Jack, T., Crisp, V. & Sandusky, R. (1999). “Aviation accident causal analysis”.
Advances in Aviation Safety Conference Proceedings (P-343).
 Shappell, S. and Wiegmann, D. (2001). “Applying Reason: The human factors analysis and
classification system”. Human Factors and Aerospace Safety, 1, 59-86.
 HFACS Analysis of Military and Civilian Aviation Accidents: A North American
Comparison. ISASI, 2004.
 Wiegmann, D. A., & Shappell, S. A. (2003). A human error approach to aviation accident analysis:
The human factors analysis and classification system. Burlington, VT: Ashgate Publishing, Ltd.
 US Department of Defense HFACS
 DOT/FAA/AM-00/7 "The Human Factors Analysis and Classification System - HFACS" - FAA
 Shappell, S., Detwiler, C., Holcomb, K., Boquet, A., Wiegmann, D., (2006). Human Error and
Commercial Aviation Accidents: A Comprehensive, Fine-Grained Analysis Using HFACS.
Heinrich Pyramid
Heinrich's Accident Pyramid

Description
A pictorial description of the relationship between occurrences and more serious incidents and
accidents.

Heinrich's Law
In his 1931 book "Industrial Accident Prevention, A Scientific Approach", Herbert W Heinrich put
forward the following concept that became known as Heinrich's Law:
in a workplace, for every accident that causes a major injury, there are 29 accidents that cause
minor injuries and 300 accidents that cause no injuries.
This is commonly depicted as a pyramid (in this case with the number of minor incidents shown
as 30 for simplicity):

Heinrich's law is based on probability and assumes that the number of accidents is inversely
proportional to the severity of those accidents. It leads to the conclusion that minimising the
number of minor incidents will lead to a reduction in major accidents, which is not necessarily
the case.
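As arithmetic, the law is a fixed 1:29:300 proportion, and applying it is simple scaling. A short sketch with an invented total, subject to the caveat above:

```python
# Illustrative sketch only: applying the 1:29:300 ratio to a hypothetical
# total of recorded accidents. The total is an invented figure.
RATIO = {"major injury": 1, "minor injury": 29, "no injury": 300}
total_recorded = 660
scale = total_recorded / sum(RATIO.values())  # 660 / 330 = 2.0

for severity, weight in RATIO.items():
    print(f"{severity}: {weight * scale:.0f}")
# major injury: 2, minor injury: 58, no injury: 600
```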
Related Articles
Human Performance Modelling
Generic Error-Modelling System (GEMS)
Human Factors Analysis and Classification System (HFACS)
ICAO SHELL Model
James Reason HF Model
LMQ HF Model
PEAR Model
James Reason HF Model
Swiss Cheese Model

Description
The Swiss Cheese model of accident causation, originally proposed by James Reason, likens
human system defences to a series of slices of randomly-holed Swiss Cheese arranged vertically
and parallel to each other with gaps in-between each slice.
Reason hypothesizes that most accidents can be traced to one or more of four levels of failure:
 Organisational influences,
 Unsafe supervision,
 Preconditions for unsafe acts, and
 The unsafe acts themselves.
In the Swiss Cheese model, an organisation's defences against failure are modelled as a series
of barriers, represented as slices of the cheese. The holes in the cheese slices represent
individual weaknesses in individual parts of the system, and are continually varying in size and
position in all slices. The system as a whole produces failures when holes in all of the slices
momentarily align, permitting "a trajectory of accident opportunity", so that a hazard passes
through holes in all of the defences, leading to an accident.

Swiss Cheese model of accident causation
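One common quantitative reading of the model treats each slice as an independent barrier with some probability of being breached, so that an accident requires every barrier to fail at once. A minimal sketch with invented numbers:

```python
# Illustrative sketch only: the "holes momentarily aligning" idea treated
# as independent barriers, each with some probability of being breached.
# The numbers are invented for the example.
barrier_hole_probability = {
    "organisational influences": 0.05,
    "unsafe supervision": 0.05,
    "preconditions for unsafe acts": 0.10,
    "unsafe acts": 0.10,
}

# A hazard becomes an accident only if it passes through every barrier.
p_accident = 1.0
for barrier, p_hole in barrier_hole_probability.items():
    p_accident *= p_hole
print(f"P(trajectory passes all barriers) = {p_accident:.6f}")  # 0.000025
```

The product falls quickly as barriers are added or their hole probabilities are reduced, which is the intuition behind layered defences.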


Related Articles
 Human Factors Analysis and Classification System (HFACS)
 Heinrich Pyramid
 ICAO SHELL Model
 PEAR Model
PEAR Model
Contents
 1 Description
 2 People
 3 Environment
 4 Actions
 5 Resources
 6 Related Articles
 7 Further Reading

Description
The mnemonic PEAR is used to recall the four considerations for assessing and mitigating
human factors in aviation maintenance (a checklist sketch follows this list):
 People who do the job;
 Environment in which they work;
 Actions they perform; and
 Resources necessary to complete the job.
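One way to put the mnemonic to work is as an assessment checklist. In this minimal sketch the four headings come from the mnemonic and the prompts are condensed from the factor lists below; the structure and function are invented for this example:

```python
# Illustrative sketch only: PEAR as a simple assessment checklist. The
# headings follow the mnemonic; the prompts are condensed from the factor
# lists below, and the structure is invented for this example.
PEAR_CHECKLIST = {
    "People": ["physical", "physiological", "psychological", "psychosocial"],
    "Environment": ["physical", "organisational"],
    "Actions": ["steps", "sequence", "communication", "requirements"],
    "Resources": ["procedures", "tools", "equipment", "training"],
}

def open_items(assessed: dict) -> list:
    """Return the PEAR factor groups not yet assessed for a task."""
    return [
        (heading, group)
        for heading, groups in PEAR_CHECKLIST.items()
        for group in groups
        if group not in assessed.get(heading, [])
    ]

# Example: an assessment that has so far covered only the People factors.
print(open_items({"People": ["physical", "physiological",
                             "psychological", "psychosocial"]}))
```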

People

Physical Factors
 Physical Size
 Gender
 Age
 Strength
 Sensory Limitations
Physiological Factors
 Nutritional factors
 Health
 Lifestyle
 Fatigue
 Chemical dependency
Psychological Factors
 Workload
 Experience
 Knowledge
 Training
 Attitude
 Mental or emotional state
Psychosocial Factors
 Interpersonal conflicts
 Personal loss
 Financial hardships
 Recent divorce

Environment

Physical
 Weather
 Location inside/outside
 Workspace
 Shift
 Lighting
 Sound level
 Safety
Organisational
 Personnel
 Supervision
 Labour-management relations
 Pressures
 Crew structure
 Size of company
 Profitability
 Morale
 Corporate culture

Actions

 Steps to perform task


 Sequence of activity
 Number of people involved
 Communication requirements
 Information control requirements
 Knowledge requirements
 Skill requirements
 Attitude requirements
 Certification requirements
 Inspection requirements

Resources

 Procedures/work cards
 Technical manuals
 Other people
 Test equipment
 Tools
 Computers/software
 Paperwork/signoffs
 Ground handling equipment
 Work stands and lifts
 Fixtures
 Materials
 Task lighting
 Training
 Quality systems

Related Articles
 Maintenance Workload
 Human Performance Modelling
 Generic Error-Modelling System (GEMS)
 Heinrich Pyramid
 Human Factors Analysis and Classification System (HFACS)
 ICAO SHELL Model
 James Reason HF Model
 LMQ HF Model

Further Reading
 "A Model to Explain Human Factors in Aviation Maintenance" Dr W B Johnson & Dr M E
Maddox, Avionics News, April 2007.
 Advisory Circular 120-92B, Safety Management Systems for Aviation Service Providers.
U.S. Federal Aviation Administration, January 8, 2015.
 “Fatigue Risk Management in Aviation Maintenance: Current Best Practices and
Potential Future Countermeasures.” by A. Hobbs, K. Avers and J. Hiles. DOT/FAA/AM-
11/10, 2011. U.S. Federal Aviation Administration Office of Aerospace Medicine.
