CS.00133 Global DFMEA

This document outlines the Global DFMEA (Design Failure Mode & Effect Analysis) work instructions for Stellantis, detailing the processes and methodologies required to identify and mitigate potential design issues before product release. It includes a history of changes, a comprehensive table of contents, and definitions of key terms relevant to the DFMEA process. The document serves as a global standard applicable to all Stellantis engineering activities and emphasizes the importance of cross-functional collaboration in improving product quality.

CS.00133
Characteristic Specification: 01276_17_00009

GLOBAL DFMEA AND MSR
WORK INSTRUCTIONS
(STELLANTIS INSTRUCTIONS)

Page: 1/50
Date: 13-NOV-2023

STELLANTIS HARMONIZED
Change level  Date         Description of change

-             04-AUG-2016  Initial release (draft)
A             21-FEB-2017  Official release
B             13-DEC-2017  Updated Sections 1.3 – 3 and 5.4.11; added various references to forming process
C             18-OCT-2018  Updated Sections 3.2 & 3.9; Tables 3, 4, & 6
D             03-DEC-2019  Updated Sections 1.3, 5.1, 5.4.6 & 5.4.17; Tables 1, 3, & 4; added Figure 12 - Detection Scoring Logic; NAFTA to North America
E             06-MAR-2020  Added missing Section Reference 5.4.13; updated Sections 5.3 & 5.4.13
F             11-MAY-2021  Updated 1.2; updated 1.3; updated Table 1; updated 3; added 3.6.1; removed previous Figure 1; added 3.7; added 3.14 with Figure 2 and Figure 3; removed 5.2 then renumbered paragraphs; updated 5.2, 5.3.11.1, 5.3.12 & 5.3.14; added 5.4; renumbered figures and updated figure references; Propulsion was Powertrain throughout
G             23-JUL-2021  Updated 5.3.2, removing previous Figure 12; updated 5.3.11, 5.3.11.1, 5.3.12, and 5.3.17; renumbered figures; updated Table 6; added 5.3.26 with Figure 14
H             08-APR-2022  Harmonization with Ex-FCA; integration of MSR and Software Logic
J             13-NOV-2023  Updated 1.1; references updated in 2; Noise Factor description updated in 3.10; Impact Assessment and PSK/KPI updated in 5.2; updated figure numbers and references; included DocInfo number in title

For information, please contact the author and co-author listed in the lateral label of this
document.

© 2022 STELLANTIS
ANY PRINTED COPY IS TO BE DEEMED AS UNCHECKED; THEREFORE THE UPDATED COPY MUST BE CHECKED IN THE APPROPRIATE WEB SITE
CONFIDENTIAL
THIS DOCUMENT MUST NOT BE REPRODUCED OR CIRCULATED TO THIRD PARTIES WITHOUT PRIOR WRITTEN CONSENT BY THE RELEVANT STELLANTIS
COMPANY. IN CASE OF DISPUTE, THE ONLY VALID REFERENCE IS THE ENGLISH EDITION

TABLE OF CONTENTS
1 GENERAL ................................................................................................................................................. 5
1.1 Purpose .................................................................................................................................................. 5
1.2 Coverage of this standard ...................................................................................................................... 5
1.3 Global DFMEA Working Team............................................................................................................... 6
2 REFERENCES.......................................................................................................................................... 6
3 DEFINITIONS/ABBREVIATIONS/ACRONYMS/SYMBOLS ..................................................................... 7
3.1 Customers .............................................................................................................................................. 7
3.2 Design Item ............................................................................................................................................ 8
3.3 Vehicle Level.......................................................................................................................................... 8
3.4 System Level.......................................................................................................................................... 8
3.5 Component Level ................................................................................................................................... 8
3.6 Installation/Interface DFMEA ................................................................................................................. 9
3.7 Functional breakdown .......................................................................................................................... 10
3.8 Physical Breakdown............................................................................................................................. 11
3.9 Interfaces ............................................................................................................................................. 11
3.10 Noise Factors ..................................................................................................................................... 12
3.11 Correlation Matrix............................................................................................................................... 14
3.12 DFMEA............................................................................................................................................... 14
3.13 Relationships Between Internal and External Items .......................................................................... 14
4 REGULATED SUBSTANCES & RECYCLABILITY ................................................................................ 15
5 REQUIREMENTS/CONDITIONS ........................................................................................................... 15
5.1 Generalities .......................................................................................................................................... 15
5.2 Proactive Reliability Process................................................................................................................ 16
5.2.1 Functional Block Diagram (FBD) ...................................................................................................... 17
5.3 DFMEA Realization.............................................................................................................................. 17
5.3.1 Heading ............................................................................................................................................. 17
5.4 DFMEA Input Columns ........................................................................................................................ 18
5.5 Template (Standard Risk Analysis)...................................................................................................... 18
5.5.1 Item to study (Individual part / Interface)............................................................................................ 18
5.5.2 Item Elementary Function ................................................................................................................. 18
5.5.3 Requirement of the function <->Function Specification .................................................................... 18
5.5.4 Complexity/Application(Optional)...................................................................................................... 18
5.5.6 Generic Failure Mode........................................................................................................................ 18
5.5.7 Potential Failure Mode ...................................................................................................................... 20
5.5.8 Life Situation (Optional) .................................................................................................................... 20
5.5.9 Potential Effect(s) of Failure.............................................................................................................. 20
5.5.10 Initial Severity.................................................................................................................................. 21
5.5.11 Effective Safety Barrier (Optional) .................................................................................................. 21
5.5.12 Link to reference of Technical Safety Requirements (Optional) ..................................................... 21
5.5.13 Most Severe Failure Effect with effective Safety barrier (Optional) ................................................ 21
5.5.14 ASIL target for FM (Optional) .......................................................................................................... 21
5.5.15 Mechanical Occurrence (Optional) ................................................................................................. 21
5.5.16 Occurrence PMHF (Optional) ......................................................................................................... 21
5.5.17 Potential Root Cause(s) / Mechanism(s) of Failure ........................................................................ 22
5.5.18 Drawing Characteristic / Design requirements ............................................................... 22

5.5.19 Impact (Optional)............................................................................................................................. 22


5.5.20 Characteristic Classification ............................................................................................................ 23
5.5.21 Prevention Controls......................................................................................................................... 23
5.5.22 Actual Value (Optional) ................................................................................................................... 23
5.5.23 Initial Occurrence ............................................................................................................................ 24
5.5.24 Detection Controls (was Current Design Controls Detection)......................................................... 24
5.5.25 Initial Detection................................................................................................................................ 25
5.5.26 Initial Action Priority......................................................................................................................... 25
5.6 Recommended Actions(s).................................................................................................................... 26
5.6.1 Recommended Action(s): ................................................................................................................. 26
5.6.2 Predicted Action Priority: Severity..................................................................................................... 27
5.6.3 Predicted Action Priority: Occurrence ............................................................................................... 27
5.6.4 Predicted Action Priority: Detection .................................................................................................. 27
5.6.5 Predicted Action Priority: Action Priority ........................................................................................... 27
5.6.6 Responsibility .................................................................................................................................... 28
5.6.7 Due date............................................................................................................................................ 28
5.6.8 Action(s) results ................................................................................................................................ 28
5.7 Monitor System Response (MSR) Risk Reduction Evaluation ............................................................ 28
5.7.1 Rationale for Frequency (F) .............................................................................................................. 30
5.7.2 Frequency (F) of FC.......................................................................................................................... 30
5.7.3 Current Diagnostic Monitoring .......................................................................................................... 30
5.7.4 Monitoring (M) ................................................................................................................................... 30
5.7.5 Most Severe Failure Effect after System Response ......................................................................... 30
5.7.6 Severity (S) of FE after MSR ............................................................................................................ 30
5.7.7 MSR Action Priority (AP)................................................................................................................... 30
5.7.8 MSR Preventive Action ..................................................................................................................... 31
5.7.9 Diagnostic Monitoring Action ............................................................................................................ 31
5.7.10 System Response ........................................................................................................................... 31
5.7.11 Most Severe Failure Effect after System Response ....................................................................... 31
5.7.12 Responsible Person ........................................................................................................................ 31
5.7.13 Target Completion Date .................................................................................................................. 31
5.7.14 Action Item Status ........................................................................................................................... 31
5.7.15 Action Taken with Pointer to Evidence ........................................................................................... 31
5.7.16 Completion Date ............................................................................................................................. 31
5.7.17 Severity (S) of FE after MSR .......................................................................................................... 31
5.7.18 Frequency (F).................................................................................................................................. 32
5.7.19 Monitoring (M) ................................................................................................................................. 32
5.7.20 Action priority (AP) .......................................................................................................................... 32
5.8 Final Action Priority/Comment ............................................................................................................. 32
5.8.1 Final Action Priority: Severity ............................................................................................................ 32
5.8.2 Final Action Priority: Occurrence ...................................................................................................... 32
5.8.3 Final Action Priority: Detection .......................................................................................................... 32
5.8.4 Final Action Priority: Action Priority ................................................................................................... 32
5.8.5 Comment........................................................................................................................................... 33
5.9 Software Logic DFMEA........................................................................................................................ 33
5.9.1 DFMEA Relationships ....................................................................................................................... 33
5.9.2 Functional Safety .............................................................................................................................. 33
5.9.3 Function Block Diagram .................................................................................................................... 34
5.9.4 Grand Interaction FBD ...................................................................................................................... 34
5.9.5 DFMEA FBD ..................................................................................................................................... 34
5.9.6 Agile Development with DFMEA....................................................................................................... 35
5.9.7 Generic Failure Modes...................................................................................................................... 35
5.9.8 Effects ............................................................................................................................................... 36

5.9.9 Causes .............................................................................................................................................. 36


5.9.10 Prevention Controls......................................................................................................................... 37
5.9.11 Detection Controls........................................................................................................................... 37
5.9.12 Severity Scoring .............................................................................................................................. 38
5.9.13 Occurrence Scoring ........................................................................................................................ 38
5.9.14 Detection Scoring............................................................................................................................ 39
5.9.15 Monitoring & System Response...................................................................................................... 39
5.9.16 Considerations ................................................................................................................................ 39
5.9.17 Recommended Actions ................................................................................................................... 39
5.10 Transitions from Core to Application DFMEA .................................................................................... 40
6 APPROVED SOURCE LIST ................................................................................................................... 40
Annex A: DFMEA Template ........................................................................................................................ 41
Annex B: Severity Rating ............................................................................................................................ 42
Annex C: Occurrence Rating ...................................................................................................................... 43
Annex E: DFMEA Action Priority................................................................................................................. 45
Annex F: Frequency Rating ........................................................................................................................ 46
Annex G: Monitoring Rating ........................................................................................................................ 47
Annex H: MSR Action Priority ..................................................................................................................... 48
Annex I: Impact Assessment Tool RASI ..................................................................................................... 49
Annex J: Impact Assessment Tool Example............................................................................................... 50

1 GENERAL

1.1 Purpose

This document is a guideline and describes the corporate activities required to develop a solid DFMEA
(Design Failure Mode & Effect Analysis).

DFMEA is an engineering analysis performed by a cross-functional team of subject matter experts. Its
objective is to improve the design by finding and correcting potential quality issues before the product is
released to the customer.

1.2 Coverage of this standard

This is a global standard and is applicable to all Stellantis engineering activities.

Previous methodology usage

Core DFMEAs / program-specific DFMEAs previously created using other methodologies may
continue to be used if acceptable. The DFMEA is acceptable if:
- The DFMEA has been continually reviewed and updated and is considered effective by
Engineering and Quality,
- The system or component has historically demonstrated good reliability, and
- There is limited differentiation between the system or component being analyzed and the
content of the core DFMEA.

A Program-Specific DFMEA may be created using acceptable core DFMEAs / program DFMEAs
based on a legacy methodology.
If an acceptable existing core DFMEA / program DFMEA does not exist, a new DFMEA must be
created using the global harmonized methodology.
Migration to the global harmonized methodology is encouraged even if not specifically required per
the criteria.
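The acceptability criteria above amount to an all-of check. A minimal sketch follows; the class and function names are illustrative and are not part of this standard:

```python
from dataclasses import dataclass


@dataclass
class LegacyDfmeaStatus:
    """Hypothetical record of an existing core/program DFMEA's state."""
    reviewed_and_updated: bool      # continually reviewed/updated, considered effective by Engineering and Quality
    good_reliability_history: bool  # the system or component has historically demonstrated good reliability
    limited_differentiation: bool   # little difference vs. the content of the core DFMEA


def legacy_dfmea_acceptable(status: LegacyDfmeaStatus) -> bool:
    """A DFMEA created with a legacy methodology stays acceptable only if all three criteria hold."""
    return (status.reviewed_and_updated
            and status.good_reliability_history
            and status.limited_differentiation)


# If any criterion fails, a new DFMEA must be created with the global harmonized methodology.
print(legacy_dfmea_acceptable(LegacyDfmeaStatus(True, True, False)))  # → False
```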

All new global program DFMEAs started after implementation of this standard must follow the new
global methodology specified herein.
Migration of existing (core/application) DFMEAs to the new standard will follow the specific
transition plan defined by the Organization.

Link to DFMEA/PFMEA Supplier Synthesis Template:

http://docinfogroupe.inetpsa.com/ead/doc/ref.01355_16_00154/v.vc/fiche

1.3 Global DFMEA Working Team

Mohammad Hijawi NA QMS & Engrg Core Tools


Thomas Hansel EE QMS & Engrg Core Tools
Matthew Withrow NA QMS & Engrg Core Tools
Philippe Leveque EE B & I RAMS Engineer
Axel Lünenbürger EE Vehicle Performance Integration and Validation
Dennis Irico, Marco Brino, Roberto Beltramo EE Propulsion DFMEA Specialists
Eduardo Adan SA Propulsion DFMEA Specialist

2 REFERENCES

Table 1 - References
Document Number / Designator (if applicable) | Document Title | Downloadable for suppliers from Shield/beSTandard/DocInfo

CS.00133 / 01276_17_00009 | Global DFMEA Work Instruction | Y
CEP.00031 / 01276_19_00034 | Global DFMEA Template | Y
CEP.00068 / 01276_22_00060 | Global DFMEA/DRBFM Process Impact Assessment (PIA) Template | Y
02022_21_00047 | Checklist DFMEA and MSR | Y
QR.10022 / 01272_08_00052 | DVP&R Template and Work Instruction | Y
CEP.00053 / 01276_22_00056 | DRBFM Template | Y
CS.0021 / 02022_22_00337 | DRBFM Work Instruction | Y
01276_23_00053 | DFMEA Vehicle KPI | Y
PSK-T014 | User Guide DFMEA Propulsion PSK | N
PRO.00109 / 01276_22_00061 | Quality Requirements for Suppliers (QRS) | Y
01272_06_00006 | Product FMEA Study Summary (Supplier Synthesis) | Y
FPW.IFN053 / 01276_10_00022 | Key Characteristics Designation System (KCDS) | Y
CS.00071 / Q741300 (01276_10_00891) | Global Classification of Characteristics | Y
SAE J1739 | Potential Failure Mode and Effects Analysis in Design (Design FMEA), Potential Failure Mode and Effects Analysis in Manufacturing and Assembly Processes (Process FMEA) | N
AIAG/VDA FMEA Handbook First Edition | AIAG/VDA FMEA Handbook First Edition | N
AIAG 4th Edition | AIAG 4th Edition | N
DES.FORMING/01 | Global DIE Engineering Std. – Sheet Metal Forming Simulation Guidelines | Y
DES.FORMING/02 | Global DIE Engineering Std. – Sheet Metal Forming VTO Simulation Guidelines | Y
ISO 26262 | Road vehicles – Functional safety | N
Q250100_tA (01276_10_00892) | List of the Customer Feared Events with a 4 rated Severity | N
01276_10_00876 | List of the Customer Feared Events with a 3 rated Severity

3 DEFINITIONS/ABBREVIATIONS/ACRONYMS/SYMBOLS
AP Action Priority
ASIL Automotive Safety Integrity Level
CAN Controller Area Network
CFD Computational Fluid Dynamics
CFTS Component Function Technical Specification
DFMEA Design Failure Modes & Effects Analysis
DOE Design of Experiments
DV Design Validation
DVP&R Design Validation Plan & Report
ERC Evènement Redouté par Fonction Véhicule / Feared Event by Vehicle Function
FBD Functional Block Diagram
FEM Finite Elements Method
FuSa Functional Safety
HARA Hazard Analysis and Risk Assessment
HIL Hardware In the Loop
MSR Monitoring & System Response
PMHF Probabilistic Metric for random Hardware Failures
PLM Product Lifecycle Management
PV Process validation
QFD Quality Function Deployment
RPN Risk Priority Number
S×O Severity × Occurrence
SoR Statement of Requirements
VF Vehicle Function
VSA Variation Simulation Analysis

For a proper analysis the following are conventionally defined:

3.1 Customers

VEHICLE/PRODUCT USERS: Drivers, passengers, operators, end users, and/or owners who expect
safety, performance, reliability, comfort, and convenience.

GOVERNMENT AUTHORITIES: Government agencies that define requirements and monitor compliance
to safety, environmental, and other specifications.

MANUFACTURING FACILITIES: Producers of the product that define manufacturing requirements to
meet product specifications, feasibility expectations, cost constraints, and operator safety at the plant
and/or ship-to plant.

MAINTENANCE: Service Technicians who must safely perform preventative maintenance and/or repair
tasks to ensure product availability, and meet environmental regulations.

SHIPPING & HANDLING: Production Control and Logistics (including packaging / containerization)
personnel who accomplish movement, storage, and ensure preservation of the product.

END OF LIFE ENTITIES: Organizations which recycle, reuse, re-manufacture or reclaim materials from
an end of life product.
Page: 8/50
CS.00133
Change Level: J

3.2 Design Item

A design item can be a system or a component. Each design item has functions which interface with or
are required by other design items.

It is used to explain the scope of the design and how it works.

Examples:
- A turbocharger provides functions such as pump air, supply air, or increase air density for
an intake manifold. There will be additional functions for locate, hold, and seal which relate to
other discrete interfaces.

- A door handle provides functions such as develop force or pull cable, which is then
transferred to the latch mechanism. There are still functions for locate, hold, and potentially
seal which will relate to other specific design item features.

The design Items are further clarified by the following Design Level Structures.

3.3 Vehicle Level

How the outputs of systems work together to provide functions to the user.
Best modelled through the use of a Functional Block Diagram (FBD).

3.4 System Level

A system is a combination of components that provides one or more functions (e.g. engine, brake
system, cooling system, fuel system, door, transmission, suspension system, seat, etc.).
Design items such as a turbocharger, water pump, or A/C compressor are less complex and are
considered components.

A system level analysis considers:

1. How components work together to produce the system functions;


2. The functions that the components have to provide to ensure the function of the system;
3. Interfaces and interactions between the components included in the system;
4. How the failures of the components cause the system-level failures. Component failure analysis is
included in the component DFMEA.

When a system is too complex, it is broken into subsystems and a team is developed around each
subsystem.

3.5 Component Level

A component can be an individual part/element or a combination of parts that provides a function in a
system.
Examples include: electrical motor, turbocharger, control module, seat structure, door structure, sun roof
structure, oil pump, fuel pump, front end module, and instrument panel.
The scope of the study should be determined based on the complexity of the component.
In the case of an individual part, material and dimensions are the main focus, including the part's
interfaces with mating/surrounding components.
If the component is an assembly made up of multiple parts but not as complex as a system, then the
analysis will also include the internal interfaces and focus on how individual parts work together.

A component level analysis considers:

- How individual part(s) function;


- How parts function together if more than one part is included in the analysis;
- How an individual part interfaces with other parts of the same component or in the attachments/joining
of assemblies and systems.

The part/interface functions are essentially created by the material, dimensions, tolerances, and other
design parameters/drawing characteristics.

Causes will be why parts fail (dimensions, material failures, and the noise factors that can cause the
change/failure).

3.6 Installation/Interface DFMEA

An Installation/Interface DFMEA is focused on interfaces and/or interactions between Systems,
Components, the Environment, and/or the Customer. Usually the analysis will include dimensions,
tolerances, and material properties. It is often performed on co-designed or externally designed
components.

Some instances of an Installation/Interface DFMEA include the turbocharger interface to an engine, the
engine interface to a transmission, a high-voltage battery installation, and the relationships of the airbag
modules with the dashboard, seats, trim, and steering wheel.

Items of study may include fastening choice, interface interactions (sealing, duct alignment,
assembly, …), heat sources, vibration level of the specific application, water & dust protection, minimum
clearance from neighbouring parts, cable/pipe routing, tool access, and maintenance access.

Example: The turbocharger gas inlet flange design interacts with the exhaust manifold and gasket in
terms of flatness, surface finish, porosity, sealing surface width to locate the gasket bead, hole
dimensions and position, and flange thickness.

The Installation/Interface DFMEA may also include performance requirements (e.g. efficiency, pressure
drop, cleanliness, leak test specification, traceability, part identification, handling prescriptions, airbag
modules effective deployment).

The functions for this DFMEA should be identified by the functional block diagram or correlation matrix
depending on the template selected.

3.7 Functional breakdown

A functional breakdown of the design structure begins at the main function of the vehicle (broken down
into system functions) or of a system (broken down into component functions), depending on the scope of
the study (an example is given in Figure 1). The objective is to decompose the high-level function to an
executable level that can be used for the DFMEA. For a correlation analysis, the functional breakdown
typically spans 3-5 levels, depending on the scope of the analysis.

Figure 1 - Functional breakdown example (in grey, design for manufacturing/maintenance/
regulatory functions; on the right, the design structure)
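As an illustration only, the decomposition described above can be modeled as a nested tree and flattened to the executable-level functions that feed the DFMEA. All function names below are hypothetical examples, not taken from CS.00133:

```python
# Hypothetical functional breakdown (all names invented for illustration):
# each key is a function, each value holds its sub-functions.
breakdown = {
    "Transport occupants": {                      # vehicle-level main function
        "Generate propulsion torque": {           # system-level function
            "Transfer torque to gearbox": {},     # executable level for DFMEA
            "Seal combustion chamber": {},
        },
        "Decelerate vehicle": {
            "Apply clamping force to disc": {},
        },
    },
}

def leaf_functions(tree):
    """Collect the executable-level (leaf) functions that would be
    carried into the DFMEA or the correlation matrix."""
    leaves = []
    for function, subs in tree.items():
        leaves.extend(leaf_functions(subs) if subs else [function])
    return leaves

def depth(tree):
    """Number of levels in the breakdown (typically 3-5 levels for a
    correlation analysis, per the text above)."""
    return 0 if not tree else 1 + max(depth(subs) for subs in tree.values())
```

In this sketch, `leaf_functions(breakdown)` returns the three component-level functions and `depth(breakdown)` returns 3, i.e. a vehicle-system-component decomposition.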

3.8 Physical Breakdown

A list of:
- individual parts/components, and/or
- subsystems/systems
that are the object of the analysis, along with their interfaces.

3.9 Interfaces

Functions are transferred across interfaces between different design scopes (at an input or an output).
Interfaces and their functions are considered in six broad categories:

1. Physical contact (all structural, locate/position, hold/secure, connect, or seal functions);


2. Transfer materials (diesel, gasoline, air, hydraulics, even debris/dust, water, coolant, exhaust,
hydrocarbon fumes, etc.);
3. Transfer energy (e.g., heat, force, electrical current, steam, hydraulic pressure, radiated heat);
4. Transfer information (analog or digital; e.g., wiring harness, electrical signals, or any type of
information exchange such as CAN or LIN);
5. Human-Machine interfaces;
6. Physical clearances: they should be included in the analysis when they become critical to the
product. For example:
o Allow assembly operations in engine/vehicle plants (tool access, fit access, etc.),
o Allow feasibility (metal forming, plastic parts forming, castings, metal working),
o Allow ordinary maintenance/repair operations in a service garage (oil change, oil filter change, etc.),
o Prevent contact between moving parts (actuator rod stroke or parts dynamics),
o Prevent interferences due to component tolerance stack-up,
o Protect from noise factors (alternator shield to avoid oil/water dripping, heat shield to protect
electrical components from radiated heat).

NOTE: All six of the above relationships can be by design intent, or can be undesirable causes if unintended (e.g., heat
radiated to a nearby temperature-sensitive item, or water transferred but not drained so that it gathers in an undesirable
location) or generated by noise factors.

The specific features of the design need to be shown in relationship with their function (e.g. profile A:
provide room-for-tool, allow access for repair).

A primary focus of interface functions will be: locate, hold, seal, transfer a function or functions between
interfacing design structures. Interfaces can be internal to the system/components (interactions of
system components/individual parts of the component) and/or external to the system/components
(interactions with mating and surrounding system/components).
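For teams that keep an interface inventory in a tool or script, the six categories above could be captured as typed records. This is a hedged sketch with made-up item and function names, not a format prescribed by CS.00133:

```python
from dataclasses import dataclass

# The six broad interface categories listed above.
CATEGORIES = (
    "physical contact", "transfer material", "transfer energy",
    "transfer information", "human-machine", "physical clearance",
)

@dataclass
class Interface:
    from_item: str
    to_item: str
    category: str
    function: str           # the function carried across the interface
    by_design: bool = True  # False when the interaction is unintended (noise)

    def __post_init__(self):
        # Reject anything outside the six broad categories.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown interface category: {self.category}")

# Invented entries for a turbocharger/manifold interface:
interfaces = [
    Interface("exhaust manifold", "turbo inlet flange", "physical contact",
              "seal exhaust gas at gasket"),
    Interface("exhaust manifold", "turbo inlet flange", "transfer energy",
              "radiate heat to flange", by_design=False),
]

# Unintended interactions are candidate causes generated by noise factors.
unintended = [i for i in interfaces if not i.by_design]
```

Flagging `by_design=False` entries mirrors the NOTE above: the same six relationship types can appear either as design intent or as unintended causes.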

3.10 Noise Factors

Control Factors (e.g., dimensions and materials) are under the Control of the engineer.
Noise Factors (e.g., ambient temperature, dirt, customer usage) are not under the control of the
engineer, but have an influence on performance and durability.

It is important to determine design improvements (Control Factor settings) to minimize or divert the
influence of the noise factors.

Noise Factors shall be included in DVP tests to demonstrate that the design is robust.

How to include Noise Factors in the FMEA:

- Before the FMEA is started, problems from the past should be analyzed: was a Noise Factor
  influence visible/possible?
- Analyze possible Noise Factors for each item/interface while creating the Block Diagram, using the
  Noise Factor table in the "Functional Block Diagram" sheet (DFMEA template CEP.00031/
  01276_19_00034). The list of Noise Factors should be taken as a thought starter, as there could be
  others.
- Noise Factors can be a root cause for a Failure Mode:
  o extreme cold ambient is a potential cause of partial sealing;
  o road salt is a potential cause of corrosion.
- Functions can include the Noise Factors, e.g.:
  o protect against water/dust intrusion;
  o withstand cold weather (low temperature).
- Define Preventive/Detection Controls against Noise Factor influence.
- Define Corrective Actions against Noise Factor influence.

There are five broad categories that can be used to minimize the impact of noise factors:
1. Change the technology to avoid the sensitivity to the noise factors (Very expensive or not
feasible);
2. Add a compensation that can offset the noise factor, such as ABS brakes or a vibration damper;
3. Remove the source of the noise (usually very expensive);
4. Change the design to divert the noise such as changing the direction of coolant or oil, adding a rib
to redistribute the stresses;
5. Use Design of Experiments with noise factors to optimize the design so it is not sensitive to the
noise factors.

In case of problems, or to learn more about noise factor handling, the DFSS Team should be consulted.

The list below includes many common noise factors. New/additional noise factors may be defined based
on the specific analysis.

Noise factor categories (from CS.00133) and examples:

1. Over Time (Aging)
- Fuel type / specifications / fuel quality (hot/cold temperature, coagulation at cold temperature, bad fuel)
- Flex fuel (chemical aggression / parts sensitivity to ethanol)
- Low-cost / adulterated fuel (affects GDI system)
- Oil type / specifications / oil quality
- Low-cost engine oil / blend of different oil specifications (impacts to engine internal moving parts)
- Oil viscosity (for hot & cold temperature)
- Oil change interval / oil dilution
- Low-cost oil filter adoption (risk of poor filtering or filter clogging)
- Coolant type / specifications / coolant quality (for hot & cold temperature)
- Low-cost cooling additive adoption (risk of corrosion)
- Wear/degradation/presence of specific friction
- Low-cost spark-plug adoption
- Air-to-fuel ratio learning (risk of knocking / spark advance error / ethanol sensor learning performance, accuracy, and speed)

2. Manufacturing Variation
- Piece-to-piece variation
- Assembly line process variation from plant to plant (e.g., oil first filling done with vehicle tilted in some plants)
- Incoming material variation
- Tool wear
- Presence of contaminants (parts cleanliness)

3. Outside Interactions
- Service (properly / poorly / not enough done)
- Logistics (overseas transportation, storage)
- Radiation (solar or part to part)
- Heat sources
- Electrostatic/magnetism effects (EMC/EMI)
- Vehicle layout / installation (e.g., clearances, angles)

4. External Environment
- Humidity
- Slope
- Ambient temperature hot/cold
- Altitude high/low (i.e., different ambient pressure)
- Chemical exposure
- Water (rain / water fording / intrusion / immersion)
- Ice/snow/sleet/hail
- Sand
- Dust
- Extreme intrusion (risk of FEAD and seals failure / premature wear)
- Deleterious atmosphere (e.g., ash, pollution)
- Wind
- Salinity
- Sound/audible noises
- Vibration
- Grass
- Darkness / bright (sun-)light
- Mud, salt-infused mud
- Presence of foreign object (readability / glare / bleach out / heating)

5. Duty Cycle and/or Customer Usage
- Usage of vehicle (e.g., passenger, LCV, off road, special vehicles, customer usage, load factors)
- Washing / power washing / cleaning
- Engine bay washing with high-pressure washers (affecting electrical connectors, module malfunction)
- Road puddles / potholes / road debris

Figure 2 - Noise Factors and Examples

3.11 Correlation Matrix

The tool that correlates the functions derived through the functional breakdown to individual parts/
components/interfaces (i.e., which items/interfaces correlate to which functions).

Usage of Correlation Matrix is strongly recommended where applicable.


An example is included in the DFMEA template, but the layout of the Correlation Matrix can be chosen
by the team.
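A minimal sketch of such a matrix, reusing the illustrative items from Figure 6 (Handle, Door, B Pillar); the representation and the correlation values below are hypothetical, and the real template layout is the team's choice:

```python
# Correlation data: correlation[function][item] is True when the item
# contributes to (correlates with) the function. Values are invented.
functions = ["Locate handle", "Attach handle", "Provide hand clearance"]
items = ["Handle", "Door", "B Pillar"]

correlation = {
    "Locate handle":          {"Handle": True, "Door": True, "B Pillar": False},
    "Attach handle":          {"Handle": True, "Door": True, "B Pillar": False},
    "Provide hand clearance": {"Handle": True, "Door": True, "B Pillar": True},
}

def functions_of(item):
    """Which functions does this item/interface correlate to?"""
    return [f for f in functions if correlation[f][item]]

def uncorrelated_items():
    """Items with no correlated function: a gap the team should review."""
    return [i for i in items if not functions_of(i)]
```

Reading the matrix in both directions (functions per item, items per function) is what makes it useful: an empty row or column signals either a missing function or an item whose purpose has not been analyzed.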

3.12 DFMEA

The tool that allows the analysis of all potential failure modes, their effects, and their causes, identifying
and evaluating the associated risk and defining the proper recommended actions to contain the higher risks.

3.13 Relationships Between Internal and External Items

It is important for DFMEAs to cross-communicate internally and with other system and component
DFMEAs. This requires that the DFMEA author investigate other DFMEAs for key data. The Function
Block Diagram should be used to identify these relationships.

For Internal communication and System to System cross-communication, the following must be done
(Figure 3):

1. Failure Modes of the previous item can be Causes for the item being evaluated;
2. The highest Occurrence score from the failure mode of the previous item flows to the next failure
mode (the input failure mode is now a cause);
3. Severities flow upstream to the corresponding Failure Modes.
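The three rules above can be sketched as a small data-flow example; all failure modes and scores below are invented for illustration and do not come from the standard:

```python
# Invented example data: an upstream fuel-system DFMEA feeding an
# engine-level DFMEA.
upstream = {
    # upstream failure mode -> occurrence scores of its causes
    "Provides no fuel pressure": [3, 5],
}
downstream = {
    "Does not start engine": {"causes": [], "occurrence": {}, "severity": 8},
}

def link(up_fm, down_fm):
    """Apply the three horizontal-communication rules for one link."""
    # Rule 1: the upstream failure mode becomes a cause downstream.
    downstream[down_fm]["causes"].append(up_fm)
    # Rule 2: the highest occurrence score of the upstream failure mode
    # flows to the new cause.
    downstream[down_fm]["occurrence"][up_fm] = max(upstream[up_fm])
    # Rule 3: the downstream severity flows upstream to the
    # corresponding failure mode.
    return {up_fm: downstream[down_fm]["severity"]}

upstream_severity = link("Provides no fuel pressure", "Does not start engine")
```

After linking, the downstream DFMEA carries the upstream failure mode as a cause with occurrence 5, and the upstream DFMEA inherits severity 8 for that failure mode.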

Figure 3 - Horizontal Communication

It is also important for system and component DFMEAs to communicate between levels (Figure 4). This
requires that the DFMEA author investigate the other DFMEAs for key data.

For communication between levels, the following must be done:


1. Failure Modes of lower level items can be causes;
2. The Severities of the higher level DFMEA must be known for the lower level DFMEAs;
3. Communication between DFMEAs is necessary;
4. Communication between Stellantis and its suppliers is strongly encouraged.

Figure 4 - Vertical Communication

4 REGULATED SUBSTANCES & RECYCLABILITY


Not applicable.

5 REQUIREMENTS/CONDITIONS

5.1 Generalities

This global harmonized DFMEA methodology is based on the EMEA 2nd generation and AIAG/VDA
FMEA Handbook. It places a greater emphasis on risk prevention rather than on detection or mitigation.
Modifications from SAE J1739 / AIAG/VDA FMEA Handbook are:

- Correlation matrix development resulting from Quality Function Deployment (QFD) method
(functional/physical breakdown and mutual correlations identification);
- Severity ranking table refinement with specific examples related to real cases (engine, transmission
and vehicle);

- Analysis focused on root causes and related parameters/specifications identification;


- Improved objectivity of occurrence and detection rating criteria by using tables that show examples
and guide the designer in the proper risk evaluation;
- Clear distinction between design/concept phase (prevention) and validation phase (detection) with the
identification of the design criteria and the experimental tests.

5.2 Proactive Reliability Process

The proactive reliability process starts with the Global DFMEA Process Impact Assessment (PIA)
(CEP.00068/01276_22_00060/Annex J) to identify the areas of focus. For each program the list of
DFMEAs is one of the outputs of the Program Impact Assessment. The DFMEAs might be new or
derivatives of the master/core or existing specific program DFMEAs.

The execution of DFMEAs will be monitored through Key Performance Indicators. Propulsion and Vehicle
Engineering have different standards; therefore:

Link to Vehicle KPI: 01276_23_00053


Link to Propulsion PSK: PSK-T014 User Guide

In case a new DFMEA is required, the process starts with identifying the functions, which can be done
using:

- The functional block diagram for a system / subsystem;


- Functional block diagram and correlation matrix for components. If a component is well
understood, has few interactions with other components and/or the functional analysis is basically
focused on main functions, a functional block diagram may be sufficient without using a
correlation matrix. If the team believes the functional block diagram alone is sufficient, the team
must get concurrence from the Quality Reliability organization or its regional equivalent before
proceeding. Using the correlation matrix however is preferred and strongly encouraged for
components.

The functions are then analyzed in the DFMEA. The DFMEA provides a list of proactive prevention
activities as well as a list of required tests for verification and validation. A high-level proactive reliability
process is shown in Figure 5.

Figure 5 - Role of DFMEA in the Proactive Reliability Process



5.2.1 Functional Block Diagram (FBD)

The functional block diagram defines the relationship between the design items and the functions. It is
used to:

- Define the components and systems/subsystems under analysis;
- Highlight the interactions between them, whether the interactions are physical or not;
- Define the analysis scope and who is responsible;
- Provide a preliminary function analysis that can be used as an input to the correlation matrix for complex
components.

Different approaches and formats can be used for the construction of a functional block diagram; in some
cases it may be useful to explicitly highlight the type of connection (electrical, pneumatic, hydraulic,
mechanical, signal) and the flow of fluids and the paths for force or torque.

It is up to the team to define the best approach or format for the block diagram, in relation to the specific
analysis, possibly considering as an alternative the use of specific images (e.g., exploded 3D views) in
those cases that require a visual approach. The functions and their from/to relationships must be clearly
documented.

A recommended approach is to identify the items to be evaluated in the DFMEA with solid-lined
boxes. Items that are not being evaluated, but that provide an input into an item being evaluated or
receive an output from an item being evaluated, can be identified with dotted-line boxes. This approach
is illustrated in the second image of Figure 6.

[Figure content: two functional block diagram examples relating items (B Pillar, Seat Belt, Seat, Handle,
Door, Customer) to functions such as Provide Hand Clearance to B Pillar, Provide Hand Clearance to
Door, Prevent seat belt trapping, Prevent seat belt routing interference, Locate Handle, Attach Handle,
Height Adjust / Recline / Lumbar Adjust / Easy Enter Seat, Clear occupant for Ingress/Egress, and
Access to Handle.]

Figure 6 - Examples of FBD

5.3 DFMEA Realization


5.3.1 Heading
The Heading/Cover will strongly depend on the future Stellantis FMEA software and is therefore not
described specifically in this norm. It is to be updated after the software implementation.

5.4 DFMEA Input Columns


The different sections of the template are explained in the following.
5.5 Template (Standard Risk Analysis)
The template header (partial) contains the following columns: Item to study (Individual part / Interface);
Item Elementary Function; Requirement of the Function; Complexity/Application (Optional); Generic
Failure Modes; Potential Failure Modes; Life Situation (Optional); Potential Effect(s) of Failure; Effective
Safety Barrier (Optional); Link to reference of Technical Safety Requirements (Optional); Most Severe
Failure Effect with effective Safety Barrier (Optional).

Figure 7 - Template Header (Partial)

5.5.1 Item to study (Individual part / Interface)


Enter the item/ interface which is in focus.
5.5.2 Item Elementary Function
Each Item might have several functions. The item is listed multiple times, once for each function. The
summary of item and its function(s) should be clearly stated from the function summary list/Correlation
Matrix.
The DFMEA starts with identifying the functions which are to be analyzed. Each item (system,
subsystem, and component) to be analyzed in the Design FMEA has functions associated with it. The
function describes what the item is intended to do or produce and is written in a verb-noun format (e.g.,
transfer torque). Adding other words beyond the verb-noun pair is encouraged to fully define the action.
The function should be written using words common to the industry and FCA. The function must be written
in a positive form and answer the question: What is the purpose of this item?
There may be multiple functions for an item. The team determines the priority in which to analyze
functions. The more precise the function, the easier it is to identify potential failure modes for prevention
and detection recommended actions.
Types of functions may include: primary, interface, clearance, attachment, locating, transfer energy /
information, etc.
All end and intermediate customers must be considered. These are defined in 3.1 and 3.2.
5.5.3 Requirement of the function <-> Function Specification
If the function is a Primary/Main Function, reference the specification (Target, Upper Spec, and/or Lower
Spec). Most primary/main functions can be measured. Functions with no measurable output must be
stated as Not Applicable.
Core Documents should only mention values which are valid for all the applications. Application
dependent values should only be stated in the program specific DFMEA. The program specific DFMEA
must have values for all functions with the exception of those functions that do not have measurable
outputs. These must be stated as Not Applicable.
5.5.4 Complexity/Application (Optional)
If a part/subsystem/interface/function refers to different versions/programs (e.g. manual/electric or
Jeep/Peugeot) this can be documented in this column. Usage of this column is optional.

5.5.6 Generic Failure Mode


The generic failure mode is intended to guarantee the exhaustiveness of the study. "Loss of function" is a
generic failure mode, and we do not want to see it listed as a potential failure mode. If a failure mode is
common to several generic failure modes, only one line is written in the DFMEA for that failure mode,
with traceability to the different generic failure modes.

There are 15 Generic failure mode categories which will help to uncover potential causes and their
effects. Table 1 shows the 15 different categories with some examples.

Category - Description - Examples

1. Does not - Loss of function / Inoperable
   - Does not create torque
   - Does not provide clearance
2. Stops - Stops functioning prematurely
   - Stop engine while driving
3. Incomplete - Performance loss
   - Insufficiently transfer fluid
   - Cool cabin insufficiently
   - Provide insufficient clearance
4. Degradation / Wear / Aging - Performance loss over time
   - Does not provide damping over time
   - Does not provide good appearance over time (e.g., color fade, wear, corrosion)
5. Excessive - Operation above acceptable threshold, too fast, too much
   - Create excessive torque
   - Close door requiring excessive force
6. Intermittent - Operation randomly starts/stops/starts
   - Create flow erratically
   - Send messages irregularly
7. Unintended activation - Operation at the wrong time, unintended direction, wrong output
   - Move window up when down requested
   - Move window down when up requested
   - Create heat when cool required (HVAC)
   - Change radio station to undesired channel
8. Delayed - Operation after unintended time interval
   - Create brake force delayed
   - Move window down with hesitation
   - Engage transmission gears late
   - Engage turbo with lag
9. Uneven - Uneven, imbalance, unequal, irregular
   - Distribute heat unevenly within passenger compartment
   - Apply brake force unequally when not desired
10. Too Slowly - Operation occurred, but too slowly
   - Stop vehicle too slowly
   - Close window too slowly
   - Send message too slowly
   - Accelerate vehicle too slowly
11. Too Quickly - Operation occurred, but too quickly
   - Stop vehicle too quickly
   - Close window too quickly
   - Accelerate vehicle too quickly
12. Stuck - The function becomes locked or stuck in the activated state and cannot be de-activated or turned off
   - Keep running after turning off (e.g., engine)
   - Cornering lamp doesn't return to middle position
13. Duration too long - Functions outside of time duration specification, too long
   - Brake torque pulse is too long
   - Steering wheel haptic pulse is too long
14. Duration too short - Functions outside of time duration specification, too short
   - Brake torque pulse is too short
   - Steering wheel haptic pulse is too short
15. Inverse / Incorrect Selection - Function is performed in the opposite way/manner than desired, expected, or requested
   - Steering wheel torque reverse of requested

Table 1 - Failure Mode Categories. NOTE: There may be multiple failure modes created from a Failure Mode Category.

All failure modes must be described using one or more of the failure mode categories. Some failure mode
categories may not be applicable to a specific function and therefore should not be reported. Failure modes
should be written using words common to the industry and Stellantis, while maintaining the intent of the
failure mode category.

5.5.7 Potential Failure Mode

Failure modes are derived from the function. A failure mode is the manner in which a function did not
achieve its objective.

- Failure modes are not necessarily directly related to the customer;


- Each failure mode should be separate from other failure modes in order to study the causes for
each failure mode.

Examples of Failure Modes


- Does not disengage
- Does not transmit torque
- Does not seal
- Does not withstand temperatures
- Does not hold full torque (fasteners)
- Loss of structural support
- Sends signal intermittently
- Generates too much pressure/signal/voltage
- Generates delayed pressure/signal/voltage
- Loss of Electrical Contact
- Loss of Information
- Wrong Value / Value not plausible

5.5.8 Life Situation (Optional)

Life situation is used to ensure that all Feared Events are found and that the highest severity among the
different effects is identified. Life situation is also used to evaluate frequency in the MSR FMEA.

The same Failure Mode can lead to different severity ratings in different Life Situations.
There is no list of default values defined for this column, because the content is specific to each Global
Function and has to be determined by the responsible engineer.

Examples are:
- Running
- Stop, Parking
- Service
- Production
- Recycling
- Crash
- Pedestrian Crash

Usage of this column is optional.

5.5.9 Potential Effect(s) of Failure

Indicate the potential effects of failure mode. The effect is the failure perceived by the customer
(intermediate or final customer) in case of total or partial negation of the elementary function. The effects
of failure have to be described in terms of “what” the customer might notice or experience. There may be
several potential effects for a failure mode. If there is any doubt if all relevant customer effects are
identified, Engineers of the higher level should be contacted.

5.5.10 Initial Severity

Enter the severity evaluation of the effect of failure from the customer's point of view, according to
Annex B.
If more than one potential effect is listed, enter the value associated with the most serious potential effect.

5.5.11 Effective Safety Barrier (Optional)


The Safety Barrier is used to check the safety concept and to reduce the requirement for the supplier (ASIL
and quantitative target).
Safety Barriers are physical or non-physical measures designed to prevent, control, or mitigate undesired
events or accidents.
Examples of Safety Barriers include detection of loss of communication, detection of a failure with an alert
to the driver, redundancies, or mechanical solutions.
Usage of this column is optional.

5.5.12 Link to reference of Technical Safety Requirements (Optional)


This column is used to check the coherence between the FMEA safety barrier and the safety concept.
It contains the reference of the Technical Safety Requirement associated with the safety barrier.
An example could be "TSR_ACC_4.01", which is based on the Global Function coding rule.
Usage of this column is optional.

5.5.13 Most Severe Failure Effect with effective Safety barrier (Optional)
This column specifies the feared event that remains when the safety barrier is effective.
It is used to reduce the severity in case of redundancy (or after MSR).
An example could be a non-working headlight that is detected, with a warning signal shown in the
instrument panel; the ASIL is therefore reduced.
Usage of this column is optional.

The template header (partial) continues with the following columns: ASIL target for FM (Optional);
Mechanical Occurrence (Optional); Occurrence PMHF (Optional); Potential Root Cause(s) / Mechanism(s)
of Failure; Drawing Characteristic / Design requirements; Impact (Optional); Characteristic Classification;
Prevention Controls; Actual Value (Optional); Initial Occurrence; Detection Controls; Action Priority.

Figure 8 - Template Header (Partial)

5.5.14 ASIL target for FM (Optional)


The failure-mode-specific ASIL target for the supplier, which is sent to PLM.
It is only used if S=10 (or S=9 if an ASIL decomposition exists).
ASILs are defined in ISO 26262 and are related to the HARA.
Usage of this column is optional.

5.5.15 Mechanical Occurrence (Optional)


The durability target associated with the ASIL target for the Failure Mode.
Usage of this column is optional.

5.5.16 Occurrence PMHF (Optional)


PMHF means Probabilistic Metric for random Hardware Failures and is defined in ISO 26262.
It is the reliability per hour associated with the ASIL target for the Failure Mode.
It is used for EE problems only.

5.5.17 Potential Root Cause(s) / Mechanism(s) of Failure

The cause creates the failure mode and the failure mode creates the effect. A failure cause is the specific
reason why the failure mode could occur. The root cause of the failure can be better found using the
5Why method through asking “why” multiple times until the root cause is found. Each Failure Mode can
have multiple causes and all potential causes should be identified.

For a Design FMEA the causes are related to the design process. It is assumed that the product will be
manufactured correctly as specified. A design deficiency may impact the manufacturing or assembly of
the product and may then be considered as a design cause.

The cause should be as specific as possible so that action can be identified to reduce the impact of the
cause. All realistic potential causes should be listed in the DFMEA. Causes should also include noise
factors which may come from the external or internal vehicle environment which may be the result of
neighboring items, packaging, region and duty cycle.

At a system level the causes may be the failure modes of the components of the system and/or the
interfaces and interactions between components and other systems. Causes related to failure modes of
a component in a system may require a DFMEA of the component.

At a component level the causes may include dimensions, material, and interfaces between parts. The
cause must be stated as specifically as required to be able to take action. Causes like "poor
design", "wrong material selected", or "incorrect dimensions" are not specific enough to take action. What
characteristic of the design, material, or dimensions caused the failure?

Causes may include another item not providing the expected input into the item being evaluated. Causes
from items further upstream than the immediate item preceding the item being evaluated are not to be
considered.

5.5.18 Drawing Characteristic / Design requirements

This section is applicable when key characteristics need to be defined.

Indicate, for each identified cause, the characteristics on the drawings, tables, standards, or models (e.g.,
dimensions of the raw part, minimum clearances), or the technical design specifications under the Design
department's responsibility (definition, choice, calculation, virtual validation) that, if not properly designed,
cause the failure. Examples: dimensional characteristics such as diameters, curvature radii, shape
tolerances, surface roughness, hardness, material, or performance in terms of functionality/assembly/
serviceability, tightening torque, key clearances, notes on drawings of cap presence, etc.

Remember that the DFMEAs must not necessarily include ALL the characteristics/notes on
drawings, but only the ones considered related to the root causes of the potential failure modes
found, since drawings also contain information that is not necessarily functional (e.g., merely
constructional drawing dimensions, or aesthetic characteristics such as the color of non-aesthetic parts).

If S=10, it is mandatory to include the drawing characteristic in the FMEA and to transfer it into the control
plan.

5.5.19 Impact (Optional)

Evaluate the impact of the considered characteristic on the function. Refer to current regional Standard
FPW.IFN053, CEP-12679, CS.00071 or Q741300 for details. If the impact is not applicable, specify N/A.

Examples of non-applicability are mainly:


- attribute characteristics (e.g.: color, presence of danger symbol, presence of oil symbol…);
- clearances / gaps between separate components (and in general derived dimensions);
- classification at component level (part level such as presence of caps);
- environmental conditions / noise factors: (e.g.: maximum operating temperature);
- nominal dimension;
- design selection (e.g.: metal vs plastic material choice…).

5.5.20 Characteristic Classification

Enter the proper classification of the drawing and/ or component characteristics according to the criteria
described in the applicable regional procedure (e.g. FPW.IFN053, CS.00071, Q741300,
01276_16_00027) If the characteristic classification is not applicable, enter N/A.

5.5.21 Prevention Controls

Prevention controls are those activities that are performed prior to the release of the design and reduce
the likelihood of occurrence of the causes. These DO NOT include DV (Design Validation) or PV
(Process Validation) tests. Prevention controls should be clearly and specifically stated. The results of
the prevention controls influence the Probability of Occurrence (O).

Special Case: Engineering development testing such as DOE or screening tests (brake lining screening,
etc.) can be used as prevention. In the case of sheet metal processing, a forming simulation shall be done,
with reference to the harmonized standards DES.FORMING/01 and DES.FORMING/02.

Design Criteria:
Indicate all the design rules used to minimize the influence of a cause and design tools used to discover
potential issues for improvement. Examples:

- Design Standards (with precise references);


- CAE verifications, DMU verifications, worst case analysis (with precise reference or attached /
contained in the cell);
- Similar components know-how/ drawings (carryover), after working conditions verification and field
returns;
- S.o.R. (Statement of Requirements);
- Design Forming Analysis for formed parts.

The references must be as specific as possible and should include the document number and a
paragraph description.

If a design tool or design practice is not used, or additional ones are added, document the exception within
the design criteria section.

5.5.22 Actual Value (Optional)

Indicate the actual design content: the drawing characteristic value if dimensional (except for special
cases, e.g., complex shapes for which it is necessary to refer to the 3D model), materials, surface
treatments, design specifications / design standards (e.g., CS.00028 for hose/fitting design), tables, notes
on drawing.

This allows highlighting compliance or deviation from design criteria (supposed to be the best practice).
Specify n/a if not applicable.

The Actual column is program specific DFMEA oriented. This column shall be left blank in the Core
DFMEAs.

5.5.23 Initial Occurrence

Occurrence measures the effectiveness of the Actual design criteria prevention activities for program
specific DFMEAs, or of the Design Criteria for a Core DFMEA. Occurrence is the likelihood that a specific
cause will produce a failure mode even though the parts comply with the drawings. This evaluation
expresses the confidence level in the design criteria adopted.

The Occurrence value is in a range between 1 and 10 and must be defined using Annex C as a guideline.
The meaning of this value is not to be considered in absolute terms. The lower the value, the higher the
confidence in the adopted design criteria.

In accordance with the table below, for the determination of the value it is necessary to evaluate the
following aspects:

- Field return data availability on similar/ carry over components in current production, with particular
focus on the diagnosis of the cause;
- Availability of updated technical know-how related to the solution being analyzed (Design Standards,
Specifications, Product specifications, previous DFMEAs, …);
- Availability of virtual validation reports (tolerance stack-up analysis, VSA, CAE analysis, etc.); ID report
numbers should be stated for document traceability;
- Content of carry-over from similar solutions;
- Knowledge of boundary conditions, mission profile, targets, etc.;
- Knowledge of the noise factors and their management;
- Warranty Information (C/1000) in conjunction with part return evaluations, verbatim analyses, etc.
should be understood when evaluating the Occurrence score;
- The Occurrence score is evaluated based on the combination of all preventions controls;
- It is possible to reduce the initial occurrence value after positive feedback from specific and highly
detectable experimental tests.

NOTE: For EE and SA developed DFMEAs, a high Occurrence rating may be used while waiting for the
prevention actions (such as drawings released according to design guidelines, best practices, stack-up &
virtual validation positively performed). The rating can be lowered once positive results are available.

5.5.24 Detection Controls (was Current Design Controls Detection)

Detection controls are activities that ensure design defects are discovered and corrected in the design
before saleable vehicles/ engines/ propulsion systems are built. Detection controls must be clearly and
specifically described, with reference to specific Performance Standards and test procedures. The
references must be as specific as possible and should include the document number and a paragraph
description. A description of “vehicle testing” or “Lab testing” is not enough. Detection controls include
DV and PV Testing and are included in the test plan (DVP&R).

Indicate the experimental test and the specific actions on the physical parts useful to detect the effect or
the failure mode.

Pilot builds and pre-production plant builds may be considered as detection items for causes related to
assembly.

It is possible to specify an experimental test that will not be included in the test plan since specific tests
were previously carried out on standard components (for example, qualified connectors, salt spray tests
done on standard screws, as listed on standardized table, chemical analysis to detect presence of
hazardous materials…).

5.5.25 Initial Detection

Detection evaluates the effectiveness of the detection controls.

Indicate the rank associated with the planned experimental test. The value varies in a range from 1 to 10;
the lower the value, the greater the confidence that the test is able to detect the potential failure (low
value = high failure detection).

The rank evaluation is independent of the type and the results of the test. A given type of test can
have a high or low detection rating depending on the failure considered; if an experimental test detects a
failure, this confirms the low detection value.

This value is defined using Annex D as a guideline; the value is not intended in absolute terms.

There may be one or multiple items listed for a cause. The TEAM will define the overall detection value
to be considered in the analysis.

A standardized test is a test regulated by a specific international standard or a company procedure / norm
available in beSTandard (such as LP.DUR101, LP.DUR102, LP.7T003...).

A consolidated test is an established but not formally published test that is in use as a best practice in the
company, following instructions and rules not formalized into a released norm.

A not-standardized test is a new test that needs to be developed, typically created specifically to detect a
new failure mode or effect. It should become a standardized or consolidated test after it has been tuned and
recognized as a best practice.

A specific test is one that is tailored to a unique failure mode or cause.

A generic test is one that assesses the overall functionality.


In rating the Detection (D), the following should be considered:
- Effectiveness of the test to find the Failure mode related to the Cause;
- Maturity of the Test Procedure;
- Duty cycle/ Mission Profile definition;
- Test method;
- Test Timing - Is there time to discover and correct design defects before saleable vehicles/ engine/
propulsion system are built?
- Part release level tested.

5.5.26 Initial Action Priority

Different than in former versions of FMEA, the new AIAG/VDA Handbook does not use the Risk Priority
Number RPN, which was calculated by multiplication of SxOxD.

The New index is the Action Priority “AP” which is identified by using the new created table which can
be found in Annex E, and can result in 3 levels:

High
Medium
Low

The consequence of the 3 Levels is

Priority High:
The team needs to either identify an appropriate action and/or detection controls or justify and document
why current controls are adequate.

Priority Medium:
The team should identify appropriate actions to improve prevention and/or detection controls, or, at the
discretion of the team, justify and document why controls are adequate.

Priority Low:
The team could identify actions to improve prevention or detection controls.

Priority shall be given to prevention controls improvement rather than detection in order to focus the effort
on making the design more robust.

Figure 9 - Template Header (Partial): Recommended Action(s) or No Action with Rationale (H & M Only) | Predicted Action Priority | Responsibility | Due Date | Completion Date | Action(s) Results

5.6 Recommended Actions(s)

5.6.1 Recommended Action(s):

Identify the improvement actions defined to reduce the Action Priority.


Recommended actions are generally focused on occurrence and detection reduction.

This means improving the design (reducing the risk of occurrence) or improving testing (reducing the
detection risk, i.e., the risk of "escape").

Example - OCCURRENCE REDUCTION:

- Error-proofing measures to avoid the failure mode (design poka-yoke);
- Modification of design tolerances and design geometries (e.g., CAE, DMU);
- Design modification to reduce stress, or replacement of structurally weak components (e.g., FEM, CFD);
- Materials improvement.

Example - DETECTION REDUCTION:

Improve the test by:


- Confirming the duty cycle;
- Adding stresses related to the cause or failure mode;
- Adding noise factors;
- Improving the test method, such as test to failure instead of test to bogey;
- Testing parts built to tolerance limits (worst case analysis).

Only in a few special cases are recommended actions aimed at severity reduction (e.g., introduction of
recovery strategies, design solution modification). If the design is changed to reduce severity, then the
Failure Mode, Effect, and Cause have to be reviewed.

The left side of the DFMEA represents the initial design, and the right side (recommended actions)
represents the reduced-risk design.

Recommended Actions become the left hand side of the core DFMEA after the improvements in detection
and causes are included into the core DFMEA, design standards, and design guidelines and best
practices.

Although not required, recommended actions can be created for product improvement when the Action Priority
is Low.

Responsibility and timing of "Recommended actions" must be recorded and tracked.

- Recommended Actions must exist for all Action Priority ratings of "High".
- If Severity is 10 or 9, Recommended Actions are required for a "Medium" Action Priority.
- For Severity lower than 9, Recommended Actions are encouraged for a "Medium" Action Priority.
- For Action Priority "Low", Recommended Actions are not expected, but the team may define them if
desired.

There may be instances when recommended actions are not possible or feasible for High or Medium
AP scores. In these instances, "No Action" should be entered, with a rationale as to why a recommended
action is not possible or feasible.
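The Recommended Action requirements above are rule-based, so for illustration only (this sketch is not part of the standard) they can be encoded as a small lookup; the function name and return labels are the author's hypothetical choices:

```python
# Illustrative sketch only, not part of CS.00133: encodes the Recommended
# Action requirements of 5.6.1 by Severity and Action Priority level.

def recommended_action_requirement(severity: int, action_priority: str) -> str:
    """Return whether a Recommended Action is required for a line item."""
    ap = action_priority.lower()
    if ap == "high":
        return "required"          # must exist for every "High" AP rating
    if ap == "medium":
        # Severity 9 or 10 makes actions mandatory even at "Medium" priority
        return "required" if severity >= 9 else "encouraged"
    if ap == "low":
        return "optional"          # not expected, but the team may define them
    raise ValueError(f"unknown Action Priority: {action_priority!r}")

print(recommended_action_requirement(10, "Medium"))  # required
print(recommended_action_requirement(7, "Medium"))   # encouraged
```

Where a required action is not possible or feasible, the team would still record "No Action" with a rationale, as stated above.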

5.6.2 Predicted Action Priority: Severity

Predicted Action Priority is the forecast of the improvement with the Recommended Actions in place.
As long as the Failure Effect is not mitigated by the Recommended Actions, the severity rating stays the
same (see 5.6.1).

5.6.3 Predicted Action Priority: Occurrence

Forecast of the improved Occurrence Rating with the Recommended Actions in place

5.6.4 Predicted Action Priority: Detection

Forecast of the improved Detection Rating with the Recommended Actions in place

5.6.5 Predicted Action Priority: Action Priority

Forecast of the improved Action Priority with the Recommended Actions in place

5.6.6 Responsibility

Indicate the person or department responsible for completing the recommended actions.

5.6.7 Due date

Indicate the agreed planned date for recommended action implementation.

5.6.8 Action(s) results

Indicate the improvement actions introduced and checked (where possible, indicate the ODM reference number for
design changes and/or the improvement completion date), or the reasons that led to reconsidering the risk level.

5.7 Monitor System Response (MSR) Risk Reduction Evaluation

The FMEA-MSR is a supplemental analysis performed after a Design FMEA has already been conducted.


It is used to analyze if:

Failure Causes or Failure Modes are detected by the system

Or

Failure Effects are detected by the driver

Therefore, the rating system was changed from Severity/Occurrence/Detection to Severity/Frequency/Monitoring.

MSR is linked to ISO 26262 and helps to check and confirm the Technical Safety Concept.
In DFMEA, the testing takes place in the Development/Verification phase.
FMEA-MSR examines diagnostic and monitoring system response during customer usage.
FMEA-MSR addresses risks that in DFMEA would be assessed as high.

Recommended improvements could be:

additional/better sensors
redundancy
plausibility checks to discover sensor malfunctions.

These are normally different from the recommendations resulting from the DFMEA.

While the DFMEA is being performed, the team should decide for which lines to continue with MSR.

FMEA-MSR addresses mechatronic systems, which normally possess at least one of each of the following:

Sensor (or missing sensor)
Control Unit
Actuator / Signal (warning to the driver)

If such diagnostic monitoring and response exists and has an influence on the failure mode/cause during
customer operation:

For Severity 9 or 10, an MSR must be done;

For Severity 8, an MSR should be done;

For Severity < 8, an MSR may be done.

For well-known diagnostic monitoring systems with long-term field experience and no safety or warranty
problems, MSR is not mandatory, but the rationale shall be documented by the team.
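The severity-based obligation above can be expressed, for illustration only (not part of this standard), as a small decision helper; the function name, parameter, and return strings are hypothetical:

```python
# Illustrative sketch only, not part of CS.00133: maps DFMEA Severity to
# the FMEA-MSR obligation level of section 5.7.

def msr_obligation(severity: int, proven_legacy_monitoring: bool = False) -> str:
    """Return whether an FMEA-MSR must/should/may be performed."""
    if proven_legacy_monitoring:
        # Long field experience without safety or warranty problems:
        # MSR is not mandatory, but the rationale must be documented.
        return "not mandatory (document rationale)"
    if severity >= 9:
        return "must"
    if severity == 8:
        return "should"
    return "may"
```

For example, `msr_obligation(9)` returns "must", while `msr_obligation(8)` returns "should".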

The project plan should use:


5T Method: InTent, Timing, Team, Task, Tool
5W Method: Why, When, Who, What, HoW

Depending on scope, the structure may consist of hardware elements and software elements.
Complex structures could be split into several less complex structures.

The scope of FMEA-MSR is limited to elements where the DFMEA indicates hazardous or non-compliant effects.
Root elements of Structure Trees for FMEA-MSR can be at

Vehicle level (e.g. for OEM)


System level (e.g. for Supplier)

In FMEA-MSR, monitoring for failure detection and failure responses are considered as functions:

Out of range detection


Cyclic redundancy checks
Plausibility Checks
Sequence counter checks
Sensor signals received by control units.

In FMEA-MSR, the diagnostic monitoring is assumed to work as intended.

A non-working monitoring system would be covered by the DFMEA.

The Failure Mode is the consequence of a fault:

Not detected, or system reaction too late → the failure mode is the same as in DFMEA.
Failure detected → the system response leads to a mitigated failure mode.

The Failure Effect is the consequence of the failure mode.

The Severity rating is the same as in DFMEA:

If the failure is not mitigated, the harm to the customer stays the same.
If the failure is mitigated, a lower rating is the consequence.

The Frequency rating addresses the occurrence of the Cause.


Monitoring rating evaluates the ability to detect Cause and Mode during customer operation and
the ability to mitigate the effect.

The results of the risk analysis are used to develop actions to reduce risk and improve safety.

The target is similar to that of DFMEA, but with focus on improvement of the Monitoring System.

Monitor System Response (MSR) Risk Reduction Evaluation

Figure 10 - MSR Template Header (Partial): Rationale for Frequency (F) | Frequency (F) of FC | Current Diagnostic Monitoring Controls | Monitoring (M) | Current System Response | Most Severe Failure Effect after System Response | Severity (S) of FE after MSR | MSR Action Priority (AP)

5.7.1 Rationale for Frequency (F)

The Frequency rating should be based on facts. This column explains the justification for the rating;
it could be based on evaluations or quality data.

5.7.2 Frequency (F) of FC


The Frequency rating (F) evaluates the probability that a Failure Cause will appear. While Occurrence (O) in
the classical DFMEA considers product novelty/experience, tests, and prevention controls, the Frequency
rating is focused on use in the hands of the customer. The rating can be found in Annex F.

5.7.3 Current Diagnostic Monitoring


Summary of all existing or planned controls dedicated to detecting the Failure Cause, Failure Mode, and
Failure Effect. If a TSR already exists, instead of adding the full text it is allowed to insert only the
link to the TSR (to avoid duplicate work).

5.7.4 Monitoring (M)


The Monitoring rating (M) evaluates the ability of all sensors, logic, and human sensory perception to detect a
fault or failure during usage in the hands of the customer.
The Monitoring rating can be found in Annex G.

5.7.5 Most Severe Failure Effect after System Response


Different Failures would have different Severity ratings. For the determination of the Action Priority, the
Failure Effect with the highest Severity Rating is relevant

5.7.6 Severity (S) of FE after MSR


Severity Rating for the most severe Failure Effect after System Response

5.7.7 MSR Action Priority (AP)


The Action Priority is determined according to the three ratings for Severity, Frequency, and Monitoring.
The consequences of the different levels of Action Priority are explained in 5.5.26.
The Action Priority table can be found in Annex H.

NOTE: Because the Action Priority in the AIAG/VDA Handbook is less critical for S=10 than for S=9, and this
violates Stellantis ethics codes, Stellantis decided to apply the more critical S=9 rating even for S=10.

Figure 11 - MSR Template Header (Partial): MSR Preventive Action | Diagnostic Monitoring Action | System Response | Most Severe Failure Effect after System Response | Responsible Person | Target Completion Date | Action Item Status | Action Taken with Pointer to Evidence | Completion Date



5.7.8 MSR Preventive Action

Preventive action reducing or eliminating potential Failure Modes

5.7.9 Diagnostic Monitoring Action

Description of Diagnostic Monitoring Action to reduce or mitigate the failure effect, e.g. “Implementation of
plausibility check”

5.7.10 System Response

Result of the Diagnostic Monitoring Action, e.g., a warning signal or function modification

5.7.11 Most Severe Failure Effect after System Response

If the original failure is mitigated, a Failure Effect with a lower Severity should occur

5.7.12 Responsible Person

Indicate the person or department responsible for completing the recommended actions.

5.7.13 Target Completion Date

Indicate the agreed planned date for recommended action implementation.

5.7.14 Action Item Status

Current status of action implementation

5.7.15 Action Taken with Pointer to Evidence

Description of taken action with evidence (e.g. reference of document)

5.7.16 Completion Date

Finalization of Preventive Action

Figure 12 - MSR Template Header (Partial): Severity (S) of FE after MSR | Frequency (F) | Monitoring (M) | Action Priority (AP)

5.7.17 Severity (S) of FE after MSR

If the Failure Effect is mitigated reliably by the system, a new Failure Effect will occur.
In this case, the Severity of the new Failure Effect can be used to calculate the new Action Priority after
action implementation.

If M is rated 1 (e.g., diagnostic coverage estimated to be significantly greater than 99.9%), it is always
allowed to take the severity rating of the mitigated failure effect.

If M is rated 2 (e.g., diagnostic coverage estimated > 99.9%), the team has to decide, depending on a
good Frequency rating together with a non-hazardous life situation, whether the severity rating should be
reduced.

If M is 3 or higher, the monitoring is not reliable enough to reduce the severity.


For details of M-Rating please see Annex G
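For illustration only (not part of this standard), the M-rating rule above can be sketched as a selection function; the names and the boolean team-decision parameter are hypothetical:

```python
# Illustrative sketch only, not part of CS.00133: which Severity to use
# for the new Action Priority after MSR, per the 5.7.17 rules.

def severity_after_msr(m_rating: int, original_s: int, mitigated_s: int,
                       team_approves_reduction: bool = False) -> int:
    """Pick the Severity used to calculate the new Action Priority."""
    if m_rating == 1:
        # Diagnostic coverage significantly > 99.9%: always allowed to
        # take the mitigated failure effect's severity.
        return mitigated_s
    if m_rating == 2:
        # Team decision, given a good Frequency rating together with a
        # non-hazardous life situation.
        return mitigated_s if team_approves_reduction else original_s
    # M >= 3: monitoring not reliable enough to reduce the severity.
    return original_s
```

For example, with M=1 the mitigated severity is taken; with M=3 the original severity is kept regardless of the team's preference.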

5.7.18 Frequency (F)

New Frequency Rating after action implementation (for MSR)

5.7.19 Monitoring (M)

New Monitoring Rating after action implementation (for MSR)

5.7.20 Action priority (AP)

New action Priority after action implementation (for MSR)


5.8 Final Action Priority/Comment

Figure 13 - Template Header (Partial): Final Action Priority | Comments

5.8.1 Final Action Priority: Severity

Final Severity Rating to be newly evaluated after action implementation (for DFMEA)

5.8.2 Final Action Priority: Occurrence

Final Occurrence Rating to be newly evaluated after action implementation (for DFMEA)

5.8.3 Final Action Priority: Detection

Final Detection Rating to be newly evaluated after action implementation (for DFMEA)

5.8.4 Final Action Priority: Action Priority

Final Action Priority to be newly evaluated after action implementation (for DFMEA)

If the Action Priority is still "High", other corrective actions must be found, resulting in a "Medium" or
"Low" Action Priority rating.

Management has to approve or refuse the remaining risks after improvement with:

Severity 9 or 10 and High or Medium Action Priority;

Severity 8 or lower and High Action Priority.
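For illustration only (not part of this standard), the management sign-off condition above can be written as a predicate; the function name is hypothetical:

```python
# Illustrative sketch only, not part of CS.00133: the 5.8.4 condition
# under which management must approve or refuse a residual risk.

def needs_management_approval(severity: int, final_ap: str) -> bool:
    """True if the residual risk requires a management decision."""
    ap = final_ap.lower()
    if severity >= 9:
        return ap in ("high", "medium")   # Severity 9-10: High or Medium AP
    return ap == "high"                   # Severity <= 8: High AP only
```

For example, a residual Severity 8 risk at Medium Action Priority needs no management sign-off, while the same risk at High Action Priority does.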

5.8.5 Comment

Free for use by the team for anything not defined in the template (e.g., Global Function-specific
information or explanations of special decisions).

5.9 Software Logic DFMEA

5.9.1 DFMEA Relationships

Software Logic DFMEAs are usually lower level system DFMEAs that support a system DFMEA just as
there may be lower level hardware DFMEAs that support the system DFMEA. Integrating and referencing
the lower level software logic DFMEAs is essential just as it is for the hardware DFMEAs.

The DFMEA-MSR template, CEP.00031, must be used.

There are also relationships between the Software Logic DFMEAs. These relationships are usually
numerous; therefore, completing an individual Software Logic DFMEA in isolation is usually not feasible.
Software Logic DFMEAs should usually be done as a set, since cross-integrating and cross-referencing the
DFMEAs to each other is important. Connectivity to hardware DFMEAs may also be required. (Figure 14)

Figure 14 - DFMEA Relationships (diagram: a System DFMEA supported by Hardware A, B, and C DFMEAs and by CFTS 1, CFTS 2, and CFTS 3 DFMEAs)

5.9.2 Functional Safety

Functional Safety (FuSa) addresses unreasonable risks due to hazards caused by malfunctioning
behavior of electronic and electromechanical control systems (sensors, modules, software and
calibration, signals).

FuSa is a tool set distinct from DFMEA; however, there are areas where cross-communication between
the FuSa and DFMEA workgroups may be very useful. The FuSa Management Process is defined in
CS.00046, which was developed from ISO 26262.

5.9.3 Function Block Diagram

A Function Block Diagram (FBD) is used to define the items under evaluation. The FBD will be a
significant source of information for the DFMEA (see 5.2.1).
If it is more useful, a Structure/Function Analysis Tree may be used instead.

5.9.4 Grand Interaction FBD

Before the individual teams begin to work on the different DFMEAs, it is recommended that the various
teams gather and create a Grand Interaction FBD (Figure 15). The goal of the Grand Interaction
FBD is to fully understand how each CFTS, VF, or other software element provides functional output to the
others. The teams should agree on what functions are being provided and the exact wording of each function,
and resolve any discrepancies. It may be necessary to update other CFTS, VF, or other software
element documents to reflect the agreements and resolutions. It is recommended that the teams meet
periodically throughout the DFMEA development to review the Grand Interaction FBD, to ensure these are
still in agreement and to resolve any issues that may have developed while creating the DFMEAs.

Figure 15 - Grand Interaction FBD (diagram: CFTS 1 through CFTS 5 exchanging functions such as Determine Temperature, Command Engine RPM Reduction, Determine Ignition ON/OFF Status, Set Temperature Control, Determine Engine RPM, and Determine Coolant Flow Rate)

5.9.5 DFMEA FBD

The items under evaluation are generally the logic sub-element/subroutines of the system logic.

Each logic sub-element/subroutine must be given a name. The name should align with what the logic
sub-element/subroutine does (example: Vehicle Speed Calculator). This name must be used within an
FBD box and will be the Item listed in the Function List and within the DFMEA.

Items to be evaluated in the DFMEA should be identified with solid-lined boxes. Items that are not being
evaluated but provide an input into, or receive an output from, an item being evaluated should be
identified with a dotted-line box. Sensors providing data may be included and are generally considered
out of scope.

Functions of the item should be entered on the line. If an item has many functions, the engineer should
consider decomposing the logic sub-element/subroutine into smaller units for more clarity. This
decomposition should then be reflected in other relevant documents.

Out-of-Scope functions from another CFTS or VF source should be entered within the box with a
reference to that CFTS or VF in parentheses. The function should be entered as written within the
referenced CFTS or VF. No other data or reference should be listed within a box except the item name.

Hardware that provides an input required for the software function should be included in the FBD with the
function it provides. These are usually sensors. The hardware should usually be captured as out-of-scope
items.

Figure 16 - DFMEA FBD Example (diagram: Temperature Sensor sends Temperature Voltage to Item A; CFTS 05 provides Determine Ignition ON/OFF Status; Item A provides Determine Temperature to CFTS 02 and Item B; Item B provides Set Temperature Control to CFTS 04)

5.9.6 Agile Development with DFMEA

Agile is a cross-functional team-based empirical approach to software development where the software is
developed in short sprints instead of a traditional program management approach. This allows the teams
to be adaptable to change as information is discovered. The DFMEA can be executed within an Agile
mindset as the DFMEA process is also a cross-functional team approach which uses data and
engineering knowledge to develop a product.

Using Agile, the DFMEA can be developed within the sprints. The results of the sprints must be
integrated within the overall DFMEA and to other DFMEAs as necessary.

The results of the DFMEA may include recommended actions for updating, adding, or removing
requirements, which is a goal of Agile. Consistent cross-team and cross-functional review of the Grand
Interaction FBD, the DFMEA FBDs, and the DFMEAs themselves can help ensure good communication of
requirement needs. Using DFMEA development software can also be extremely helpful in
ensuring good continuous communication.

It is important to continually adapt to change as the software evolves due to the sprints and cross-
communication. The DFMEA that was developed in earlier sprints may need to be revisited and revised.

5.9.7 Generic Failure Modes

The same Generic Failure Mode categories as in the classical DFMEA should be considered for each
function. Special care should be used when considering the Unintended Function Failure Mode, as there
may be multiple Failure Modes using this category for a given function in a Software Logic DFMEA (see
5.5.6).

5.9.8 Effects

Effects are ultimately related to what is perceived by the customer. However, since Software Logic
DFMEAs are usually subordinate to a System DFMEA, the Effects are often related to the System DFMEA
(Figure 17). The Effects can also be described relative to the failure mode of the next item (Figure 18).

5.9.9 Causes

There are generally three basic types of causes that can be applicable to a Failure Mode:

An internal logic design error in the Software
An internal logic design error in the Calibration
A Failure Mode from an input function

The software or calibration design error must be stated specifically and clearly. The cause should not be
stated generically.

Unacceptable Entry Example: Calibration error


Acceptable Entry Example: Calibration voltage to temperature conversion has higher result than actual

A failure mode of a Software Level DFMEA can be a Cause for a System DFMEA (Figure 17). This is
similar to other lower level DFMEAs

Figure 17 - Vertical Relationships (diagram: the Cause → Failure Mode → Effect chain of the Software Logic DFMEA linked vertically to the Cause → Failure Mode → Effect chain of the System DFMEA)

A failure mode from an input function becoming a cause may come from an in-scope or an out-of-scope
item. These items may be other software elements within the software being evaluated, external
software, or hardware (Figure 18). The FBD should be used to understand the functional inputs and
therefore the failure modes causing the item being evaluated to fail. This is similar to other item
relationships.

Diagram: CFTS 2 (Send Ignition Status) → Controller Diagnostic (Set Fault Flag) → Data Transmitter Controller (Send Fault Flag, Set Fault Code) → CFTS 47.

Example chains:
Cause: CFTS 2 does not send Ignition Status; Failure Mode: does not set Fault Flag; Effect: Diagnostic Controller does not set Fault Code.
Cause: Data Transmitter Controller does not send Fault Flag; Failure Mode: does not set Fault Code; Effect: CFTS 47 does not set MIL.
Figure 18 - Horizontal Relationships


5.9.10 Prevention Controls

Prevention controls are actions taken to ensure the cause occurs as infrequently as possible
(Figure 19).

Figure 19 - Cause, Prevention Control, & Detection Control Example. Columns: Potential Root Cause(s) / Mechanism(s) of Failure | Characteristic Classification | Drawing Characteristic / Design Requirements | Prevention Controls | Detection Controls | Initial Action Priority. Example row (Severity 9): Cause: ECM does not have a recovery action for manual park brake engaged; Classification: REGULATORY; Design requirement: ECM should have a recovery action; Prevention Control: ECM shall inhibit engaging if manual park brake is engaged (Occurrence 2); Detection Control: ECM software validation on HIL for CCM reaction (system test case CFTS009_TC062) (Detection 1); Action Priority: L.

Prevention Controls can often be identified within Engineering Best Practices, Calibration Guidelines, or
industry standards. When referencing the Prevention Control, the relevant sections within the document
should be referenced to avoid confusion and allow for quick reviews.

If the Cause is a Failure Mode from an input function analyzed within the DFMEA, a reference to that
section should be stated.

If the Cause is a Failure Mode of an input function, the Prevention Control may include a reference to
another DFMEA. If another DFMEA is referenced, that DFMEA must exist, be easily discoverable, and
include the Failure Mode being referenced. The referenced DFMEA may be a software or hardware
DFMEA.

5.9.11 Detection Controls

Detection Controls are generally Hardware In the Loop (HIL) or vehicle-level tests. These tests are
generally called Test Cases (Figure 19). The tests should ensure a proper response when subjected to
the Cause, or to a simulation of the Cause if acceptable. Testing should also include noise factors that
may affect the outcome.

A Design Verification Plan and Report (DVP&R) shall be created to collate the testing plan and results. A
software alternative, such as IBM® Rational® Quality Manager, may be used if the data are easily traceable.
All tests in the DFMEA must be included in the DVP&R or the alternative software location. Conversely,
all entries in the DVP&R or alternative software location must be shown in the DFMEA.

If the Cause is a Failure Mode from an input function analyzed within the DFMEA, a reference to that
section should be stated. If the Cause is a Failure Mode of an input function, the Detection Control may
include a reference to another DFMEA. If another DFMEA is referenced, that DFMEA must exist, be
easily discoverable, and include the Failure Mode being referenced. The referenced DFMEA may
be a software or hardware DFMEA.

5.9.12 Severity Scoring

The Severity scoring from Annex B must be used.

If the Failure Mode is the Cause of a higher level DFMEA, its Effect shall have the same Severity as the
higher level DFMEA (Figure 20). If the Failure Mode is a Cause for another item, its Effect shall have the
same Severity as the other item (Figure 21).

Figure 20 - Scoring Vertical Relationships (diagram: Severity, Occurrence, and Detection scores shared between the CFTS 1 DFMEA and the System DFMEA)

Figure 21 - Scoring Horizontal Relationships (diagram: Severity and Occurrence & Detection relationships across Item A, Item B, and CFTS 2)

5.9.13 Occurrence Scoring

The Occurrence score must conform to Annex C, based on the effectiveness of the Prevention Controls.

If a cause is a failure mode from a lower level DFMEA, the Occurrence shall equal the maximum
Occurrence score stated for the referenced failure mode within the lower level DFMEA (Figure 20). If the
Cause is a Failure Mode of an input function, the Occurrence score shall equal the maximum Occurrence
score stated for that Failure Mode within the referenced item (Figure 21).

5.9.14 Detection Scoring

The Detection score must conform to Annex D

If a cause is a failure mode from a lower level DFMEA, the Detection score shall equal the maximum
Detection score stated for the referenced failure mode within the lower level DFMEA (Figure 20). If the
Cause is a Failure Mode of an input function, the Detection score shall equal the maximum Detection
score stated for that Failure Mode within the referenced item (Figure 21).
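The inheritance rules of 5.9.13 and 5.9.14 (take the maximum Occurrence or Detection score stated for the referenced failure mode) can be sketched, for illustration only (not part of this standard), as a lookup over hypothetical DFMEA row data:

```python
# Illustrative sketch only, not part of CS.00133: inheriting an O or D
# score from a referenced failure mode in another DFMEA. The row data
# shape (list of dicts) is a hypothetical representation.

def inherited_score(referenced_dfmea_rows, failure_mode, score_key):
    """Return the maximum O or D score stated for the referenced failure mode."""
    scores = [row[score_key] for row in referenced_dfmea_rows
              if row["failure_mode"] == failure_mode]
    if not scores:
        # The referenced DFMEA must exist, be discoverable, and must
        # include the failure mode being referenced (see 5.9.10/5.9.11).
        raise LookupError(f"failure mode {failure_mode!r} not found")
    return max(scores)

rows = [
    {"failure_mode": "Does Not Send Status", "occurrence": 3, "detection": 4},
    {"failure_mode": "Does Not Send Status", "occurrence": 5, "detection": 2},
]
print(inherited_score(rows, "Does Not Send Status", "occurrence"))  # 5
```

Taking the maximum is deliberately conservative: the inheriting line item carries the worst score stated anywhere for that failure mode.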

5.9.15 Monitoring & System Response

Monitoring & System Response (MSR) actions should be considered and may be included in the DFMEA
(See 5.7 and Annex F-H).

Examples:

- If data received is determined to be implausible, an alternative replacement for the data source may
be used
- If data received is determined to be implausible, the vehicle may go into a degraded state

5.9.16 Considerations

The following may need to be considered when developing software logic and the corresponding DFMEA.

Undefined State: The software is confronted with a combination of inputs that was not comprehended, and
there is no defined "default", thereby causing the code to hang or crash.

Incomplete Code Testing: Software development check is lacking and an operating condition is not
tested, therefore the code used to address that particular condition does not exist or has errors.

Malicious Code: An undesired code is executed.

Corrupted Code/Data: Digital information in memory, or being transported across a network, can be
susceptible to corruption via "soft" errors caused by electronic interference or charge. Continuous
monitoring is encouraged to ensure every read and write is plausible. Key systems should have
redundancy whenever possible.

Hardware: Hardware malfunctions can cause software to not function properly.

Power Loss: Loss of power to an item may require a response from the vehicle.

CAN Loss: The loss of a CAN may require a system response, such as the choice of alternate data or other
actions.

5.9.17 Recommended Actions

Recommended Actions must be included when the Action Priority is High. Recommended Actions are
encouraged when Action Priority is Medium, but are not required. Recommended Actions may include
Prevention Control, Detection Control, or MSR improvements or additions.
Page: 40/50
CS.00133
Change Level: J

5.10 Transitions from Core to Application DFMEA

A Core DFMEA is not an application DFMEA. An application DFMEA is derived from a Core DFMEA, but the two are separate and distinct deliverables. The application DFMEA evaluates the specific risks to the specific program and may have different functions, failure modes, causes, and risks due to the prevention and detection actions for that program. There may be unique risks due to usage and environmental differences along with the specific design differences.

Main Steps of Transition

1. Update Function Block Diagram and/or Correlation Matrix
   a. Add/Delete Items and Interfaces Unique to the Application
   b. Add/Delete Functions Unique to the Application
2. Update DFMEA Items and Functions
   a. Add/Delete Items Unique to the Application
   b. Add/Delete Functions Unique to the Application
   c. Add Content for Any Items and Functions
3. Review
   a. Review Potential Differences
      i. Requirements
      ii. Design Changes
      iii. Environment/Customer Usage
      iv. Manufacturing
   b. Add/Delete Content
      i. Add/Delete Causes Based on Difference Results
      ii. Input Data into the Actual Design Criteria Column and Score Occurrence
      iii. Add/Delete Detection Controls Based on Difference Results and Update Score
4. Recommended Actions
   a. Add Recommended Actions Based on New Scoring
   b. Add Recommended Action Results and Re-Score

6 APPROVED SOURCE LIST


Not Applicable.

Annex A: DFMEA Template

The DFMEA template contains the following columns, grouped as on the form.

Structure and failure analysis:
- Item to study (Individual part / Interface)
- Item Complexity / Elementary function (Optional)
- Requirement of the Function / Generic Application (Optional)
- Potential Failure Modes
- Life Situation (Optional)
- Potential Effect(s) of Failure
- Most Severe Failure Effect with effective Safety barrier (Optional)
- Effective Safety Barrier (Optional)
- Link to reference of Technical Safety Requirements (Optional)

Risk analysis:
- ASIL target for FM (Optional)
- PMHF (Optional)
- Potential Root Cause(s) / Mechanism(s) of Failure
- Impact (Optional)
- Mechanical Characteristic (Optional)
- Drawing Characteristic Classification / Design requirements (Optional)
- Prevention Controls
- Occurrence
- Actual Value (Optional)
- Detection Controls
- Initial Action Priority

Recommended actions:
- Recommended Action(s) or No Action with Rationale (H & M Only)
- Responsibility
- Due Date
- Completion Date
- Action(s) Results
- Predicted Action Priority

Monitor System Response (MSR) Risk Reduction Evaluation:
- Frequency (F)
- Rationale for Frequency (F) of FC
- Current Diagnostic Monitoring Controls
- Current System Response
- Most Severe Failure Effect after System Response
- Severity (S) of FE after MSR
- Monitoring (M)
- MSR Action Priority (AP)

MSR actions:
- MSR Preventive Action
- Diagnostic Monitoring Action
- System Response
- Most Severe Failure Effect after System Response
- Responsible Person
- Target Completion Date
- Action Item Status
- Completion Date
- Action Taken with Pointer to Evidence
- Frequency (F)
- Monitoring (M)
- Severity (S) of FE after MSR
- Action Priority (AP)

Final Action Priority/Comment:
- Final Action Priority
- Comments

Annex B: Severity Rating

Product General Evaluation Criteria: Potential Failure Effects are rated according to the criteria below. Severity (S) is left blank until filled in by the user.

S = 10 (Very High)
Severity criteria: Affects safe operation of the vehicle and/or other vehicles, the health of the driver or passenger(s), or road users or pedestrians.
Examples:
- Loss of propulsion while in motion
- Engine/transmission/clutch failure leading to sudden deceleration or sudden acceleration, e.g.:
  o Unexpected gear disengagement when the vehicle is running
  o Engine shutdown during working condition
  o Uncontrollable acceleration
  o Engine detachment from vehicle
  o Transmission detachment from engine
- Failure of parking system
- Loss of brakes or steering
- Lack of clear vision
Corporate or Product Line Examples: Linked to ERC4

S = 9 (Very High)
Severity criteria: Noncompliance with regulations.
Examples:
- All of S = 10, but with an avoidable warning; the warning must provide enough safe reaction time to avoid the effect
  o Red lamp: vehicle stop is mandatory as prescribed by the user's guide
- Plastic/elastomeric materials not recyclable
- Use of forbidden materials (e.g. lead)
- Emissions over the EOBD limits and not detected by the control system
Corporate or Product Line Examples: Linked to ERC3, related to regulation

S = 8 (High)
Severity criteria: Loss of primary vehicle function necessary for normal driving during expected service life.
Examples:
- The vehicle fails safely:
  o Engine does not start
  o Gear shift not possible with vehicle in stationary conditions
  o Oil/coolant leakage (noticeable: stain on the floor)
  o Switching on of yellow lamp in the instrument panel (not EOBD relevant)
- Emissions over the EOBD limits and detected by the control system
Corporate or Product Line Examples: Linked to ERC3 + vehicle breakdown + unavailability of vehicle + inability to leave the vehicle (unavailability of park brake)

S = 7 (High)
Severity criteria: Degradation of primary vehicle function necessary for normal driving during expected service life.
Examples:
- Any poor quality of primary function, such as:
  o Oil/coolant leakage (small)
  o Oil/fuel consumption increase
  o Difficult engine cold starting
  o Engine irregularity
  o Early component degradation/wear (early belt change)
  o Systematic gear grinding during gear shift
  o Clutch judder
- No authorization to produce by Plant
- High assistance time/cost
- Unable to track components subject to traceability
Corporate or Product Line Examples: Limp home mode

S = 6 (Moderate)
Severity criteria: Loss of secondary vehicle function.
Examples:
- Secondary effects such as:
  o Low response to the control
  o Vibrations during deceleration or wobbling in shutdown
  o Sporadic grating during gear shift
- Assembly line stop / significant delays
- Glove box light inoperable
- Radio inoperable
Corporate or Product Line Examples: Linked to ERC2 + inability to leave the vehicle (unavailability of lock access to vehicle)

S = 5 (Moderate)
Severity criteria: Degradation of secondary vehicle function.
Examples:
- Secondary effects such as:
  o Turbocharger whistle
  o Engine compartment fouling/staining (e.g. exhaust gas / oil leakage…)
  o Transmission noise during gear shift
  o Exhaust gas smell in the passenger compartment
- High manufacturing costs (e.g. difficult component objectivation)
- High number of production parts to be scrapped/reworked
Corporate or Product Line Examples: Linked to ERC2 + reduced performance of air conditioning or entertainment

S = 4 (Low)
Severity criteria: Very objectionable appearance, sound, vibration, harshness, or haptics.
Examples:
- Visible aesthetic defects (rust…)
- Minor squeak and rattle
- Increased cycle time without stopping the assembly line
- Rework in successive operations
Corporate or Product Line Examples: ERC1 (more than 75% of customers will detect the fault)

S = 3 (Low)
Severity criteria: Moderately objectionable appearance, sound, vibration, harshness, or haptics.
Examples:
- Marginally noticeable noise and visual defects
- Finishing/assembly not aesthetically optimal (confused arrangement of electric cables)
- Defects detected in the assembly line
- Difficult handling
- Rework in successive operations
Corporate or Product Line Examples: ERC1 (more than 50% of customers will detect the fault)

S = 2 (Low)
Severity criteria: Slightly objectionable appearance, sound, vibration, harshness, or haptics.
Examples:
- Difficult-to-hear noise
- Difficult-to-see visual defects
Corporate or Product Line Examples: ERC1 (less than 25% of customers will detect the fault)

S = 1 (Very Low)
Severity criteria: No discernible effect.

Annex C: Occurrence Rating

Occurrence Potential (O) for the Product: Potential Failure Causes are rated according to the criteria below. Consider Product Experience and Prevention Controls when determining the best Occurrence estimate (qualitative rating). Occurrence (O) is left blank until filled in by the user.

O = 10 (Extremely high) — > 100 per thousand; > 1 in 10
First application of new technology anywhere, without operating experience and/or under uncontrolled operating conditions. No product verification and/or validation experience. Standards do not exist and best practices have not yet been determined. Prevention controls are not able to predict field performance, or do not exist.

O = 9 (Very high) — 50 per thousand; 1 in 20
First use of design with technical innovations or materials within the company. New application or change in duty cycle / operating conditions. No product verification and/or validation experience. Prevention controls not targeted to identify performance to specific requirements.

O = 8 (Very high) — 20 per thousand; 1 in 50
First use of design with technical innovations or materials on a new application. New application or change in duty cycle / operating conditions. No product verification and/or validation experience. Few existing standards and best practices, not directly applicable to this design. Prevention controls not a reliable indicator of field performance.

O = 7 (High) — 10 per thousand; 1 in 100
New design based on similar technology and materials. New application or change in duty cycle / operating conditions. No product verification and/or validation experience. Standards, best practices, and design rules apply to the baseline design, but not the innovations. Prevention controls provide limited indication of performance.

O = 6 (High) — 2 per thousand; 1 in 500
Similar to previous designs, using existing technology and materials. Similar application, with changes in duty cycle or operating conditions. Previous testing or field experience. Standards and design rules exist but are insufficient to ensure that the failure cause will not occur. Prevention controls provide some ability to prevent a failure cause.

O = 5 (Moderate) — 0.5 per thousand; 1 in 2000
Detail changes to previous design, using proven technology and materials. Similar application, duty cycle, or operating conditions. Previous testing or field experience, or new design with some test experience related to the failure. Design addresses lessons learned from previous designs. Best practices re-evaluated for this design but not yet proven. Prevention controls capable of finding deficiencies in the product related to the failure cause and provide some indication of performance.

O = 4 (Moderate) — 0.1 per thousand; 1 in 10000
Almost identical design with short-term field exposure. Similar application, with minor change in duty cycle or operating conditions. Previous testing or field experience. Predecessor design and changes for the new design conform to best practices, standards, and specifications. Prevention controls capable of finding deficiencies in the product related to the failure cause and indicate likely design conformance.

O = 3 (Low) — 0.01 per thousand; 1 in 100000
Detail changes to known design (same application, with minor change in duty cycle or operating conditions) and testing or field experience under comparable operating conditions, or new design with successfully completed test procedure. Design expected to conform to standards and best practices, considering lessons learned from previous designs. Prevention controls capable of finding deficiencies in the product related to the failure cause and predict conformance of the production design.

O = 2 (Very low) — < 0.001 per thousand; 1 in 1000000
Almost identical mature design with long-term field exposure. Same application, with comparable duty cycle and operating conditions. Testing or field experience under comparable operating conditions. Design expected to conform to standards and best practices, considering lessons learned from previous designs, with a significant margin of confidence. Prevention controls capable of finding deficiencies in the product related to the failure cause and indicate confidence in design conformance.

O = 1 (Extremely low) — Failure is eliminated through preventive control
Failure eliminated through preventive control; the failure cause is not possible by design.

Product Experience: History of product usage within the company (novelty of design, application, or use case). Results of already completed detection controls provide experience with the design.

Prevention Controls: Use of best practices for product design, design rules, company standards, lessons learned, industry standards, material specifications, government regulations, and effectiveness of prevention-oriented analytical tools including computer aided engineering, math modeling, simulation studies, tolerance stacks, and design safety margins.

Note: O can drop based on product validation activities.



Annex D: Detection Rating

Detection Potential (D) for the Validation of the Product Design: Detection Controls are rated according to Detection Method Maturity and Opportunity for Detection. Each rating gives the chance to detect, the likelihood of detection by the current design control, the test status, the risk of non-detection, and the mission profile.

D = 10 (Not and/or cannot detect): Control will not and/or cannot detect a potential cause/mechanism and subsequent failure mode, or there is no current control. Test status: no test exists. Risk of non-detection: 85% to 100% uncertainty. Mission profile: N/A.

D = 9 (Very remote): Very remote chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: the tests are not able to detect the failure. Risk of non-detection: 75% to 85%. Mission profile: N/A.

D = 8 (Remote): Remote chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: not standardized, new test. Risk of non-detection: 65% to 75%. Mission profile (D = 8 to 5): Not coherent — the number of challenges, events, or cycles is not defined, and the stress factors are not defined/coherent, nor are temperatures (internal and external), vibration, chemicals, debris, radiated energy, installation, etc.

D = 7 (Very low): Very low chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: standardized published test procedure. Risk of non-detection: 55% to 65%.

D = 6 (Low): Low chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: not standardized new test; specific test for specific cause or failure mode; generic test for overall function. Risk of non-detection: 45% to 55% (e.g. C50).

D = 5 (Moderate): Moderate chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: not standardized new test; specific test for specific cause or failure mode. Risk of non-detection: 35% to 45% (e.g. C60).

D = 4 (Moderately high): Moderately high chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: consolidated but not standardized; developed but not published; generic test for overall function; process verification (assembly, EOL, handling…); installation checks; maintainability checks. Risk of non-detection: 25% to 35% (e.g. C70). Mission profile (D = 4 to 1): Coherent — the mission is thoroughly defined in terms of the number of challenges, events, and cycles, as well as temperatures (internal and external), vibration, chemicals, debris, radiated energy, installation, etc.

D = 3 (High): High chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: consolidated but not standardized; developed but not published; specific test for specific cause or failure mode; process verification (assembly, EOL, handling…); installation checks; maintainability checks. Risk of non-detection: 15% to 25% (e.g. C80).

D = 2 (Very high): Very high chance the control will detect a potential cause/mechanism and subsequent failure mode. Test status: standardized published procedure; generic test for overall function. Risk of non-detection: 5% to 15% (e.g. C90).

D = 1 (Almost certain): Control will almost certainly detect a potential cause/mechanism and subsequent failure mode. Test status: standardized published procedure; specific test for specific cause or failure mode. Risk of non-detection: 0% to 5%.

End of Annex D

Annex E: DFMEA Action Priority


Severity 10 Severity 5
Occ \ Det 1 2 3 4 5 6 7 8 9 10 Occ \ Det 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L M M H H H H H H 2 L L L L L L L L L L
3 M M M M H H H H H H 3 L L L L L L L L L L
4 H H H H H H H H H H 4 L L L L L L M M M M
5 H H H H H H H H H H 5 L M M M M M M M M M
6 H H H H H H H H H H 6 M M M M M M M M M M
7 H H H H H H H H H H 7 M M M M M M H H H H
8 H H H H H H H H H H 8 M M M H H H H H H H
9 H H H H H H H H H H 9 M M H H H H H H H H
10 H H H H H H H H H H 10 M M H H H H H H H H

Severity 9 Severity 4
Occ \ Det 1 2 3 4 5 6 7 8 9 10 Occ \ Det 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L M M M M H H H H 2 L L L L L L L L L L
3 L M M M M H H H H H 3 L L L L L L L L L L
4 M H H H H H H H H H 4 L L L L L L M M M M
5 M H H H H H H H H H 5 L L L L L L M M M M
6 H H H H H H H H H H 6 L M M M M M M M M M
7 H H H H H H H H H H 7 L M M M M M M M M M
8 H H H H H H H H H H 8 M M M M H H H H H H
9 H H H H H H H H H H 9 M M M M H H H H H H
10 H H H H H H H H H H 10 M M M M H H H H H H

Severity 8 Severity 3
Occ \ Det 1 2 3 4 5 6 7 8 9 10 Occ \ Det 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L L L M M M M M M 2 L L L L L L L L L L
3 L L L M M M H H H H 3 L L L L L L L L L L
4 M M M M H H H H H H 4 L L L L L L L L L L
5 M H H H H H H H H H 5 L L L L L L L L L L
6 M H H H H H H H H H 6 L L L L L L L L L L
7 M H H H H H H H H H 7 L L L L M M M M M M
8 M H H H H H H H H H 8 L L L M M M M M M M
9 M H H H H H H H H H 9 L L L M M M M M H H
10 M H H H H H H H H H 10 L L L M M M M M H H

Severity 7 Severity 2
Occ \ Det 1 2 3 4 5 6 7 8 9 10 Occ \ Det 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L L L M M M M M M 2 L L L L L L L L L L
3 L L L L M M M M M M 3 L L L L L L L L L L
4 M M M M M M H H H H 4 L L L L L L L L L L
5 M M M M M M H H H H 5 L L L L L L L L L L
6 M H H H H H H H H H 6 L L L L L L L L L L
7 M H H H H H H H H H 7 L L L L M M M M M M
8 M H H H H H H H H H 8 L L L L M M M M M M
9 M H H H H H H H H H 9 L L L L M M M M M M
10 M H H H H H H H H H 10 L L L L M M M M M M

Severity 6 Severity 1
Occ \ Det 1 2 3 4 5 6 7 8 9 10 Occ \ Det 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L L L L L M M M M 2 L L L L L L L L L L
3 L L L L L L M M M M 3 L L L L L L L L L L
4 L L L L M M M M M M 4 L L L L L L L L L L
5 L M M M M M M M M M 5 L L L L L L L L L L
6 M M M M M M H H H H 6 L L L L L L L L L L
7 M M M M M H H H H H 7 L L L L L L L L L L
8 M H H H H H H H H H 8 L L L L L L L L L L
9 M H H H H H H H H H 9 L L L L L L L L L L
10 M H H H H H H H H H 10 L L L L L L L L L L

End of Annex E

Annex F: Frequency Rating

Frequency Potential (F) for the Product: Frequency criteria (F) for the estimated occurrence of the Failure Cause in relevant operating situations during the intended service life of the vehicle. Frequency (F) is left blank until filled in by the user.

F = 10 (Extremely high, or cannot be determined) — > 100 per thousand; > 1 in 10
Frequency of occurrence of the Failure Cause is unknown, or known to be unacceptably high, during the intended service life of the vehicle.

F = 9 (High) — 50 per thousand; 1 in 20
Failure Cause is likely to occur during the intended service life of the vehicle.

F = 8 (High) — 20 per thousand; 1 in 50
Failure Cause may occur often in the field during the intended service life of the vehicle.

F = 7 (High) — 10 per thousand; 1 in 100
Failure Cause may occur frequently in the field during the intended service life of the vehicle.

F = 6 (Medium) — 2 per thousand; 1 in 500
Failure Cause may occur somewhat frequently in the field during the intended service life of the vehicle.

F = 5 (Medium) — 0.5 per thousand; 1 in 2000
Failure Cause may occur rarely in the field during the intended service life of the vehicle. At least ten occurrences in the field are predicted.

F = 4 (Low) — 0.1 per thousand; 1 in 10000
Failure Cause is predicted to occur in isolated cases in the field during the intended service life of the vehicle. At least one occurrence in the field is predicted. May be acceptable if effects are not related to safety or regulatory compliance.

F = 3 (Very low) — 0.01 per thousand; 1 in 100000
Failure Cause is predicted not to occur in the field during the intended service life of the vehicle, based on prevention and detection controls and field experience with similar parts. Isolated cases cannot be ruled out. No proof it will not happen. Acceptable for series production.

F = 2 (Extremely low) — < 0.001 per thousand; 1 in 1000000
Failure Cause is predicted not to occur in the field during the intended service life of the vehicle, based on prevention and detection controls and field experience with similar parts. Isolated cases cannot be ruled out. No proof it will not happen. Acceptable for series production.

F = 1 (Cannot occur) — Failure is eliminated through preventive control
Failure Cause cannot occur during the intended service life of the vehicle, or is virtually eliminated. Evidence exists that the Failure Cause cannot occur. Rationale is documented.

Alternative way to determine the Frequency Rating:

All ASIL Feared Events are rated S10 in the FMEA, so Severity alone cannot differentiate ASIL A from ASIL D. In ISO 26262, exposure is taken into account to reduce the ASIL. Frequency (F) in the FMEA allows the ASIL levels to be differentiated and the monitoring to be adapted. Frequency (F) must be less than or equal to Occurrence (O).

If the Failure Mode is associated with the Failure Effect in a life situation 100% of the time, Frequency (F) = Occurrence (O).

If the life situation applies between 10% and 100% of the time, Frequency (F) = Occurrence (O) - 1 is possible (for example, driving at night).

If the life situation applies between 1% and 10% of the time, Frequency (F) = Occurrence (O) - 2 is possible (for example, driving in the rain).

If the life situation applies 1% of the time or less, Frequency (F) = Occurrence (O) - 3 is possible (for example, a crash situation).
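The exposure-based derivation above can be sketched as a small lookup. The function name and the flooring of the result at 1 are assumptions for the sketch; the offsets themselves follow the rules stated above.

```python
def frequency_from_occurrence(occurrence, exposure_pct):
    """Derive Frequency (F) from Occurrence (O) and life-situation exposure.

    100% exposure:        F = O
    10% to <100%:         F = O - 1   (e.g. driving at night)
    >1% to <10%:          F = O - 2   (e.g. driving in the rain)
    <=1%:                 F = O - 3   (e.g. crash situation)

    F never exceeds O; here the result is floored at 1 (assumed), since
    the rating scale has no value below 1.
    """
    if exposure_pct >= 100:
        offset = 0
    elif exposure_pct >= 10:
        offset = 1
    elif exposure_pct > 1:
        offset = 2
    else:
        offset = 3
    return max(1, occurrence - offset)
```

For example, a cause with O = 6 that only matters while driving at night (roughly 10% to 100% exposure) would carry F = 5.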

End of Annex F

Annex G: Monitoring Rating

Supplemental FMEA for Monitoring and System Response (M): Monitoring criteria (M) for Failure Causes, Failure Modes, and Failure Effects by monitoring during customer operation. Use the rating number that corresponds to the least effective of the two criteria, Monitoring or System Response. Monitoring (M) is left blank until filled in by the user.

M = 10 (Not effective)
Diagnostic Monitoring / Sensory Perception: The fault/failure cannot be detected at all, or not during the Fault Tolerant Time Interval, by the system, the driver, a passenger, or a service technician.
System Response / Human Reaction: No response during the Fault Tolerant Time Interval.

M = 9 (Very Low)
Monitoring: The fault/failure can almost never be detected in relevant operating conditions. Monitoring control with low effectiveness, high variance, or high uncertainty. Minimal diagnostic coverage.
Response: The reaction to the fault/failure by the system or the driver may not reliably occur during the Fault Tolerant Time Interval.

M = 8 (Low)
Monitoring: The fault/failure can be detected in very few relevant operating conditions. Monitoring control with low effectiveness, high variance, or high uncertainty. Diagnostic coverage estimated < 60%.
Response: The reaction to the fault/failure by the system or the driver may not always occur during the Fault Tolerant Time Interval.

M = 7 (Moderately Low)
Monitoring: Low probability of detecting the fault/failure during the Fault Tolerant Time Interval by the system or the driver. Monitoring control with low effectiveness, high variance, or high uncertainty. Diagnostic coverage estimated > 60%.
Response: Low probability of reacting to the detected fault/failure during the Fault Tolerant Time Interval by the system or the driver.

M = 6 (Moderate)
Monitoring: The fault/failure will be automatically detected by the system or the driver only during power-up, with medium variance in detection time. Diagnostic coverage estimated > 90%.
Response: The automated system or the driver will be able to react to the detected fault/failure in many operating conditions.

M = 5 (Moderate)
Monitoring: The fault/failure will be automatically detected by the system during the Fault Tolerant Time Interval, with medium variance in detection time, or detected by the driver in very many operating conditions. Diagnostic coverage estimated between 90% and 97%.
Response: The automated system or the driver will be able to react to the detected fault/failure during the Fault Tolerant Time Interval in very many operating conditions.

M = 4 (Moderately High)
Monitoring: The fault/failure will be automatically detected by the system during the Fault Tolerant Time Interval, with medium variance in detection time, or detected by the driver in most operating conditions. Diagnostic coverage estimated > 97%.
Response: The automated system or the driver will be able to react to the detected fault/failure during the Fault Tolerant Time Interval, in most operating conditions.

M = 3 (High)
Monitoring: The fault/failure will be automatically detected by the system during the Fault Tolerant Time Interval, with very low variance in detection time and with a high probability. Diagnostic coverage estimated > 99%.
Response: The system will automatically react to the detected fault/failure during the Fault Tolerant Time Interval in most operating conditions, with very low variance in system response time and with a high probability.

M = 2 (Very High)
Monitoring: The fault/failure will be detected automatically by the system with very low variance in detection time during the Fault Tolerant Time Interval, and with a very high probability. Diagnostic coverage estimated > 99.9%.
Response: The system will automatically react to the detected fault/failure during the Fault Tolerant Time Interval with very low variance in system response time, and with a very high probability.

M = 1 (Reliable and acceptable for elimination of original Failure Effect)
Monitoring: The fault/failure will always be detected automatically by the system. Diagnostic coverage estimated to be significantly greater than 99.9%.
Response: The system will always automatically react to the detected fault/failure during the Fault Tolerant Time Interval.

End of Annex G

Annex H: MSR Action Priority


Severity 10 Severity 5
F\M 1 2 3 4 5 6 7 8 9 10 F\M 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L H H H H H H H H H 2 L L L L L L M M M M
3 L H H H H H H H H H 3 L L L L L L M M M M
4 H H H H H H H H H H 4 L L L L L L M M M M
5 H H H H H H H H H H 5 M M M M M H H H H H
6 H H H H H H H H H H 6 M M M M M H H H H H
7 H H H H H H H H H H 7 H H H H H H H H H H
8 H H H H H H H H H H 8 H H H H H H H H H H
9 H H H H H H H H H H 9 H H H H H H H H H H
10 H H H H H H H H H H 10 H H H H H H H H H H

Severity 9 Severity 4
F\M 1 2 3 4 5 6 7 8 9 10 F\M 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L H H H H H H H H H 2 L L L L L L M M M M
3 L H H H H H H H H H 3 L L L L L L M M M M
4 H H H H H H H H H H 4 L L L L L L M M M M
5 H H H H H H H H H H 5 M M M M M H H H H H
6 H H H H H H H H H H 6 M M M M M H H H H H
7 H H H H H H H H H H 7 H H H H H H H H H H
8 H H H H H H H H H H 8 H H H H H H H H H H
9 H H H H H H H H H H 9 H H H H H H H H H H
10 H H H H H H H H H H 10 H H H H H H H H H H

Severity 8 Severity 3
F\M 1 2 3 4 5 6 7 8 9 10 F\M 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L L L L L M M M M 2 L L L L L L L L L L
3 L L L L L L M M H H 3 L L L L L L L L L L
4 L L L M M M H H H H 4 L L L L L L L L L L
5 M M M M H H H H H H 5 L L L L L L M M M M
6 H H H H H H H H H H 6 L L L L L L M M M M
7 H H H H H H H H H H 7 H H H H H H H H H H
8 H H H H H H H H H H 8 H H H H H H H H H H
9 H H H H H H H H H H 9 H H H H H H H H H H
10 H H H H H H H H H H 10 H H H H H H H H H H

Severity 7 Severity 2
F\M 1 2 3 4 5 6 7 8 9 10 F\M 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L L L L L M M M M 2 L L L L L L L L L L
3 L L L L L L M M H H 3 L L L L L L L L L L
4 L L L M M M H H H H 4 L L L L L L L L L L
5 M M M M H H H H H H 5 L L L L L L M M M M
6 H H H H H H H H H H 6 L L L L L L M M M M
7 H H H H H H H H H H 7 H H H H H H H H H H
8 H H H H H H H H H H 8 H H H H H H H H H H
9 H H H H H H H H H H 9 H H H H H H H H H H
10 H H H H H H H H H H 10 H H H H H H H H H H

Severity 6 Severity 1
F\M 1 2 3 4 5 6 7 8 9 10 F\M 1 2 3 4 5 6 7 8 9 10
1 L L L L L L L L L L 1 L L L L L L L L L L
2 L L L L L L M M M M 2 L L L L L L L L L L
3 L L L L L L M M M M 3 L L L L L L L L L L
4 L L L L L L M M M M 4 L L L L L L L L L L
5 M M M M M H H H H H 5 L L L L L L L L L L
6 M M M M M H H H H H 6 L L L L L L L L L L
7 H H H H H H H H H H 7 L L L L L L L L L L
8 H H H H H H H H H H 8 L L L L L L L L L L
9 H H H H H H H H H H 9 L L L L L L L L L L
10 H H H H H H H H H H 10 L L L L L L L L L L

Note: In the AIAG/VDA Handbook, the Action Priority is less critical for S=9 than for S=10, which conflicts with the Stellantis Ethics Code. It was therefore decided to apply the same, more critical rating to both S=9 and S=10.

End of Annex H

Annex I: Impact Assessment Tool RASI

Roles: Lead Facilitator, Review Team (e.g. Testing / Manufacturing), and Quality belong to the Working Team; the Chief Engineer represents Management.

Activity                         Lead Facilitator  Review Team  Quality  Chief Engineer
Create Plan / Impact Assessment  R                 S            S        I
Create Team                      R                 S            -        I
Gather Data                      R                 I            S        -
Develop DFMEA                    R                 S            S        -
Approval                         R                 S            S        A
Manage Recommended Actions       R                 I            S        I
Communicate Status & Results     R                 S            S        I

R = Responsible; A = Approve; S = Support; I = to be Informed

End of Annex I

Annex J: Impact Assessment Tool Example

Legend: H = High Impact; M = Medium Impact; L = Low Impact.

Each item is assessed by its responsible engineer for: Change to Component/Part (Y/N); In-house/Supplier Change (Y/N); Regulatory Impact or Safety, which makes a DFMEA mandatory (Y/N); DFMEA Impact (Non-Safety) (Y/N); and seven weighted impact criteria (weights 3, 1, 3, 2, 1, 3, 3): Product Newness, Design Newness/Changes, Complexity of Integration, Field Experience / Known Problems, Manufacturing Impact, Service Impact, and Installation / Usage / Environment. The weighted ratings are summed into an overall score, which drives the decision (DFMEA, DRBFM, or None) together with a rationale and a link to the DFMEA or DRBFM that was done based on this assessment.

Example rows:

1. Example A (Thomas): Change N; Score 0; Decision NONE — neither a DFMEA nor a DRBFM is required since no changes exist.
2. Example B (Linda): Change Y; Safety/Regulatory Y; Score 0; Decision DFMEA — a DFMEA is required.
3. Example C (Robert): Change Y; Safety/Regulatory N; DFMEA Impact Y; Score 0; Decision DFMEA — a DFMEA is required based on the criteria.
4. Example D (Khalid): Change Y; Safety/Regulatory N; DFMEA Impact N; one criterion rated H; Score 9; Decision DFMEA — a DFMEA is required based on the criteria.
5. Example E (Hanna): Change Y; Safety/Regulatory N; DFMEA Impact N; ratings L H L L L L L; Score 18; Decision DFMEA — a DFMEA is required based on the criteria.
6. Example F (Susan): Change Y; Safety/Regulatory N; DFMEA Impact N; ratings L L H H H H H; Score 40; Decision DFMEA — a DFMEA is required based on the criteria.
7. Example G (Jose): Change Y; Safety/Regulatory N; DFMEA Impact N; ratings L L M M L H L; Score 27; Decision DFMEA or DRBFM — enter decision and enter rationale.
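The scoring behind the example rows can be sketched as a weighted sum. The criterion names and weights below are reconstructed from the example scores (rows 4 to 7) and should be treated as an assumption, not as normative content of CS.00133.

```python
# Assumed weights, one per impact criterion, reconstructed from the
# example scores (e.g. L H L L L L L -> 18, L L H H H H H -> 40).
WEIGHTS = {
    "product_newness": 3,
    "design_changes": 1,
    "complexity_of_integration": 3,
    "field_experience": 2,
    "manufacturing_impact": 1,
    "service_impact": 3,
    "usage_environment": 3,
}

# Points per impact level: L = 1, M = 2, H = 3 (assumed).
IMPACT_POINTS = {"L": 1, "M": 2, "H": 3}

def impact_score(ratings):
    """Sum weight x points over the rated criteria."""
    return sum(WEIGHTS[c] * IMPACT_POINTS[r] for c, r in ratings.items())
```

With these assumptions, the ratings of row 5 (L H L L L L L) reproduce the score 18 shown in the example table.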

End of Annex J
