Module 25: Measurement System Analysis (MSA) - Attribute Data
UNCLASSIFIED / FOUO
National Guard Black Belt Training
This material is not for general distribution, and its contents should not be quoted, extracted for publication, or otherwise copied or distributed without prior coordination with the Department of the Army, ATTN: ETF.
ACTIVITIES
• Map Current Process / Go & See
• Identify Key Input, Process, Output Metrics
• Develop Operational Definitions
• Develop Data Collection Plan
• Validate Measurement System
• Collect Baseline Data
• Identify Performance Gaps
• Estimate Financial/Operational Benefits
• Determine Process Stability/Capability
• Complete Measure Tollgate
TOOLS
• Process Mapping
• Process Cycle Efficiency/TOC
• Little's Law
• Operational Definitions
• Data Collection Plan
• Statistical Sampling
• Measurement System Analysis
• TPM
• Generic Pull
• Setup Reduction
• Control Charts
• Histograms
• Constraint Identification
• Process Capability
Note: Activities and tools vary by project. Lists provided here are not necessarily all-inclusive.
Learning Objective
Understand how to conduct and interpret a
measurement system analysis with Attribute Data
Data Scales
Nominal: Contains numbers that have no basis on which to arrange in any order or to make any assumptions about the quantitative difference between them. These numbers are just names or labels.
For example:
In an organization: Dept. 1 (Accounting), Dept. 2 (Customer Service), Dept. 3 (Human Resources)
In an insurance co.: Business Line 1, Line 2, Line 3
Modes of transport: Mode 1 (air), Mode 2 (truck), Mode 3 (sea)
Kappa Techniques
Kappa is appropriate for non-quantitative systems
such as:
Good or bad
Go/No Go
Differentiating noises (hiss, clank, thump)
Pass/fail
Kappa Techniques
Kappa for Attribute Data:
Treats all misclassifications equally
Does not assume that the ratings are equally
distributed across the possible range
Requires that the units be independent and that the
persons doing the judging or rating make their
classifications independently
Requires that the assessment categories be mutually
exclusive
Operational Definitions
There are some quality characteristics that are either difficult
or very time consuming to define
To assess classification consistency, several units must be
classified by more than one rater or judge
If there is substantial agreement among the raters, there is
the possibility, although no guarantee, that the ratings are
accurate
If there is poor agreement among the raters, the usefulness
of the rating is very limited
Consequences?
What are the important concerns?
What are the risks if agreement within and between
raters is not good?
Are bad items escaping to the next operation in the
process or to the external customer?
Are good items being reprocessed unnecessarily?
What is the standard for assessment?
How is agreement measured?
What is the Operational Definition for assessment?
Kappa
K = (P_observed - P_chance) / (1 - P_chance)
For perfect agreement, P observed = 1 and K=1
As a rule of thumb, if Kappa is lower than 0.7, the
measurement system is not adequate
If Kappa is 0.9 or above, the measurement system is
considered excellent
The lower limit for Kappa can range from 0 to -1
For P observed = P chance (expected), then K=0
Therefore, a Kappa of 0 indicates that the agreement is
the same as would be expected by random chance
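A minimal sketch (Python; an addition, not part of the original slides) of the Kappa formula and the rule-of-thumb interpretation above. The "adequate" label for values between 0.7 and 0.9 is an inference from the thresholds quoted on this slide.

```python
def kappa(p_observed: float, p_chance: float) -> float:
    """K = (P_observed - P_chance) / (1 - P_chance)."""
    return (p_observed - p_chance) / (1.0 - p_chance)

def interpret(k: float) -> str:
    if k >= 0.9:
        return "excellent"
    if k >= 0.7:
        return "adequate"
    return "not adequate"

print(kappa(1.0, 0.5))               # perfect agreement -> K = 1.0
print(kappa(0.5, 0.5))               # agreement no better than chance -> K = 0.0
print(interpret(kappa(0.85, 0.50)))  # illustrative values -> "adequate"
```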
In the Attribute Agreement Analysis dialog, set Attribute = Rating, Samples = Sample, and Appraisers = Appraiser, then click OK.
[Chart: Assessment Agreement, Within Appraisers - Percent (with 95.0% CI) by Appraiser: Duncan, Hayes, Holmes, Montgomery, Simpson]
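Outside Minitab, the same within-appraiser percent agreement can be sketched in a few lines of Python/pandas. This is an illustration only; the column names mirror the dialog settings above, and the data values below are invented.

```python
import pandas as pd

# Long-format data: one row per trial, columns Sample, Appraiser, Rating
data = pd.DataFrame({
    "Sample":    [1, 1, 2, 2, 1, 1, 2, 2],
    "Appraiser": ["Duncan", "Duncan", "Duncan", "Duncan",
                  "Hayes", "Hayes", "Hayes", "Hayes"],
    "Rating":    ["Pass", "Pass", "Fail", "Pass",
                  "Pass", "Pass", "Fail", "Fail"],
})

# A sample "agrees within appraiser" when that appraiser gave it the same
# rating on every trial; report the percent of such samples per appraiser.
agree = (
    data.groupby(["Appraiser", "Sample"])["Rating"]
        .nunique()
        .eq(1)
        .groupby("Appraiser")
        .mean()
        .mul(100)
)
print(agree)   # Duncan 50.0, Hayes 100.0 for this made-up data
```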
Let’s Do It Again
Stat>Quality Tools>Attribute Agreement Analysis
3. Click on OK
[Charts: Within Appraiser - Assessment Agreement, Percent by Appraiser (Duncan, Hayes, Holmes, Montgomery, Simpson), two panels]
Note: This is only a part of the total data set for illustration
Ordinal Data
If your data is Ordinal, you must also check this box.
What Is Kendall's Coefficient of Concordance?
Kendall's Coefficient of Concordance
Kendall's Coefficient of Concordance (Cont.)
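Assuming these slides refer to Kendall's Coefficient of Concordance (W), which is reported when the attribute categories are ordered, a brief Python sketch of the statistic follows. The rating matrix is invented for illustration and no tie correction is applied.

```python
import numpy as np

# rows = raters (m), columns = items (n); values are ordinal ratings
ratings = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 4, 5],
    [1, 2, 4, 3, 5],
])

m, n = ratings.shape
# Rank each rater's scores across the items, then sum the ranks per item.
ranks = np.argsort(np.argsort(ratings, axis=1), axis=1) + 1
rank_sums = ranks.sum(axis=0)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()

# Kendall's W = 12*S / (m^2 * (n^3 - n)); 1 = complete agreement, 0 = none.
W = 12 * S / (m**2 * (n**3 - n))
print(round(W, 3))   # about 0.911 for this made-up data
```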
Takeaways
How to set up and conduct an MSA
Use attribute data only if the measurement cannot be converted to continuous data
Operational definitions are extremely important
Attribute measurement systems require a great deal
of maintenance
Kappa is an easy method to test how repeatable and
reproducible a subjective measurement system is
References
Cohen, J., "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement, Vol. 20, pp. 37-46, 1960.
Futrell, D., "When Quality Is a Matter of Taste, Use Reliability Indexes," Quality Progress, May 1995.
Kappa Example #1
The Chief of Staff (COS) of the 1st Infantry Division is preparing for the redeployment of 3 brigade combat teams supporting Operation Iraqi Freedom. The Secretary of the General Staff (SGS) informs the COS that awards for civilian personnel (Department of the Army Civilians and military dependents) who provided volunteer support prior to and during the deployment are always a "significant emotional issue." There are hundreds of submissions for awards. A board of senior Army personnel decides who receives an award. The measurement system the board uses to determine who receives an award is a major concern due to board-member-to-board-member differences as well as within-board-member differences.
The COS directs the SGS (a certified Army Black Belt) to conduct a measurement system study using historical data to "level set" the board members. Kappa for each board member as well as Kappa between board members must be calculated.
The COS's guidance is to retrain and/or replace board members until the measurement system is no longer a concern.
Contingency Table: Counts
                                  Board Member 1 - 1st
                                  Award    No Award    Total
Board Member 1 - 2nd   Award        15        3          18
                       No Award      3       19          22
                       Total        18       22          40
Board Member 1 – 1st : shows the results of Board Member 1’s 1st recommendations. The 1st board
member recommended an “Award” or “No Award” for each of the 40 candidates on the first review
of the files.
Board Member 1 – 2nd : shows the results of Board Member 1’s 2nd recommendations. The 1st
board member recommended an “Award” or “No Award” for each of the 40 candidates on the
second review of the files.
Board Member 1 Proportions: The lower table is the data in the upper table represented as a percentage of the total.

Contingency Table: Proportions
                                  Board Member 1 - 1st
                                  Award    No Award    Total
Board Member 1 - 2nd   Award       0.375     0.075      0.450
                       No Award    0.075     0.475      0.550
                       Total       0.450     0.550      1.000

(Each 0.450 marginal total represents 18/40.)
Calculating Kappa
K = (P_observed - P_chance) / (1 - P_chance)
Pobserved: Proportion of candidates for which both Board Members agree = proportion both Board Members agree are "Award" + proportion both Board Members agree are "No Award".
Pchance: Proportion of agreements expected by chance = (proportion Board Member 1 says "Award" * proportion Board Member 2 says "Award") + (proportion Board Member 1 says "No Award" * proportion Board Member 2 says "No Award").
The verbiage for defining Kappa will vary slightly depending on whether
we are defining a Within-Rater Kappa or Between-Rater Kappa
Pchance is the marginal proportion for each classification multiplied and then summed:
Pchance = (0.450 * 0.450) + (0.550 * 0.550) = 0.505
Pobserved = 0.375 + 0.475 = 0.850
K = (0.850 - 0.505) / (1 - 0.505) = 0.697
Kappa for Board Member 1 is sufficiently close to 0.700 that we conclude that Board Member 1 exhibits repeatability.
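For reference, a short Python sketch (an addition, not from the original slides) that reproduces this hand calculation from the counts in Board Member 1's contingency table.

```python
def kappa_2x2(a, b, c, d):
    """Kappa for a 2x2 agreement table.

    a, b = first row  (row rater says Award;    column rater says Award / No Award)
    c, d = second row (row rater says No Award; column rater says Award / No Award)
    """
    n = a + b + c + d
    p_observed = (a + d) / n
    # chance agreement from the row and column marginal proportions
    p_chance = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    return (p_observed - p_chance) / (1 - p_chance)

# Board Member 1: 2nd review (rows) vs. 1st review (columns)
print(round(kappa_2x2(15, 3, 3, 19), 3))   # 0.697 -- matches the value above
```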
[Exercise: contingency tables (Counts and Proportions) for Board Member 2, 1st vs. 2nd review, Award / No Award]
K Board Member 2 = ?
Contingency Table: Counts
                                  Board Member 1 - 1st
                                  Award    No Award    Total
Board Member 2 - 1st   Award        14        5          19
                       No Award      4       17          21
                       Total        18       22          40
Pchance: Proportion of agreements expected by chance = (proportion Board Member 1 says "Award" * proportion Board Member 2 says "Award") + (proportion Board Member 1 says "No Award" * proportion Board Member 2 says "No Award").
The verbiage for defining Kappa will vary slightly depending on whether we are
defining a Within-Board Member Kappa or Between-Board Member Kappa
Pchance is the marginal proportion for each classification multiplied and then summed:
Pchance = (0.450 * 0.475) + (0.550 * 0.525) = 0.503
Pobserved = (14 + 17) / 40 = 0.775
K = (0.775 - 0.503) / (1 - 0.503) = 0.547, well below the 0.700 rule of thumb
The Board Members evaluate candidate packets differently too often. The SGS will retrain each Board Member before dismissing a Board Member and finding a replacement.
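As a cross-check (an addition, not from the original slides; the course itself uses Minitab), the same between-rater Kappa can be reproduced with scikit-learn's cohen_kappa_score by rebuilding the 40 paired decisions implied by the contingency table above.

```python
from sklearn.metrics import cohen_kappa_score

# (Board Member 1, Board Member 2) decisions from the table:
# 14 Award/Award, 5 No Award/Award, 4 Award/No Award, 17 No Award/No Award
pairs = ([("Award", "Award")] * 14 + [("No Award", "Award")] * 5 +
         [("Award", "No Award")] * 4 + [("No Award", "No Award")] * 17)
bm1 = [p[0] for p in pairs]
bm2 = [p[1] for p in pairs]

print(round(cohen_kappa_score(bm1, bm2), 3))   # about 0.55, below the 0.70 rule of thumb
```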
Improvement Ideas
How might we improve this measurement system?
Additional training
Physical standards/samples
Rater certification (and periodic re-certification)
process
Better operational definitions
Kappa Conclusions
Is the current measurement system adequate?
Where would you focus your improvement efforts?
Which rater would you want to conduct any training that needs to be done?