
MSA

MEASUREMENT
SYSTEM
ANALYSIS

By
MARUTI CENTER FOR EXCELLENCE
CONTENTS
Section 1:
• MSA Introduction: a) Measurement, b) Measurement System, c) Measurement System Analysis
• Properties of a good measurement system
• Location error / Precision error
• Effect of measurement system error on measurement decision
• Precision error: a) Repeatability, b) Reproducibility
• How to calculate GRR
• Location error: a) Bias, b) Linearity, c) Stability
• How to calculate bias and decision making

Section 2:
• MSA - Attribute
• Probability Method
• Kappa Method
INTRODUCTION
• The quality of the product depends in part on the quality of the process.

• The quality of the process depends on the ability to control the process.

• The ability to control the process depends on the ability to measure the process.

• The ability to measure the process depends on the quality of the measurement system.
INTRODUCTION

Product Quality
 depends on the Ability of Process Control,
 which depends on the Ability to Measure,
 which depends on the Quality of the Measurement System.

So, let us understand the measurement process.
MEASUREMENT
What is Measurement:

Assignment of numbers (values) to material things to represent the relationships among them w.r.t. particular properties.
MEASUREMENT SYSTEM
What is a Measurement System:

A measurement system is a measurement process.

Input (Standard, Work Piece (Part), Instrument, Person, Procedure, Environment) → Process (Measurement) → Output (Measurement Result)
MEASUREMENT PROCESS

The same inputs (Standard, Work Piece (Part), Instrument, Person, Procedure, Environment) feed the measurement, and the result loops back through analysis to a decision:

Measurement → Measurement Result → Analysis → Decision (Action)
MEASUREMENT SYSTEM

The complete process used to obtain a measurement result.

A combination of:
• Operations
• Procedures
• Gauges and other equipment
• Personnel
• Environment and assumptions, etc.
MEASUREMENT SYSTEM ANALYSIS
• Study of the effect of the measurement system on the measurement result, and

• Assessment of its suitability for product or process control.
PROPERTIES OF A GOOD MEASUREMENT
SYSTEM
• Adequate discrimination (resolution)

• Under statistical control

• Accuracy

• Precision
DISCRIMINATION
Ability to measure the smallest difference.

Should be small relative to
- PROCESS VARIATION, or
- SPECIFICATION LIMITS

The rule of 1/10th should be followed as a starting point, i.e. the least count (resolution) of the equipment should be 1/10th of the process variation (10 data categories).
DISCRIMINATION
DATA CATEGORIES:
The number of groups into which the measurement data (results) can be resolved by the measurement system.

Example (see the sketch below):
• Process variation: 3.93~4.06 mm
• Equipment: vernier caliper, L.C. 0.02 mm
• Group of readings: 3.94, 3.96, 3.98, 4.00, 4.02, 4.04, 4.06
• Data categories: 7
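The data-category count can be sketched in a couple of lines (Python; the alignment of the achievable reading grid is assumed to match the example):

import math

# Sketch: data categories = number of distinct readings the instrument can
# report across the process spread, given its least count (resolution).
# Assumes the reading grid lines up with the example values above.
process_low, process_high = 3.93, 4.06   # process variation from the example
least_count = 0.02                        # vernier caliper least count
categories = math.floor((process_high - process_low) / least_count) + 1
print(categories)   # 7 -> readings 3.94, 3.96, ..., 4.06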
WHAT IS THE MEANING OF “UNDER STATISTICAL CONTROL”?
Variability arises from two kinds of causes:
1. Natural or inherent, called COMMON CAUSES
2. Sudden or special, called ASSIGNABLE CAUSES

A process showing variation due only to common causes is said to be under STATISTICAL CONTROL.
STATISTICAL CONTROL OF MEASUREMENT SYSTEM

• Common cause variation only

• No special cause variation

• Generally, variation found within Mean +/- 3 Sigma (a 6 Sigma spread) is considered common cause variation.

• A 6 Sigma spread covers 99.73% of the process.
ACCURACY AND PRECISION
What is Accuracy :
“Closeness” to the true value, or to an accepted
reference value

What is Precision :
“Closeness” of repeated readings to each other
ACCURACY AND PRECISION
With the center of the target taken to be the true
value of the characteristic being measured and by
the rifle shots representing the measured values,
there are four combinations of accuracy and
precision as depicted in the following slides.
INACCURATE AND IMPRECISE
ACCURATE AND IMPRECISE
PRECISE BUT INACCURATE
ACCURATE AND PRECISE
IF MEASUREMENT SYSTEM HAS ACCURACY ERROR / LOCATION ERROR

[Diagram: actual variation vs. observed variation; due to the MS error the location of the observed distribution is shifted.]

It will create a LOCATION error in the measurement result.
ACCURACY ERROR / LOCATION ERROR

Example for one part:

Observed values (20 observations) = 9.98~10.00
  Range = 0.02, mean = 9.99

Reference values (20 observations) = 9.99~10.01
  Range = 0.02, mean = 10.00

In this example the range is 0.02 in both cases, but the means differ by 0.01.

This error is called accuracy error or location error.
IF MEASUREMENT SYSTEM HAS PRECISION ERROR

[Diagram: actual variation vs. observed variation; due to the MS error the spread of the observed distribution is wider.]

It will create a SPREAD error in the measurement result.
PRECISION ERROR

Example for one part:

Observed values (20 observations) = 9.98~10.02
  Range = 0.04, mean = 10.00

Reference values (20 observations) = 9.99~10.01
  Range = 0.02, mean = 10.00

In this example the mean is 10.00 in both cases, but the ranges differ by 0.02.

This error is called precision error or spread error.
SO, WE CONCLUDE

x̄ total = x̄ process + MS accuracy error (bias)

σ² total = σ² process + σ² MS precision error

Observed Process = Actual Process + Measurement System Error
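A one-line numeric illustration of the spread relation (the two sigma values below are hypothetical, chosen only to show how MS error inflates the observed spread):

import math

sigma_process = 0.05   # hypothetical actual process standard deviation
sigma_ms = 0.02        # hypothetical measurement system standard deviation
sigma_observed = math.sqrt(sigma_process**2 + sigma_ms**2)
print(round(sigma_observed, 4))   # 0.0539 -> larger than the actual 0.05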


EFFECT OF MEASUREMENT SYSTEM ERROR ON MEASUREMENT DECISION
1. EFFECT ON PRODUCT CONTROL:
1a. Calling a good part a bad part (type I error)
1b. Calling a bad part a good part (type II error)

[Diagrams: observed distributions against LSL and USL, showing parts near the limits being misclassified.]
EFFECT OF MEASUREMENT SYSTEM ERROR ON MEASUREMENT DECISION

2. EFFECT ON PROCESS CONTROL:

2a. Calling a common cause a special cause (type I error)

2b. Calling a special cause a common cause (type II error)

2c. The observed variance equals the actual variance plus the measurement system variance:

σ² obs = σ² actual + σ² msa
TYPES OF MEASUREMENT SYSTEM ERRORS

Measurement System Errors
• Location: Bias, Linearity, Stability
• Spread: Repeatability, Reproducibility
SUMMARY
• Types of measurement system error:

Measured value = true value + location error + dispersion error

Location error (accuracy): Bias, Stability, Linearity
Dispersion error (precision): Repeatability, Reproducibility
PERFORMING MSA: PRE-CONDITIONS
• Data to be collected under routine measurement conditions

• Level-1 controls exist, i.e. the controls required to be used even without MSA

• Equipment is calibrated

• Adequate discrimination

• Persons are qualified

• Unnecessary causes of variation do not exist, etc.
REPEATABILITY (WITHIN SYSTEM VARIATION)
The variation in measurements obtained
• with one measurement instrument
• when used several times
• by one appraiser
• while measuring the identical characteristic
• on the same part.

σ repeatability = R bar / d2* = K1 × R bar, where K1 = 1 / d2*

Note: Repeatability is commonly referred to as equipment variation (EV), although this is misleading. In fact repeatability is within-system (SWIPPE) variation.
REPRODUCIBILITY (BETWEEN SYSTEM VARIATION)
The variation in the average of the measurements
• made by different appraisers
• using the same measuring instrument
• when measuring the identical characteristic
• on the same part.
This is also commonly known as AV, "Appraiser Variation".

σ reproducibility = R appraiser bar / d2* = K2 × R appraiser bar, where K2 = 1 / d2*

[Diagram: distributions for appraisers A, B and C; the spread of their averages is the reproducibility.]
GAGE REPEATABILITY & REPRODUCIBILITY (GRR)

An estimate of the combined variation of repeatability and reproducibility.

GRR is the variance equal to the sum of the within-system and between-system variances:

σ²GRR = σ²EV + σ²AV

[Diagram: overlapping distributions for appraisers A, B and C.]
R&R – STUDY
Three methods:
1. Range method
2. X bar–R method
3. ANOVA method (preferable where an appropriate computer programme is available)
R&R – AVERAGE AND RANGE METHOD
Conducting the study:
1) Selection of sample: n ≥ 10 parts, depending on size, measurement time / cost etc. (representing the process variation).
2) Identification: 1 to n (not visible to the appraisers).
- Location marking (easily visible and identifiable by the appraisers).
- Selection of appraisers (k): 2-3 routine appraisers.
- Selection of measuring equipment: calibrated routine equipment.
- Deciding the number of trials (r): 2-3.
- Data collection:
  - using a data collection sheet
  - under normal measurement conditions
  - in random order
  - using a blind measurement process.
R&R – DATA COLLECTION (values for parts 1–10, left to right)

Row 1  Operator A, trial 1: 48.060 48.055 48.054 48.065 48.064 48.056 48.063 48.064 48.065 48.066
Row 2  Operator A, trial 2: 48.061 48.056 48.055 48.065 48.063 48.055 48.060 48.066 48.062 48.063
Row 3  Operator A, trial 3: (used only if a third trial is run)
Row 4  AVERAGE per part → Xa bar
Row 5  RANGE per part → Ra bar
Row 6  Operator B, trial 1: 48.060 48.057 48.053 48.065 48.052 48.055 48.063 48.064 48.065 48.063
Row 7  Operator B, trial 2: 48.060 48.056 48.056 48.065 48.053 48.054 48.060 48.065 48.065 48.063
Row 8  Operator B, trial 3: (used only if a third trial is run)
Row 9  AVERAGE per part → Xb bar
Row 10 RANGE per part → Rb bar
R&R – AVERAGE AND RANGE METHOD
Data collection:
- Enter appraiser A's results (1st trial) in row 1.

- Enter appraiser B's results (1st trial) in row 6.

- Repeat the cycle (2nd trial) and enter the data in rows 2 and 7.

- If three trials are needed, repeat the cycle and enter the data in rows 3 and 8.
R&R – GRAPHICAL ANALYSIS (MANUAL)
1) For appraiser A, calculate average (X bar) & range ( R ) for each
part and enter in rows 4 & 5 respectively.
2) Do the same for appraiser B and enter results in rows 9 and 10.
3) For appraiser A, calculate average (Xa bar) of all the averages
(row 4) and average (Ra bar) of all the ranges (row 5) and enter in
data sheet.
4) Calculate Xb bar and Rb bar for appraiser B and enter the results
in data sheet.
5) Calculate average of all the observations (rows 4 & 9) of each part
and enter result in row 11.
6) Calculate the part range (Rp) = difference of the max. and min. of row 11,
and enter it in the data sheet (rightmost cell of row 11).
R&R – CALCULATION (values for parts 1–10, left to right)

Row 1  A, trial 1: 48.060 48.055 48.054 48.065 48.064 48.056 48.063 48.064 48.065 48.066
Row 2  A, trial 2: 48.061 48.056 48.055 48.065 48.063 48.055 48.060 48.066 48.062 48.063
Row 4  A AVERAGE:  48.061 48.056 48.055 48.065 48.064 48.056 48.062 48.065 48.064 48.065 → Xa bar = 48.0609
Row 5  A RANGE:     0.001  0.001  0.001  0.000  0.001  0.001  0.003  0.002  0.003  0.003 → Ra bar = 0.0016
Row 6  B, trial 1: 48.060 48.057 48.053 48.065 48.052 48.055 48.063 48.064 48.065 48.063
Row 7  B, trial 2: 48.060 48.056 48.056 48.065 48.053 48.054 48.060 48.065 48.065 48.063
Row 9  B AVERAGE:  48.060 48.057 48.055 48.065 48.053 48.055 48.062 48.065 48.065 48.063 → Xb bar = 48.0597
Row 10 B RANGE:     0.000  0.001  0.003  0.000  0.001  0.001  0.003  0.001  0.000  0.000 → Rb bar = 0.001
Row 11 PART AVERAGE (Xp bar): 48.060 48.056 48.055 48.065 48.058 48.055 48.062 48.065 48.064 48.064 → Rp = 0.011

Row 12 R double bar = (Ra bar + Rb bar) / number of appraisers = 0.0013
Row 13 X bar Diff = max(Xa bar, Xb bar) − min(Xa bar, Xb bar) = 0.0012
Row 14 UCLr = D4 × R double bar = 0.0043  (D4 = 3.27 for 2 trials, 2.58 for 3 trials)
Row 15 LCLr = D3 × R double bar = 0       (D3 = 0 for fewer than 7 trials)

R & R – GRAPHICAL ANALYSIS (MANUAL)
RANGE CHARTS

[Range charts for appraisers A and B: the per-part ranges (parts 1–10) plotted against CL = R double bar, with UCL and LCL.]

D4 = 3.27 for 2 trials (2.58 for 3 trials): UCLr = D4 × R double bar = 3.27 × 0.0013 = 0.004251
D3 = 0 for fewer than 7 trials: LCLr = D3 × R double bar = 0 × 0.0013 = 0

R & R – GRAPHICAL ANALYSIS (MANUAL)

AVERAGE CHARTS

[Average charts for appraisers A and B: the per-part averages (parts 1–10) plotted against Xp double bar, with UCL and LCL.]

Xp double bar = average of Xp bar = 48.060

UCLx = Xp double bar + A2 × R double bar = 48.060 + (1.88 × 0.0013) = 48.0624
LCLx = Xp double bar − A2 × R double bar = 48.060 − (1.88 × 0.0013) = 48.0576
Average Chart
The area within the control limits represents the
measurement sensitivity (“noise”). Since the group of parts
used in the study represents the process variation,
approximately one half or more of the averages should fall
outside the control limits. If the data show this pattern, then
the measurement system should be adequate to detect
part-to-part variation and the measurement system can
provide useful information for analyzing and controlling the
process. If less than half fall outside the control limits then
either the measurement system lacks adequate effective
resolution or the sample does not represent the expected
process variation.
R&R ANALYSIS – NUMERICAL (MANUAL)
Calculate the following and record in the report sheet.

- Repeatability (EV) = R double bar × K1
  where K1 = 0.8862 (2 trials), 0.5908 (3 trials)

- Reproducibility (AV) = √[ (X bar Diff × K2)² − (EV)² / (n·r) ]
  where K2 = 0.7071 (2 appraisers), 0.5231 (3 appraisers); n = number of parts, r = number of trials

- Repeatability & Reproducibility (GRR) = √[ (EV)² + (AV)² ]

- Part-to-part variation (PV) = Rp × K3

  n  : 2      3      4      5      6      7      8      9      10
  K3 : .7071  .5231  .4467  .4030  .3742  .3534  .3375  .3249  .3146

- Total variation (TV) = √[ (GRR)² + (PV)² ]
NUMERICAL ANALYSIS
Calculate the % variation and ndc as follows (see the sketch below):

% EV = 100 (EV / TV)

% AV = 100 (AV / TV)

% GRR = 100 (GRR / TV)

% PV = 100 (PV / TV)

No. of distinct categories (ndc) = 1.41 (PV / GRR)
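The whole numerical analysis can be scripted. A minimal sketch in Python, plugging in the intermediate values from the worked example above (R double bar = 0.0013, X bar Diff = 0.0012, Rp = 0.011; 10 parts, 2 appraisers, 2 trials); the printed figures illustrate the formulas rather than reproduce an official report:

import math

# Intermediate values from the worked example (2 appraisers, 10 parts, 2 trials).
r_double_bar = 0.0013   # average range (R double bar)
x_bar_diff = 0.0012     # difference between appraiser averages (X bar Diff)
rp = 0.011              # part range (row 11)
n, r = 10, 2            # number of parts, number of trials
K1, K2, K3 = 0.8862, 0.7071, 0.3146   # constants: 2 trials, 2 appraisers, 10 parts

ev = r_double_bar * K1                                            # repeatability
av = math.sqrt(max((x_bar_diff * K2)**2 - ev**2 / (n * r), 0.0))  # reproducibility
grr = math.sqrt(ev**2 + av**2)                                    # combined R&R
pv = rp * K3                                                      # part-to-part
tv = math.sqrt(grr**2 + pv**2)                                    # total variation

for name, val in (("EV", ev), ("AV", av), ("GRR", grr), ("PV", pv)):
    print(f"% {name} = {100 * val / tv:.1f}%")
print("ndc =", 1.41 * pv / grr)   # about 3.5, truncated to 3 distinct categories

With these inputs % GRR comes out near 38% and ndc truncates to 3; like the inference quoted on the decision slide below, both point to a measurement system that needs improvement.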


No. OF DISTINCT CATEGORIES (NDC)
AIAG suggests that when the number of categories is less than 2, the measurement system is of no value for controlling the process, since one part cannot be distinguished from another. When the number of categories is 2, the data can be divided into two groups, say high and low. When the number of categories is 3, the data can be divided into 3 groups, say low, middle and high. A value of 5 or more denotes an acceptable measurement system.
NUMERICAL ANALYSIS
Decision making:
- For % R&R:

  Error < 10%: MS is acceptable

  10% < Error < 30%: may be acceptable with justification
  Error > 30%: MS needs improvement

- ndc ≥ 5

Inference: % R&R = 41.2% and ndc = 3.12. Hence the system is not acceptable and the MS needs improvement.
How to estimate process behaviour?

Shape

Location

Spread

Spread - Range:
The difference between the largest and the smallest of a set of numbers.
TV calculation – Different Approaches
Priority order :
1. PV approach
2. Surrogate process variation approach
3. Pp/ Ppk approach
4. Specification tolerance approach
TV calculation – PV Approach

TV = √[ (GRR)² + (PV)² ]

PV is calculated from Rp if the parts represent the entire process variation.
TV calculation – Surrogate Process Variation Approach

TV = Process Variation / 6

PV = √[ (TV)² − (GRR)² ]

Process variation here is the historical process variation from a stable and statistically controlled process.
TV calculation – Pp / Ppk Approach

TV = (USL − LSL) / (6 · Pp)

PV = √[ (TV)² − (GRR)² ]
TV calculation – Tolerance Approach

TV = (USL − LSL) / 6

PV = √[ (TV)² − (GRR)² ]
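A compact sketch of the four approaches in priority order (all numeric inputs here are hypothetical placeholders, only to show where each formula applies):

import math

grr = 0.0014   # hypothetical GRR from a study

def pv_from_tv(tv):
    # When TV comes from outside the study, PV is backed out of it.
    return math.sqrt(max(tv**2 - grr**2, 0.0))

pv = 0.0035                          # 1) PV approach: PV = Rp x K3 from the parts
tv1 = math.sqrt(grr**2 + pv**2)
tv2 = 0.024 / 6                      # 2) surrogate: historical process variation / 6
tv3 = (10.05 - 9.95) / (6 * 1.33)    # 3) Pp approach: (USL - LSL) / (6 Pp)
tv4 = (10.05 - 9.95) / 6             # 4) tolerance approach: (USL - LSL) / 6
print(tv1, pv_from_tv(tv2), pv_from_tv(tv3), pv_from_tv(tv4))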
BIAS

The difference between the observed average of the measurements and the true value (reference value)
- on the same characteristic
- on the same part.

[Diagram: distribution of observed values; the bias is the gap between the observed average and the reference value.]
DETERMINING BIAS
1. Obtain sample and determine reference value
2. Collect data
3. Determine Bias
4. Plot bias histogram
5. Compute Average bias
6. Compute Repeatability Standard deviation
7. Determine acceptability of repeatability
8. Determine t statistic for bias
9. Compute bias Confidence Interval and decision making
Step 1: REFERENCE SAMPLE SELECTION

Key consideration
• Should be sufficiently stable during the study for the characteristic being evaluated

Priority order
- sample piece, else
- production part, else
- similar other component, else
- metrology standard
Step 1: DETERMINING REFERENCE VALUE
• Identify the measurement location
  - to the extent possible, to minimise the effect of within-part variation
• Measure the part n ≥ 10 times
  - in a standard room / tool room
  - with measurement equipment of better accuracy
  - using a standard measurement method
Reference value (x) = average of the measured values
Step 2: DATA COLLECTION
• Under routine measurement conditions, for n ≥ 10 trials:

Trial | True Value | Observed Value
  1   |   6.00     |   5.8
  2   |   6.00     |   5.7
  3   |   6.00     |   5.9
  4   |   6.00     |   5.9
  5   |   6.00     |   6.0
  6   |   6.00     |   6.1
  7   |   6.00     |   6.0
  8   |   6.00     |   6.1
  9   |   6.00     |   6.4
 10   |   6.00     |   6.3
 11   |   6.00     |   6.0
 12   |   6.00     |   6.1
 13   |   6.00     |   6.2
 14   |   6.00     |   5.6
 15   |   6.00     |   6.0
Step 3: DETERMINE BIAS FOR EACH READING

Biasᵢ = xᵢ − reference value

Trial | True Value | Observed Value | Bias
  1   |   6.00     |   5.8          | -0.2
  2   |   6.00     |   5.7          | -0.3
  3   |   6.00     |   5.9          | -0.1
  4   |   6.00     |   5.9          | -0.1
  5   |   6.00     |   6.0          |  0.0
  6   |   6.00     |   6.1          |  0.1
  7   |   6.00     |   6.0          |  0.0
  8   |   6.00     |   6.1          |  0.1
  9   |   6.00     |   6.4          |  0.4
 10   |   6.00     |   6.3          |  0.3
 11   |   6.00     |   6.0          |  0.0
 12   |   6.00     |   6.1          |  0.1
 13   |   6.00     |   6.2          |  0.2
 14   |   6.00     |   5.6          | -0.4
 15   |   6.00     |   6.0          |  0.0
Step 4: GRAPHICAL ANALYSIS
• Plot the bias values as a histogram (bias from -0.4 to 0.4 on the x-axis, frequency on the y-axis).

[Histogram of the 15 bias values.]

Analyse whether any special cause is present.

If yes, identify and remove the cause, recollect the data and re-analyse.
If not, proceed to numerical analysis.
Step 5: COMPUTE AVERAGE BIAS

Average bias = ( Σᵢ₌₁ⁿ biasᵢ ) / n

Using the bias column of the table above:

Sum of bias = 0.1000
Average bias = 0.1000 / 15 = 0.0067
Step 6: COMPUTE REPEATABILITY STANDARD DEVIATION

EV = σr = √[ Σᵢ₌₁ⁿ (xᵢ − x̄)² / (n − 1) ] = √(0.6293 / 14) = 0.2120

Trial | Observed (x) | Bias  | x − x̄   | (x − x̄)²
  1   | 5.8          | -0.20 | -0.2067 | 0.0427
  2   | 5.7          | -0.30 | -0.3067 | 0.0940
  3   | 5.9          | -0.10 | -0.1067 | 0.0114
  4   | 5.9          | -0.10 | -0.1067 | 0.0114
  5   | 6.0          |  0.00 | -0.0067 | 0.0000
  6   | 6.1          |  0.10 |  0.0933 | 0.0087
  7   | 6.0          |  0.00 | -0.0067 | 0.0000
  8   | 6.1          |  0.10 |  0.0933 | 0.0087
  9   | 6.4          |  0.40 |  0.3933 | 0.1547
 10   | 6.3          |  0.30 |  0.2933 | 0.0860
 11   | 6.0          |  0.00 | -0.0067 | 0.0000
 12   | 6.1          |  0.10 |  0.0933 | 0.0087
 13   | 6.2          |  0.20 |  0.1933 | 0.0374
 14   | 5.6          | -0.40 | -0.4067 | 0.1654
 15   | 6.0          |  0.00 | -0.0067 | 0.0000

Sum(x) = 90.1000, x̄ = 6.0067, Σ(x − x̄)² = 0.6293, σr = 0.2120
Step 7: DETERMINE ACCEPTABILITY OF REPEATABILITY

% EV = 100 (EV / TV) = 100 (σr / TV)

where TV = the process standard deviation (here 2.5):

% EV = 100 × (0.2120 / 2.5) = 100 × 0.0848 = 8.48%
Step 8: DETERMINE BIAS STANDARD ERROR

σb = σr / √n = 0.2120 / √15 = 0.0547
Step 9: DETERMINE CONFIDENCE LIMITS

- Lower limit (L) = average bias − t σb
- Upper limit (U) = average bias + t σb

- t is obtained from the t table below (alpha two-tailed = 0.05)
- alpha (preferably 0.05) is a measure of confidence

Sample Size | DF | t
  2  |  1 | 12.71
  3  |  2 | 4.303
  4  |  3 | 3.182
  5  |  4 | 2.776
  6  |  5 | 2.571
  7  |  6 | 2.447
  8  |  7 | 2.365
  9  |  8 | 2.306
 10  |  9 | 2.262
 11  | 10 | 2.228
 12  | 11 | 2.201
 13  | 12 | 2.179
 14  | 13 | 2.160
 15  | 14 | 2.145
 16  | 15 | 2.131
 17  | 16 | 2.120
 18  | 17 | 2.110
 19  | 18 | 2.101
 20  | 19 | 2.093

For n = 15 (DF = 14), t = 2.145:
- Lower limit (L) = 0.0067 − 2.145 × 0.0547 = -0.1106
- Upper limit (U) = 0.0067 + 2.145 × 0.0547 = 0.1240
DECISION MAKING

Bias is acceptable at the 100(1 − α)% confidence level if

L < 0 < U

Inference: L = -0.1106 and U = 0.1240. Zero lies between L and U, hence the bias is acceptable. (The whole computation is sketched below.)
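The nine bias-study steps reduce to a few lines of Python. A sketch using the 15 readings above (scipy is assumed available for the t quantile; with the printed t table, t = 2.145 at 14 degrees of freedom):

import math
from scipy import stats

reference = 6.00
readings = [5.8, 5.7, 5.9, 5.9, 6.0, 6.1, 6.0, 6.1, 6.4, 6.3,
            6.0, 6.1, 6.2, 5.6, 6.0]
n = len(readings)

avg_bias = sum(x - reference for x in readings) / n                  # 0.0067
mean = sum(readings) / n
sigma_r = math.sqrt(sum((x - mean)**2 for x in readings) / (n - 1))  # 0.2120
sigma_b = sigma_r / math.sqrt(n)                                     # 0.0547

t = stats.t.ppf(1 - 0.05 / 2, df=n - 1)                              # 2.145 for df = 14
low, high = avg_bias - t * sigma_b, avg_bias + t * sigma_b
print(f"bias = {avg_bias:.4f}, CI = [{low:.4f}, {high:.4f}]")
# roughly [-0.111, 0.124], matching the worked values above
print("bias acceptable:", low < 0 < high)                            # True: 0 inside CI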
IF BIAS IS STATISTICALLY NON ZERO
• Possible causes can be :-
- Error in master or reference value. Check mastering
procedure.
- Worn instruments. This can show up in stability
analysis and will suggest the maintenance or
refurbishment schedule.
- Instrument made to wrong dimensions
- Instrument measuring wrong characteristics
- Instrument not calibrated properly
- Improper use by operator. Review instrument
instructions.
LINEARITY

- The difference of bias through the expected operating (measurement) range of the equipment.
- That is, the change of bias with respect to size.

[Diagram: bias evaluated at measurement points 1, 2 and 3 across the operating range.]
LINEARITY

[Plot of bias vs. reference value: a line at bias = 0 (no linearity error), a horizontal non-zero line (constant linearity error), and a sloped line (non-linear: bias changes with size).]
LINEARITY STEPS
Determine Process Range

Select Reference Sample

Determine Ref. Value

Calculate Bias

Check Linear Relation

Draw best line

Draw Confidence band

Determine Repeatability Error

Take decision
Example of linearity

Example 1:

Sample No.      | 1     | 2     | 3     | 4     | 5
Reference Value | 2     | 4     | 6     | 8     | 10
Observed Value  | 2.492 | 4.125 | 6.025 | 7.708 | 9.383

Linearity is acceptable if the "bias = 0" line lies entirely within the confidence bands of the fitted line.

Inference: since the zero-bias line does not lie within the confidence bands of the fitted line, the linearity is not acceptable.
Example of linearity

Example 2:

Sample No.      | 1     | 2     | 3     | 4     | 5
Reference Value | 13.9  | 34.2  | 68.70 | 92.30 | 130.9
Observed Value  | 14.06 | 34.14 | 68.77 | 92.18 | 130.9

Inference: since the zero-bias line lies within the confidence bands of the fitted line, the linearity is acceptable.
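A sketch of the linearity check for Example 1 (Python; numpy and scipy are assumed available, and the band used is the standard least-squares confidence band for the fitted line):

import math
import numpy as np
from scipy import stats

# Example 1: bias_i = observed - reference at five points across the range.
ref = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
obs = np.array([2.492, 4.125, 6.025, 7.708, 9.383])
bias = obs - ref

n = len(ref)
slope, intercept = np.polyfit(ref, bias, 1)               # least-squares bias line
fit = slope * ref + intercept
s = math.sqrt(float(np.sum((bias - fit)**2)) / (n - 2))   # residual std. deviation

t = stats.t.ppf(0.975, n - 2)                             # 3.182 for df = 3
sxx = float(np.sum((ref - ref.mean())**2))
half = t * s * np.sqrt(1.0 / n + (ref - ref.mean())**2 / sxx)

# Acceptable only if the "bias = 0" line lies inside the band at every point.
ok = bool(np.all((fit - half <= 0) & (0 <= fit + half)))
print(f"slope = {slope:.4f}, linearity acceptable = {ok}")   # False for Example 1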
STABILITY (DRIFT)
The total variation in the measurements obtained with a measurement system
• on the same master or parts
• when measuring a single characteristic
• over an extended time period.
i.e. stability is the change of bias over time.
DETERMINING STABILITY
• Selection of the reference standard: refer to the bias study.

• Establish the reference value: refer to the bias study.

• Data collection:
  • decide the subgroup size
  • decide the subgroup frequency
  • collect data for 20-25 subgroups
DETERMINING STABILITY
• Analysis
  • Calculate control limits for the X bar–R chart
  • Plot the data on the chart
  • Analyse for any out-of-control situation

• Decision
  The measurement system is stable and acceptable if no out-of-control condition is observed; otherwise it is not stable and needs improvement.
Example - Stability
To determine whether the stability of a new measurement instrument is acceptable, the process team selected a part near the middle of the range of the production process and determined its reference value, 6.01. The part was measured 5 times once per shift (20 subgroups). After all the data were collected, X bar and R charts were developed.
X bar chart for stability

[X bar chart over 20 subgroups: UCL = 6.297, centre line = 6.021, LCL = 5.746.]
R chart for stability

[R chart over 20 subgroups: UCL = 1.010, centre line (R bar) = 0.4779, LCL = 0.]
Control Chart Analysis for Stability
Analysis of the control charts indicates that the
measurement process is stable since there are no obvious
special cause effects visible.
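The chart limits above follow from the standard X bar–R constants for subgroups of 5 (A2 = 0.577, D4 = 2.114, D3 = 0); a quick sketch:

x_double_bar = 6.021   # grand average from the study
r_bar = 0.4779         # average subgroup range
A2, D4, D3 = 0.577, 2.114, 0.0   # Xbar-R constants for subgroup size 5

print("X bar chart:", x_double_bar - A2 * r_bar, x_double_bar + A2 * r_bar)
# -> LCL = 5.745, UCL = 6.297 (matching the chart above)
print("R chart:", D3 * r_bar, D4 * r_bar)
# -> LCL = 0, UCL = 1.010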
MSA - ATTRIBUTE

MEASUREMENT
SYSTEM
ANALYSIS
MSA Study for attribute data

Attribute data include only accept / reject results.

An attribute gauge (plug gauge or snap gauge) cannot indicate how good or bad a part is.

It can only indicate whether the part is accepted or rejected.
Attribute Measurement System Study
Points to be considered before the study:
Number all the parts.
Identify the appraisers from those who operate the gauge.
Give one part to one appraiser in random order (in such a way that the appraiser cannot know the part number).
Then give all the parts to the different appraisers in a different order.
Repeat the steps and record the results.
Attribute Measurement System Study
Select n = 20 to 30 parts (the AIAG manual shows an example with 50 parts):
- approximately 25% close to the lower specification (conforming and non-conforming),
- approximately 25% close to the upper specification (conforming and non-conforming),
- the remainder a mix of conforming and non-conforming.
Note down the correct attribute for each part (true status).
Decide the number of appraisers and the number of trials.
Record the measurement results in the data sheet.
Types of error in Attribute Measurement System
Type 1 errors: a good part is rejected. Type 1 errors increase manufacturing cost. Also called producer's risk or alpha error.

Type 2 errors: a bad part is accepted. This may occur because the inspection equipment cannot detect certain types of failure modes, or because the inspector was poorly trained or rushed through inspection and overlooked a small defect on the part. Type 2 errors put the customer at risk of receiving defective parts. Also called consumer's risk or beta error.
Probability Method

Part | True   | Appraiser A    | Appraiser B
No.  | Status | T1   T2   T3   | T1   T2   T3
  1  |  G     | G    G    G    | G    G    G
  2  |  B     | B    G    B    | G    G    B
  3  |  G     | G    G    G    | G    G    G
  4  |  B     | G    B    B    | G    B    G
  5  |  G     | G    G    G    | G    G    G
  6  |  G     | G    B    G    | G    G    G
  7  |  B     | B    G    B    | B    B    B
  8  |  G     | G    B    G    | G    G    G
  9  |  G     | G    G    G    | G    G    G
 10  |  B     | B    B    G    | G    B    B
 11  |  B     | G    B    B    | B    B    B
 12  |  G     | G    G    G    | G    G    G
 13  |  B     | B    B    B    | B    B    B
 14  |  G     | G    G    G    | G    G    G
 15  |  B     | B    B    B    | B    B    B
 16  |  B     | G    B    B    | B    B    B
 17  |  G     | G    G    G    | G    G    G
 18  |  G     | G    B    G    | G    B    G
 19  |  G     | G    B    B    | B    B    B
 20  |  B     | B    B    B    | B    B    B

(B means Bad, G means Good.)

Counting from the table:
- Number of correct decisions (all trials matching the true status): Appraiser A = 10 parts, Appraiser B = 15 parts.
- Calling a part bad when it is good w.r.t. the standard (total false alarms; Type 1 error, producer's risk): Appraiser A = 5, Appraiser B = 4.
- Calling a part good when it is bad w.r.t. the standard (total misses; Type 2 error, consumer's risk): Appraiser A = 6, Appraiser B = 5.
Probability Method (Appraiser A)

Effectiveness (E) = total correct decisions / total decisions = 10 / 20 = 0.5

Probability of a false alarm (Pfa) = total false alarms / total opportunities for a false alarm = 5 / 33 = 0.1515
(5 is the Type 1 error count (producer's risk): calling bad what is actually good. 33 opportunities = 11 good parts × 3 trials.)

Probability of a miss (Pm) = total misses / total opportunities for a miss = 6 / 27 = 0.222
(6 is the Type 2 error count (consumer's risk): calling good what is actually bad. 27 opportunities = 9 bad parts × 3 trials.)

Parameter | Acceptable | Marginal     | Unacceptable
E         | > 0.90     | 0.80 to 0.90 | < 0.80
Pfa       | < 0.05     | 0.05 to 0.10 | > 0.10
Pm        | < 0.02     | 0.02 to 0.05 | > 0.05
Probability Method (Appraiser B)

Effectiveness (E) = total correct decisions / total decisions = 15 / 20 = 0.75

Probability of a false alarm (Pfa) = 4 / 33 = 0.1212
(4 is the Type 1 error count, producer's risk: calling bad what is actually good.)

Probability of a miss (Pm) = 5 / 27 = 0.185
(5 is the Type 2 error count, consumer's risk: calling good what is actually bad.)

The same acceptance criteria apply:

Parameter | Acceptable | Marginal     | Unacceptable
E         | > 0.90     | 0.80 to 0.90 | < 0.80
Pfa       | < 0.05     | 0.05 to 0.10 | > 0.10
Pm        | < 0.02     | 0.02 to 0.05 | > 0.05
Probability Method (Appraiser A)

Conclusion:

Parameter                        | Specification          | Observed | Result
Effectiveness (E)                | unacceptable if < 0.80 | 0.500    | Unacceptable (less than 0.80)
Probability of false alarm (Pfa) | unacceptable if > 0.10 | 0.1515   | Unacceptable (more than 0.10)
P miss (Pm)                      | unacceptable if > 0.05 | 0.222    | Unacceptable (more than 0.05)
Probability Method (Appraiser B)

Conclusion (the calculations for both appraisers are sketched below):

Parameter                        | Specification          | Observed | Result
Effectiveness (E)                | unacceptable if < 0.80 | 0.750    | Unacceptable (less than 0.80)
Probability of false alarm (Pfa) | unacceptable if > 0.10 | 0.1212   | Unacceptable (more than 0.10)
P miss (Pm)                      | unacceptable if > 0.05 | 0.185    | Unacceptable (more than 0.05)
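The three ratios are simple to script. A minimal sketch (the helper name attribute_scores is ours), using the counts tallied from the data table; the denominators follow from 11 good and 9 bad parts, each read 3 times per appraiser:

def attribute_scores(correct, parts, false_alarms, good_readings,
                     misses, bad_readings):
    # Effectiveness, P(false alarm) and P(miss) for one appraiser.
    return (correct / parts,               # E
            false_alarms / good_readings,  # Pfa (Type 1, producer's risk)
            misses / bad_readings)         # Pm (Type 2, consumer's risk)

# Appraiser A: 10 correct of 20 parts; 5 false alarms in 33 good-part
# readings (11 parts x 3 trials); 6 misses in 27 bad-part readings.
print(attribute_scores(10, 20, 5, 33, 6, 27))   # (0.5, 0.1515..., 0.2222...)
# Appraiser B: 15 correct; 4 false alarms; 5 misses.
print(attribute_scores(15, 20, 4, 33, 5, 27))   # (0.75, 0.1212..., 0.1851...)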
Kappa Method (Between Appraiser A and B)

Part | True   | Appraiser A    | Appraiser B
No.  | Status | T1   T2   T3   | T1   T2   T3
  1  |  B     | B    B    B    | B    B    B
  2  |  G     | G    B    G    | B    B    G
  3  |  B     | B    B    B    | B    B    B
  4  |  G     | B    G    G    | B    G    B
  5  |  B     | B    B    B    | B    B    B
  6  |  B     | B    G    B    | B    B    B
  7  |  G     | G    B    G    | G    G    G
  8  |  B     | B    G    B    | B    B    B
  9  |  B     | B    B    B    | B    B    B
 10  |  G     | G    G    B    | B    G    G
 11  |  G     | B    G    G    | G    G    G
 12  |  B     | B    B    B    | B    B    B
 13  |  G     | G    G    G    | G    G    G
 14  |  B     | B    B    B    | B    B    B
 15  |  G     | G    G    G    | G    G    G
 16  |  G     | B    G    G    | G    G    G
 17  |  B     | B    B    B    | B    B    B
 18  |  B     | B    G    B    | B    G    B
 19  |  B     | B    G    G    | G    G    G
 20  |  G     | G    G    G    | G    G    G

(B means Bad, G means Good.)

Counting trial-by-trial agreement between A and B:
- Both say Bad: 11 times in trial 1, 8 times in trial 2 and 10 times in trial 3; total agreement = 29.
- Both say Good: 4 times in trial 1, 9 times in trial 2 and 8 times in trial 3; total agreement = 21.
- Appraiser B declared Bad while Appraiser A declared Good: 5 counts.
- Appraiser B declared Good while Appraiser A declared Bad: 5 counts.
Kappa Method (Between Appraiser A and B)
A × B Cross Tabulation (counts, with expected counts in brackets):

                  | B Appraiser: Bad | B Appraiser: Good | Total
A Appraiser: Bad  | 29 (19.3)        | 5 (14.7)          | 34
A Appraiser: Good | 5 (14.7)         | 21 (11.3)         | 26
Total             | 34               | 26                | 60

Expected count = (row total × column total) / grand total
e.g. (34 × 34) / 60 = 19.3
Expected counts are calculated for all cells.
Kappa Method (Between Appraiser A and B)
Calculate Kappa (A × B cross tabulation):
Po = sum of observed proportions in the diagonal cells
   = (29 + 21) / 60 = 50 / 60
Pe = sum of expected proportions in the diagonal cells
   = (19.3 + 11.3) / 60 = 30.6 / 60
Kappa = (Po − Pe) / (1 − Pe)
      = ((50/60) − (30.6/60)) / (1 − 30.6/60)
      = 0.659

Kappa | A    | B
A     | -    | .659
B     | .659 | -

Kappa greater than 0.75: good agreement. Less than 0.40: poor agreement.
Kappa Method (Between Appraiser A and B)

Inference between the two appraisers A and B:

As per the standard, kappa greater than 0.75 indicates good agreement. The observed kappa between appraisers A and B is 0.659, which is close to that threshold, meaning there is not much variation between the appraisers: good agreement (acceptable).
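Kappa from a 2 × 2 cross tabulation is mechanical; a minimal sketch (the helper name kappa_2x2 is ours) that also covers the two appraiser-vs-reference tables in the sections that follow:

def kappa_2x2(bb, bg, gb, gg):
    # bb: both rate Bad, gg: both rate Good, bg / gb: the disagreement cells.
    total = bb + bg + gb + gg
    po = (bb + gg) / total                                            # observed
    pe = ((bb + bg) * (bb + gb) + (gb + gg) * (bg + gg)) / total**2   # expected
    return (po - pe) / (1 - pe)

print(round(kappa_2x2(29, 5, 5, 21), 3))  # A vs B: 0.661 (the slide's 0.659
                                          # rounds the expected counts first)
print(round(kappa_2x2(28, 6, 5, 21), 3))  # A vs true status: 0.628
print(round(kappa_2x2(29, 5, 4, 22), 3))  # B vs true status: 0.696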
Kappa Method (Between True Status and Appraiser A)

Using the same data table as above, count the agreement between appraiser A's readings and the true status:
- True status Bad and Appraiser A declared Bad: 28 counts.
- True status Good and Appraiser A declared Good: 21 counts.
- True status Good but Appraiser A declared Bad: 6 counts.
- True status Bad but Appraiser A declared Good: 5 counts.
Kappa Method (Between True Status and Appraiser A)
A × True Status Cross Tabulation (counts, with expected counts in brackets):

                  | True Status: Bad | True Status: Good | Total
A Appraiser: Bad  | 28 (18.7)        | 6 (15.3)          | 34
A Appraiser: Good | 5 (14.3)         | 21 (11.7)         | 26
Total             | 33               | 27                | 60

Expected count = (row total × column total) / grand total
e.g. (33 × 34) / 60 = 18.7
Expected counts are calculated for all cells.
Kappa Method (Between True Status and Appraiser A)

Calculate Kappa (A × true status cross tabulation):

Po = sum of observed proportions in the diagonal cells
   = (28 + 21) / 60 = 49 / 60
Pe = sum of expected proportions in the diagonal cells
   = (18.7 + 11.7) / 60 = 30.4 / 60
Kappa = (Po − Pe) / (1 − Pe)
      = ((49/60) − (30.4/60)) / (1 − 30.4/60)
      = 0.628

Kappa |  A
Ref.  | .628

Kappa greater than 0.75: good agreement. Less than 0.40: poor agreement.
Kappa Method (Between True Status and Appraiser A)

Inference between appraiser A and the reference:

As per the standard, kappa greater than 0.75 indicates good agreement. The observed kappa between appraiser A and the reference (true status) is 0.628, which is close to that threshold, meaning there is not much variation between appraiser A and the reference: good agreement, acceptable.
Kappa Method (Between True Status and Appraiser B)

Again using the same data table, count the agreement between appraiser B's readings and the true status:
- True status Bad and Appraiser B declared Bad: 29 counts.
- True status Good and Appraiser B declared Good: 22 counts.
- True status Good but Appraiser B declared Bad: 5 counts.
- True status Bad but Appraiser B declared Good: 4 counts.
Attribute Measurement System Study (Kappa Method)
B × True Status Cross Tabulation (counts, with expected counts in brackets):

                  | True Status: Bad           | True Status: Good          | Total
B Appraiser: Bad  | 29 (18.7) correct decision | 5 (15.3) Type 1 error,     | 34
                  |                            | producer's risk            |
B Appraiser: Good | 4 (14.3) Type 2 error,     | 22 (11.7) correct decision | 26
                  | consumer's risk            |                            |
Total             | 33                         | 27                         | 60

Expected count = (row total × column total) / grand total
e.g. (33 × 34) / 60 = 18.7
Expected counts are calculated for all cells.
Attribute Measurement System Study (Kappa Method) Between True Status & Appraiser B
Calculate Kappa (B × true status cross tabulation):
Po = sum of observed proportions in the diagonal cells
   = (29 + 22) / 60 = 51 / 60
Pe = sum of expected proportions in the diagonal cells
   = (18.7 + 11.7) / 60 = 30.4 / 60
Kappa = (Po − Pe) / (1 − Pe)
      = ((51/60) − (30.4/60)) / (1 − 30.4/60)
      = 0.696

Kappa |  B
Ref.  | .696

Kappa greater than 0.75: good agreement. Less than 0.40: poor agreement.
Attribute Measurement System Study (By Kappa Method) Between True Status & Appraiser B

Inference between the true status and appraiser B:

As per the standard, kappa greater than 0.75 indicates good agreement. The observed kappa between appraiser B and the reference (true status) is 0.696, which is close to that threshold, meaning there is not much variation between appraiser B and the reference: good agreement (acceptable).
Definitions
True Value:
The actual value of an artifact. Unknown and unknowable.
Reference Value:
The accepted value of an artifact, used as a surrogate for the true value.
Uncertainty:
An estimated range of values about the measured value in which the true value is believed to be contained.
Definitions
Gage:
A gage is any device used to obtain measurements; the term is frequently used to refer specifically to the devices used on the shop floor, and includes Go / No-Go devices.
Discrimination:
The ability of the system to detect and indicate even small changes of the measured characteristic; also known as resolution. A measurement system is unacceptable for analysis if it cannot detect process variation.
Definitions
Measurement:
Assignment of numbers (values) to material things to represent the relationships among them w.r.t. particular properties.
Calibration:
A set of operations that establish, under specified conditions, the relationship between a measuring device and a traceable standard of known reference value and uncertainty.
Definitions
Validation :
Validation is confirmation, through the provision of
objective evidence, that the requirements for a
specific intended use or application have been
fulfilled.
THANK YOU
