Towards measuring the ``SPC implementation/practice'' construct
Some evidence of measurement quality

Manus Rungtusanatham, John C. Anderson and Kevin J. Dooley

Received August 1997
Revised June 1998

The authors wish to thank the Quality Leadership Center at the University of Minnesota for providing partial funding to support this research.
Introduction
Implemented in such diverse industries as the automobile suppliers industry
(e.g. Dale and Shaw, 1989), the chain-saw industry (e.g. Chen, 1991), the plastic
molding industry (e.g. Sower, 1993), and even the software development
industry (e.g. Gardiner and Montgomery, 1987), statistical process control
(SPC) has become one of the most popular and widespread organizational
interventions in the name of quality improvement (Chen, 1991; General
Accounting Office, 1991; Lascelles and Dale, 1988; Modarress and Ansari,
1989). SPC's industrial prominence can be explained, in part, by the quality and
cost benefits that have been ascribed in the literature to this quality improvement
intervention (e.g. Bounds, 1988; Bushe, 1988; Dondero, 1991; Rucinski, 1991).
This literature, however, is primarily anecdotal in nature, with an over-reliance
on case studies (Gordon et al., 1994). As a result, despite numerous claims of
SPC's quality and cost benefits, a scientific body of knowledge to justify and
rationalize these benefits has not emerged in the literature.
Three reasons may explain why scientific knowledge on the quality and
cost benefits of SPC has not been forthcoming. One reason pertains to the
nature and focus of prior scientific research on SPC. Because of SPC's historical
intellectual roots in statistics, much of the scientific research on SPC has been
conducted by statisticians and operations researchers, whose attention has
been devoted to the statistical and mathematical issues in the various
techniques embodied by SPC (e.g. control charting). While this stream of
research has augmented the methodological side of SPC, it has had, according
to Kolesar (1993), very little relevance in advancing scientific and pragmatic
understanding of how and why SPC should be implemented within
organizations.
A second reason to explain the paucity of scientific knowledge on the
organizational effects of SPC stems from inadequate efforts to conceptualize
what it means when an organization claims to have implemented and is
subsequently practicing SPC. Academic and practitioner writings on the effects
of SPC, for example, have generally adopted rather narrow nominal and
operational definitions for the organizational implementation and practice of
SPC (Gordon et al., 1994). Many studies, for instance, simply equated the
provision of training in SPC techniques (e.g. control charts) with actual
implementation and practice (e.g. Depew, 1987; Keefer, 1986). Still others have
treated the implementation and practice of SPC as an ``all or nothing''
phenomenon (e.g. Harmon, 1984; Niedermeier, 1990). This failure to provide a
rich and exhaustive conceptualization of what organizations mean by their
claims that they have implemented and are practicing SPC has incontrovertibly
hindered the further development of theories, theories that would not only
describe, explain, and predict the antecedents and consequences of SPC but
also shed practical insights into how to make the implementation and
practice of SPC an effective and viable element of an organization's quality
improvement efforts.
Yet a third reason, closely related to the second and perhaps the most
compelling issue, may be the inability to measure the phenomenon itself (i.e. the
implementation and practice of SPC within organizations). Since operational
definitions and subsequent measurements necessarily derive from theory-building
efforts that provide nominal definitions and testable theoretical
propositions, this observation should hardly come as a surprise. Conversely,
the ability to measure some phenomenon in a reliable and valid manner can
also spur theory development, especially since the process of scientific inquiry
is, in and of itself, an iterative process reciprocating between theory building
and theory testing, iterations that involve the operationalization and
measurement of the phenomenon being studied (see Wallace, 1971).
Therefore, it can be argued that in order to allow for scientific testing of the ...

(3) ... Operational definition: The extent to which the process operator perceives that steps have been taken to identify and control customer-related quality characteristic(s) associated with a particular process.

(4) Technological sophistication and soundness of measurement devices. Nominal definition: The technological sophistication and soundness of measurement devices used to collect data from the process. Operational definition: The extent to which the process operator perceives that devices used to take measurements of process/product characteristics on a particular process are technologically advanced and sound.

(5) Operator responsibility for process control via control charts. Nominal definition: Actions performed by front-line operators to ensure the correct application of control charts for monitoring process performance. Operational definition: The extent to which the process operator perceives that he/she is responsible for applying control chart methodology to the control of his/her process.

(6) Verification of control charting assumptions. Nominal definition: Actions taken before using control charts for monitoring process performance to ensure that the assumptions salient to the use of control charts are reasonably satisfied. Operational definition: The extent to which the process operator perceives that assumptions salient to the use of control charts have been verified in the process of setting up control charts for process control.

(7) Usage of control chart information for continuous improvement. Nominal definition: Actions pertaining to the interpretation and application of control chart information for purposes of process control and improvement. Operational definition: The extent to which the process operator perceives that information from control charts is being used to guide actions of process control and improvement.

(8) Sampling strategies for control charting. Nominal definition: Actions pertaining to how data about the process are collected from the process itself. Operational definition: The extent to which the process operator perceives that there is rationale behind how measurements of critical process/product characteristics are made.

(9) Training in statistical and cognitive methods for process control and improvement. Nominal definition: The availability and frequency of training courses and materials in methodologies aimed at monitoring and improving process performance. Operational definition: The extent to which process operators are provided with training courses and materials in methodologies aimed at monitoring and improving process performance, as perceived by the process operator.

(10) Technical support for SPC implementation and practice. Nominal definition: The availability and accessibility of knowledgeable technical staff experts to support actions in the implementation and practice of SPC. ...

(11) ...
(12) ...
(13) ...
(14) ...
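Although items (11)-(14) are not reproduced here, the recurring nominal/operational pattern suggests a natural representation of the instrument in software. The sketch below is purely illustrative: the class is hypothetical, the first item wording is quoted from the Figure 1 example further below, and the second item is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class MeasurementScale:
    dimension: str   # label of the SPC implementation/practice dimension
    items: list      # statements rated ``very inaccurate'' (1) to ``very accurate'' (7)

scale4 = MeasurementScale(
    dimension="Technological sophistication and soundness of measurement devices",
    items=[
        "Measurements of critical process/product are automated",         # from Figure 1
        "Devices used to measure this process are regularly calibrated",  # hypothetical
    ],
)
print(scale4.dimension, "-", len(scale4.items), "items")
```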
Figure 1. Example of face validity tasks (task A: to which dimension is the statement most relevant?; task B: how adequate is the statement as a measure of that dimension, rated 1 to 7?)
This example shows that the rater deems the statement ``Measurements of
critical process/product are automated'' to be most relevant to Dimension 4.
The rater also believes that the statement, together with its response categories
ranging from ``very inaccurate'' to ``very accurate'', is an almost perfect
measure of Dimension 4, so this measurement item receives a 7 (almost
perfect).
... items. These experts were instructed to use the operational definitions to guide
them in classifying the measurement items into no more than one dimension.
The sorting results from task A were then used to compute Cohen's (1960) κ, an
index of beyond-chance agreement among different ``judges'' for the overall
task; that is, what is the overall degree of inter-expert agreement as to the
placement of the measurement items, after chance agreement has been
removed?
We then asked the subject-matter experts to evaluate, in task B, how
adequately each measurement item measures the dimension to which it has
been assigned. For each measurement item, the experts were asked to respond
to a seven-point scale with ``1'' anchored as ``barely adequate'' and ``7'' anchored
as ``almost perfect''. This approach has been successfully employed by
McCullough (1988) to operationalize Deming's (1986) 14-point quality
management philosophy. From the experts' input for task B, we computed and
evaluated the average adequacy score and the standard deviation of adequacy
scores for each individual measurement item.
Face validity results. For task A, Cohen's κ was computed to be 0.64. The
standard deviation for Cohen's κ, sκ, was 0.06, yielding a 95 percent confidence
interval for κ of [0.52, 0.76]. Also, a two-tailed test of the statistical
significance of κ concluded that the observed inter-expert agreement as to the
sorting of the measurement items did not occur by chance.
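As an aside for replication, the sketch below (Python; not from the original study) computes Cohen's κ and the large-sample 95 percent confidence interval κ ± 1.96 sκ used above, for the simplest two-rater case; the rater vectors are hypothetical.

```python
import numpy as np

def cohen_kappa(rater1, rater2):
    """Cohen's (1960) kappa: inter-rater agreement beyond chance."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    n = len(r1)
    categories = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)                     # observed agreement
    # Chance agreement: product of the raters' marginal proportions,
    # summed over the candidate dimensions.
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    kappa = (p_o - p_e) / (1.0 - p_e)
    # Approximate large-sample standard error of kappa.
    s_kappa = np.sqrt(p_o * (1.0 - p_o) / (n * (1.0 - p_e) ** 2))
    return kappa, (kappa - 1.96 * s_kappa, kappa + 1.96 * s_kappa)

# Hypothetical sortings of six measurement items into dimensions by two experts:
kappa, ci = cohen_kappa([1, 2, 2, 3, 4, 4], [1, 2, 3, 3, 4, 4])
print(f"kappa = {kappa:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```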
The results for task B are tabulated in Table I. Using arbitrary but
conservative cut-off values, 49 of the 66 measurement items were deemed to
have face validity, with high average adequacy scores (> 3.00) and, at the same
time, small standard deviations of adequacy scores (≤ 1.00). These 49
measurement items were then subjected to a subsequent internal consistency
reliability assessment.
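The cut-off rule itself is easy to mechanize. A minimal sketch, with hypothetical adequacy ratings standing in for the experts' actual responses:

```python
import numpy as np

ratings = {                         # hypothetical 1-7 adequacy ratings per item
    "item_A": [6, 7, 7, 6, 7],      # high average, low spread -> retained
    "item_B": [6, 2, 1, 6, 5, 2],   # standard deviation above 1.00 -> dropped
    "item_C": [1, 1, 2, 1, 1],      # average adequacy below 3.00 -> dropped
}

retained = {
    item: (round(np.mean(r), 2), round(np.std(r, ddof=1), 2))
    for item, r in ratings.items()
    # Keep items with average adequacy > 3.00 and sample SD <= 1.00.
    if np.mean(r) > 3.00 and np.std(r, ddof=1) <= 1.00
}
print(retained)                     # only item_A survives the cut-offs
```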
Step 2: Internal consistency reliability assessment
Internal consistency reliability. Reliability refers to the:
... extent to which an experiment, test, or any measuring procedure yields the same results on
repeated trials ... The more consistent the results given by repeated measurements, the higher
the reliability of the measuring procedure ... (Carmines and Zeller, 1979, pp. 11-12).
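The coefficient used for this purpose in step 2 is Cronbach's α, along with the ``α with item deleted'' diagnostic reported in Table II. A minimal sketch of both computations, assuming a respondents-by-items matrix (the data are hypothetical):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]                            # number of items in the scale
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def alpha_if_deleted(X):
    """Alpha recomputed with each item removed in turn (cf. Table II)."""
    X = np.asarray(X, dtype=float)
    return [cronbach_alpha(np.delete(X, j, axis=1)) for j in range(X.shape[1])]

# Hypothetical responses of five process operators to a three-item scale:
X = [[4, 5, 4], [2, 3, 3], [5, 5, 6], [3, 4, 3], [1, 2, 2]]
print(round(cronbach_alpha(X), 2), [round(a, 2) for a in alpha_if_deleted(X)])
```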
Table I. Face validity results

                 Proposed        Average      Sample
                 measurement     adequacy     standard
                 item            score        deviation

Dimension 1      1               6.71         0.49
                 2*              4.00         2.00
                 3*              3.86         2.19
                 4               6.33         0.82
                 5               5.29         0.95
Dimension 2      1               6.00         0.58
                 2               5.83         0.75
                 3*              2.75         2.36
                 4               6.14         0.90
Dimension 3      1               6.00         0.89
                 2               6.20         0.84
                 3               6.14         0.69
                 4               6.60         0.55
                 5*              4.43         1.90
Dimension 4      1               5.42         0.98
                 2               4.71         0.76
                 3*              4.00         1.83
                 4               5.83         0.75
                 5               6.17         0.98
                 6*              3.86         1.86
Dimension 5      1               6.40         0.55
                 2               5.86         0.90
                 3               6.00         0.58
                 4*              4.80         1.48
                 5               6.86         0.38
Dimension 6      1               4.83         0.98
                 2               6.00         0.58
                 3*              4.83         2.04
                 4               5.80         0.45
Dimension 7      1               4.67         0.82
                 2               6.33         0.81
                 3               5.67         0.82
                 4*              3.33         2.31
                 5*              4.40         1.82
                 6*              3.60         2.41
Dimension 8      1               5.71         0.95
                 2               5.50         0.84
                 3               5.86         0.69
                 4               6.20         0.84
Dimension 9      1               6.29         0.76
                 2               6.00         0.63
                 3               5.43         0.98
                 4*              4.00         2.08
                 5*              4.29         2.14
Dimension 10     1               6.29         0.76
                 2               5.43         0.79
                 3               5.57         0.98
                 4               6.43         0.79
Dimension 11     1               6.00         0.82
                 2               3.86         0.90
                 3               5.14         0.69
                 4               4.14         0.90
Dimension 12     1               4.83         0.98
                 2               6.14         0.69
                 3               6.43         0.53
                 4               5.57         0.79
Dimension 13     1               6.00         0.89
                 2*              1.20         0.45
                 3               5.50         0.55
                 4               4.83         0.75
                 5*              4.20         1.79
                 6*              3.80         2.17
Dimension 14     1               6.00         0.58
                 2               6.57         0.53
                 3*              4.71         1.50
                 4               6.14         0.90

Note: For average adequacy scores, 1 = barely adequate; 2 = adequate; 3 = more than adequate; 4 = good; 5 = very good; 6 = excellent; 7 = almost perfect; * = to be deleted
Table II. Internal consistency reliability results

                              Research site 1 (n = 55)        Research site 2 (n = 49)
Proposed        Proposed      Cronbach's   Cronbach's α       Cronbach's   Cronbach's α
measurement     measurement   α            with item          α            with item
scale for       scale/item                 deleted                         deleted

Dimension 1     Scale         0.57                            0.60
                1                          0.12                            0.43
                4*                         0.68                            0.62
                5                          0.50                            0.41
Dimension 2(a)  Scale         0.51                            0.37
                1                          0.51                            0.26
                2                          0.51                            0.55
                4                          0.16                            0.20
Dimension 3     Scale         0.71                            0.72
                1                          0.72                            0.56
                2                          0.61                            0.69
                3                          0.65                            0.72
                4                          0.62                            0.60
Dimension 4     Scale         0.42                            0.57
                1                          0.26                            0.53
                2                          0.11                            0.39
                4                          0.05                            0.32
                5*                         0.48                            0.72
Dimension 5     Scale         0.80                            0.75
                1                          0.70                            0.69
                2                          0.74                            0.78
                3                          0.75                            0.62
                5                          0.83                            0.66
Dimension 6     Scale         0.73                            0.73
                1                          0.58                            0.61
                2                          0.61                            0.76
                4                          0.72                            0.51
Dimension 7(b)  Scale         0.78                            0.43
                1                          0.79                            0.38
                2                          0.68                            0.16
                3                          0.62                            0.58
Dimension 8     Scale         0.71                            0.64
                1                          0.67                            0.49
                2                          0.62                            0.49
                3                          0.66                            0.58
                4                          0.62                            0.69
Dimension 9(b)  Scale         0.65                            0.55
                1                          0.58                            0.31
                2                          0.60                            0.69
                3                          0.46                            0.26
Dimension 10    Scale         0.66                            0.74
                1                          0.51                            0.66
                2                          0.69                            0.76
                3                          0.62                            0.61
                4                          0.50                            0.69
Dimension 11    Scale         0.83                            0.64
                1                          0.79                            0.47
                2                          0.74                            0.41
                3                          0.81                            0.76
                4                          0.79                            0.57
Dimension 12    Scale         0.83                            0.73
                1                          0.79                            0.65
                2                          0.73                            0.54
                3                          0.82                            0.67
                4                          0.78                            0.78
Dimension 13    Scale         0.68                            0.74
                1                          0.54                            0.80
                3                          0.58                            0.61
                4                          0.63                            0.55
Dimension 14    Scale         0.85                            0.71
                1                          0.81                            0.71
                2                          0.81                            0.43
                4                          0.76                            0.68

Notes: * Deleted in order to improve Cronbach's α; (a) since the measurement scale for dimension 2 was consistently not reliable across both samples, with no improvements attainable from deleting one or more measurement items from this three-item measurement scale, only one measurement item was retained for dimension 2 (item 4). Item 4 was chosen because it best reflects the nominal definition for this SPC implementation/practice dimension. Retaining one measurement item for dimension 2 is consistent with DeVellis' (1991, p. 10) argument that ``... imperfect measurement may be better than no measurement at all ...'' The disadvantage with this approach is that the reliability (and dimensionality) of this single-item measurement scale cannot (and need not) be assessed; (b) the measurement scales for dimensions 7 and 9 were reliable per results of one sample but not the other. Since the conflicting results may be due to differences between the two samples, we decided to subject the measurement items within these two measurement scales to a dimensionality assessment in step 3
Step 3: Dimensionality assessment
... the factor loadings on the first extracted principal components factor should be
greater than the rule-of-thumb minimum of 0.30 for samples with less than 100
observations (Hair et al., 1979, p. 236). Also, the eigenvalue for the first
extracted principal components factor (λ1) should generally explain more than
50 percent of the variance among the measurement items, while the eigenvalues
(i.e. λ2, ..., λn) for subsequent extracted principal components factors should be
less than 1.00.
Of the 14 measurement scales, only 13 were subjected to this dimensionality
assessment, since dimension 2's measurement scale contains only a single
measurement item. Again, the analyses were conducted separately for each
research site. Also, while it would be ideal to factor analyze all measurement
items across the 14 measurement scales, this strategy was not empirically
feasible, given the small samples from each research site. Typically, a ratio of
four to ten times as many observations as there are measured variables to be
factor analyzed is recommended, although a ratio of twice the observations to
the number of measured variables is common (see Hair et al., 1979, p. 219;
Schwab, 1980, p. 19; Tinsley and Tinsley, 1987, p. 415). An alternative strategy
was, therefore, adopted, whereby principal components factor analysis was
employed to evaluate the dimensionality of relevant measurement scales on a
``measurement scale'' by ``measurement scale'' basis. For each measurement
scale, if the number of extracted factors exceeds one, then the measurement
scale would not be uni-dimensional and, hence, would not possess high
construct validity. The results of the ``measurement scale'' by ``measurement
scale'' dimensionality assessment are shown in Table III.
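For concreteness, the following sketch applies these criteria to one measurement scale at a time, assuming standardized items (so that the principal components are extracted from the item correlation matrix); the data are simulated, not the study's:

```python
import numpy as np

def unidimensionality_check(X):
    """Eigenvalues of the item correlation matrix and first-factor loadings."""
    X = np.asarray(X, dtype=float)
    R = np.corrcoef(X, rowvar=False)         # item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)     # eigh returns ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])   # first-factor loadings
    return {
        "lambda1": round(eigvals[0], 2),
        "share_of_variance": round(eigvals[0] / len(eigvals), 2),
        "lambda2_below_1": bool(eigvals[1] < 1.0),           # second-factor criterion
        "all_loadings_above_0.30": bool(np.all(np.abs(loadings) > 0.30)),
    }

rng = np.random.default_rng(0)
common = rng.normal(size=(55, 1))                 # one shared factor
X = common + 0.6 * rng.normal(size=(55, 4))       # four noisy indicators of it
print(unidimensionality_check(X))
```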
Discussion of results
The advancement of scientific knowledge requires a scientific foundation, one
that is built upon the rigor and discipline of continual scientific endeavors that
either:
(1) disconfirm established theories; or
(2) propose and test new or alternative theories that are credible and,
equally important, available for replicable studies.
Such scientific endeavors, however, require the ability to measure the
phenomenon of interest in a reliable and valid manner. With respect to the
quality and cost benefits of implementing and practicing SPC, we have not
had the luxury of being able to operationalize and measure what organizations
mean when they claim to have implemented and to be practicing SPC. It is,
therefore, not surprising to observe that scientific knowledge on this topic has
not been cumulative.
The significance of developing a measurement instrument to evaluate an
organization's efforts at implementing and practicing SPC cannot be overstated
from the standpoint of science. Such a measurement instrument would benefit
exploratory or confirmatory investigations into the antecedents and
consequences of SPC implementation/practice. At the same time, research
examining the relationship among the 14 dimensions can also be conducted in
order to derive an aggregate index for the phenomenon. Following Brie et al. (1976),
such research would investigate different approaches for combining the scores on
the individual measurement scales into an aggregate index, as well as the
substantive theory-testing implications of the different combinatorial
approaches.
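By way of illustration only, since the combination rule is exactly what such research would need to justify, a naive equal-weighting scheme might look like the following sketch (function names and data are hypothetical):

```python
import numpy as np

def dimension_scores(item_responses):
    """Mean 1-7 score per dimension from a respondent's item responses."""
    return {dim: float(np.mean(items)) for dim, items in item_responses.items()}

def aggregate_index(scores):
    """One candidate combination rule: an unweighted average of dimension scores."""
    return float(np.mean(list(scores.values())))

# Hypothetical responses of one process operator, grouped by dimension:
scores = dimension_scores({
    "Dimension 1": [5, 4, 4],
    "Dimension 2": [4],
    "Dimension 6": [5, 5, 4],
})
print(scores, round(aggregate_index(scores), 2))
```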
From a pragmatic perspective, organizations have long felt the need for an
assessment vehicle that can be employed to help improve their efforts at
implementing and practicing SPC (see Rungtusanatham et al., 1997, p. 133). In
the particular case of SPC, Kolesar (1993, p. 332) has argued that:
... Although the emphasis on designing quality into products and processes in the first place
will continue, there will be a continuing need for more effective SPC over the foreseeable future
because the reality is that all quality problems and variation cannot be designed away.

Table III. Principal components factor analytical results

                                    Research site 1 (n = 55)                            Research site 2 (n = 49)
Proposed          Proposed          Eigenvalue (λ1)   Unrotated   Is the eigenvalue    Eigenvalue (λ1)   Unrotated   Is the eigenvalue
measurement       measurement       and variance      factor      of the second        and variance      factor      of the second
scale for         scale/item        explained by      loadings    factor (λ2) < 1.00?  explained by      loadings    factor (λ2) < 1.00?
                                    first factor                                       first factor

Dimension 1       Scale             1.54 (77%)                    Yes                  1.46 (73%)                    Yes
                  1                                   0.88                                               0.85
                  5                                   0.88                                               0.85
Dimension 2       Scale             Not assessed      Not         Not assessed         Not assessed      Not         Not assessed
                  4                                   assessed                                           assessed
Dimension 3       Scale             2.20 (55%)                    Yes                  2.17 (54%)                    Yes
                  1                                   0.63                                               0.84
                  2                                   0.80                                               0.68
                  3                                   0.75                                               0.60
                  4                                   0.77                                               0.80
Dimension 4       Scale             1.48 (49%)                    Yes                  1.97 (66%)                    Yes
                  1                                   0.52                                               0.78
                  2                                   0.79                                               0.86
                  4                                   0.76                                               0.79
Dimension 5       Scale             2.60 (65%)                    Yes                  2.33 (58%)                    Yes
                  1                                   0.90                                               0.78
                  2                                   0.84                                               0.58
                  3                                   0.80                                               0.85
                  5                                   0.67                                               0.82
Dimension 6       Scale             1.93 (64%)                    Yes                  1.97 (66%)                    Yes
                  1                                   0.84                                               0.84
                  2                                   0.82                                               0.70
                  4                                   0.75                                               0.88
Dimension 7       Scale             2.09 (70%)                    Yes                  1.44 (48%)                    No: λ2 = 1.07 (36%)
                  1                                   0.77                                               0.75
                  2                                   0.85                                               0.88
                  3                                   0.88                                               0.35(a)
Dimension 7       Scale             1.45 (72%)                    Yes                  1.41 (71%)                    Yes
(without item 3)  1                                   0.85                                               0.84
                  2                                   0.85                                               0.84
Dimension 8       Scale             2.14 (53%)                    Yes                  1.94 (48%)                    No: λ2 = 1.07 (27%)
                  1                                   0.69                                               0.45(b)
                  2                                   0.77                                               0.77
                  3                                   0.70                                               0.72
                  4                                   0.77                                               0.79
Dimension 8       Scale             1.82 (61%)                    Yes                  1.84 (62%)                    Yes
(without item 1)  2                                   0.82                                               0.72
                  3                                   0.74                                               0.81
                  4                                   0.78                                               0.81
Dimension 9       Scale             1.77 (59%)                    Yes                  1.63 (54%)                    Yes
                  1                                   0.75                                               0.82
                  2                                   0.74                                               0.48
                  3                                   0.82                                               0.85
Dimension 10      Scale             2.03 (50%)                    Yes                  2.28 (57%)                    Yes
                  1                                   0.78                                               0.80
                  2                                   0.51                                               0.61
                  3                                   0.69                                               0.84
                  4                                   0.83                                               0.74
Dimension 11      Scale             2.66 (66%)                    Yes                  2.07 (52%)                    Yes
                  1                                   0.80                                               0.83
                  2                                   0.87                                               0.87
                  3                                   0.77                                               0.28(c)
                  4                                   0.82                                               0.74
Dimension 11      Scale             2.19 (73%)                    Yes                  2.03 (68%)                    Yes
(without item 3)  1                                   0.85                                               0.83
                  2                                   0.88                                               0.87
                  4                                   0.83                                               0.75
Dimension 12      Scale             2.65 (66%)                    Yes                  2.24 (56%)                    Yes
                  1                                   0.80                                               0.78
                  2                                   0.88                                               0.88
                  3                                   0.76                                               0.76
                  4                                   0.81                                               0.53
Dimension 13      Scale             1.82 (61%)                    Yes                  2.05 (68%)                    Yes
                  1                                   0.81                                               0.68
                  3                                   0.78                                               0.88
                  4                                   0.75                                               0.91
Dimension 14      Scale             2.31 (77%)                    Yes                  1.90 (63%)                    Yes
                  1                                   0.86                                               0.73
                  2                                   0.87                                               0.88
                  4                                   0.90                                               0.77

Notes: (a) Item 3 in the measurement scale for dimension 7 loaded at 0.89 on a second extracted factor for research site 2. Deleting this measurement item and re-running the principal components factor analysis left a two-item measurement scale for dimension 7 that was uni-dimensional across both samples. The resulting Cronbach's α for dimension 7's two-item measurement scale were 0.62 (research site 1) and 0.58 (research site 2); (b) item 1 in the measurement scale for dimension 8 loaded at 0.82 on a second extracted factor for research site 2. Removing item 1 produces a uni-dimensional measurement scale that supported Cronbach's α ranging from 0.49 for research site 2 to 0.67 for research site 1; (c) item 3 in the measurement scale for dimension 11 loaded at 0.82 on a second extracted factor for research site 2. When item 3 was removed, λ1's for the first extracted principal components factor for research site 1 and research site 2 were computed to be 2.19 (73 percent) and 2.03 (68 percent) respectively, and factor loadings on the first extracted principal components factor, for both samples, were at least 0.75
Figure 2. The implementation and practice of SPC at research site 2 (profile of mean scores, on a 1.00 to 7.00 scale, across the 14 SPC implementation/practice dimensions; for example, Dimension 1 = 4.48, Dimension 6 = 4.92, Dimension 10 = 4.59)
... theoretical minimum and maximum scores of 1.00 and 7.00 respectively. These
scores reveal that ``control charting''-related activities have not been widely
implemented within the plant. Furthermore, it appears that there is only
moderate managerial and technical support for the deployment of statistical
and cognitive procedures to facilitate the monitoring, adjustment, and
improvement of processes. Demographic analysis of the 49 process operators
led to three additional supporting insights. First, approximately half of the 49
process operators had not received formal training in SPC. Second,
approximately 20 process operators were actively using control charts to
monitor process performance. Third, of the 20 process operators engaged in
control charting, some had not received formal SPC training.
Conclusions
Our purpose, in this paper, has been to describe developmental research
towards creating an instrument for the SPC implementation/practice construct.
That this research is developmental is reflected in our employment of small
sample sizes, which, in turn, constrains the type of statistical analyses that can
be performed, especially with regard to construct validity. Also, while the
Depew, D.R. (1987), ``The effect of statistical process control training for production operators on
product quality and productivity'', unpublished PhD dissertation, Purdue University.
DeVellis, R. F. (1991), Scale Development: Theory and Applications, Sage Publications, Newbury
Park, CA.
Dondero, C. (1991), ``SPC hits the road'', Quality Progress, Vol. 24 No. 1, pp. 43-4.
Gardiner, J.S. and Montgomery, D.C. (1987), ``Using statistical control charts for software quality
control'', Quality and Reliability Engineering International, Vol. 3 No. 1, pp. 15-20.
General Accounting Office (1991), US Companies Improve Performance through Quality Efforts
(NSIAD-91-190), United States General Accounting Office, Washington, DC.
Gordon, M.E., Philpot, J.W., Bounds, G.M. and Long, W.S. (1994), ``Factors associated with the
success of the implementation of statistical process control'', Journal of High Technology
Management Research, Vol. 5 No. 1, pp. 101-21.
Green, S.B., Lissitz, R.W. and Mulaik, S.A. (1977), ``Limitations of coefficient alpha as an index of
test unidimensionality'', Educational and Psychological Measurement, Vol. 37 No. 4,
pp. 827-38.
Hackman, J.R. and Oldham, G.R. (1980), Work Redesign, Addison-Wesley Publishing Company,
Reading, MA.
Hair Jr, J.F., Anderson, R.E., Tatham, R.L. and Grablowsky, B.J. (1979), Multivariate Data
Analysis with Readings, The Petroleum Publishing Company, Tulsa, OK.
Harmon, K.P. (1984), ``An examination of the relationships between quality costs and
productivity in conjunction with the introduction of statistical process control: a case
study'', unpublished PhD dissertation, University of Tennessee at Knoxville, TN.
Johnson, R.A. and Wichern, D.W. (1988), Applied Multivariate Statistical Analysis, 2nd ed.,
Prentice-Hall, Englewood Cliffs, NJ.
Keefer, C.J. (1986), ``The effects of statistical process control instruction upon job performance of
machine operators'', unpublished doctoral dissertation, Purdue University.
Kim, J. and Mueller, C.W. (1978), Factor Analysis: Statistical Methods and Practical Issues, Sage
Publications, Newbury Park, CA.
Kolesar, P.J. (1993), ``The relevance of research on statistical process control to the total quality
movement'', Journal of Engineering and Technology Management, Vol. 10 No. 3,
pp. 317-38.
Lascelles, D.M. and Dale, B.G. (1988), ``A study of the quality management methods employed by
U.K. automotive suppliers'', Quality and Reliability Engineering International, Vol. 4 No. 3,
pp. 301-9.
Long, J.S. (1983), Confirmatory Factor Analysis: A Preface to LISREL, Sage Publications, Beverly
Hills, CA.
McCullough, P.M. (1988), ``Development and validation of an instrument to measure adherence to
Deming's philosophy of quality management'', unpublished PhD dissertation, University
of Tennessee at Knoxville, TN.
Modarress, B. and Ansari, A. (1989), ``Quality control techniques in US firms: a survey'',
Production and Inventory Management Journal, Vol. 30 No. 2, pp. 58-62.
Niedermeier, C.A. (1990), ``Quality improvement in a corrugated converting plant: an application
of statistical process control'', unpublished master's thesis, Michigan State University, MI.
Nunnally, J.C. (1967), Psychometric Theory, McGraw-Hill, New York, NY.
Nunnally, J.C. (1978), Psychometric Theory, 2nd ed., McGraw-Hill, New York, NY.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), ``SERVQUAL: a multiple-item scale for
measuring consumer perceptions of service quality'', Journal of Retailing, Vol. 64 No. 1,
pp. 12-40.
Peter, J.P. (1981), ``Construct validity: a review of basic issues and marketing practices'', Journal
of Marketing Research, Vol. 18 No. 2, pp. 133-45.
Peter, J.P. and Churchill, G.A. Jr (1986), ``Relationships among research design choices and
psychometric properties of rating scales: a meta-analysis'', Journal of Marketing Research,
Vol. 23 No. 1, pp. 10-12.
Rucinski, D.W. (1991), ``SPC more than quality control'', Quality, Vol. 30, October, pp. 43-5.
Rungtusanatham, M., Anderson, J.C. and Dooley, K.J. (1997), ``Conceptualizing organizational
implementation and practice of statistical process control'', Journal of Quality
Management, Vol. 2 No. 1, pp. 113-37.
Saraph, J.V., Benson, P.G. and Schroeder, R.G. (1989), ``An instrument for measuring the critical
factors of quality management'', Decision Sciences, Vol. 20 No. 4, pp. 810-29.
Schwab, D.P. (1980), ``Construct validity in organizational behavior'', Research in Organizational
Behavior, Vol. 2, pp. 3-43.
Shewhart, W.A. (1939), Statistical Method: From the Viewpoint of Quality Control, The Graduate
School, United States Department of Agriculture, Washington, DC.
Singleton, Jr, R.A., Straits, B.C. and Straits, M.M. (1993), Approaches to Social Research, 2nd ed.,
Oxford University Press, New York, NY.
Sower, V.E. (1993), ``SPC implementation in the plastic molding industry'', Production and
Inventory Management Journal, Vol. 34 No. 1, pp. 41-5.
Tinsley, H.E.A. and Tinsley, D.J. (1987), ``Uses of factor analysis in counseling psychology
research'', Journal of Counseling Psychology, Vol. 34 No. 4, pp. 414-24.
Wallace, W. (1971), The Logic of Science in Sociology, Aldine-Atherton, Chicago, IL.