Towards measuring the ``SPC implementation/practice'' construct
Some evidence of measurement quality
Manus Rungtusanatham

Arizona State University, Tempe, Arizona, USA


John C. Anderson

University of Minnesota, Minneapolis, Minnesota, USA, and

Kevin J. Dooley

Arizona State University, Tempe, Arizona, USA

Received August 1997
Revised June 1998


Keywords Assessment, Measurement, Quality, Reliability, Statistical process control
Abstract Describes the process and outcomes of operationalizing the 14 dimensions underlying
the SPC implementation/practice construct. Employs a standard procedure to create a
measurement instrument comprising 14 measurement scales, with the number of constituent
measurement items ranging from one to four, that correspond to the 14 dimensions underlying
the SPC implementation/practice construct. Reports the results of assessing three properties of
measurement quality for these newly-created measurement scales, namely: face validity, internal
consistency reliability and uni-dimensionality. Such a measurement instrument can then be applied
to examine antecedents and consequences of SPC implementation/practice, to diagnose
existing organizational efforts at implementing and practicing SPC, and to identify opportunities to
improve organizational implementation and practice of this quality improvement intervention.
Demonstrates the application and interpretation of the SPC implementation/practice
measurement instrument within one organizational setting. Concludes by identifying future
research needs.

The authors wish to thank the Quality Leadership Center at the University of Minnesota for providing partial funding to support this research.

International Journal of Quality & Reliability Management, Vol. 16 No. 4, 1999, pp. 301-329. © MCB University Press, 0265-671X

Introduction
Implemented in such diverse industries as the automobile suppliers industry
(e.g. Dale and Shaw, 1989), the chain-saw industry (e.g. Chen, 1991), the plastic
molding industry (e.g. Sower, 1993), and even the software development
industry (e.g. Gardiner and Montgomery, 1987), statistical process control
(SPC) has become one of the most popular and widespread organizational
interventions in the name of quality improvement (Chen, 1991; General
Accounting Office, 1991; Lascelles and Dale, 1988; Modarress and Ansari,
1989). SPC's industrial prominence can be explained, in part, by the quality and
costs benefits that have been ascribed in literature to this quality improvement
intervention (e.g. Bounds, 1988; Bushe, 1988; Dondero, 1991; Rucinski, 1991).
This literature, however, is primarily anecdotal in nature, with an over-reliance
on case studies (Gordon et al., 1994). As a result, despite numerous claims of
SPC's quality and costs benefits, a scientific body of knowledge to justify and
rationalize these benefits in the literature has not emerged.
Three reasons may explain why scientific knowledge on the quality and
costs benefits of SPC has not been forthcoming. One reason pertains to the
nature and focus of prior scientific research on SPC. Because of SPC's historical
intellectual roots in statistics, much of the scientific research on SPC has been
conducted by statisticians and operations researchers, whose attention has
been devoted to the statistical and mathematical issues in the various
techniques embodied by SPC (e.g. control charting). While this stream of
research has augmented the methodological side of SPC, it has had, according
to Kolesar (1993), very little relevance in advancing scientific and pragmatic
understanding of how and why SPC should be implemented within
organizations.
A second reason to explain the paucity of scientific knowledge on the
organizational effects of SPC stems from inadequate efforts to conceptualize
what it means when an organization claims to have implemented and is
subsequently practicing SPC. Academic and practitioner writings on the effects
of SPC, for example, have generally adopted rather narrow nominal and
operational definitions for the organizational implementation and practice of
SPC (Gordon et al., 1994). Many studies, for example, simply equated the
provision of training in SPC techniques (e.g. control charts) to actual
implementation and practice (e.g. Depew, 1987; Keefer, 1986). Still others have
treated the implementation and practice of SPC to be an ``all or nothing''
phenomenon (e.g. Harmon, 1984; Niedermeier, 1990). This failure to provide a
rich and exhaustive conceptualization of what organizations mean by their
claims that they have implemented and are practicing SPC has incontrovertibly
hindered further development of theories, theories that not only would
describe, explain, and predict the antecedents and consequences of SPC but
also might have shed practical insights into how to make the implementation
and practice of SPC an effective and viable element of an organization's quality
improvement efforts.
Yet a third reason, closely related to the second and perhaps the most
compelling issue, may be the inability to measure the phenomenon itself (i.e. the
implementation and practice of SPC within organizations). Since operational
definitions and subsequent measurements necessarily derive from theory-building efforts that provide nominal definitions and testable theoretical
propositions, this observation should hardly come as a surprise. Conversely,
the ability to measure some phenomenon in a reliable and valid manner can
also spur theory development, especially since the process of scientific inquiry,
in and of itself, is an iterative process reciprocating between theory building
and theory testing, iterations that involve operationalization and
measurement of some phenomenon being studied (see Wallace, 1971).
Therefore, it can be argued that in order to allow for scientific testing of the
numerous claims of SPC's quality and costs benefits, it is imperative that
research efforts be devoted to developing approaches to measuring the
implementation and practice of SPC within organizations.
Our focus, in this paper, is on this latter issue, that is, how we can measure
the implementation and practice of SPC within organizations. To do so, we
adopt and extend the conceptual work of Rungtusanatham et al. (1997), in
which they proposed and defined a multi-dimensional construct, SPC
implementation/practice, to answer the following question: ``What does the
implementation and practice of SPC entail [within organizations]?'' Specifically,
we describe the process and outcomes of developing a measurement instrument
that operationalizes the 14 dimensions underlying the SPC implementation/
practice construct. We employ a standard procedure (see DeVellis, 1991) to
create 14 measurement scales, with the number of constituent measurement
items ranging from one to four, that correspond to the 14 dimensions
underlying the SPC implementation/practice construct. We report and discuss
initial results of assessing three properties of measurement quality for these
newly created measurement scales, namely:
(1) face validity;
(2) internal consistency reliability; and
(3) the uni-dimensionality aspect of construct validity.
In addition, as part of this development, we demonstrate the application of this
measurement instrument to the assessment of the implementation and practice
of SPC within a real organizational setting. We conclude by identifying further
research that needs to be undertaken in order to improve the SPC
implementation/practice measurement instrument and to advance scientific
and pragmatic knowledge as to the effects of SPC.
The SPC implementation/practice construct
Rungtusanatham et al. (1997) recently observed that much of the SPC-related
writing has not subjected the phenomenon itself (i.e. the implementation and
practice of SPC) to a detailed conceptual or empirical examination. Hence:
. . . Very little knowledge has . . . accumulated or has been documented to identify, describe,
and define the requisite organizational policies and actions to make the implementation and
subsequent practice of SPC an effective and viable part of any organization's quality
management system (Rungtusanatham et al., 1997, p. 114).

In an effort to redress this oversight and to enhance the conceptualization of the
phenomenon in question, Rungtusanatham et al. (1997) engaged in both
deductive and inductive research.
In their deductive work, they began by examining four different definitional
perspectives on the implementation and practice of SPC. First, they provided a
historical perspective by tracing the origins of the implementation and practice
of SPC back to Shewhart's (1939) dual concepts of ``the operation of statistical
control'' and ``the state of statistical control''. Second, they provided a
contemporary definitional perspective that equated the implementation and
practice of SPC to the systematic deployment of a set of process monitoring and
process control techniques. Third, they provided a ``prevention'' perspective
that identified the purpose of implementing and practicing SPC as different
from the traditional ``final inspection'' approach to quality. Finally, they
adopted an organizational development perspective that conceptualized the
implementation and practice of SPC as effecting multi-dimensional
organizational changes. Synthesizing the insights from these four perspectives,
Rungtusanatham et al. (1997, p. 122) then deduced a nominal definition for the
SPC implementation/practice construct, defining SPC implementation/practice
as:
. . . the adoption of specific policies and performance of specific actions supporting the
deployment of various statistical and cognitive procedures, aimed at facilitating the
monitoring, adjustment, and improvement of processes.

Rungtusanatham et al. (1997) complemented their deductive research with
inductive research that involved a panel of subject-matter-experts and the
deployment of the affinity diagram in order to delineate and define the policies
and actions underlying SPC implementation/practice. The subject-matter-experts
panel consisted of 12 academic and non-academic (industry and
consulting) experts, who were invited to complete the following statement: ``In
organizations which have successfully implemented and are effectively
practicing SPC, one key characteristic is that . . .'' These experts were instructed
to respond in the form of actions identifying the actor and the act to be
performed, a task that yielded 123 usable responses. These 123 usable
responses were then ``clustered'' using the affinity diagram, resulting in the
derivation, definition, and juxtaposition in literature of the 14 dimensions in the
following list:
(1) Managerial actions and policies to support the implementation/practice of
SPC. Nominal definition: Actions performed and policies instituted by
individuals in managerial positions to support the implementation/
practice of SPC. Operational definition: The extent to which higher
management, defined as individuals in managerial positions two levels
above the respondent, is perceived by the process operator to behave in a
manner which demonstrates commitment to the implementation and use
of control charts.
(2) Prominence of control chart usage for process control. Nominal
definition: The visibility of use of control chart methodology as a tool for
monitoring and controlling process performance. Operational definition:
The extent of application of control charts throughout the organization,
as perceived by the process operator.
(3) Identification of critical measurement characteristics. Nominal definition:
Actions pertaining to the identification of key process and/or product
characteristics affecting quality for critical processes. Operational
definition: The extent to which the process operator perceives that steps
have been taken to identify and control customer-related quality
characteristic(s) associated with a particular process.
(4) Technological sophistication and soundness of measurement devices.
Nominal definition: The technological sophistication and soundness of
measurement devices used to collect data from the process. Operational
definition: The extent to which the process operator perceives that
devices used to take measurements of process/product characteristics on
a particular process are technologically advanced and sound.
(5) Operator responsibility for process control via control charts. Nominal
definition: Actions performed by front-line operators to ensure the
correct application of control charts for monitoring process performance.
Operational definition: The extent to which the process operator
perceives that he/she is responsible for applying control chart
methodology to the control of his/her process.
(6) Verification of control charting assumptions. Nominal definition: Actions
taken before using control charts for monitoring process performance to
ensure that the assumptions salient to use of control charts are
reasonably satisfied. Operational definition: The extent to which the
process operator perceives that assumptions salient to the use of control
charts have been verified in the process of setting up control charts for
process control.
(7) Usage of control chart information for continuous improvement.
Nominal definition: Actions pertaining to the interpretation and
application of control chart information for purposes of process control
and improvement. Operational definition: The extent to which the
process operator perceives that information from control charts is being
used to guide actions of process control and improvement.
(8) Sampling strategies for control charting. Nominal definition: Actions
pertaining to how data about the process are collected from the process
itself. Operational definition: The extent to which the process operator
perceives that there is rationale behind how measurements of critical
process/product characteristics are made.
(9) Training in statistical and cognitive methods for process control and
improvement. Nominal definition: The availability and frequency of
training courses and materials in methodologies aimed at monitoring
and improving process performance. Operational definition: The extent
to which process operators are provided with training courses and
materials in methodologies aimed at monitoring and improving process
performance, as perceived by the process operator.
(10) Technical support for SPC implementation and practice. Nominal
definition: The availability and accessibility of knowledgeable technical
staff experts to support actions in the implementation and practice of
control chart methodology. Operational definition: The availability and
accessibility of in-house, knowledgeable technical staff experts to
support actions in the implementation and practice of control chart
methodology, as perceived by the process operator.
(11) Quality improvement team support of SPC practice. Nominal definition:
Actions that teams perform to support continuous improvement
activities identified via the interpretation of control chart information.
Operational definition: The extent of support that teams provide to
continuous improvement activities, as perceived by the process
operator.
(12) Absence of final inspection as a quality control strategy. Nominal
definition: The de-emphasis on inspecting quality into process output as
a quality control strategy. Operational definition: The extent to which the
process operator perceives that there is a de-emphasis on inspecting
quality into final product as a quality control strategy.
(13) Documentation and update of knowledge of processes. Nominal
definition: Actions performed and policies instituted to ensure that
knowledge of any critical process is reviewed, documented, and updated
as the process changes. Operational definition: The extent to which the
process operator perceives that knowledge of his/her process is reviewed
and updated as the process changes.
(14) Audit and review of SPC practice and performance. Nominal definition:
Actions performed and policies instituted to ensure that SPC is correctly
implemented and practiced. Operational definition: The extent to which
the process operator perceives that the organization audits its SPC
intervention, in its components and as a whole, so as to ensure that SPC
is correctly implemented and practiced.

These 14 dimensions were deemed to be comprehensive, conceptually distinct,
and content valid; corresponded to actions and policies enacted at different
levels within an organization; and provided the starting point for developing an
approach to operationalize the SPC implementation/practice construct.
Operationalizing the 14 dimensions
To operationalize the 14 dimensions, we created measurement items and
corresponding formats for each of the 14 dimensions (see Appendix). The
nominal and operational definitions for the dimensions (see list above) guided
the creation of measurement items for each dimension. Technically, we avoided
the incorporation of double-barreled, negative, and biased terms into the
measurement items and, with notable exceptions, attempted to make the
items clear, short, and easy to understand (see discussion in Babbie, 1992,
pp. 147-51). For some dimensions (e.g. dimension 6), it was necessary to write
long, complicated items because the dimensions referred to technical aspects of
SPC.

At least four measurement items were generated for each of the 14
dimensions. Most of these measurement items were created as Likert scales in
the form of perceptual statements, with response categories ranging from ``1''
(very inaccurate) to ``7'' (very accurate) (see Babbie, 1992, pp. 180-1). Some
measurement items were formatted as behaviorally-anchored rating scales (see
description in Carroll and Schneier, 1982, pp. 102-12), with response categories
of ``1'' to ``5'' or ``1'' to ``7''. For behaviorally-anchored rating scales with five
response categories, critical incidents (i.e. brief descriptions of exhibited
behavior) were attached to each response category[1]. For behaviorally-anchored
rating scales with seven response categories, critical incidents were
anchored to three points only: the two endpoints (``1'' and ``7'') and the
midpoint (``4'').
Assessing measurement quality
To assess the measurement quality of the measurement scales created for the
14 SPC implementation/practice dimensions, we employed the following three-step, sequential approach:
Step 1: Assess the face validity of the measurement items.
Step 2: Assess internal consistency reliability of each measurement scale.
Step 3: Assess the dimensionality of each measurement scale.
This three-step approach is consistent with the suggested guidelines for multi-item scale development (see DeVellis, 1991) and has been successfully
employed in other contexts (e.g. Parasuraman et al., 1988; Saraph et al., 1989).
Each step resulted in the deletion or retention of measurement items for
subsequent analysis in the next step (see Appendix).
Step 1: Face validity assessment
Face validity. The face validity of a measurement item refers to the degree to
which the measurement item appears, on its face value, to measure the
construct (in this case, the dimension) that it intends to measure (Nunnally,
1967, p. 99; 1978, p. 111). Assessing face validity, in this sense, can be
accomplished by having the constituent measurement items evaluated by
subject-matter-experts, under the condition that the experts have been briefed
as to the dimension's research definition. A measurement item that does not
have face validity would not be an appropriate empirical indicator for its
respective dimension, and would consequently not be internally consistent with
other measurement items comprising the particular measurement scale.
Research tasks and statistics. For this research, seven subject-matter-experts
on SPC volunteered to participate in evaluating the face validity of the 66
measurement items for the 14 SPC implementation/practice dimensions. The
seven experts had studied, taught, written about, and/or actually engaged in
the organizational implementation of SPC. Each expert was asked to complete
the two tasks described in Figure 1.
[Figure 1. Example of face validity tasks. A sample measurement item, ``Measurements of critical process/product characteristics are automated'', is classified into Dimension 4 in Task A and rated ``7'' on the adequacy scale in Task B. The example shows that the rater deems the statement to be most relevant to Dimension 4 and believes that the statement, together with its response categories ranging from ``very inaccurate'' to ``very accurate'', is an almost perfect measure of Dimension 4, so this measurement item receives a ``7'' (almost perfect).]

For task A, we provided the seven experts with the operational definitions
for each of the 14 dimensions and a random listing of the 66 measurement
items. These experts were instructed to use the operational definitions to guide
them in classifying the measurement items into no more than one dimension.
The sorting results from task A were then used to compute Cohen's (1960) κ, an
index of beyond-chance agreement among different ``judges'' for the overall
task, that is, what is the overall degree of inter-expert agreement as to the
placement of the measurement items, after chance agreement has been
removed?
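To make the index concrete, the following is a minimal sketch of Cohen's κ for two raters sorting the same items into dimensions; the rater data and the function name are hypothetical, and the paper's own computation, which summarizes agreement across all seven experts, is not reproduced here.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Beyond-chance agreement between two raters' categorical assignments."""
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal proportions.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical assignments of six measurement items to dimension numbers.
print(round(cohens_kappa([4, 4, 7, 1, 13, 2], [4, 4, 7, 3, 13, 2]), 2))
```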
We then asked the subject-matter-experts to evaluate, in task B, how
adequately each measurement item measures the dimension to which it has
been assigned. For each measurement item, the experts were asked to respond
to a seven-point scale with ``1'' anchored as ``barely adequate'' and ``7'' anchored
as ``almost perfect.'' This approach has been successfully employed by
McCullough (1988) to operationalize Deming's (1986) 14 points quality
management philosophy. From the experts' input for task B, we computed and
evaluated the average adequacy score and the standard deviation of adequacy
scores for each individual measurement item.
Face validity results. For task A, Cohen's κ was computed to be 0.64. The
standard deviation for Cohen's κ, sκ, was 0.06, yielding a 95 percent confidence
interval for κ of [0.52, 0.76]. Also, a two-tailed test of the statistical
significance of κ concluded that the observed inter-expert agreement as to the
sorting of the measurement items did not occur by chance.
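As a quick arithmetic check of the reported interval, the usual normal-approximation bounds κ ± 1.96 sκ reproduce it:

```python
kappa, s_kappa = 0.64, 0.06
print(round(kappa - 1.96 * s_kappa, 2), round(kappa + 1.96 * s_kappa, 2))  # 0.52 0.76
```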
The results for task B are tabulated in Table I. Using arbitrary but
conservative cut-off values, 49 of the 66 measurement items were deemed to
have face validity, with high average adequacy scores (> ``3'') and, at the same
time, small standard deviations of adequacy scores (≤ 1.00). These 49
measurement items were then subjected to a subsequent internal consistency
reliability assessment.
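A minimal sketch of this retention rule follows; the raw expert ratings shown are hypothetical (chosen only to roughly reproduce the summary statistics reported in Table I for dimension 1, items 1 and 2), since the paper reports only per-item averages and standard deviations.

```python
from statistics import mean, stdev

# Hypothetical adequacy ratings (1-7) from the seven experts for two items.
ratings = {
    "dimension 1, item 1": [7, 7, 6, 7, 7, 6, 7],
    "dimension 1, item 2": [7, 2, 3, 5, 2, 6, 3],
}

retained = [
    item for item, scores in ratings.items()
    # Keep items with a high average adequacy score (> 3) and a small
    # standard deviation of adequacy scores (<= 1.00).
    if mean(scores) > 3 and stdev(scores) <= 1.00
]
print(retained)  # only "dimension 1, item 1" survives the cut-off
```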
Table I. Face validity results

Proposed measurement scale/item    Average adequacy score    Sample standard deviation
Dimension 1      item 1            6.71                      0.49
                 item 2*           4.00                      2.00
                 item 3*           3.86                      2.19
                 item 4            6.33                      0.82
                 item 5            5.29                      0.95
Dimension 2      item 1            6.00                      0.58
                 item 2            5.83                      0.75
                 item 3*           2.75                      2.36
                 item 4            6.14                      0.90
Dimension 3      item 1            6.00                      0.89
                 item 2            6.20                      0.84
                 item 3            6.14                      0.69
                 item 4            6.60                      0.55
                 item 5*           4.43                      1.90
Dimension 4      item 1            5.42                      0.98
                 item 2            4.71                      0.76
                 item 3*           4.00                      1.83
                 item 4            5.83                      0.75
                 item 5            6.17                      0.98
                 item 6*           3.86                      1.86
Dimension 5      item 1            6.40                      0.55
                 item 2            5.86                      0.90
                 item 3            6.00                      0.58
                 item 4*           4.80                      1.48
                 item 5            6.86                      0.38
Dimension 6      item 1            4.83                      0.98
                 item 2            6.00                      0.58
                 item 3*           4.83                      2.04
                 item 4            5.80                      0.45
Dimension 7      item 1            4.67                      0.82
                 item 2            6.33                      0.81
                 item 3            5.67                      0.82
                 item 4*           3.33                      2.31
                 item 5*           4.40                      1.82
                 item 6*           3.60                      2.41
Dimension 8      item 1            5.71                      0.95
                 item 2            5.50                      0.84
                 item 3            5.86                      0.69
                 item 4            6.20                      0.84
Dimension 9      item 1            6.29                      0.76
                 item 2            6.00                      0.63
                 item 3            5.43                      0.98
                 item 4*           4.00                      2.08
                 item 5*           4.29                      2.14
Dimension 10     item 1            6.29                      0.76
                 item 2            5.43                      0.79
                 item 3            5.57                      0.98
                 item 4            6.43                      0.79
Dimension 11     item 1            6.00                      0.82
                 item 2            3.86                      0.90
                 item 3            5.14                      0.69
                 item 4            4.14                      0.90
Dimension 12     item 1            4.83                      0.98
                 item 2            6.14                      0.69
                 item 3            6.43                      0.53
                 item 4            5.57                      0.79
Dimension 13     item 1            6.00                      0.89
                 item 2*           1.20                      0.45
                 item 3            5.50                      0.55
                 item 4            4.83                      0.75
                 item 5*           4.20                      1.79
                 item 6*           3.80                      2.17
Dimension 14     item 1            6.00                      0.58
                 item 2            6.57                      0.53
                 item 3*           4.71                      1.50
                 item 4            6.14                      0.90

Note: For average adequacy scores, 1 = barely adequate; 2 = adequate; 3 = more than adequate; 4 = good; 5 = very good; 6 = excellent; 7 = almost perfect; * = to be deleted

Step 2: Internal consistency reliability assessment
Internal consistency reliability. Reliability refers to the:
. . . extent to which an experiment, test, or any measuring procedure yields the same results on
repeated trials . . . The more consistent the results given by repeated measurements, the higher
the reliability of the measuring procedure . . . (Carmines and Zeller, 1979, pp. 11-12).

The reliability of a measurement scale is assessed before an assessment of its
dimensionality because the presence of unreliable measurement items would
contribute to the measurement scale's lack of uni-dimensionality (Cortina, 1993,
p. 100). Reliability is, therefore, a necessary condition for validity (Peter, 1981,
p. 136; Peter and Churchill, 1986, p. 7).
Since each of the 14 measurement scales contains multiple measurement
items that are intended to measure the same underlying dimension, and since
these 14 measurement scales are to be administered at the same time in data
collection, the appropriate ``form'' of reliability to be assessed would be internal
consistency reliability. Computational methods for assessing internal
consistency reliability examine the equivalence among multiple measurement
items that operationalize a single construct (Cortina, 1993, p. 100; Green et al.,
1977), of which the most commonly reported method is Cronbach's (1951) α.
To assess the internal consistency reliability of the 14 measurement scales,
only those measurement items with high face validity were considered. For
example, with respect to dimension 1, only items 1, 4, and 5 were used to
compute Cronbach's α for the measurement scale; items 2 and 3 were excluded
based on their low face validity results.

Sample. Data for conducting the internal consistency reliability assessment
were collected from 104 process operators employed by two discrete
manufacturing facilities located within a 100-mile radius of a large Midwest
metropolis. Research site 1 is a plant of a discrete manufacturer of power
supplies for computer and electronic equipment. This plant operates three
production shifts, with each shift employing up to 25 process operators
working on different tasks. The plant is physically laid out as three different
areas, each area having a functional layout and producing its own set of
products. Research site 2 is a plant belonging to a division of a Fortune 500
company. This plant manufactures a family of small electric motors used in
industrial and residential applications. Within the plant are four assembly
lines: three sub-assembly lines that manufacture the three basic components of
sub-motors, gear trains, and printed circuit boards, and a final assembly line.
Questionnaire administration. A pen-and-paper questionnaire containing a
random ordering of the measurement items for the 14 SPC implementation/
practice dimensions was constructed. At research site 1, a designated plant coordinator administered questionnaires to 55 process operators present during
mandatory diversity training sessions. At research site 2, group supervisors
administered questionnaires to 49 process operators during regularly
scheduled group meetings.
Cronbach's α results. Because the 104 process operators represented two
different research sites, Cronbach's α for each of the 14 measurement scales
was computed separately for each research site. These results, as well as the
improvement in Cronbach's α for a particular measurement scale if a
constituent measurement item were to be deleted, are tabulated in Table II.
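As a sketch of the two computations reported in Table II, the following implements the standard Cronbach's α formula and the α-if-item-deleted variant on a small, hypothetical response matrix (rows are process operators, columns are the items of one measurement scale); it is illustrative only and is not the authors' analysis code.

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for respondents' scores on the items of a single scale."""
    k = len(responses[0])                                    # number of items
    item_variances = [variance([row[i] for row in responses]) for i in range(k)]
    total_variance = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

def alpha_if_item_deleted(responses, item):
    """Alpha recomputed with one item dropped from the scale."""
    reduced = [[v for i, v in enumerate(row) if i != item] for row in responses]
    return cronbach_alpha(reduced)

# Hypothetical 7-point responses from six operators on a three-item scale.
data = [[6, 5, 6], [7, 6, 5], [4, 4, 2], [5, 5, 6], [3, 2, 3], [6, 6, 7]]
print(round(cronbach_alpha(data), 2))
print([round(alpha_if_item_deleted(data, i), 2) for i in range(3)])
```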
Table II. Internal consistency reliability results

                                Research site 1 (n = 55)                Research site 2 (n = 49)
Proposed measurement            Cronbach's α    Cronbach's α with       Cronbach's α    Cronbach's α with
scale/item                                      item deleted                            item deleted
Dimension 1      scale          0.57                                    0.60
                 item 1                         0.12                                    0.43
                 item 4*                        0.68                                    0.62
                 item 5                         0.50                                    0.41
Dimension 2 (a)  scale          0.51                                    0.37
                 item 1                         0.51                                    0.26
                 item 2                         0.51                                    0.55
                 item 4                         0.16                                    0.20
Dimension 3      scale          0.71                                    0.72
                 item 1                         0.72                                    0.56
                 item 2                         0.61                                    0.69
                 item 3                         0.65                                    0.72
                 item 4                         0.62                                    0.60
Dimension 4      scale          0.42                                    0.57
                 item 1                         0.26                                    0.53
                 item 2                         0.11                                    0.39
                 item 4                         0.05                                    0.32
                 item 5                         0.48                                    0.72
Dimension 5      scale          0.80                                    0.75
                 item 1                         0.70                                    0.69
                 item 2                         0.74                                    0.78
                 item 3                         0.75                                    0.62
                 item 5                         0.83                                    0.66
Dimension 6      scale          0.73                                    0.73
                 item 1                         0.58                                    0.61
                 item 2                         0.61                                    0.76
                 item 4                         0.72                                    0.51
Dimension 7 (b)  scale          0.78                                    0.43
                 item 1                         0.79                                    0.38
                 item 2                         0.68                                    0.16
                 item 3                         0.62                                    0.58
Dimension 8      scale          0.71                                    0.64
                 item 1                         0.67                                    0.49
                 item 2                         0.62                                    0.49
                 item 3                         0.66                                    0.58
                 item 4                         0.62                                    0.69
Dimension 9 (b)  scale          0.65                                    0.55
                 item 1                         0.58                                    0.31
                 item 2                         0.60                                    0.69
                 item 3                         0.46                                    0.26
Dimension 10     scale          0.66                                    0.74
                 item 1                         0.51                                    0.66
                 item 2                         0.69                                    0.76
                 item 3                         0.62                                    0.61
                 item 4                         0.50                                    0.69
Dimension 11     scale          0.83                                    0.64
                 item 1                         0.79                                    0.47
                 item 2                         0.74                                    0.41
                 item 3                         0.81                                    0.76
                 item 4                         0.79                                    0.57
Dimension 12     scale          0.83                                    0.73
                 item 1                         0.79                                    0.65
                 item 2                         0.73                                    0.54
                 item 3                         0.82                                    0.67
                 item 4                         0.78                                    0.78
Dimension 13     scale          0.68                                    0.74
                 item 1                         0.54                                    0.80
                 item 3                         0.58                                    0.61
                 item 4                         0.63                                    0.55
Dimension 14     scale          0.85                                    0.71
                 item 1                         0.81                                    0.71
                 item 2                         0.81                                    0.43
                 item 4                         0.76                                    0.68

Notes: * Deleted in order to improve Cronbach's α. (a) Since the measurement scale for dimension 2 was consistently not reliable across both samples, with no improvements attainable from deleting one or more measurement items from this three-item measurement scale, only one measurement item was retained for dimension 2 (item 4). Item 4 was chosen because it best reflects the nominal definition for this SPC implementation/practice dimension. Retaining one measurement item for dimension 2 is consistent with DeVellis' (1991, p. 10) argument that ``. . . imperfect measurement may be better than no measurement at all . . .'' The disadvantage with this approach is that the reliability (and dimensionality) of this single-item measurement scale cannot (and need not) be assessed. (b) The measurement scales for dimensions 7 and 9 were reliable per results of one sample but not the other. Since the conflicting results may be due to differences between the two samples, we decided to subject the measurement items within these two measurement scales to a dimensionality assessment in step 3.

Step 3: Dimensionality assessment
Construct validity and dimensionality. Construct validity generally refers to:
. . . the degree to which [a proposed measurement scale] assesses the construct it is purported
to assess. In this sense, a [measurement scale] is construct valid (1) to the degree that it
assesses the magnitude and direction of a representative sample of the characteristics of the
construct and (2) to the degree that the [measurement instrument] is not contaminated with
elements from the domain of other constructs or error (Peter, 1981, p. 134).

Evidence of construct validity can be provided from different sources
via different methodological approaches (see Cronbach and Meehl, 1955,
pp. 287-89; Singleton et al., 1993, pp. 127-8). Since the intention in this research
is to operationalize the 14 dimensions as multi-item measurement scales,
assessing the dimensionality of the measurement scales should provide
insights into the construct validity of the measurement scales[2].
Principal components factor analysis. From a methodological standpoint, a
widely employed approach to assess the dimensionality of multi-item
measurement scales involves the application of factor analysis (see Carmines
and Zeller, 1979; DeVellis, 1991; Nunnally, 1967, 1978). In this study, we applied
a principal components analysis, based on the principal components factor
model (see Johnson and Wichern, 1988, pp. 340-71; Kim and Mueller, 1978,
pp. 14-21), to the same data used in the assessment of internal consistency
reliability. In order to conclude that a measurement scale is uni-dimensional,

the factor loadings on the first extracted principal components factor should be
greater than the rule-of-thumb minimum of 0.30 for samples with less than 100
observations (Hair et al., 1979, p. 236). Also, the eigenvalue for the first
extracted principal components factor (λ1) should generally explain more than
50 percent of the variance among the measurement items, while the eigenvalues
(i.e. λ2, . . ., λn) for subsequent extracted principal components factors should be
less than 1.00.
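A minimal sketch of these decision rules follows, using an eigendecomposition of the item correlation matrix as the principal components step; the response matrix and the function name are hypothetical, and this is not the statistical package the authors used.

```python
import numpy as np

def unidimensionality_check(responses):
    """Apply the lambda1 > 50%, lambda2 < 1.00, and loadings > 0.30 rules to one scale."""
    corr = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)          # returned in ascending order
    order = np.argsort(eigenvalues)[::-1]                     # largest eigenvalue first
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    loadings = eigenvectors[:, 0] * np.sqrt(eigenvalues[0])   # unrotated first-factor loadings
    if loadings.sum() < 0:                                    # eigenvector sign is arbitrary
        loadings = -loadings
    unidimensional = (
        eigenvalues[0] / corr.shape[0] > 0.50   # first factor explains > 50% of item variance
        and eigenvalues[1] < 1.00               # no second factor with eigenvalue >= 1.00
        and bool(np.all(loadings > 0.30))       # all first-factor loadings above 0.30
    )
    return eigenvalues, loadings, unidimensional

# Hypothetical 7-point responses from eight operators on a three-item scale.
data = [[6, 5, 6], [7, 6, 5], [4, 4, 2], [5, 5, 6], [3, 2, 3], [6, 6, 7], [2, 3, 2], [5, 4, 5]]
eigenvalues, loadings, ok = unidimensionality_check(data)
print(np.round(eigenvalues, 2), np.round(loadings, 2), ok)
```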
Of the 14 measurement scales, only 13 were subjected to this dimensionality
assessment, since dimension 2's measurement scale contains only a single
measurement item. Again, the analyses were conducted separately for each
research site. Also, while it would be ideal to factor analyze all measurement
items across the 14 measurement scales, this strategy was not empirically
feasible, given the small samples from each research site. Typically, a ratio of
four to ten times as many observations as there are measured variables to be
factor analyzed is recommended, although a ratio of twice the observations to
the number of measured variables is common (see Hair et al., 1979, p. 219;
Schwab, 1980, p. 19; Tinsley and Tinsley, 1987, p. 415). An alternative strategy
was, therefore, adopted, whereby principal components factor analysis was
employed to evaluate the dimensionality of relevant measurement scales, on a
``measurement scale'' by ``measurement scale'' basis. For each measurement
scale, if the number of extracted factors exceeds one, then the measurement
scale would not be uni-dimensional and, hence, would not possess high
construct validity. The results of the ``measurement scale'' by ``measurement
scale'' dimensionality assessment are shown in Table III.
Table III. Principal components factor analytical results

                                   Research site 1 (n = 55)                        Research site 2 (n = 49)
Proposed measurement               λ1 and variance   Unrotated   Is λ2             λ1 and variance   Unrotated   Is λ2
scale/item                         explained by      factor      < 1.00?           explained by      factor      < 1.00?
                                   first factor      loadings                      first factor      loadings
Dimension 1         scale          1.54 (77%)                    Yes               1.46 (73%)                    Yes
                    item 1                           0.88                                            0.85
                    item 5                           0.88                                            0.85
Dimension 2         item 4         Not assessed                                    Not assessed
Dimension 3         scale          2.20 (55%)                    Yes               2.17 (54%)                    Yes
                    item 1                           0.63                                            0.84
                    item 2                           0.80                                            0.68
                    item 3                           0.75                                            0.60
                    item 4                           0.77                                            0.80
Dimension 4         scale          1.48 (49%)                    Yes               1.97 (66%)                    Yes
                    item 1                           0.52                                            0.78
                    item 2                           0.79                                            0.86
                    item 4                           0.76                                            0.79
Dimension 5         scale          2.60 (65%)                    Yes               2.33 (58%)                    Yes
                    item 1                           0.90                                            0.78
                    item 2                           0.84                                            0.58
                    item 3                           0.80                                            0.85
                    item 5                           0.67                                            0.82
Dimension 6         scale          1.93 (64%)                    Yes               1.97 (66%)                    Yes
                    item 1                           0.84                                            0.84
                    item 2                           0.82                                            0.70
                    item 4                           0.75                                            0.88
Dimension 7         scale          2.09 (70%)                    Yes               1.44 (48%)                    No: λ2 = 1.07 (36%)
                    item 1                           0.77                                            0.75
                    item 2                           0.85                                            0.88
                    item 3                           0.88                                            0.35 (a)
Dimension 7         scale          1.45 (72%)                    Yes               1.41 (82%)                    Yes
(without item 3)    item 1                           0.85                                            0.84
                    item 2                           0.85                                            0.84
Dimension 8         scale          2.14 (53%)                    Yes               1.94 (48%)                    No: λ2 = 1.07 (27%)
                    item 1                           0.69                                            0.45 (b)
                    item 2                           0.77                                            0.77
                    item 3                           0.70                                            0.72
                    item 4                           0.77                                            0.79
Dimension 8         scale          1.82 (61%)                    Yes               1.84 (62%)                    Yes
(without item 1)    item 2                           0.82                                            0.72
                    item 3                           0.74                                            0.81
                    item 4                           0.78                                            0.81
Dimension 9         scale          1.77 (59%)                    Yes               1.63 (54%)                    Yes
                    item 1                           0.75                                            0.82
                    item 2                           0.74                                            0.48
                    item 3                           0.82                                            0.85
Dimension 10        scale          2.03 (50%)                    Yes               2.28 (57%)                    Yes
                    item 1                           0.78                                            0.80
                    item 2                           0.51                                            0.61
                    item 3                           0.69                                            0.84
                    item 4                           0.83                                            0.74
Dimension 11        scale          2.66 (66%)                    Yes               2.07 (52%)                    No (see note c)
                    item 1                           0.80                                            0.83
                    item 2                           0.87                                            0.87
                    item 3                           0.77                                            0.28 (c)
                    item 4                           0.82                                            0.74
Dimension 11        scale          2.19 (73%)                    Yes               2.03 (68%)                    Yes
(without item 3)    item 1                           0.85                                            0.83
                    item 2                           0.88                                            0.87
                    item 4                           0.83                                            0.75
Dimension 12        scale          2.65 (66%)                    Yes               2.24 (56%)                    Yes
                    item 1                           0.80                                            0.78
                    item 2                           0.88                                            0.88
                    item 3                           0.76                                            0.76
                    item 4                           0.81                                            0.53
Dimension 13        scale          1.82 (61%)                    Yes               2.05 (68%)                    Yes
                    item 1                           0.81                                            0.68
                    item 3                           0.78                                            0.88
                    item 4                           0.75                                            0.91
Dimension 14        scale          2.31 (77%)                    Yes               1.90 (63%)                    Yes
                    item 1                           0.86                                            0.73
                    item 2                           0.87                                            0.88
                    item 4                           0.90                                            0.77

Notes: (a) Item 3 in the measurement scale for dimension 7 loaded at 0.89 on a second extracted factor for research site 2. Deleting this measurement item and re-running the principal components factor analysis left a two-item measurement scale for dimension 7 that was uni-dimensional across both samples. The resulting Cronbach's α for dimension 7's two-item measurement scale were 0.62 (research site 1) and 0.58 (research site 2). (b) Item 1 in the measurement scale for dimension 8 loaded at 0.82 on a second extracted factor for research site 2. Removing item 1 produces a uni-dimensional measurement scale that supported Cronbach's α ranging from 0.49 for research site 2 to 0.67 for research site 1. (c) Item 3 in the measurement scale for dimension 11 loaded at 0.82 on a second extracted factor for research site 2. When item 3 was removed, λ1 for the first extracted principal components factor for research site 1 and research site 2 was computed to be 2.19 (73 percent) and 2.03 (68 percent) respectively, and factor loadings on the first extracted principal components factor, for both samples, were at least 0.75.

Discussion of results
The advancement of scientific knowledge requires a scientific foundation, one
that is built upon the rigor and discipline of continual scientific endeavors that
either:
(1) disconfirm established theories; or
(2) propose and test new or alternative theories that are credible and,
equally important, available for replicable studies.
Such scientific endeavors, however, require the ability to measure the
phenomenon of interest in a reliable and valid manner. With respect to the
quality and costs benefits of implementing and practicing SPC, we have not
had the luxury of being able to operationalize and measure what organizations
mean when they claim to have implemented and to be practicing SPC. It is,
therefore, not surprising to observe that scientific knowledge on this topic has
not been cumulative.
The significance of developing a measurement instrument to evaluate an
organization's efforts at implementing and practicing SPC cannot be overstated
from the standpoint of science. Such a measurement instrument would benefit
exploratory or confirmatory investigations into the antecedents and
consequences of SPC implementation/practice. At the same time, research
examining the relationship among the 14 dimensions can also be conducted in
order to derive an aggregate index for the phenomenon. Like Brief et al. (1976),
such research would investigate different approaches to combine the scores on
the individual measurement scales into an aggregate index, as well as the
substantive theory-testing implications of the different combinatorial
approaches.
From a pragmatic perspective, organizations have long felt the need for an
assessment vehicle that can be employed to help improve their efforts at
implementing and practicing SPC (see Rungtusanatham et al., 1997, p. 133). In
the particular case of SPC, Kolesar (1993, p. 332) has argued that:


. . . Although the emphasis on designing quality into products and processes in the first place
will continue, there will be a continuing need for more effective SPC over the foreseeable future
because the reality is that all quality problems and variation cannot be designed away.

Organizations would, therefore, benefit from a comprehensive assessment
instrument, especially one that would serve diagnostic and evaluative purposes
in helping organizations to more effectively implement and practice SPC. In
this respect, the operational definitions and the measurement scales developed
for the 14 SPC implementation/practice dimensions should contribute towards
satisfying this practical necessity. These measurement scales can be applied in
a diagnostic manner to identify opportunities and/or shortcomings for
improving the organizational implementation and practice of SPC.
Furthermore, such an assessment instrument can also be used by an
organization in a proactive manner to gauge the degree of conformance to
quality improvement standards by its suppliers. Since the assessment is
relatively easy to implement, an organization could potentially survey
hundreds of its suppliers, in order to give them feedback regarding their own
quality system and to assess where particular strengths and weaknesses are
relative to particular industry segments. Such industry segment studies can
uncover important differences with respect to the implementation and practice
of SPC, differences that might lead an organization to employ different
supplier selection strategies in different industry segments.
Applying the SPC assessment instrument
In agreeing to become part of this research effort, the management at research
site 2 asked that we provide the organization with an assessment of its efforts
at implementing and practicing SPC. Research site 2, as described earlier, is a
plant belonging to a division of a Fortune 500 company that manufactures a
family of small electric motors for industrial and residential applications. In the
early 1980s, the engineering group at this manufacturing facility was charged
with the task of introducing SPC within the organization. The initial
implementation began with the parts fabrication process, followed essentially
by a ``shotgun'' approach that tried to apply SPC to every operation within the
manufacturing facility. Control charts were set up but were not properly
maintained; nor was information from these control charts used in any
proactive manner for quality improvement. As a result, there was significant
loss of credibility with respect to the implementation and continued practice of
SPC, leading, in turn, to a partial abandonment of ``control charting''-related
activities within research site 2.
Our analysis of the responses from the 49 process operators from research
site 2 confirms what has happened with the implementation and practice of
SPC within this organization. Figure 2 shows the average scores by dimension
based on the responses from the 49 process operators within this plant.

[Figure 2. The implementation and practice of SPC at research site 2: average scores, by dimension, computed from the responses of the 49 process operators.]

The average scores for research site 2 ranged between 3.12 and 4.92, with
theoretical minimum and maximum scores of 1.00 and 7.00 respectively. These
scores reveal that ``control charting''-related activities have not been widely
implemented within the plant. Furthermore, it appears that there is only
moderate managerial and technical support for the deployment of statistical
and cognitive procedures to facilitate the monitoring, adjustment, and
improvement of processes. Demographic analysis of the 49 process operators
led to three additional supporting insights. First, approximately half of the 49
process operators had not received formal training in SPC. Second,
approximately 20 process operators were actively using control charts to
monitor process performance. Third, of the 20 process operators engaged in
control charting, some had not received formal SPC training.
Conclusions
Our purpose, in this paper, has been to describe developmental research
towards creating an instrument for the SPC implementation/practice construct.
That this research is developmental is reflected in our employment of small
sample sizes, which, in turn, constrains the type of statistical analyses that can
be performed, especially with regard to construct validity. Also, while the

Figure 2.
The implementation and
practice of SPC at
research site 2

creation of multi-item measurement scales ideally follows an iterative process,
where the execution of each assessment step potentially leads to the writing of
new measurement items, followed by repeated assessments of the
corresponding measurement property at each step, we did not have the luxury
nor the opportunity to do so. As such, the assessment results that are reported
here truly amount to a ``first-pass'' evaluation of 14 newly created measurement
scales and, therefore, should not be interpreted as definitive evidence about the
reliability and validity of the proposed measurement scales. Nonetheless, these
results do shed some evidence and some insights into how the SPC
implementation/practice construct might be measured in organizational
settings (as illustrated with the case of research site 2) and how its
measurement can be refined and improved in subsequent research. To quote
Peter (1981, p. 135) on construct validation:
A single study does not establish construct validity. In fact, Cronbach (1971) notes that
construct validation is an ever-extending process of investigation and development. Even
tentative acceptance of construct validity requires some amount of aggregation of results,
including both logical deductive reasoning and series of reliability and validity studies.

Likewise, a single study, such as this one, is a necessary but insufficient
condition for making final conclusions about the properties of a newly
constructed measurement instrument.
In conclusion, we suggest several related research streams that should be
pursued in the future. Foremost, additional measurement items should be
created and evaluated in order to improve the reliability and validity of those
measurement scales that contain relatively few measurement items (e.g.
dimension 2). Also, other forms of reliability (e.g. inter-rater, test-retest, etc.)
and other evidence of construct validity should be provided to enhance
conclusions as to the measurement quality of the proposed measurement
scales. Finally, opportunities for applying confirmatory methodologies, such as
confirmatory factor analysis (see Long, 1983), to assess the properties of these
measurement scales should drive subsequent research designs and, thereby,
lead to more definitive statements about the reliability and validity of the
proposed measurement scales.
Notes
1. The five-point behaviorally-anchored rating scales can easily be transformed into seven-point
scales using a linear transformation of the form (see Hackman and Oldham, 1980, p. 306):
X' = 1.5X - 0.5
where
X = pre-transformed score on a five-point scale;
X' = post-transformed score on a seven-point scale.
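A one-line check (a sketch, assuming the transformation as reconstructed above) confirms that the mapping carries the five-point endpoints and midpoint onto their seven-point counterparts:

```python
transform = lambda x: 1.5 * x - 0.5
print([transform(x) for x in (1, 2, 3, 4, 5)])  # [1.0, 2.5, 4.0, 5.5, 7.0]
```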
2. The current literature encourages the examination of both convergent validity and
discriminant validity (see Campbell and Fiske, 1959) for newly proposed multi-item

measurement scales. Both properties are considered to provide confirmatory, as opposed to
exploratory, evidence of construct validity. Given the developmental nature of the research
and the restrictions imposed by sample sizes, we were unable to assess the convergent
validity and discriminant validity aspects of construct validity. Furthermore, convergent
validity, by definition, requires the existence of different methods for measuring the same
phenomenon. This requirement, of course, cannot be satisfied in this research since the 14
measurement scales constitute the first method to operationalize the SPC implementation/
practice construct.
References
Babbie, E. (1992), The Practice of Social Research (6th ed.), Wadsworth Publishing Company,
Belmont, CA.
Bounds, G.M. (1988), ``Success in implementing statistical process control as a function of
contextual variables in 20 manufacturing organizations'', unpublished PhD dissertation,
University of Tennessee at Knoxville, TN.
Brief, D.J., Wallace, M.J., Jr and Aldag, R.J. (1976), ``Linear versus non-linear models of the
formation of affective reactions: the case of job enlargement'', Decision Sciences, Vol. 7, No.
1, pp. 1-9.
Bushe, G. (1988), ``Cultural contradictions of statistical process control in American
manufacturing organizations'', Journal of Management, Vol. 14 No. 1, pp. 19-31.
Campbell, D.T. and Fiske, D.W. (1959), ``Convergent and discriminant validation by the
multitrait-multimethod matrix'', Psychological Bulletin, Vol. 56 No. 2, pp. 81-105.
Carmines, E.G. and Zeller, R.A. (1979), Reliability and Validity Assessment, Sage Publications,
Newbury Park, CA.
Carroll, S.J. and Schneier, C.E. (1982), Performance Appraisal and Review Systems:
The Identification, Measurement, and Development of Performance in Organizations,
Scott, Foresman and Company, Glenview, IL.
Chen, F.T. (1991), ``Quality management in the chain saw industry: a case study'', International
Journal of Quality and Reliability Management, Vol. 8 No. 1, pp. 31-9.
Cohen, J. (1960), ``A coefficient of agreement for nominal scales'', Educational and Psychological
Measurement, Vol. 20 No. 1, pp. 37-46.
Cortina, J.M. (1993), ``What is coefficient alpha? An examination of theory and applications'',
Journal of Applied Psychology, Vol. 78, No. 1, pp. 98-104.
Cronbach, L.J. (1971), ``Test validation'', in Thorndike, R.L., Educational Measurement (2nd ed.),
American Council on Education, Washington, DC, pp. 443-507, as cited in Peter, J.P. (1981),
``Construct validity: a review of basic issues and marketing practices'', Journal of
Marketing Research, Vol. 18 No. 2, pp. 133-45.
Cronbach, L.J. (1951), ``Coefficient alpha and the internal structure of tests'', Psychometrika,
Vol. 16 No. 4, pp. 297-334.
Cronbach, L.J. and Meehl, P.E. (1955), ``Construct validity in psychological tests'', Psychological
Bulletin, Vol. 52 No. 4, pp. 281-302.
Dale, B.G. and Shaw, P. (1989), ``The application of statistical process control in UK automotive
manufacture: Some research findings'', Quality and Reliability Engineering International,
Vol. 5 No. 1, pp. 5-15.
Deming, W.E. (1986), Out of the Crisis, MIT Center for Advanced Engineering Study, Cambridge, MA.

Depew, D.R. (1987), ``The effect of statistical process control training for production operators on
product quality and productivity'', unpublished PhD dissertation, Purdue University.
DeVellis, R. F. (1991), Scale Development: Theory and Applications, Sage Publications, Newbury
Park, CA.
Dondero, C. (1991), ``SPC hits the road'', Quality Progress, Vol. 24 No. 1, pp. 43-4.
Gardiner, J.S. and Montgomery, D.C. (1987), ``Using statistical control charts for software quality
control'', Quality and Reliability Engineering International, Vol. 3 No. 1, pp. 15-20.
General Accounting Office (1991), US Companies Improve Performance through Quality Efforts
(NSIAD-91-190), United States General Accounting Office, Washington, DC.
Gordon, M.E., Philpot, J.W., Bounds, G.M. and Long, W.S. (1994), ``Factors associated with the
success of the implementation of statistical process control'', Journal of High Technology
Management Research, Vol. 5 No. 1, pp. 101-21.
Green, S.B., Lissitz, R.W. and Mulaik, S.A. (1977), ``Limitations of coefficient alpha as an index of
test unidimensionality'', Educational and Psychological Measurement, Vol. 37 No. 4,
pp. 827-38.
Hackman, J.R. and Oldham, G.R. (1980), Work Redesign, Addison-Wesley Publishing Company,
Reading, MA.
Hair Jr, J.F., Anderson, R.E., Tatham, R.L. and Grablowsky, B.J. (1979), Multivariate Data
Analysis with Readings, The Petroleum Publishing Company, Tulsa, OK.
Harmon, K.P. (1984), ``An examination of the relationships between quality costs and
productivity in conjunction with the introduction of statistical process control: a case
study'', unpublished PhD dissertation, University of Tennessee at Knoxville, TN.
Johnson, R.A. and Wichern, D.W. (1988), Applied Multivariate Statistical Analysis, 2nd ed.,
Prentice-Hall, Englewood Cliffs, NJ.
Keefer, C.J. (1986), ``The effects of statistical process control instruction upon job performance of
machine operators'', unpublished doctoral dissertation, Purdue University.
Kim, J. and Mueller, C.W. (1978), Factor Analysis: Statistical Methods and Practical Issues, Sage
Publications, Newbury Park, CA.
Kolesar, P.J. (1993), ``The relevance of research on statistical process control to the total quality
movement'', Journal of Engineering and Technology Management, Vol. 10 No. 3,
pp. 317-38.
Lascelles, D.M. and Dale, B.G. (1988), ``A study of the quality management methods employed by
U.K. automotive suppliers'', Quality and Reliability Engineering International, Vol. 4 No. 3,
pp. 301-9.
Long, J.S. (1983), Confirmatory Factor Analysis: A Preface to LISREL, Sage Publications, Beverly
Hills, CA.
McCullough, P.M. (1988), ``Development and validation of an instrument to measure adherence to
Deming's philosophy of quality management'', unpublished PhD dissertation, University
of Tennessee at Knoxville, TN.
Modarress, B. and Ansari, A. (1989), ``Quality control techniques in US firms: a survey'',
Production and Inventory Management Journal, Vol. 30 No. 2, pp. 58-62.
Niedermeier, C.A. (1990), ``Quality improvement in a corrugated converting plant: An application
of statistical process control'', unpublished master's thesis, Michigan State University, MI.
Nunnally, J.C. (1967), Psychometric Theory, McGraw-Hill, New York, NY.
Nunnally, J.C. (1978), Psychometric Theory (2nd ed.), McGraw-Hill, New York, NY.

Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), ``SERVQUAL: a multiple-item scale for
measuring consumer perceptions of service quality'', Journal of Retailing, Vol. 64 No. 1,
pp. 12-40.
Peter, J.P. (1981), ``Construct validity: a review of basic issues and marketing practices'', Journal
of Marketing Research, Vol. 18 No. 2, pp. 133-45.
Peter, J.P. and Churchill, G.A. Jr (1986), ``Relationships among research design choices and
psychometric properties of rating scales: a meta-analysis'', Journal of Marketing Research,
Vol. 23 No. 1, pp. 1-10.
Rucinski, D.W. (1991), ``SPC more than quality control'', Quality, Vol. 30, October, pp. 43-5.
Rungtusanatham, M., Anderson, J.C. and Dooley, K.J. (1997), ``Conceptualizing organizational
implementation and practice of statistical process control'', Journal of Quality
Management, Vol. 2 No. 1, pp. 113-37.
Saraph, J.V., Benson, P.G. and Schroeder, R.G. (1989), ``An instrument for measuring the critical
factors of quality management'', Decision Sciences, Vol. 20 No. 4, pp. 810-29.
Schwab, D.P. (1980), ``Construct validity in organizational behavior'', Research in Organizational
Behavior, Vol. 2, pp. 3-43.
Shewhart, W.A. (1939), Statistical Method: From the Viewpoint of Quality Control, The Graduate
School, United States Department of Agriculture, Washington, DC.
Singleton, Jr, R.A., Straits, B.C. and Straits, M.M. (1993), Approaches to Social Research, 2nd ed.,
Oxford University Press, New York, NY.
Sower, V.E. (1993), ``SPC implementation in the plastic molding industry'', Production and
Inventory Management Journal, Vol. 34 No. 1, pp. 41-5.
Tinsley, H.E.A. and Tinsley, D.J. (1987), ``Uses of factor analysis in counseling psychology
research'', Journal of Counseling Psychology, Vol. 34 No. 4, pp. 414-24.
Wallace, W. (1971), The Logic of Science in Sociology, Aldine-Atherton, Inc., Chicago, IL.

Appendix. Measurement items for the 14 dimensions underlying SPC
implementation/practice

[Table AI. Measurement items and response formats for the 14 dimensions underlying the SPC implementation/practice construct.]