GP29-A2
Assessment of Laboratory Tests When
Proficiency Testing Is Not Available; Approved
Guideline—Second Edition
A guideline for global application developed through the Clinical and Laboratory Standards Institute consensus process.
Infobase 2016. This document is licensed for use on a single device, and is not to be distributed for network access.
This document is protected by international copyright laws.
Clinical and Laboratory Standards Institute
Setting the standard for quality in medical laboratory testing around the world.
The Clinical and Laboratory Standards Institute (CLSI) is a not-for-profit membership organization that brings
together the varied perspectives and expertise of the worldwide laboratory community for the advancement of a
common cause: to foster excellence in laboratory medicine by developing and implementing medical laboratory
standards and guidelines that help laboratories fulfill their responsibilities with efficiency, effectiveness, and global
applicability.
Consensus Process
Consensus—the substantial agreement by materially affected, competent, and interested parties—is core to the
development of all CLSI documents. It does not always connote unanimous agreement, but does mean that the
participants in the development of a consensus document have considered and resolved all relevant objections
and accept the resulting agreement.
Commenting on Documents
CLSI documents undergo periodic evaluation and modification to keep pace with advancements in technologies,
procedures, methods, and protocols affecting the laboratory or health care.
CLSI’s consensus process depends on experts who volunteer to serve as contributing authors and/or as participants
in the reviewing and commenting process. At the end of each comment period, the committee that developed
the document is obligated to review all comments, respond in writing to all substantive comments, and revise the
draft document as appropriate.
Comments on published CLSI documents are equally essential, and may be submitted by anyone, at any time, on
any document. All comments are managed according to the consensus process by a committee of experts.
Appeals Process
When it is believed that an objection has not been adequately considered and responded to, the process for
appeals, documented in the CLSI Standards Development Policies and Processes, is followed.
All comments and responses submitted on draft and published documents are retained on file at CLSI and are
available upon request.
Get Involved—Volunteer!
Do you use CLSI documents in your workplace? Do you see room for improvement? Would you like to get
involved in the revision process? Or maybe you see a need to develop a new document for an emerging
technology? CLSI wants to hear from you. We are always looking for volunteers. By donating your time and talents
to improve the standards that affect your own work, you will play an active role in improving public health across
the globe.
Abstract
Clinical and Laboratory Standards Institute document GP29-A2—Assessment of Laboratory Tests When Proficiency Testing Is
Not Available; Approved Guideline—Second Edition offers methods to assess test performance when formal proficiency testing
(PT) programs (also known as external quality assessment [EQA] programs) are not available. The guideline includes examples
with statistical analyses. This document is intended for use by laboratory managers and testing personnel in traditional clinical
laboratories as well as in point-of-care and bedside testing environments.
Clinical and Laboratory Standards Institute (CLSI). Assessment of Laboratory Tests When Proficiency Testing Is Not Available;
Approved Guideline—Second Edition. CLSI document GP29-A2 (ISBN 1-56238-673-5). Clinical and Laboratory Standards
Institute, 950 West Valley Road, Suite 2500, Wayne, Pennsylvania 19087 USA, 2008.
The Clinical and Laboratory Standards Institute consensus process, which is the mechanism for moving a document through
two or more levels of review by the health care community, is an ongoing process. Users should expect revised editions of any
given document. Because rapid changes in technology may affect the procedures, methods, and protocols in a standard or
guideline, users should replace outdated editions with the current editions of CLSI documents. Current editions are listed in
the CLSI catalog and posted on our website at www.clsi.org. If your organization is not a member and would like to become
one, or to request a copy of the catalog, contact us at: Telephone: 610.688.0100; Fax: 610.688.0700; E-Mail:
[email protected]; Website: www.clsi.org.
Number 21 GP29-A2
Copyright ©2008 Clinical and Laboratory Standards Institute. Except as stated below, any reproduction of
content from a CLSI copyrighted standard, guideline, companion product, or other material requires
express written consent from CLSI. All rights reserved. Interested parties may send permission requests to
[email protected].
CLSI hereby grants permission to each individual member or purchaser to make a single reproduction of
this publication for use in its laboratory procedure manual at a single site. To request permission to use
this publication in any other manner, e-mail [email protected].
Suggested Citation
CLSI. Assessment of Laboratory Tests When Proficiency Testing Is Not Available; Approved Guideline—
Second Edition. CLSI document GP29-A2. Wayne, PA: Clinical and Laboratory Standards Institute;
2008.
Proposed Guideline
July 2001
Approved Guideline
December 2002
ISBN 1-56238-673-5
ISSN 0273-3099
Volume 28 GP29-A2
Committee Membership
Acknowledgment
CLSI gratefully acknowledges the following individuals for their help in preparing the approved-level, second
edition of this guideline:
Contents
Abstract
1 Scope
2 Terminology
2.1 Note on Terminology
2.2 Definitions
2.3 Abbreviations/Acronyms
3 Rationale
References
Foreword
Proficiency testing (PT), also known as external quality assessment (EQA), is an important part of quality
management in the clinical laboratory. PT complements internal quality control (QC) to help ensure
patient test results are valid.1 Many agencies and associations, governmental and nongovernmental, offer
PT for numerous analytes. In many cases, accrediting bodies and governmental agencies that oversee
clinical laboratories require participation in a formal PT program.
However, formal PT programs are not available for a substantial number of laboratory tests; the reasons
vary. Some analytes are unstable, precluding the preparation of PT materials, or matrix effects may
prevent reliable analysis. Some tests are performed in only a few laboratories, so it is not practical to
develop a formal PT program. PT is not available for certain pathogenic microorganisms because of the
hazards of transporting the organisms. In some parts of the world, competent PT programs may not be
available or may not be affordable.
This document offers methods to assess test performance when PT is not available. These methods are
termed “alternative assessment procedures,” or AAPs.a The document addresses a variety of tests,
including quantitative analyses of blood, microbiological cultures, morphologic analyses, and in vivo
tests. The options for some of these tests are necessarily rather limited.
Overview of Changes
This document replaces the first edition approved guideline, GP29-A, which was published in 2002.
Principal among the changes in this edition are revised/harmonized terminology (see Section 2.1),
updated examples of tests for which PT is not available, and examples of government comparison
programs. References were updated throughout.
Key Words
a Neither PT nor the procedures described in this document are adequate by themselves to comprehensively validate a test
method.
1 Scope
This document offers methods to assess test performance when proficiency testing (PT), or external
quality assessment (EQA), is not available. The guidelines in this document apply to clinical laboratory
tests performed in the traditional laboratory setting as well as to point-of-care testing and testing in
clinic laboratories. The subcommittee believes assessment procedures (either PT or alternative assessment
procedures [AAPs]) to validate ongoing performance are important for all laboratory tests. However, the
scope of the document does not include home testing (ie, patient self-testing). Quality assessment
programs for home testing have been described.2
This document makes no distinction between “regulated” and “nonregulated” analytes as defined in the
United States under the Clinical Laboratory Improvement Amendments of 1988 (CLIA ‘88).3,4
This document suggests general approaches and provides examples, but does not prescribe specific
assessment procedures for individual analytes. The responsibility for selecting specific assessment
procedures lies with the individual clinical laboratory.
The document may be useful for managers, supervisors, and laboratory personnel in traditional
laboratories, as well as personnel performing point-of-care and bedside testing.
2 Terminology
2.1 Note on Terminology
In many countries, the PT programs for clinical laboratories are called “external quality assurance” or
“external quality assessment” (EQA) programs. The latter is now the preferred term and is defined in
Section 2.2. The recommendations in this document relate equally to PT and EQA. However, in this
document, only the term “proficiency testing” will be used, for two reasons. First, the committee
determined that the phrase “PT/EQA” was awkward. Second, CLSI has a policy to use international
harmonized terminology where appropriate; currently, the generally accepted international terminology is
“proficiency testing.”5-8 PT is the term currently in use in many countries, including the United States.
2.2 Definitions
accepted reference value – value that serves as an agreed-upon reference for comparison and which is
derived as a theoretical or established value based on scientific principles; an assigned value based on
experimental work of some national or international organization; or a consensus value based on
collaborative experimental work under the auspices of a scientific or engineering group (ISO 5725-1, ISO
Guide 30).9,10
accuracy (of measurement) – closeness of the agreement between a measured quantity value and a true
quantity value of a measurand (VIM07).11
analyte – component represented in the name of a measurable quantity (ISO 17511)8; NOTE: In the type
of quantity “mass of protein in 24-hour urine,” “protein” is the analyte. In “amount of substance of
glucose in plasma,” “glucose” is the analyte. In both cases, the long phrase represents the measurand (ISO
17511).8 The analyte is the particular component of interest to the patient.
audit-sample testing – testing of stored aliquots from a biologic sample repeatedly over time in a
specific assay system.
coefficient of variation (CV) – for a non-negative characteristic, the ratio of the standard deviation to the
average (ISO 3534-1)12; NOTE: It is often multiplied by 100 and expressed as a percentage.
common cause variation – variation resulting from sources inherent in the testing process; NOTE: Also
known as “random variation” or “process variation.”
commutability (of a material) – ability of a material to yield the same numerical relationships between
results of measurements by a given set of measurement procedures, purporting to measure the same
quantity, as those between the expectations of the relationships obtained when the same procedures are
applied to other relevant types of material (ISO 15194, ISO 15197, ISO 17593).13-15
manufacturer’s product calibrator – calibration material provided to the customer for use with a
routine clinical measurement procedure.
matrix effect – influence of a property of the sample, other than the measurand, on the measurement of
the measurand according to a specified measurement procedure and thereby on its measured value (ISO
17511)8; NOTE: Viscosity, surface tension, turbidity, ionic strength, and pH are common causes of
matrix effects.
measurand – quantity intended to be measured (VIM07)11; NOTE 1: Generally includes the analyte as
measured with respect to specific conditions; NOTE 2: For example, the enzymatic activity of alkaline
phosphatase at 37 °C; NOTE 3: In the example above, the measurand includes not only the entity being
measured (alkaline phosphatase), but the particular quantity being measured (enzymatic activity), and the
specific environmental condition under which it is being measured (37 °C); NOTE 4: The term
“measurand” and its definition encompass all quantities, while the commonly used term “analyte” refers to a
tangible entity subject to measurement; for example, “substance” concentration is a quantity that may be
related to a particular analyte.
power of error detection – the statistical probability that a quality control system will detect changes that
exceed defined limits.
precision – the closeness of agreement between independent test results from the same sample obtained
under prescribed (stipulated) conditions (modified from ISO 3534-1)12; NOTE: Precision is not typically
represented as a numerical value but is expressed quantitatively in terms of imprecision—the standard
deviation (SD) or the coefficient of variation (CV) of the results in a set of replicate measurements.
proficiency testing (PT) – a program in which multiple samples are periodically sent to members of a
group of laboratories for analysis and/or identification, in which each laboratory’s results are compared
with those of other laboratories in the group and/or with an assigned value, and reported to the
participating laboratory and others.
quality control (QC) – the operational techniques and activities that are used to fulfill and verify
requirements for quality (modified from ISO 9000).16
quality management – coordinated activities to direct and control an organization with regard to quality
(ISO 9000)16; NOTE: Direction and control with regard to quality generally includes establishment of the
quality policy and quality objectives, quality planning, quality control, quality assurance, and quality
improvement.
quantity value – number and reference together expressing magnitude of a quantity (VIM07).11
reference method – a thoroughly investigated method, in which exact and clear descriptions of the
necessary conditions and procedures are given for the accurate determination of one or more property
values, and in which the documented trueness and precision of the method are commensurate with the
method’s use for assessing the trueness of other methods for measuring the same property values or for
assigning reference method values to reference materials.
regression analysis – the process of estimating the parameters of a model by optimizing the value of an
objective function and then testing the resulting predictions for statistical significance against an
appropriate null hypothesis model; the process of describing mathematically the relationship between two
or more variables; NOTE: This can include the parametric testing of the statistical significance of the
relationship, if random errors are assumed to be normal.
sensitivity (of a measuring system) – quotient of the change in an indication of a measuring system and
the corresponding change in a value of the quantity being measured (VIM07)11; NOTE 1: In the context
of QC, the power of error detection of a QC system; NOTE 2: In qualitative testing, the test method’s
ability to obtain positive results in concordance with positive results obtained by the reference method.
special cause variation – variation from sources outside the testing process; also known as “assignable
cause variation,” or “process error”; NOTE: Sources of special cause variation include interferences,
operator error, instrument malfunction, and deterioration of reagents.
specificity – ability of a test or procedure to correctly identify or quantify an entity in the presence of
interfering phenomena/influence quantities; the ability of a measurement procedure to measure solely the
measurand; NOTE: In the context of QC, the probability that a QC system will indicate absence of
special cause variation (ie, process error) when special cause variation is truly absent; 1 minus the
probability of “false alarms” wherein QC data points exceed tolerance limits yet no error can be identified
in the test system.
split-sample testing – a testing condition in which a single sample is divided into aliquots, with one
aliquot tested on a particular instrument or in a particular laboratory and the other aliquot(s) tested on
other instrument(s) or in other laboratories, followed by comparison of the results.
split-specimen testing – a test involving two biologic samples obtained at the same time from the same
source on two or more different assay systems (or by two or more analysts), often in two or more different
laboratories; NOTE: For example, testing two tubes of blood collected from a particular patient during
the same venipuncture.
standard deviation (SD) – the quantity characterizing the dispersion of the results for a series of
measurements of the same measurand (modified from VIM93).17
systematic error – component of measurement error that in replicate measurements remains constant or
varies in a predictable manner (VIM07).11
trueness – closeness of agreement between the average value obtained from a large series of test results
and an accepted reference value (ISO 3534-1).12
trueness control material – reference material that is used to assess the bias of measurement of a
measuring system.
validation – confirmation, through the provision of objective evidence, that requirements for a specific
intended use or application have been fulfilled (ISO 9000).16
2.3 Abbreviations/Acronyms
3 Rationale
Clinical laboratories use internal quality control (QC) procedures as a primary tool to ensure the validity
of patient test data. For quantitative assays, these procedures generally employ manufactured materials
that are used as surrogates for patient samples and are tested along with patient samples. Routine QC
allows laboratorians to separate variation that is inherent in the testing process (ie, common cause
variation) from special cause variation resulting from an abnormal condition affecting the testing process,
such as operator error, reagent problems, incorrect calibration, or instrument malfunction. However, QC
has limitations. Among them are the following:
• QC is not perfectly sensitive or specific; it does not detect all instances of special cause variation, and
it sometimes inappropriately flags variation inherent in the testing process (ie, false rejection).
• PT may detect problems/errors that were not detected by the internal QC system (see CLSI document
GP27).18
• When the level of analyte in PT materials can be traced to a reference method, a laboratory can determine
the accuracy of its analysis under some circumstances (eg, in the absence of significant matrix effects,
when the PT material has been validated as commutable with native clinical samples).19
The functions of PT have been well-described in the literature.1,18,20-24 However, PT is not available for
many tests, so laboratories should, when appropriate and practical, implement AAPs.b
Some regulatory and accrediting bodies require participation in PT and also require laboratories to
implement AAPs in the absence of PT. However, regardless of the requirements of certifying/accrediting
bodies, PT and AAPs are important quality elements in their own right and implementing them represents
good laboratory practice.
Laboratories, including those performing unique or low-volume analyses (eg, research laboratories),
should try to develop AAPs that provide information similar to that provided by participation in PT. For
example, patient samples may be sent to one or more other laboratories to generate data on interlaboratory
comparability (eg, split-sample procedures; see below). If an AAP can be traced to a reference method,
accuracy can be assessed. Even if neither interlaboratory comparison nor evaluation of accuracy is
practical for a particular test, it is still worthwhile to use an AAP to complement QC, because QC is not
perfectly sensitive or specific.
AAPs may often use patient samples, which have certain advantages over the manufactured materials
frequently used in PT.
• Matrix effects are reduced or eliminated when patient samples are used.
• The steps in the preexamination phase of clinical patient testing—sample acquisition, transportation,
and processing—are not evaluated by PT programs using manufactured testing materials, because the
preexamination phase of PT differs from patient testing.c In contrast, variables related to
preexamination processing may be evaluated by an AAP that uses patient samples. Patient samples
used for AAPs require attention to stability during storage and transportation between laboratories, to
minimize introduction of additional variability not related to clinical testing performance.
b Neither PT nor the AAPs discussed in this document are adequate, in themselves, to validate a test method.
c Most laboratory errors occur in the preexamination and postexamination (results reporting) phases.3,25,26 The critical domain of
quality management of specimen identification, accessioning, and results reporting lies outside the scope of this document.
Examples of analytes and tests for which PT may not be available include the following:d
• Pulsed field gel electrophoresis (PFGE), many molecular method assays other than for sexually
transmitted diseases (STDs) (eg, polymerase chain reaction), 17-hydroxyprogesterone, biotinidase,
various metabolites, toxoplasmosis.
• Emerging technologies.
• Certain drugs:
– felbamate;
– gabapentin; and
– anabolic steroids.
• Limitations of PT materials:
– instability of material or lability of analyte (eg, erythrocyte osmotic fragility, erythrocyte sucrose
hemolysis, cold agglutinins, serum acetoacetate, cryoglobulins, stool leukocyte counts, nasal
smear eosinophil count, breath tests);
– cellular function assays (eg, studies of neutrophil or lymphocyte function);
– contamination in highly sensitive analyses (eg, molecular amplification techniques); and
– inability of vendor to provide sufficient material for market demand (eg, whole-blood
cytogenetics).
d To the knowledge of the subcommittee, PT was not available for the analytes/tests listed in this section at the time of writing.
• Microbiology issues:
• In vivo testing:
– bleeding time;
– reflectance bilirubinometry;
– pulse oximetry;
– monitoring of anesthetic gas concentrations;
– Schilling test; and
– fetal scalp pH.
The laboratory should define the limits of acceptability for each quantitative assessment procedure in
advance of performing the procedure. Laboratories may develop limits of acceptability from internal
QC data (eg, ± 2 or 3 standard deviations [SD] from the mean), provided that sufficient QC data exist; or
from data in the literature (ie, clinical practice-based limits derived from biologic variation or clinical
decision points).27-29 A procedure for developing analytic bias and imprecision (uncertainty) tolerance
limits from patient data has been described,30,31 although a large patient database is required (20 000 test
values). A summary of statistical methods for evaluation of PT data is available.5 This information may
be helpful to laboratories in analyzing results of AAPs.
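As a minimal illustration of the first option above (statistical limits derived from internal QC data), the following Python sketch computes mean ± k SD limits from historical QC results. The QC values and the choice of k = 2 are hypothetical; selecting the actual limits remains the responsibility of the laboratory.

```python
import statistics

def acceptability_limits(qc_results, k=2.0):
    """Limits of acceptability as mean +/- k sample standard deviations,
    derived from historical internal QC results for one analyte at one
    control level."""
    mean = statistics.mean(qc_results)
    sd = statistics.stdev(qc_results)  # sample SD (n - 1 denominator)
    return mean - k * sd, mean + k * sd

# Hypothetical historical QC values (same analyte, same control level)
qc = [4.1, 4.0, 4.2, 3.9, 4.1, 4.0, 4.3, 4.0, 3.8, 4.1]
low, high = acceptability_limits(qc, k=2.0)
```

A subsequent assessment result falling outside [low, high] would then trigger investigation; clinical practice-based limits derived from biologic variation or clinical decision points would replace these purely statistical limits where available.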
Over time (ie, multiple assessment events over years), and when it is practical to obtain suitable samples,
AAPs should employ samples across the clinically relevant range of the analysis.
Results of AAPs should be documented and retained by the laboratory so trends can be identified.
Corrective action in response to unacceptable results should be documented.
Some AAPs use patient samples/data. As noted above, advantages of using patient results include
independence from the routine QC system; avoidance of matrix effects; and the capability of evaluating
preexamination factors, such as the effect of collection systems (eg, gel-containing blood collection
tubes32), quality of phlebotomy procedure, delays in processing, etc. In addition, the external split-sample
procedure (described below) provides interlaboratory comparison. When adopting split-sample
procedures, laboratories should consider their institutions’ requirements for informed patient consent and
maintenance of patient confidentiality.
e Centers for Disease Control and Prevention (CDC) Biosafety Levels 3 and 4.
One common procedure for externally verifying test results is to send aliquots of samplesf to another
laboratory (or other laboratories) for testing. Split-sample procedures evaluate interlaboratory agreement
and testing errors, but do not evaluate trueness (ie, bias compared to a reference measurement procedure)
per se. Comparison to another routine method may provide assurance of comparability of results.
However, there may be sample-specific interferences that are different between different routine methods,
and can influence the comparability of results between them.26,g It is the responsibility of the individual
laboratory to determine the appropriate number of samples to send for split-sample testing. For many
analytes, two samples per assessment event are adequate.
For example, in one investigation, the power of split-sample testing to detect problems in the analysis of
serum total cholesterol and potassium was studied. In this study, the absence of a discrepancy between
samples in the split-sample procedure strongly predicted that the original result was correct (negative
predictive value of 93% to 100%); however, the presence of discrepancy in the split-sample procedure
was less predictive of error in the original result (positive predictive value of 43% to 67%).33
Please refer to Appendixes A and B for an example of a procedure to determine limits of acceptability for
results of quantitative split-sample procedures.
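As a generic illustration of such a comparison (a sketch only, not the procedure of Appendixes A and B), a split-sample pair can be screened against a predefined limit of acceptability expressed as a percent difference. The 10% limit and the result values here are hypothetical.

```python
def split_sample_acceptable(result_a, result_b, limit_pct=10.0):
    """Return (acceptable, pct_difference) for a split-sample pair.
    The percent difference is taken relative to the mean of the two
    results; limit_pct must be defined by the laboratory in advance."""
    mean = (result_a + result_b) / 2.0
    pct_diff = abs(result_a - result_b) / mean * 100.0
    return pct_diff <= limit_pct, pct_diff

# Hypothetical local and comparison-laboratory results for one aliquot pair
ok, diff = split_sample_acceptable(5.2, 5.0)
```

Consistent with the predictive-value findings cited below, a flagged pair warrants investigation of both laboratories' results rather than an automatic conclusion that the original result was in error.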
(2) For operator-dependent testing (for example, morphologic analysis or interpretation of electrophoretic
patterns), retest the sample using a different operator.
For stable measurands, aliquots of a patient sample are stored by the laboratory and analyzed periodically
across time.h Periodic analysis of aliquots of the audit sample assesses reproducibility and stability of
calibration of the assay. The audit-sample procedure does not evaluate trueness,33 nor does it provide
interlaboratory comparison.
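One simple way to summarize a series of audit-sample results over time is the least-squares slope of result versus measurement occasion, where a slope near zero is consistent with stable calibration. This is only a sketch: it assumes equally spaced occasions, and any threshold for an acceptable slope is laboratory-defined.

```python
def audit_drift_slope(results):
    """Least-squares slope of audit-sample results versus measurement
    occasion (0, 1, 2, ...). Units: result units per occasion interval."""
    n = len(results)
    mean_x = (n - 1) / 2.0
    mean_y = sum(results) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(results))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical audit-sample results from five successive occasions
slope = audit_drift_slope([10.0, 10.1, 9.9, 10.2, 10.0])
```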
The correct performance of a method can be confirmed by use of the calibration material provided by the
method manufacturer, or another reference material documented as commutable with the patients’
samples for the test procedure and traceable to a reference material or procedure.23 When the method
manufacturer’s calibration material or trueness control material is used for an AAP, it is best to use a
different lot of material from that used for method calibration to ensure independence of the verification.
f In split-sample testing, a single biologic sample (eg, tube of blood) is divided into aliquots. If testing is performed on
different biologic samples obtained at the same time (eg, two tubes of blood obtained at the same venipuncture), then the term
split-specimen testing is used. These procedures are treated identically for the purposes of this document.
g Evaluation of trueness may be problematic even in organized PT programs.19
h After an aliquot is removed from storage and tested, it should be discarded and not returned to storage. Returning an aliquot to storage for later testing may compromise specimen integrity because of denaturation, evaporation, contamination, etc.
Caution should be used, however, because for some methods, a lot of calibration material may be specific
for a lot of reagent. (NOTE: It is suggested to use manufacturers’ product calibrator or trueness control
material only when there is no other alternative material or process to provide independent verification of
method performance.)
This assessment procedure includes participation in peer comparison programs that evaluate QC data
submitted from multiple laboratories. Many manufacturers have such programs. Usually, however, when
PT is not available for a particular analyte, neither are peer comparison programs.
There is extensive literature describing the use of patient data for the QC of clinical laboratory
measurements. In the 1950s and 1960s, tracking the daily average of hematology measurements such as
hemoglobin, hematocrit, and red cell counts was proposed as a QC measure.35,36 This monitoring of the
averages of patient data came into widespread application in the 1970s in a form commonly known as
“Bull’s algorithm.”37-40 This technique compared the average of 20 consecutive patient values with an
established patient mean value. Methods to monitor a daily mean or an “average of normals” need not be
limited to hematology and have been applied as QC measures for a wide variety of clinical laboratory
tests (see CLSI/NCCLS document H38).28,32,33,35-50
The assumption underlying this approach is that the average result, or other statistic, of a group of
patients’ samples will be relatively constant when the test procedure is stable. For this condition to be true, the
results included in the mean must not include values that are outliers relative to the reference population
distribution. This technique is best applied for test procedures that have a fairly high volume of results
during a reasonably short time period. However, the approach can be considered for lower-volume tests
when a population of test samples can be identified that are expected to have a predictable distribution of
results. In an acute care hospital setting, if the laboratory can identify specific days when it receives an
increased proportion of abnormal samples (eg, weekends, or days when samples are received from an
oncology clinic or dialysis facility), the best practice might be to exclude the patient data obtained on
those days from the calculations. The literature cited above provides guidance on selecting suitable
numbers of observations and acceptance criteria.
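A minimal sketch of such an “average of normals” check follows; the function name, analyte, truncation limits, and acceptance criterion are hypothetical illustrations, not values from this guideline. Results outside the truncation limits are excluded as outliers, and the mean of the remainder is compared with the established patient mean.

```python
import statistics

def average_of_normals(results, low, high, target_mean, allowed_shift):
    """Exclude results outside the truncation limits (low, high), then flag
    the batch if the mean of the remaining results moves more than
    allowed_shift away from the established patient mean."""
    normals = [r for r in results if low <= r <= high]
    batch_mean = statistics.fmean(normals)
    return batch_mean, abs(batch_mean - target_mean) <= allowed_shift

# Hypothetical serum sodium results (mmol/L); 125 is excluded as an outlier.
mean, in_control = average_of_normals(
    [138, 141, 139, 142, 140, 137, 125, 143],
    low=130, high=150, target_mean=140, allowed_shift=2)
```

The literature cited in the text should guide the real choices of batch size, truncation limits, and allowed shift for a given test.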
The median of patient data, rather than the mean, may be used. Medians are more resistant to outlier
effects, but require a large number of patient results.51
Usually, reference intervals are used within a laboratory to provide information for evaluation of
individual patient data. Here, periodic reevaluation of reference intervals is proposed to validate test
procedure stability within a laboratory and to verify agreement between laboratories.52-56 For effectiveness
of this approach, the original reference interval determination must be robust and clinically appropriate
for the population served by the laboratory, and the new sample must represent the same reference
population with the same preexamination parameters. The general approach is to obtain test procedure
results for a minimum of 20 subjects (see CLSI/NCCLS document C28).57 By nonparametric analysis, a
finding that 18 of the 20 results fall within the original reference interval validates the continuing
applicability of that interval, with an approximate 7% probability of false rejection of the existing
reference interval. If the
criteria are not met, a second set of 20 samples should be obtained and the evaluation repeated. Failure to
validate the reference interval would initiate a more detailed investigation to determine if the cause was
due to problems with the analytic test procedure, with preexamination conditions for sample collection
and handling, or with sampling the appropriate healthy population.
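The quoted false-rejection probability can be checked directly from the binomial distribution: if the original interval truly covers 95% of the reference population, the chance that more than 2 of 20 new results fall outside it is about 7.5%. A sketch (the function name is ours):

```python
from math import comb

def false_rejection_prob(n=20, max_outside=2, coverage=0.95):
    """P(more than max_outside of n results from the reference population
    fall outside an interval covering `coverage` of that population)."""
    p_out = 1 - coverage
    p_pass = sum(comb(n, k) * p_out ** k * (1 - p_out) ** (n - k)
                 for k in range(max_outside + 1))
    return 1 - p_pass

false_rejection_prob()  # about 0.075, the "approximately 7%" cited above
```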
When larger numbers of results are available (for example, from a computer query over a time period of
weeks or months), histograms of the result distributions can be prepared and compared to previous time
periods and/or to other laboratories’ results. Considerations of outlier identification and exclusion can be
applied to obtain more homogeneous populations for purposes of comparing stable population groups.
Several statistical approaches have been described to extract adequate reference interval data from
hospitalized and clinic patient populations that can be used for AAPs.28,48,52-56,58
Reevaluation of interpreted results by a second person provides a process to confirm correct test
performance. This approach is applicable to: morphologic analyses, electrophoretic patterns,
chromatographic patterns, etc. This approach can identify discrepancies in interpretation and in technical
quality of the morphologic or other pattern being evaluated.
For evaluation of morphologic analyses, glass slides or electronic images can be reviewed as unknowns
by testing personnel, with subsequent educational feedback, if necessary, to achieve uniformity of
interpretation.
Personnel performing technique-dependent tests (eg, sweat test, bleeding time) may be observed by
experienced senior analysts or supervisors. A checklist delineating the factors to be observed should be
used in the evaluation. For more information on competence assessment, refer to CLSI/NCCLS document
GP21.59
Clinical correlation studies have limited application to routine test assessment, because of the imperfect
correlation of clinical events to laboratory results, as well as biases that may be operating (eg, test referral
bias, disease classification bias). However, correlation studies may be useful in certain circumstances,
when a specific disorder can be diagnosed or strongly suggested if the laboratory test result exceeds a
threshold value, and the presence of the disorder can be independently determined at a reasonable point in
time after testing.
Culture of attenuated strains or morphologically similar organisms can be used as an AAP for culture of
dangerous organisms.
References

1. Shahangian S. Proficiency testing in laboratory medicine. Arch Pathol Lab Med. 1998;122:15-30.

2. Schrot RJ, Foulis PR, Morrison AD, Farese RV. A computerized model for home glucose monitoring proficiency testing: efficacy of an innovative testing program. Diabetes Educator. 1999;25:48-55.

3. Department of Health and Human Services, Health Care Financing Administration. Clinical laboratory improvement amendments of 1988; final rule. Federal Register. 1992(Feb 28):7139-7186 [42 CFR 493.801-494.51].

4. Directive of the German Medical Association on quality assurance of quantitative medical laboratory tests. J Germ Med Assoc. 2002;17:A1187.

5. ISO. Proficiency testing by interlaboratory comparisons. Part 1. Development and operation of proficiency testing schemes. ISO/IEC Guide 43-1. Geneva: International Organization for Standardization; 1997.

6. ILAC. Guidelines for the Requirements for the Competence of Providers of Proficiency Testing Schemes. ILAC G13:2007. Sydney, Australia: International Laboratory Accreditation Cooperation; 2007.

7. ISO/IEC. General requirements for the competence of testing and calibration laboratories. ISO/IEC 17025. Geneva: International Organization for Standardization; 2005.

8. ISO. In vitro diagnostic medical devices – Measurement of quantities in biological samples – Metrological traceability of values assigned to calibrators and control materials. ISO 17511. Geneva: International Organization for Standardization; 2003.

9. ISO. Accuracy (trueness and precision) of measurement methods and results – Part 1: General principles and definitions. ISO 5725-1. Geneva: International Organization for Standardization; 1994.

10. ISO. Terms and definitions used in connection with reference materials. ISO Guide 30. Geneva: International Organization for Standardization; 1992.

11. ISO. International vocabulary of metrology — Basic and general concepts and associated terms. ISO/IEC Guide 99. Geneva: International Organization for Standardization; 2007.

12. ISO. Statistics – Vocabulary and symbols – Part 1: Probability and general statistical terms. ISO 3534-1. Geneva: International Organization for Standardization; 2006.

13. ISO. In vitro diagnostic medical devices – Measurement of quantities in samples of biological origin – Description of reference materials. ISO 15194. Geneva: International Organization for Standardization; 2002.

14. ISO. In vitro diagnostic test systems – Requirements for blood-glucose monitoring systems for self-testing in managing diabetes mellitus. ISO 15197. Geneva: International Organization for Standardization; 2003.

15. ISO. Clinical laboratory testing and in vitro medical devices – Requirements for in vitro monitoring systems for self-testing of oral anticoagulant therapy. ISO 17593. Geneva: International Organization for Standardization; 2007.

16. ISO. Quality management systems – Fundamentals and vocabulary. ISO 9000. Geneva: International Organization for Standardization; 2000.

17. ISO. International vocabulary of basic and general terms in metrology. Geneva: International Organization for Standardization; 1993.

18. CLSI. Using Proficiency Testing to Improve the Clinical Laboratory; Approved Guideline—Second Edition. CLSI document GP27-A2. Wayne, PA: Clinical and Laboratory Standards Institute; 2007.

19. Ross JW, Miller WG, Myers GL, Praestgaard J. The accuracy of laboratory measurements in clinical chemistry. Arch Pathol Lab Med. 1998;122:587-608.

20. Isenberg HD, D’Amato RF. Does proficiency testing meet its objective? J Clin Microbiol. 1996;34:2643-2644.

21. ASTM. Standard Guide for Proficiency Testing by Interlaboratory Comparisons. ASTM E1301-95. West Conshohocken, PA: ASTM; 1995.

22. Standards for an interlaboratory (proficiency) testing program. Am J Clin Pathol. 1976;66:276-278.

23. Steindel SJ, Howanitz PJ, Renner SW. Reasons for proficiency testing failures in clinical chemistry and blood gas analysis. Arch Pathol Lab Med. 1996;120:1094-1101.

24. Laessig RH, Ehrmeyer SS, Lanphear BJ, et al. Limits of proficiency testing under CLIA ’67. Clin Chem. 1992;38:1237-1244.

25. Lapworth R, Teal TK. Laboratory blunders revisited. Ann Clin Biochem. 1994;31:78-84.

26. Miller WG. Specimen materials target values and commutability for external quality assessment (proficiency testing) schemes. Clin Chim Acta. 2003;327:25-27.
27. Shahangian S, Cohn RD, Gaunt EE, Krolak JM. System to monitor a portion of the total testing process in medical clinics and laboratories: evaluation of a split-specimen design. Clin Chem. 1999;45:269-280.

28. Cembrowski GS. The pursuit of quality in clinical laboratory analyses. Clin Chem. 1990;36:1602-1604.

29. Westgard JO, Seehafer JJ, Barry PL. Allowable imprecision for laboratory tests based on clinical and analytical test outcome criteria. Clin Chem. 1994;40:1909-1914.

30. Klee G. A conceptual model for establishing tolerance limits for analytic bias and imprecision based on variations in population test distributions. Clin Chim Acta. 1997;260:175-188.

31. Klee GG, Schryver PG, Kisabeth RM. Analytic bias specifications based on the analysis of effects on performance of medical guidelines. Scand J Clin Lab Invest. 1999;59:509-512.

32. Douville PD, Cembrowski GS, Strauss JF. Evaluation of the average of patients: application to endocrine assays. Clin Chim Acta. 1987;167:173-185.

33. Cembrowski GS. Thoughts on quality-control systems: a laboratorian’s perspective. Clin Chem. 1997;43:886-892.

34. Shahangian S, Cohn RD. Variability of laboratory test results. Am J Clin Pathol. 2000;113:521-527.

35. Dorsey DB. Quality control in hematology. Am J Clin Pathol. 1963;40:457-464.

36. Waid ME, Hoffmann RG. The quality control of laboratory precision. Am J Clin Pathol. 1955;25:585-594.

37. Cembrowski GS. Use of patient data for quality control. Clin Lab Med. 1986;6:715-733.

38. Cembrowski GS, Lunetzky ES, Patrick CC, Wilson MK. An optimized quality control procedure for hematology analyzers with the use of retained patient specimens. Am J Clin Pathol. 1988;89:203-210.

39. Bull BS, Elashoff RM, Heilbron DC, et al. A study of various estimators for the deviation of quality control procedures from patient erythrocyte indices. Am J Clin Pathol. 1974;61:473-481.

40. Cembrowski GS, Westgard JO. Quality control of multichannel hematology analyzers: evaluation of Bull’s algorithm. Am J Clin Pathol. 1985;83:337-345.

41. Dixon K, Northam BE. Quality control using the daily mean. Clin Chim Acta. 1970;30:453-461.

42. Begtrup H, Leroy S, Thyregod P, Wallöe-Hansen P. ‘Average of normals’ used as control of accuracy and a comparison with other controls. Scand J Clin Lab Invest. 1971;27:247-253.

43. Talamo TS, Losos FJ, Gebhardt WD, Kessler GF. Microcomputer assisted hematology quality control using a modified average of normals program. Am J Clin Pathol. 1981;76:707-712.

44. Cembrowski GS, Chandler EP, Westgard JO. Assessment of “average of normals” quality control procedures and guidelines for implementation. Am J Clin Pathol. 1984;81:492-499.

45. Woo J, LeFever D, Winkelman JW. Use of “average of normals” quality control procedure in the detection and resolution of assay discrepancies. Am J Clin Pathol. 1988;89:125-129.

46. Westgard JO, Smith FA, Mountain PJ, Boss S. Design and assessment of average of normals (AON) patient data algorithms to maximize run lengths for automatic process control. Clin Chem. 1996;42:1683-1688.

47. Parvin CA, Gronowski AM. Effect of analytical run length on quality control (QC) performance and the QC planning process. Clin Chem. 1997;43:2149-2154.

48. Ye JJ, Ingels SC, Parvin CA. Performance evaluation and planning for patient-based quality control procedures. Am J Clin Pathol. 2000;113:240-248.

49. Kazmierczak SC. Laboratory quality control: using patient data to assess analytical performance. Clin Chem Lab Med. 2003;41:617-627.

50. CLSI/NCCLS. Calibration and Quality Control of Automated Hematology Analyzers; Proposed Standard. CLSI/NCCLS document H38-P. Wayne, PA: NCCLS; 1999.

51. Lott JA, Smith DA, Mitchell LC, Moeschberger ML. Use of medians and “average normals” of patients’ data for assessment of long-term analytical stability. Clin Chem. 1996;42(6 Pt 1):888-892.

52. Amador E, Hsi BP. Indirect methods for estimating the normal range. Am J Clin Pathol. 1969;52:538-546.

53. Harwood SJ, Cole GW. Reference values based on hospital admission laboratory data. JAMA. 1978;240:270-274.
54. Baadenhuijsen H, Smit JC. Indirect estimation of clinical chemical reference intervals from total hospital patient data: application of a modified Bhattacharya procedure. J Clin Chem Clin Biochem. 1985;23:829-839.

55. Swaanenburg JC, Rutten WP, Holdrinet AC, van Strik R. The determination of reference values for hematologic parameters using results obtained from patient populations. Am J Clin Pathol. 1987;88:182-191.

56. Kouri T, Kairisto V, Virtanen A, et al. Reference intervals developed from data for hospitalized patients: computerized method based on combination of laboratory and diagnostic data. Clin Chem. 1994;40:2209-2215.

57. CLSI/NCCLS. How to Define and Determine Reference Intervals in the Clinical Laboratory; Approved Guideline—Second Edition. CLSI/NCCLS document C28-A2. Wayne, PA: NCCLS; 2000.

58. Horn PS, Pesce AJ, Copeland BE. A robust approach to reference interval estimation and evaluation. Clin Chem. 1998;44:622-631.

59. CLSI/NCCLS. Training and Competence Assessment; Approved Guideline—Second Edition. CLSI document GP21-A2. Wayne, PA: NCCLS; 2004.

60. CLSI. User Protocol for Evaluation of Qualitative Test Performance; Approved Guideline—Second Edition. CLSI document EP12-A2. Wayne, PA: Clinical and Laboratory Standards Institute; 2008.
In external split-sample procedures, usually the outside laboratory (or laboratories) uses the same method,
but this is not necessary as long as the relationship between the methods is known. The laboratories
should understand any differences, including differences in specificity (interferences) between the
methods (see CLSI document EP07 and CLSI/NCCLS documents EP09 and EP21).1-3
It is important that the laboratories involved have agreed-on criteria for test assessment before initiating
the split testing. This agreement should include the following:
The laboratories should retain the results so that, eventually, comparisons will cover a wide range of
concentrations. The laboratories can assess agreement across this range by plotting the results on a two-
dimensional graph, with the result from the laboratory of higher authority (if either is so designated*) on
the horizontal (X) axis and the other laboratory’s result on the vertical (Y) axis, and a line of perfect
agreement (Y = X) drawn in the body of the graph. The laboratories may discover that their results tend to
disagree in a predictable way (that perhaps is correctable), or that disagreements are random, scattered on
both sides of the line of perfect agreement. Visual assessment should be adequate to identify predictable
trends; least squares regression analysis may be used to assist in the determination.†
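Such a least squares fit of the retained split-sample results can be sketched as below (the function name and the paired data are hypothetical): agreement is good when the fitted line is close to slope 1 and intercept 0, ie, the line Y = X.

```python
def least_squares_fit(x, y):
    """Ordinary least squares slope and intercept of Y on X, with X holding
    the comparison laboratory's results (horizontal axis)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical paired results from laboratories X and Y:
slope, intercept = least_squares_fit([1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 3.1, 4.0])
```

As the footnote notes, ordinary least squares is adequate for description here; orthogonal (errors-in-both-variables) regression differs little when the results span a broad concentration range.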
It is important that the capabilities of the assessment method are consistent with the desired uncertainty.
For example, if at a certain level, the internal QC data show a typical repeatability (CV) of 2%, then the
laboratory can be sure only that real changes of 4% or more (approximately, depending on the QC rules
used internally) will be reflected in test results on single samples. If for clinical reasons, it is important to
detect a change of 3%, then the method is not adequate (it is said to have insufficient power to detect
changes of 3%). In such a case, the laboratory can perform the test in duplicate or triplicate and use the
mean as the test result. Because the internal QC CV of a single assay in this example is 2%, the mean of
two test results has a CV of 1.4% (the CV of a single assay divided by √N, here √2), and thus the use of
duplicate testing provides the laboratory with reasonable power to detect real changes of 3% or more.
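The replicate-averaging rule above is simply CVmean = CVsingle/√N; a one-line sketch:

```python
import math

def cv_of_mean(cv_single, n):
    """CV of the mean of n independent replicate results: CV_single / sqrt(n)."""
    return cv_single / math.sqrt(n)

cv_of_mean(2.0, 2)  # about 1.41%, matching the duplicate-testing example above
```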
* Traditionally, the horizontal axis is used for the independent variable and the vertical axis for the dependent variable. If one of the laboratories is considered of “higher authority,” then in this context, it is considered the comparison laboratory, and its values should be placed on the horizontal axis.
† Technically, orthogonal regression should be used, since both sets of results have variance in them, but this may not be necessary for the purpose at hand, and orthogonal regression procedures are not widely available. Least squares regression can be used for basic descriptive purposes if the test samples cover a broad range of concentrations. Differences between least squares regression and orthogonal regression are minimal if the tests cover a broad range of concentrations.
Appendix A. (Continued)
The following example demonstrates the use of a formula to calculate the allowance for the difference
(confidence interval) between results from two procedures or two laboratories, based on data that have
been accumulated over several years of split-sample testing. For this analysis to be valid, the testing
systems in each laboratory must have been stable over the period of data accumulation.
Two laboratories perform a test for serum antibody IgZ1. Occasionally, they send samples to each other
for validation of the performance of the test, and each laboratory performs each test in duplicate. Table
A1 shows the results for 18 samples tested by the two laboratories. For convenience, they are ordered by
concentration.
Table A1. IgZ1 Results From Laboratories X and Y. Results on duplicate samples for 18 patients,
arranged in ascending antibody levels. Actual differences between laboratory averages and the allowed
difference are noted.
Assume that the laboratories have assessed their tests in-house by performing routine replicates on patient
samples. Laboratory X is more experienced with the test, and serves as the comparison laboratory for this
test. The SDs from the in-house repeat tests show that in Laboratory X, the repeatability (precision) CV is
approximately 10% across the range of concentrations tested. In Laboratory Y, the repeatability CV is
approximately 12%. Further, a published study suggests that the interlaboratory variability is a CV of
approximately 15%, which will be used as the clinically acceptable agreement.
‡ Calculated as shown by formula (1) below.
Note that repeatability variances can come from QC data or published method capability sources as in this
example.§ If QC data are used, σX2 and σY2 equal the square of the SD of the applicable internal QC data.
Repeatability variances can also be estimated from the split-sample data, if multiple replicates are used.
The laboratory repeatability is calculated from the pooled proportional difference between replicates,
(Rep1 − Rep2)/[(Rep1 + Rep2)/2]. This ratio is squared, the squared terms are then summed over all
samples, and the sum is divided by the number of samples minus 1 to give σX2 or σY2, respectively. Similarly, the
interlaboratory variance could be estimated by first calculating the sum of squared differences, divided by
the number of samples minus 1. This is the variance of the differences; the interlaboratory variance would
then be estimated by subtracting the combined repeatability variance (S2r = (sx2 + sy2)/2). If the
repeatability variance exceeds the variance of differences, the interlaboratory variance is set at zero. If
interlaboratory variance is unknown at the initiation of a split-sample comparison program, or if it can be
assumed that the laboratories should produce equivalent results, interlaboratory variance can be set at
zero.
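The variance estimates described in the preceding paragraph can be sketched in Python. The function names and sample values below are illustrative, not from the guideline:

```python
def repeatability_variance(rep1, rep2):
    """Pooled proportional (CV-based) repeatability variance:
    square the proportional difference between duplicate results on
    each sample, sum over samples, divide by (number of samples - 1)."""
    ratios = [(a - b) / ((a + b) / 2) for a, b in zip(rep1, rep2)]
    return sum(r * r for r in ratios) / (len(ratios) - 1)


def interlaboratory_variance(lab_x, lab_y, s2x, s2y):
    """Variance of the between-laboratory proportional differences,
    minus the combined repeatability variance S2r = (s2x + s2y)/2.
    Set to zero if repeatability exceeds the variance of differences."""
    diffs = [(x - y) / ((x + y) / 2) for x, y in zip(lab_x, lab_y)]
    var_diff = sum(d * d for d in diffs) / (len(diffs) - 1)
    s2r = (s2x + s2y) / 2
    return max(var_diff - s2r, 0.0)
```

With hypothetical paired results from the two laboratories and their repeatability variances, `interlaboratory_variance` returns the excess between-laboratory variance, or 0.0 when the variance of differences is fully explained by repeatability.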
If the estimates of variability had not been available elsewhere, they could have been estimated from the
data in Table A1.
[Formula (1)]
Allowed difference D = z1−α/2 × √( σI² + σX²/nX + σY²/nY )

where σI² is the interlaboratory variance, σX² and σY² are the repeatability variances of Laboratories X
and Y, and nX and nY are the numbers of replicates in each laboratory.
If the difference between any pair of individual results lies within ± D, the difference is not statistically
significant, and the results can be considered equivalent for the individual sample.
§ Refer to the first part of Section 5 for further suggestions of ways to define limits of acceptability for variance in AAPs.
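Formula (1) combines the interlaboratory and repeatability variances. A minimal sketch, using the example CVs given earlier (10% and 12% repeatability, 15% interlaboratory) as proportional SDs; the function name is illustrative:

```python
from math import sqrt

def allowed_difference(z, var_i, var_x, var_y, n_x=1, n_y=1):
    """Formula (1): allowed difference D between a pair of results.
    Here the variances are CV^2, so D is on a proportional scale
    (a fraction of the measured level)."""
    return z * sqrt(var_i + var_x / n_x + var_y / n_y)

# Example CVs: interlaboratory 15%, Laboratory X 10%, Laboratory Y 12%,
# single measurements in each laboratory, 95% confidence (z = 1.96):
d = allowed_difference(1.96, 0.15**2, 0.10**2, 0.12**2)  # ~0.42
```

On these assumed figures, a pair of results differing by more than about 42% of the measured level would exceed D and be flagged for investigation.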
At this level of antibody, a difference of up to 900 would be expected. Note that Sample 7 was at this
level of antibody, and the observed difference exceeds the allowed difference. Sample 11 shows an even
greater difference.
The following chart shows the averages plotted, with the line of equality and the allowed differences.
Sample 7 barely exceeds the limits, while Sample 11 is quite far from the allowed difference. Both
samples should be investigated for anomalies.
[Figure: Averaged results for each sample plotted as Laboratory Y versus Laboratory X (0 to 16 000 on
both axes), with the line of equality and the allowed-difference limits. Sample 7 lies just outside the
limits; Sample 11 lies well outside them.]
The middle line represents perfect agreement; the upper and lower lines represent the allowed difference
for results.
Calculations are not shown here, but for these samples, estimates of variance can be obtained as follows:
The formula for a confidence interval (CI) at any level L can be expressed as:
[Formula (2)]

CI = L + B ± z1−α/2 × √( σ²/n )
where:
z1-α/2 = percentile of the standard normal distribution for stated confidence level 1-α (eg, 95% or 99%)
L = concentration level of interest (usually the level measured by the external reference method)
B = known bias or difference between test method and reference method (often B=0)
σ2 = variance of the test method at level L
n = number of replicates used in verification procedure
If the external test result lies within the confidence interval, then the method is considered to have passed
the verification at that level.
For example, laboratories A and B have performed split-sample testing on 29 samples† over the past
several years, with the following result:
                          Laboratory A
                   Negative        Positive        Total
Laboratory B
   Negative        9 (31.0%)       5 (17.2%)       14 (48.3%)
   Positive        1 (3.5%)        14 (48.3%)      15 (51.7%)
   Total           10 (34.5%)      19 (65.5%)      29 (100%)
Using the data from the table (and multiplying the percentage figures by 0.01), the observed agreement is
po = 0.310 + 0.483 = 0.793, and the chance-expected agreement is pe = (0.483 × 0.345) + (0.517 × 0.655)
= 0.505, giving kappa = (0.793 − 0.505)/(1 − 0.505) = 0.58.
With this size sample, kappa = 0.58 indicates that the agreement between laboratories is not extremely
strong, but is greater than could have occurred by chance alone.
Kappa is useful for comparing agreement on different tests, agreement between different laboratories, or
for tracking changes over time. This can assist in finding causes of disagreement.
* From: Dunn G, Everitt BS. Clinical Biostatistics: An Introduction to Evidence Based Medicine. London, UK: Edward Arnold;
1995. Reproduced in adapted form by permission of Edward Arnold (Publishers) Ltd.
† NOTE: Use of a smaller sample size (eg, 10 samples) is valid if the samples include a fairly equal distribution of positive and
negative results.
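The kappa value above can be reproduced directly from the counts in the table. A sketch (the function name is illustrative):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table of counts, where
    table[i][j] = number of samples Laboratory B called category i
    and Laboratory A called category j."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of samples on the diagonal
    p_obs = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement by chance, from the row and column totals
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_exp = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Counts from the 2 x 2 table above:
kappa = cohens_kappa([[9, 5], [1, 14]])  # ~0.58
```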
Clinical and Laboratory Standards Institute consensus procedures include an appeals process that
is described in detail in Section 8 of the Administrative Procedures. For further information,
contact CLSI or visit our website at www.clsi.org.
Section 7.1.1, Split Sample with Another Laboratory (now Section 5.1.1)
1. The GP29-A guideline describes, in Section 7.1.1, "split sample with another laboratory" as an "alternative
assessment procedure" when proficiency testing is not available. What is the difference between this split-
sample procedure and the "split-sample testing scheme" described in Clause 4.4 of ISO/IEC Guide 43-1,
Proficiency testing by interlaboratory comparisons – Part 1: Development and operation of proficiency testing
schemes?
• They are very similar. This may be confusing because in ISO/IEC Guide 43-1, it is mentioned as a type of
proficiency testing (PT), while in GP29, it is mentioned as an alternative to PT.
The actual design, operation, and analysis of the schemes are essentially the same, except that the
GP29-A protocol does not assume any evaluation of the suitability of a particular laboratory's
performance, which ISO/IEC Guide 43-1 does assume. For that reason, ISO/IEC Guide 43-1 also assumes
that one laboratory will act as a coordinator and will follow guidelines for sample preparation, handling,
competence of staff, etc. This is not necessary for a GP29-type split-sample protocol.
2. I’m currently looking for statistical methods used in analyzing results from split-sample experiments between
two laboratories. One of the guides I’ll use as a reference is the CLSI GP29-A guideline. I have one question
regarding the statistical formula used on page 18 for the calculation of the “allowed difference D” (Appendix
A). What is the origin of that formula (ie, how was it derived)? Is it a common formula that can be found in
statistics textbooks or was it developed just for the purpose of the guideline (laboratory assessment)?
• The formula was developed for this document, and did not come from a textbook or published article, as
there were no published references for guidance with split-sample testing. However, Formula 1 is a very
conventional procedure for combining independent variances, and is consistent with traditional statistical
practices. There may be more statistically rigorous procedures, but the subcommittee favored simplicity.
Also, it may be preferable to determine acceptable differences based on clinical need rather than
statistics, but that is up to the user.
In reviewing the appendix, two errors were discovered and have been corrected in the second edition of
the guideline (ie, the formula for S2r has incorrect placement of the right parenthesis; and above Formula
1, the text “Formula XYZ” should say “Formula 1”).
1. This section could be expanded to include the medians of patient data. The medians are more resistant to outlier
effects, but require large N. See Lott JA, Smith DA, Mitchell LC, Moeschberger ML. Use of medians and "average
normals" of patients' data for assessment of long-term analytical stability. Clin Chem. 1996;42(6 Pt 1):888-892.
• In response to the commenter’s recommendation, the following text has been added to Section 5.5.1:
“The median of patient data, rather than the mean, may be used. Medians are more resistant to outlier
effects, but require a large number of patient results.”
Lott JA, Smith DA, Mitchell LC, Moeschberger ML. Use of medians and "average normals" of patients'
data for assessment of long-term analytical stability. Clin Chem. 1996;42(6 Pt 1):888-892.
GP29-A2 addresses the QSEs indicated by an “X.” For a description of the other documents listed in the grid, please
refer to the Related CLSI Reference Materials section on the following page.
[QSE grid not reproduced in full. Column headings (quality system essentials): Organization; Personnel;
Equipment; Purchasing & Inventory; Process Control; Documents & Records; Occurrence Management;
Assessments—External and Internal; Process Improvement; Customer Service; Facilities & Safety;
Information Management. Documents appearing in the grid: C28, EP07, EP09, EP12, GP21, GP27, and H38.]
Adapted from CLSI/NCCLS document HS01—A Quality Management System Model for Health Care.
EP07-A2 Interference Testing in Clinical Chemistry; Approved Guideline—Second Edition (2005). This document
provides background information, guidance, and experimental procedures for investigating, identifying, and
characterizing the effects of interfering substances on clinical chemistry test results.
EP09-A2 Method Comparison and Bias Estimation Using Patient Samples; Approved Guideline—Second Edition
(2002). This document addresses procedures for determining the bias between two clinical methods, and for
the design of a method comparison experiment using split patient samples and data analysis.
EP12-A2 User Protocol for Evaluation of Qualitative Test Performance; Approved Guideline—Second Edition
(2008). This document provides a consistent approach for protocol design and data analysis when evaluating
qualitative diagnostic tests. Guidance is provided for both precision and method-comparison studies.
GP21-A2 Training and Competence Assessment; Approved Guideline—Second Edition (2004). This document
provides background information and recommended processes for the development of training and
competence assessment programs that meet quality/regulatory objectives.
GP27-A2 Using Proficiency Testing to Improve the Clinical Laboratory; Approved Guideline—Second Edition
(2007). This guideline provides assistance to laboratories in using proficiency testing as a quality
improvement tool.
H38-P Calibration and Quality Control of Automated Hematology Analyzers; Proposed Standard (1999). This
document addresses calibration and quality control strategies for multichannel hematology analyzers;
assignment of values to calibrator materials; calibration using stabilized blood controls; internal quality
control; pair difference analysis; and use of the weighted moving average method. A joint project with ICSH.
∗ Proposed-level documents are being advanced through the Clinical and Laboratory Standards Institute consensus process;
therefore, readers should refer to the most current editions.
Active Membership
(as of 1 July 2008)
[Multi-column membership roster not reproduced here. The original pages list the Institute's Sustaining
Members, Professional Members, Government Members, Industry Members, Trade Associations, and
Associate Active Members.]
Dr. Erfan & Bagedo General The Hospital for Sick Children Mease Countryside Hospital (FL) Prairie Lakes Hospital (SD)
Infobase 2016. This document is licensed for use on a single device, and is not to be distributed for network access.
Hospital (Saudi Arabia) (Canada) Medecin Microbiologiste (Canada) Presbyterian Hospital of Dallas (TX)
This document is protected by international copyright laws.
Presbyterian/St. Luke’s Medical Center (CO)
Prince County Hospital (Canada)
Princess Margaret Hospital (Hong Kong, China)
Providence Alaska Medical Center (AK)
Providence Health Care (Canada)
Providence Medford Medical Center (OR)
Provincial Health Services Authority (Vancouver, BC, Canada)
Provincial Laboratory for Public Health (Edmonton, AB, Canada)
Queen Elizabeth Hospital (Canada)
Queen Elizabeth Hospital (China)
Queensland Health Pathology Services (Australia)
Quest Diagnostics, Inc.
Quest Diagnostics JV (IN, OH, PA)
Quest Diagnostics Laboratories (WA)
Quincy Hospital (MA)
Rady Children’s Hospital San Diego (CA)
Redington-Fairview General Hospital (ME)
Regional Health Authority Four (RHA4) (Canada)
Regions Hospital (MN)
Reid Hospital & Health Care Services (IN)
Renown Regional Medical Center (NV)
Research Medical Center (MO)
Riverside Regional Medical Center (VA)
Riyadh Armed Forces Hospital, Sulaymainia (Saudi Arabia)
Rockford Memorial Hospital Assn. (IL)
Roxborough Memorial Hospital (PA)
Royal Victoria Hospital (Canada)
Rush North Shore Medical Center (IL)
SAAD Specialist Hospital (Saudi Arabia)
Sacred Heart Hospital (FL)
Sacred Heart Hospital (WI)
Sahlgrenska Universitetssjukhuset (Sweden)
Saint Francis Hospital & Medical Center (CT)
Saint Mary's Regional Medical Center (NV)
Saints Memorial Medical Center (MA)
St. Anthony Hospital (OK)
St. Anthony’s Hospital (FL)
St. Barnabas Medical Center (NJ)
St. Christopher’s Hospital for Children (PA)
St. Elizabeth Community Hospital (CA)
St. Eustache Hospital (Canada)
St. Francis Hospital (SC)
St. John’s Hospital (IL)
St. John’s Hospital & Health Ctr. (CA)
St. John’s Mercy Medical Center (MO)
St. John’s Regional Health Center (MO)
St. Joseph Health Center (MO)
St. Joseph Mercy – Oakland (MI)
St. Joseph Mercy Hospital (MI)
St. Joseph’s Hospital (FL)
St. Joseph’s Medical Center (CA)
St. Joseph’s Regional Medical Center (NJ)
St. Jude Children’s Research Hospital (TN)
St. Louis University Hospital (MO)
St. Luke’s Hospital (FL)
St. Luke’s Hospital (IA)
St. Luke’s Hospital (PA)
St. Martha’s Regional Hospital (Canada)
St. Mary Medical Center (CA)
St. Mary's Hospital (WI)
St. Michael’s Hospital Diagnostic Laboratories & Pathology (Canada)
St. Tammany Parish Hospital (LA)
Sampson Regional Medical Center (NC)
Samsung Medical Center (Korea)
San Francisco General Hospital-University of California San Francisco (CA)
Sanford USP Medical Center (SD)
SARL Laboratoire Carron (France)
Saudi Aramco Medical (Saudi Arabia)
Schneck Medical Center (IN)
Scott & White Memorial Hospital (TX)
Scott Air Force Base (IL)
Seoul National University Hospital (Korea)
Seton Medical Center (CA)
Shamokin Area Community Hospital (PA)
Sheik Kalifa Medical City (UAE)
Shore Memorial Hospital (NJ)
Shriners Hospitals for Children (SC)
Singapore General Hospital (Singapore)
SJRMC Plymouth Laboratory (IN)
Sky Lakes Medical Center (OR)
South Bend Medical Foundation (IN)
South Miami Hospital (FL)
Southern Health Care Network (Australia)
Southern Maine Medical Center (ME)
Speare Memorial Hospital (NH)
Spectrum Health - Blodgett Campus (MI)
Stanford Hospital and Clinics (CA)
State of Connecticut Department of Public Health (CT)
State of Hawaii Department of Health (HI)
State of Washington-Public Health Labs (WA)
Stillwater Medical Center (OK)
Stony Brook University Hospital (NY)
Sudbury Regional Hospital (Canada)
Suncoast Medical Clinic (FL)
Sunnybrook Health Science Center (ON, Canada)
Sunrise Hospital and Medical Center (NV)
Sydney South West Pathology Service Liverpool Hospital (Australia)
T.J. Samson Community Hospital (KY)
Taipei Veterans General Hospital (Taiwan)
Taiwan Society of Laboratory Medicine (Taiwan)
Tallaght Hospital (Ireland)
Tartu University Clinics (Estonia)
Texas Children's Hospital (TX)
Texas Department of State Health Services (TX)
Thomason Hospital (TX)
Timmins and District Hospital (Canada)
The Toledo Hospital (OH)
Touro Infirmary (LA)
Tri-Cities Laboratory (WA)
Trident Medical Center (SC)
Trinity Medical Center (AL)
Tripler Army Medical Center (HI)
Tufts New England Medical Center Hospital (MA)
Tulane Medical Center Hospital & Clinic (LA)
Turku University Central Hospital (Finland)
UC Davis Health System (CA)
UCI Medical Center (CA)
UCLA Medical Center Clinical Laboratories (CA)
UCSD Medical Center (CA)
UCSF Medical Center China Basin (CA)
UMass Memorial Medical Center (MA)
UMC of Southern Nevada (NV)
UNC Hospitals (NC)
Union Clinical Laboratory (Taiwan)
United Christian Hospital (Hong Kong)
United Clinical Laboratories (IA)
Unity HealthCare (IA)
Universita Campus Bio-Medico (Italy)
Universitair Ziekenhuis Antwerpen (Belgium)
University College Hospital (Ireland)
University Hospital Center Sherbrooke (CHUS) (Canada)
University Medical Center at Princeton (NJ)
University of Alabama Hospital Lab (AL)
University of Arkansas for Medical Sci. (AR)
University of Chicago Hospitals (IL)
University of Colorado Health Sciences Center (CO)
University of Colorado Hospital (CO)
University of Iowa Hospitals and Clinics (IA)
University of Kentucky Med. Ctr. (KY)
University of Maryland Medical System (MD)
University of Miami (FL)
University of MN Medical Center - Fairview (MN)
University of MS Medical Center (MS)
University of So. Alabama Children’s and Women’s Hospital (AL)
University of Texas Health Center (TX)
The University of Texas Medical Branch (TX)
University of the Ryukyus (Japan)
University of Virginia Medical Center (VA)
University of Washington (WA)
UPMC Bedford Memorial (PA)
U.S.A. Meddac (Pathology Division) (MO)
UW Hospital (WI)
UZ-KUL Medical Center (Belgium)
VA (Asheville) Medical Center (NC)
VA (Chillicothe) Medical Center (OH)
VA (Cincinnati) Medical Center (OH)
VA (Dallas) Medical Center (TX)
VA (Dayton) Medical Center (OH)
VA (Decatur) Medical Center (GA)
VA (Hines) Medical Center (IL)
VA (Indianapolis) Medical Center (IN)
VA (Long Beach) Medical Center (CA)
VA (Miami) Medical Center (FL)
VA New Jersey Health Care System (NJ)
VA Outpatient Clinic (OH)
VA (San Diego) Medical Center (CA)
VA (Seattle) Medical Center (WA)
VA (Sheridan) Medical Center (WY)
Valley Health (VA)
Vancouver Hospital and Health Sciences Center (BC, Canada)
Vancouver Island Health Authority (SI) (Canada)
Vanderbilt University Medical Center (TN)
Via Christi Regional Medical Center (KS)
Virga Jesseziekenhuis (Belgium)
Virtua - West Jersey Hospital (NJ)
WakeMed (NC)
Walter Reed Army Medical Center (DC)
Warren Hospital (NJ)
Washington Hospital Center (DC)
Waterbury Hospital (CT)
Waterford Regional Hospital (Ireland)
Wayne Memorial Hospital (NC)
Weirton Medical Center (WV)
Wellstar Douglas Hospital Laboratory (GA)
Wellstar Paulding Hospital (GA)
Wellstar Windy Hill Hospital Laboratory (GA)
West China Second University Hospital, Sichuan University (P.R. China)
West Valley Medical Center Laboratory (ID)
Western Baptist Hospital (KY)
Western Healthcare Corporation (Canada)
Wheaton Franciscan & Midwest Clinical Laboratories (WI)
Wheeling Hospital (WV)
William Beaumont Army Medical Center (TX)
William Beaumont Hospital (MI)
William Osler Health Centre (Canada)
Winchester Hospital (MA)
Winn Army Community Hospital (GA)
Wisconsin State Laboratory of Hygiene (WI)
Wishard Health Sciences (IN)
Womack Army Medical Center (NC)
Woodlawn Hospital (IN)
York Hospital (PA)
Explore the Latest Offerings From CLSI!
As we continue to set the global standard for quality in laboratory testing, we are adding products and
programs to bring even more value to our members and customers.
E: [email protected] www.clsi.org