
Sensitivity and specificity

Sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives:

Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive.
Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative.

If the true status of the condition cannot be known, sensitivity and specificity can be defined relative to a "gold standard test" which is assumed correct. For all testing, both diagnostic and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivities will mean lower specificities and vice versa.

A test which reliably detects the presence of a condition, resulting in a high number of true positives and a low number of false negatives, will have a high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects.

A test which reliably excludes individuals who do not have the condition,
resulting in a high number of true negatives and low number of false
positives, will have a high specificity. This is especially important when
people who are identified as having a condition may be subjected to more
testing, expense, stigma, anxiety, etc.

The terms "sensitivity" and "specificity" were introduced by American


biostatistician Jacob Yerushalmy in 1947.[1]

There are different definitions within laboratory quality control, wherein "analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymously, the detection limit), and "analytical specificity" is defined as the ability of an assay to measure one particular organism or substance, rather than others.[12] However, this article deals with diagnostic sensitivity and specificity as defined at top.

[Figure: The left half of the image, with solid dots, represents individuals who have the condition; the right half, with hollow dots, represents individuals who do not. The circle represents all individuals who tested positive.]

Application to screening study


Imagine a study evaluating a test that screens people for a disease. Each
person taking the test either has or does not have the disease. The test
outcome can be positive (classifying the person as having the disease) or
negative (classifying the person as not having the disease). The test results
for each subject may or may not match the subject's actual status. In that
setting:

True positive: Sick people correctly identified as sick
False positive: Healthy people incorrectly identified as sick
True negative: Healthy people correctly identified as healthy
False negative: Sick people incorrectly identified as healthy
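
As an illustration, the following Python sketch (not part of the original study description; the labels and counts are invented) tallies the four outcomes from paired actual/predicted labels and derives sensitivity and specificity from them:

```python
# A minimal sketch: True = has the disease (actual) or tested positive (predicted).

def tally_outcomes(actual, predicted):
    """Count true/false positives/negatives from paired boolean labels."""
    tp = sum(a and p for a, p in zip(actual, predicted))          # sick, test positive
    fn = sum(a and not p for a, p in zip(actual, predicted))      # sick, test negative
    fp = sum(not a and p for a, p in zip(actual, predicted))      # healthy, test positive
    tn = sum(not a and not p for a, p in zip(actual, predicted))  # healthy, test negative
    return tp, fp, tn, fn

actual    = [True, True, True, False, False, False]   # hypothetical subjects
predicted = [True, True, False, True, False, False]   # hypothetical test results
tp, fp, tn, fn = tally_outcomes(actual, predicted)
print(f"sensitivity = {tp / (tp + fn):.2f}")  # TP / (TP + FN) = 2/3
print(f"specificity = {tn / (tn + fp):.2f}")  # TN / (TN + FP) = 2/3
```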

After getting the numbers of true positives, false positives, true negatives, and false negatives, the sensitivity and specificity of the test can be calculated. If the sensitivity is high, then any person who has the disease is likely to be classified as positive by the test. On the other hand, if the specificity is high, any person who does not have the disease is likely to be classified as negative by the test. An NIH web site has a discussion of how these ratios are calculated.[13]

Terminology and derivations from a confusion matrix

condition positive (P): the number of real positive cases in the data
condition negative (N): the number of real negative cases in the data
true positive (TP): a test result that correctly indicates the presence of a condition or characteristic
true negative (TN): a test result that correctly indicates the absence of a condition or characteristic
false positive (FP), type I error: a test result which wrongly indicates that a particular condition or attribute is present
false negative (FN), type II error: a test result which wrongly indicates that a particular condition or attribute is absent

Measures derived from these counts (defined in the confusion matrix section below) include: sensitivity, recall, hit rate, or true positive rate (TPR); specificity, selectivity, or true negative rate (TNR); precision or positive predictive value (PPV); negative predictive value (NPV); miss rate or false negative rate (FNR); fall-out or false positive rate (FPR); false discovery rate (FDR); false omission rate (FOR); positive likelihood ratio (LR+); negative likelihood ratio (LR−); prevalence threshold (PT); threat score (TS) or critical success index (CSI); prevalence; accuracy (ACC); balanced accuracy (BA); the F1 score, which is the harmonic mean of precision and sensitivity; the phi coefficient (φ) or Matthews correlation coefficient (MCC); the Fowlkes–Mallows index (FM); informedness or bookmaker informedness (BM); markedness (MK) or deltaP (Δp); and the diagnostic odds ratio (DOR).

Sources: Fawcett (2006),[2] Piryonesi and El-Diraby (2020),[3] Powers (2011),[4] Ting (2011),[5] CAWCR,[6] D. Chicco & G. Jurman (2020, 2021, 2023),[7][8][9] Tharwat (2018),[10] Balayla (2020).[11]

Definition

Sensitivity

Consider the example of a medical test for diagnosing a condition. Sensitivity (sometimes also named the detection rate in a clinical setting) refers to the test's ability to correctly detect ill patients out of those who do have the condition.[14] Mathematically, this can be expressed as:

sensitivity = TP / (TP + FN) = (number of true positives) / (number of individuals who have the condition)

A negative result in a test with high sensitivity can be useful for "ruling out" disease,[14] since it rarely misdiagnoses those who do have
the disease. A test with 100% sensitivity will recognize all patients with the disease by testing positive. In this case, a negative test result
would definitively rule out the presence of the disease in a patient. However, a positive result in a test with high sensitivity is not
necessarily useful for "ruling in" disease. Suppose a 'bogus' test kit is designed to always give a positive reading. When used on
diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false
positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for
detecting or "ruling in" the disease.

The calculation of sensitivity does not take into account indeterminate test results. If a test cannot be repeated, indeterminate samples
either should be excluded from the analysis (the number of exclusions should be stated when quoting sensitivity) or can be treated as
false negatives (which gives the worst-case value for sensitivity and may therefore underestimate it).

A test with a higher sensitivity has a lower type II error rate.

Specificity

Consider the example of a medical test for diagnosing a disease. Specificity refers to the test's ability to correctly reject healthy patients without a condition. Mathematically, this can be written as:

specificity = TN / (TN + FP) = (number of true negatives) / (number of individuals who do not have the condition)

A positive result in a test with high specificity can be useful for "ruling in" disease, since the test rarely gives positive results in healthy
patients.[15] A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result
would definitively rule in the presence of the disease. However, a negative result from a test with high specificity is not necessarily
useful for "ruling out" disease. For example, a test that always returns a negative test result will have a specificity of 100% because
specificity does not consider false negatives. A test like that would return negative for patients with the disease, making it useless for
"ruling out" the disease.

A test with a higher specificity has a lower type I error rate.

Graphical illustration

[Figure: A graphical illustration of sensitivity and specificity. Left panel: high sensitivity and low specificity. Right panel: low sensitivity and high specificity.]

The above graphical illustration is meant to show the relationship between sensitivity and specificity. The black, dotted line in the center
of the graph is where the sensitivity and specificity are the same. As one moves to the left of the black dotted line, the sensitivity
increases, reaching its maximum value of 100% at line A, and the specificity decreases. The sensitivity at line A is 100% because at that
point there are zero false negatives, meaning that all the negative test results are true negatives. When moving to the right, the opposite
applies, the specificity increases until it reaches the B line and becomes 100% and the sensitivity decreases. The specificity at line B is
100% because the number of false positives is zero at that line, meaning all the positive test results are true positives.

In both figures, the middle solid line is the test cutoff point. As previously described, moving this line results in a trade-off between the level of sensitivity and specificity. The left-hand side of this line contains the data points that test below the cutoff point and are considered negative (the blue dots indicate false negatives (FN), the white dots true negatives (TN)). The right-hand side of the line shows the data points that test above the cutoff point and are considered positive (red dots indicate false positives (FP)). Each side contains 40 data points.

For the figure that shows high sensitivity and low specificity, there are 3 FN and 8 FP. Since positive results = true positives (TP) + FP, we get TP = positive results − FP = 40 − 8 = 32. The number of sick people in the data set is TP + FN = 32 + 3 = 35, so the sensitivity is 32 / 35 = 91.4%. Using the same method, TN = 40 − 3 = 37, the number of healthy people is TN + FP = 37 + 8 = 45, and the specificity is 37 / 45 = 82.2%.

For the figure that shows low sensitivity and high specificity, there are 8 FN and 3 FP. Using the same method as for the previous figure, TP = 40 − 3 = 37, the number of sick people is TP + FN = 37 + 8 = 45, and the sensitivity is 37 / 45 = 82.2%. There are TN = 40 − 8 = 32 true negatives, the number of healthy people is TN + FP = 32 + 3 = 35, and the specificity therefore comes out to 32 / 35 = 91.4%.
[Figure: Left: a test result with 100 percent sensitivity. Right: a test result with 100 percent specificity.]

The red dot indicates the patient with the medical condition. The red background indicates the area where the test predicts the data point to be positive. In this figure, there are 6 true positives and 0 false negatives (every positive condition is correctly predicted as positive). Therefore, the sensitivity is 100% (from 6 / (6 + 0)). This situation is also illustrated in the previous figure where the dotted line is at position A (the left-hand side is predicted as negative by the model, the right-hand side is predicted as positive by the model). When the dotted line, the test cutoff line, is at position A, the test correctly identifies the entire true positive class, but it fails to correctly identify some data points from the true negative class.

Similar to the previous figure, the red dot indicates the patient with the medical condition. In this case, however, the green background indicates that the test predicts all patients to be free of the medical condition. There are then 26 true negatives and 0 false positives, which results in 100% specificity (from 26 / (26 + 0)). Therefore, sensitivity or specificity alone cannot be used to measure the performance of a test.

Medical usage
In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test
specificity is the ability of the test to correctly identify those without the disease (true negative rate). If 100 patients known to have a
disease were tested, and 43 test positive, then the test has 43% sensitivity. If 100 with no disease are tested and 96 return a completely
negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their
values are intrinsic to the test and do not depend on the disease prevalence in the population of interest.[16] Positive and negative
predictive values, but not sensitivity or specificity, are values influenced by the prevalence of disease in the population that is being
tested. These concepts are illustrated graphically in this applet Bayesian clinical diagnostic model (https://kennis-research.shinyapps.io/Bayes-App/), which shows the positive and negative predictive values as a function of the prevalence, sensitivity, and specificity.
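
The dependence of the predictive values on prevalence can be made concrete with a short sketch. The following Python snippet (an illustration, not the cited applet; it reuses the 43% sensitivity and 96% specificity from the example above) applies Bayes' theorem to compute PPV and NPV at several prevalences:

```python
# A minimal sketch: predictive values vary with prevalence,
# while sensitivity and specificity stay fixed.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied at a given prevalence."""
    tp = sensitivity * prevalence                  # P(test+, disease+)
    fp = (1 - specificity) * (1 - prevalence)      # P(test+, disease-)
    tn = specificity * (1 - prevalence)            # P(test-, disease-)
    fn = (1 - sensitivity) * prevalence            # P(test-, disease+)
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.001, 0.01, 0.1, 0.5):
    ppv, npv = predictive_values(0.43, 0.96, prev)
    print(f"prevalence={prev:6.1%}  PPV={ppv:6.1%}  NPV={npv:6.1%}")
```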

Misconceptions

It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed
effective at ruling out a disease when negative.[17][18] This has led to the widely used mnemonics SPPIN and SNNOUT, according to
which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease
(SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by both its
sensitivity and its specificity.[19][20][21]

The tradeoff between specificity and sensitivity is explored in ROC analysis as a trade off between TPR and FPR (that is, recall and
fallout).[22] Giving them equal weight optimizes informedness = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information).[23]

Sensitivity index
The sensitivity index or d′ (pronounced "dee-prime") is a statistic used in signal detection theory. It provides the separation between the
means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally
distributed signal and noise with means μ_S and μ_N and standard deviations σ_S and σ_N, respectively, d′ is defined as:

d′ = (μ_S − μ_N) / √(½(σ_S² + σ_N²))[24]

An estimate of d′ can be also found from measurements of the hit rate and false-alarm rate. It is calculated as:

d′ = Z(hit rate) − Z(false alarm rate),[25]

where function Z(p), p ∈ [0, 1], is the inverse of the cumulative Gaussian distribution.

d′ is a dimensionless statistic. A higher d′ indicates that the signal can be more readily detected.
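
A minimal sketch of this estimate in Python, assuming SciPy is available for the inverse of the cumulative Gaussian distribution (scipy.stats.norm.ppf):

```python
# Estimate d' from hit and false-alarm rates: d' = Z(hit) - Z(false alarm).
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Z is the inverse of the standard normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(d_prime(0.9, 0.1))  # symmetric rates -> d' ≈ 2.56
```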

Confusion matrix
The relationship between sensitivity, specificity, and similar terms can be understood using the following table. Consider a group with P
positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 contingency table or
confusion matrix, as well as derivations of several metrics using the four outcomes, as follows:

Sources: [26][27][28][29][30][31][32][33][34]

The 2×2 contingency table (rows: actual condition; columns: predicted condition):

Total population = P + N | Predicted positive (PP) | Predicted negative (PN)
Actual positive (P) | True positive (TP), hit | False negative (FN), miss, type II error, underestimation
Actual negative (N) | False positive (FP), false alarm, type I error, overestimation | True negative (TN), correct rejection

Metrics derived from the table:

True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP / P = 1 − FNR
False negative rate (FNR), miss rate = FN / P = 1 − TPR
False positive rate (FPR), probability of false alarm, fall-out = FP / N = 1 − TNR
True negative rate (TNR), specificity (SPC), selectivity = TN / N = 1 − FPR
Prevalence = P / (P + N)
Positive predictive value (PPV), precision = TP / PP = 1 − FDR
False discovery rate (FDR) = FP / PP = 1 − PPV
False omission rate (FOR) = FN / PN = 1 − NPV
Negative predictive value (NPV) = TN / PN = 1 − FOR
Positive likelihood ratio (LR+) = TPR / FPR
Negative likelihood ratio (LR−) = FNR / TNR
Diagnostic odds ratio (DOR) = LR+ / LR−
Informedness, bookmaker informedness (BM) = TPR + TNR − 1
Markedness (MK), deltaP (Δp) = PPV + NPV − 1
Prevalence threshold (PT) = (√(TPR × FPR) − FPR) / (TPR − FPR)
Accuracy (ACC) = (TP + TN) / (P + N)
Balanced accuracy (BA) = (TPR + TNR) / 2
F1 score = 2 × PPV × TPR / (PPV + TPR) = 2 TP / (2 TP + FP + FN)
Fowlkes–Mallows index (FM) = √(PPV × TPR)
Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
Threat score (TS), critical success index (CSI), Jaccard index = TP / (TP + FN + FP)
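
The derivations in the table can be checked with a short sketch. The following Python function (illustrative; it assumes nonzero denominators and uses the worked example's counts from the next section) computes a subset of the metrics from the four counts:

```python
# A minimal sketch deriving common metrics from a 2x2 confusion matrix.
from math import sqrt

def confusion_metrics(tp, fp, fn, tn):
    p, n = tp + fn, fp + tn            # actual positives / negatives
    tpr, tnr = tp / p, tn / n          # sensitivity, specificity
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    return {
        "TPR (sensitivity)": tpr,
        "TNR (specificity)": tnr,
        "PPV (precision)": ppv,
        "NPV": npv,
        "accuracy": (tp + tn) / (p + n),
        "balanced accuracy": (tpr + tnr) / 2,
        "F1": 2 * tp / (2 * tp + fp + fn),
        "informedness (BM)": tpr + tnr - 1,
        "markedness (MK)": ppv + npv - 1,
        # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(FP+TN)(TN+FN))
        "MCC": (tp * tn - fp * fn) / sqrt((tp + fp) * p * n * (tn + fn)),
    }

for name, value in confusion_metrics(tp=20, fp=180, fn=10, tn=1820).items():
    print(f"{name:22s} {value:.4f}")
```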

A worked example
A diagnostic test with sensitivity 67% and specificity 91% is applied to 2030 people to look for a disorder with a population prevalence of 1.48%.

Fecal occult blood screen test outcome:

Total population (pop.) = 2030. The actual condition (bowel cancer) is as confirmed on endoscopy.

True positive (TP) = 20 (2030 × 1.48% × 67%)
False negative (FN) = 10 (2030 × 1.48% × (100% − 67%))
False positive (FP) = 180 (2030 × (100% − 1.48%) × (100% − 91%))
True negative (TN) = 1820 (2030 × (100% − 1.48%) × 91%)

True positive rate (TPR), recall, sensitivity = TP / (TP + FN) = 20 / (20 + 10) ≈ 66.7%
False negative rate (FNR), miss rate = FN / (TP + FN) = 10 / (20 + 10) ≈ 33.3%
False positive rate (FPR), fall-out, probability of false alarm = FP / (FP + TN) = 180 / (180 + 1820) = 9.0%
Specificity, selectivity, true negative rate (TNR) = TN / (FP + TN) = 1820 / (180 + 1820) = 91%
Prevalence = (TP + FN) / pop. = (20 + 10) / 2030 ≈ 1.48%
Positive predictive value (PPV), precision = TP / (TP + FP) = 20 / (20 + 180) = 10%
False discovery rate (FDR) = FP / (TP + FP) = 180 / (20 + 180) = 90.0%
False omission rate (FOR) = FN / (FN + TN) = 10 / (10 + 1820) ≈ 0.55%
Negative predictive value (NPV) = TN / (FN + TN) = 1820 / (10 + 1820) ≈ 99.45%
Accuracy (ACC) = (TP + TN) / pop. = (20 + 1820) / 2030 ≈ 90.64%
F1 score = 2 × precision × recall / (precision + recall) ≈ 0.174
Positive likelihood ratio (LR+) = TPR / FPR = (20 / 30) / (180 / 2000) ≈ 7.41
Negative likelihood ratio (LR−) = FNR / TNR = (10 / 30) / (1820 / 2000) ≈ 0.366
Diagnostic odds ratio (DOR) = LR+ / LR− ≈ 20.2

Related calculations

False positive rate (α) = type I error = 1 − specificity = FP / (FP + TN) = 180 / (180 + 1820) = 9%
False negative rate (β) = type II error = 1 − sensitivity = FN / (TP + FN) = 10 / (20 + 10) ≈ 33%
Power = sensitivity = 1 − β
Positive likelihood ratio = sensitivity / (1 − specificity) ≈ 0.67 / (1 − 0.91) ≈ 7.4
Negative likelihood ratio = (1 − sensitivity) / specificity ≈ (1 − 0.67) / 0.91 ≈ 0.37

Prevalence threshold = (√(TPR × FPR) − FPR) / (TPR − FPR) = (√(0.667 × 0.09) − 0.09) / (0.667 − 0.09) ≈ 0.2686 ≈ 26.9%
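
These related calculations can be verified with a few lines of Python (a sketch using the worked example's counts; the printed values are rounded):

```python
# Verify the related calculations from the worked example's counts.
from math import sqrt

tp, fn, fp, tn = 20, 10, 180, 1820
tpr = tp / (tp + fn)          # sensitivity ≈ 0.667
fpr = fp / (fp + tn)          # 1 - specificity = 0.09
tnr = tn / (fp + tn)          # specificity = 0.91

print(f"LR+ = {tpr / fpr:.2f}")                    # ≈ 7.41
print(f"LR- = {(1 - tpr) / tnr:.3f}")              # ≈ 0.366
pt = (sqrt(tpr * fpr) - fpr) / (tpr - fpr)
print(f"prevalence threshold = {pt:.4f}")          # ≈ 0.269
```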

This hypothetical screening test (fecal occult blood test) correctly identified two-thirds (66.7%) of patients with colorectal cancer.[a]
Unfortunately, factoring in prevalence rates reveals that this hypothetical test has a high false positive rate, and it does not reliably
identify colorectal cancer in the overall population of asymptomatic people (PPV = 10%).

On the other hand, this hypothetical test demonstrates very accurate detection of cancer-free individuals (NPV ≈ 99.5%). Therefore,
when used for routine colorectal cancer screening with asymptomatic adults, a negative result supplies important data for the patient and
doctor, such as ruling out cancer as the cause of gastrointestinal symptoms or reassuring patients worried about developing colorectal
cancer.

Estimation of errors in quoted sensitivity or specificity


Sensitivity and specificity values alone may be highly misleading. The 'worst-case' sensitivity or specificity must be calculated in order
to avoid reliance on experiments with few results. For example, a particular test may easily show 100% sensitivity if tested against the
gold standard four times, but a single additional test against the gold standard that gave a poor result would imply a sensitivity of only
80%. A common way to do this is to state the binomial proportion confidence interval, often calculated using a Wilson score interval.

Confidence intervals for sensitivity and specificity can be calculated, giving the range of values within which the correct value lies at a
given confidence level (e.g., 95%).[37]
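
A minimal sketch of such an interval calculation, assuming a 95% confidence level (z ≈ 1.96), might look like this in Python (illustrative; not the cited online calculator):

```python
# Wilson score interval for a binomial proportion (e.g., a quoted sensitivity).
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """Return the (lower, upper) Wilson score interval."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# 4 of 4 correct looks like "100% sensitivity", but the interval is wide:
print(wilson_interval(4, 4))    # ≈ (0.51, 1.00)
```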

Terminology in information retrieval


In information retrieval, the positive predictive value is called precision, and sensitivity is called recall. Unlike the Specificity vs
Sensitivity tradeoff, these measures are both independent of the number of true negatives, which is generally unknown and much larger
than the actual numbers of relevant and retrieved documents. This assumption of very large numbers of true negatives versus positives
is rare in other applications.[23]
The F-score can be used as a single measure of performance of the test for the positive class. The F-score is the harmonic mean of precision and recall:

F = 2 × (precision × recall) / (precision + recall)

In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the
word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type
II errors.

See also

Brier score
Cumulative accuracy profile
Discrimination (information)
False positive paradox
Hypothesis tests for accuracy
Precision and recall
Receiver operating characteristic
Statistical significance
Uncertainty coefficient, also called proficiency
Youden's J statistic

Notes
a. There are advantages and disadvantages for all medical screening tests. Clinical practice guidelines, such as those
for colorectal cancer screening, describe these risks and benefits.[35][36]

References
1. Yerushalmy J (1947). "Statistical problems in assessing methods of medical diagnosis with special reference to x-ray
techniques". Public Health Reports. 62 (2): 1432–39. doi:10.2307/4586294 (https://doi.org/10.2307%2F4586294).
JSTOR 4586294 (https://www.jstor.org/stable/4586294). PMID 20340527 (https://pubmed.ncbi.nlm.nih.gov/2034052
7). S2CID 19967899 (https://api.semanticscholar.org/CorpusID:19967899).
2. Fawcett, Tom (2006). "An Introduction to ROC Analysis" (http://people.inf.elte.hu/kiss/11dwhdm/roc.pdf) (PDF).
Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010 (https://doi.org/10.1016%2Fj.patrec.20
05.10.010).
3. Piryonesi S. Madeh; El-Diraby Tamer E. (2020-03-01). "Data Analytics in Asset Management: Cost-Effective
Prediction of the Pavement Condition Index". Journal of Infrastructure Systems. 26 (1): 04019036.
doi:10.1061/(ASCE)IS.1943-555X.0000512 (https://doi.org/10.1061%2F%28ASCE%29IS.1943-555X.0000512).
4. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness
& Correlation" (https://www.researchgate.net/publication/228529307). Journal of Machine Learning Technologies. 2
(1): 37–63.
5. Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer.
doi:10.1007/978-0-387-30164-8 (https://doi.org/10.1007%2F978-0-387-30164-8). ISBN 978-0-387-30164-8.
6. Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson,
David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research" (https://www.cawcr.gov.
au/projects/verification/). Collaboration for Australian Weather and Climate Research. World Meteorological
Organisation. Retrieved 2019-07-17.
7. Chicco D.; Jurman G. (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score
and accuracy in binary classification evaluation" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941312). BMC
Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7 (https://doi.org/10.1186%2Fs12864-019-6413-7).
PMC 6941312 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941312). PMID 31898477 (https://pubmed.ncbi.nlm.n
ih.gov/31898477).
8. Chicco D.; Toetsch N.; Jurman G. (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than
balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation" (https://ww
w.ncbi.nlm.nih.gov/pmc/articles/PMC7863449). BioData Mining. 14 (13): 1-22. doi:10.1186/s13040-021-00244-z (http
s://doi.org/10.1186%2Fs13040-021-00244-z). PMC 7863449 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC786344
9). PMID 33541410 (https://pubmed.ncbi.nlm.nih.gov/33541410).
9. Chicco D.; Jurman G. (2023). "The Matthews correlation coefficient (MCC) should replace the ROC AUC as the
standard metric for assessing binary classification" (https://doi.org/10.1186/s13040-023-00322-4). BioData Mining. 16
(1). doi:10.1186/s13040-023-00322-4 (https://doi.org/10.1186%2Fs13040-023-00322-4). PMC 9938573 (https://www.
ncbi.nlm.nih.gov/pmc/articles/PMC9938573).
10. Tharwat A. (August 2018). "Classification assessment methods" (https://doi.org/10.1016%2Fj.aci.2018.08.003).
Applied Computing and Informatics. doi:10.1016/j.aci.2018.08.003 (https://doi.org/10.1016%2Fj.aci.2018.08.003).
11. Balayla, Jacques (2020). "Prevalence threshold (ϕe) and the geometry of screening curves" (https://journals.plos.org/
plosone/article?id=10.1371/journal.pone.0240215). PLoS One. 15 (10). doi:10.1371/journal.pone.0240215 (https://do
i.org/10.1371%2Fjournal.pone.0240215).
12. Saah AJ, Hoover DR (1998). "[Sensitivity and specificity revisited: significance of the terms in analytic and diagnostic
language]" (https://www.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&tool=sumsearch.org/cite&retmode=
ref&cmd=prlinks&id=9747274). Ann Dermatol Venereol. 125 (4): 291–4. PMID 9747274 (https://pubmed.ncbi.nlm.nih.
gov/9747274).
13. Parikh, Rajul; Mathai, Annie; Parikh, Shefali; Chandra Sekhar, G; Thomas, Ravi (2008). "Understanding and using
sensitivity, specificity and predictive values" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2636062). Indian Journal
of Ophthalmology. 56 (1): 45–50. doi:10.4103/0301-4738.37595 (https://doi.org/10.4103%2F0301-4738.37595).
PMC 2636062 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2636062). PMID 18158403 (https://pubmed.ncbi.nlm.n
ih.gov/18158403).
14. Altman DG, Bland JM (June 1994). "Diagnostic tests. 1: Sensitivity and specificity" (https://www.ncbi.nlm.nih.gov/pmc/
articles/PMC2540489). BMJ. 308 (6943): 1552. doi:10.1136/bmj.308.6943.1552 (https://doi.org/10.1136%2Fbmj.308.
6943.1552). PMC 2540489 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2540489). PMID 8019315 (https://pubme
d.ncbi.nlm.nih.gov/8019315).
15. "SpPin and SnNout" (https://www.cebm.ox.ac.uk/resources/ebm-tools/sppin-and-snnout). Centre for Evidence Based
Medicine (CEBM). Retrieved 18 January 2023.
16. Mangrulkar R. "Diagnostic Reasoning I and II" (http://open.umich.edu/education/med/m1/patientspop-decisionmakin
g/2010/materials). Retrieved 24 January 2012.
17. "Evidence-Based Diagnosis" (https://web.archive.org/web/20130706035232/http://omerad.msu.edu/ebm/Diagnosis/D
iagnosis4.html). Michigan State University. Archived from the original (http://omerad.msu.edu/ebm/Diagnosis/Diagnos
is4.html) on 2013-07-06. Retrieved 2013-08-23.
18. "Sensitivity and Specificity" (http://www.med.emory.edu/EMAC/curriculum/diagnosis/sensand.htm). Emory University
Medical School Evidence Based Medicine course.
19. Baron JA (Apr–Jun 1994). "Too bad it isn't true". Medical Decision Making. 14 (2): 107.
doi:10.1177/0272989X9401400202 (https://doi.org/10.1177%2F0272989X9401400202). PMID 8028462 (https://pub
med.ncbi.nlm.nih.gov/8028462). S2CID 44505648 (https://api.semanticscholar.org/CorpusID:44505648).
20. Boyko EJ (Apr–Jun 1994). "Ruling out or ruling in disease with the most sensitive or specific diagnostic test: short cut
or wrong turn?". Medical Decision Making. 14 (2): 175–9. doi:10.1177/0272989X9401400210 (https://doi.org/10.117
7%2F0272989X9401400210). PMID 8028470 (https://pubmed.ncbi.nlm.nih.gov/8028470). S2CID 31400167 (https://a
pi.semanticscholar.org/CorpusID:31400167).
21. Pewsner D, Battaglia M, Minder C, Marx A, Bucher HC, Egger M (July 2004). "Ruling a diagnosis in or out with
"SpPIn" and "SnNOut": a note of caution" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC487735). BMJ. 329 (7459):
209–13. doi:10.1136/bmj.329.7459.209 (https://doi.org/10.1136%2Fbmj.329.7459.209). PMC 487735 (https://www.nc
bi.nlm.nih.gov/pmc/articles/PMC487735). PMID 15271832 (https://pubmed.ncbi.nlm.nih.gov/15271832).
22. Fawcett, Tom (2006). "An Introduction to ROC Analysis". Pattern Recognition Letters. 27 (8): 861–874.
Bibcode:2006PaReL..27..861F (https://ui.adsabs.harvard.edu/abs/2006PaReL..27..861F).
doi:10.1016/j.patrec.2005.10.010 (https://doi.org/10.1016%2Fj.patrec.2005.10.010). S2CID 2027090 (https://api.sema
nticscholar.org/CorpusID:2027090).
23. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness
& Correlation" (https://www.researchgate.net/publication/228529307). Journal of Machine Learning Technologies. 2
(1): 37–63.
24. Gale SD, Perkel DJ (January 2010). "A basal ganglia pathway drives selective auditory responses in songbird
dopaminergic neurons via disinhibition" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2824341). The Journal of
Neuroscience. 30 (3): 1027–37. doi:10.1523/JNEUROSCI.3585-09.2010 (https://doi.org/10.1523%2FJNEUROSCI.35
85-09.2010). PMC 2824341 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2824341). PMID 20089911 (https://pubm
ed.ncbi.nlm.nih.gov/20089911).
25. Macmillan NA, Creelman CD (15 September 2004). Detection Theory: A User's Guide (https://books.google.com/book
s?id=hDX65v9bReYC). Psychology Press. p. 7. ISBN 978-1-4106-1114-7.
26. Balayla, Jacques (2020). "Prevalence threshold (ϕe) and the geometry of screening curves" (https://journals.plos.org/
plosone/article?id=10.1371/journal.pone.0240215). PLoS One. 15 (10). doi:10.1371/journal.pone.0240215 (https://do
i.org/10.1371%2Fjournal.pone.0240215).
27. Fawcett, Tom (2006). "An Introduction to ROC Analysis" (http://people.inf.elte.hu/kiss/11dwhdm/roc.pdf) (PDF).
Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010 (https://doi.org/10.1016%2Fj.patrec.20
05.10.010).
28. Piryonesi S. Madeh; El-Diraby Tamer E. (2020-03-01). "Data Analytics in Asset Management: Cost-Effective
Prediction of the Pavement Condition Index". Journal of Infrastructure Systems. 26 (1): 04019036.
doi:10.1061/(ASCE)IS.1943-555X.0000512 (https://doi.org/10.1061%2F%28ASCE%29IS.1943-555X.0000512).
29. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness
& Correlation" (https://www.researchgate.net/publication/228529307). Journal of Machine Learning Technologies. 2
(1): 37–63.
30. Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer.
doi:10.1007/978-0-387-30164-8 (https://doi.org/10.1007%2F978-0-387-30164-8). ISBN 978-0-387-30164-8.
31. Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson,
David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research" (https://www.cawcr.gov.
au/projects/verification/). Collaboration for Australian Weather and Climate Research. World Meteorological
Organisation. Retrieved 2019-07-17.
32. Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score
and accuracy in binary classification evaluation" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941312). BMC
Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7 (https://doi.org/10.1186%2Fs12864-019-6413-7).
PMC 6941312 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941312). PMID 31898477 (https://pubmed.ncbi.nlm.n
ih.gov/31898477).
33. Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than
balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation" (https://ww
w.ncbi.nlm.nih.gov/pmc/articles/PMC7863449). BioData Mining. 14 (13): 1-22. doi:10.1186/s13040-021-00244-z (http
s://doi.org/10.1186%2Fs13040-021-00244-z). PMC 7863449 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC786344
9). PMID 33541410 (https://pubmed.ncbi.nlm.nih.gov/33541410).
34. Tharwat A. (August 2018). "Classification assessment methods" (https://doi.org/10.1016%2Fj.aci.2018.08.003).
Applied Computing and Informatics. doi:10.1016/j.aci.2018.08.003 (https://doi.org/10.1016%2Fj.aci.2018.08.003).
35. Lin, Jennifer S.; Piper, Margaret A.; Perdue, Leslie A.; Rutter, Carolyn M.; Webber, Elizabeth M.; O’Connor, Elizabeth;
Smith, Ning; Whitlock, Evelyn P. (21 June 2016). "Screening for Colorectal Cancer" (https://doi.org/10.1001/jama.201
6.3332). JAMA. 315 (23): 2576–2594. doi:10.1001/jama.2016.3332 (https://doi.org/10.1001%2Fjama.2016.3332).
ISSN 0098-7484 (https://www.worldcat.org/issn/0098-7484).
36. Bénard, Florence; Barkun, Alan N.; Martel, Myriam; Renteln, Daniel von (7 January 2018). "Systematic review of
colorectal cancer screening guidelines for average-risk adults: Summarizing the current global recommendations" (htt
ps://www.wjgnet.com/1007-9327/full/v24/i1/124.htm). World Journal of Gastroenterology. 24 (1): 124–138.
doi:10.3748/wjg.v24.i1.124 (https://doi.org/10.3748%2Fwjg.v24.i1.124). PMC 5757117 (https://www.ncbi.nlm.nih.gov/
pmc/articles/PMC5757117). PMID 29358889 (https://pubmed.ncbi.nlm.nih.gov/29358889).
37. "Diagnostic test online calculator calculates sensitivity, specificity, likelihood ratios and predictive values from a 2x2
table – calculator of confidence intervals for predictive parameters" (http://www.medcalc.org/calc/diagnostic_test.php).
medcalc.org.

Further reading
Altman DG, Bland JM (June 1994). "Diagnostic tests. 1: Sensitivity and specificity" (https://www.ncbi.nlm.nih.gov/pmc/
articles/PMC2540489). BMJ. 308 (6943): 1552. doi:10.1136/bmj.308.6943.1552 (https://doi.org/10.1136%2Fbmj.308.
6943.1552). PMC 2540489 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2540489). PMID 8019315 (https://pubme
d.ncbi.nlm.nih.gov/8019315).
Loong TW (September 2003). "Understanding sensitivity and specificity with the right side of the brain" (https://www.n
cbi.nlm.nih.gov/pmc/articles/PMC200804). BMJ. 327 (7417): 716–9. doi:10.1136/bmj.327.7417.716 (https://doi.org/10.
1136%2Fbmj.327.7417.716). PMC 200804 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC200804). PMID 14512479
(https://pubmed.ncbi.nlm.nih.gov/14512479).

External links
UIC Calculator (http://araw.mede.uic.edu/cgi-bin/testcalc.pl)
Vassar College's Sensitivity/Specificity Calculator (http://vassarstats.net/clin1.html)
MedCalc Free Online Calculator (https://www.medcalc.org/calc/diagnostic_test.php)
Bayesian clinical diagnostic model applet (https://kennis-research.shinyapps.io/Bayes-App/)