Lecture 10 Non Parametric Slides Edited 2019

This document discusses non-parametric statistics and tests. It describes how non-parametric tests make fewer assumptions than parametric tests and do not require the data to follow a particular distribution. Examples of commonly used non-parametric tests are provided, including the Mann-Whitney U test, which compares ranks instead of actual scores, and Spearman's rank correlation coefficient, which assesses monotonic relationships between variables. The advantages and disadvantages of non-parametric tests over parametric tests are also summarized.

BACHELOR OF BIOMEDICAL SCIENCES

BASIC BIOSTATISTICS
BMC 111

CHAPTER 10:
Non Parametric Statistics

BIOMEDICAL SCIENCE, FACULTY OF MEDICINE


Learning Outcome
At the end of this lecture, students should be able to:

• Describe non-parametric tests
• Differentiate the types of non-parametric tests
• Know the application of different types of non-parametric tests for comparing means and proportions across different groups

Hypothesis Testing Procedures
Hypothesis testing procedures fall into two broad families (many more tests exist):

• Parametric: Z test, t test, one-way ANOVA
• Nonparametric: Wilcoxon rank sum test, Kruskal-Wallis H-test
Parametric Test Procedures
1. Involve population parameters
   Example: the population mean
2. Require an interval or ratio scale
   Whole numbers or fractions
   Example: height in inches (72, 60.5, 54.7)
3. Have stringent assumptions
   Example: normal distribution
4. Examples: Z test, t test, χ² test

What is a Nonparametric Test?
• Techniques that do not rely on data belonging to any particular distribution.
• Non-parametric statistics do not assume any underlying distribution of the parameter.
• Non-parametric does not mean that the model lacks parameters, but that the number and nature of the parameters are flexible.
Nonparametric Test Procedures
1. Do not involve population parameters
   Example: probability distributions, independence
2. Data measured on any scale
   • Ratio or interval
   • Ordinal (example: good-better-best)
   • Nominal (example: male-female)
3. Example: Wilcoxon rank sum test
Advantages of Nonparametric Tests
1. Can be used with all scales
   Analysis is possible for ranked or categorical data (data not based on a measurement scale)
2. Easier to compute
   Developed originally before wide computer use
3. Make fewer assumptions
4. Need not involve population parameters
   Distribution free: non-parametric tests may be used when the form of the sampled population is unknown
5. Results may be as exact as parametric procedures

Disadvantages of Nonparametric Tests
1. May waste information
   If the data permit using parametric procedures
   Example: converting data from a ratio to an ordinal scale
2. Difficult to compute by hand for large samples
3. Tables not widely available
Frequently Used Non-Parametric Tests
1. Sign test
2. Wilcoxon rank sum test
3. Wilcoxon signed rank test
4. Kruskal-Wallis H-test
5. Friedman's Fr-test
6. Spearman's rank correlation coefficient
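For readers who want to try these tests in software, the sketch below gives a rough mapping from the listed tests to routines in Python's scipy.stats. This mapping is an added illustration under that assumption; the slides themselves use SPSS for their examples.

```python
# Rough mapping from the tests listed above to common scipy.stats routines.
# Added for illustration; the slides themselves use SPSS.
from scipy import stats

# 1. Sign test: no dedicated scipy function; commonly done as a binomial test
#    on the signs of the paired differences, e.g. stats.binomtest(k, n, p=0.5).
# 2. Wilcoxon rank sum test:       stats.ranksums(x, y)  (or stats.mannwhitneyu)
# 3. Wilcoxon signed rank test:    stats.wilcoxon(x, y)
# 4. Kruskal-Wallis H-test:        stats.kruskal(group1, group2, group3)
# 5. Friedman's Fr-test:           stats.friedmanchisquare(meas1, meas2, meas3)
# 6. Spearman's rank correlation:  stats.spearmanr(x, y)
```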
Why Use a Nonparametric Test?
• The sample distribution is unknown.
• The population distribution is not normal, e.g. because too many variables are involved.
Mann-Whitney U Test
• Also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test.
• A non-parametric test of the null hypothesis that it is equally likely that a randomly selected value from one sample will be less than or greater than a randomly selected value from a second sample.
• It does not require the assumption of normal distributions, and it is nearly as efficient as the t-test on normal distributions.
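As a minimal sketch of how such a test could be run in Python (an added illustration; the slides use SPSS), scipy.stats.mannwhitneyu can be applied to two independent samples. The ages below are hypothetical values, not the data from the slides' example.

```python
# Minimal sketch of a Mann-Whitney U test with scipy.
# The ages are hypothetical, not the data from the SPSS example in the slides.
from scipy.stats import mannwhitneyu

male_ages = [23, 31, 27, 45, 52, 38, 29, 60]      # hypothetical volunteer ages
female_ages = [22, 26, 34, 41, 30, 28, 25, 47]    # hypothetical volunteer ages

# Two-sided test: H0 is that a value drawn from one group is equally likely
# to be smaller or larger than a value drawn from the other group.
u_stat, p_value = mannwhitneyu(male_ages, female_ages, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")

# If p >= 0.05, do not reject H0 (no statistically significant difference in ranks).
```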
Example
• Recall that an independent samples t test compares the means of two unrelated samples (e.g., majors and non-majors) for an interval- or ratio-level variable.
• The nonparametric alternative to an independent t test is a Mann-Whitney U test, which compares the ranks of the observed scores instead of the actual scores.
• This process is similar to the use of ranks in calculating the Spearman correlation. Here is an example.
• Suppose that you want to compare the ages of males and females in a small sample of homeless shelter volunteers. Here is an illustration of their ages:
• Instead of comparing the average ages for males and females, the Mann-Whitney U test combines all of the ages into one group, sorts them, and then assigns a rank to each age (a short code sketch of this ranking step follows this example).
• These ranks are then added up for the two groups of numbers. The test determines whether the two sums of ranks are equal.
• The null hypothesis is that they are equal, and the alternative hypothesis is that they are not equal.
• Here is the Mann-Whitney output from SPSS:
• Notice in the Ranks table that the sum of the ranks for the males is larger than that of the females.
• This difference matches the pattern shown in the graph above. The question is whether this difference between ages is statistically significant.
• The p-value (i.e., significance level) associated with the Mann-Whitney U of 85 is .768, which is greater than .05, indicating that we should not reject the null hypothesis: even though the two patterns of ages are different, they are not different enough to be regarded as statistically significant.
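The ranking step described above (pool all of the ages, sort them, assign ranks, then sum the ranks within each group) can be sketched as follows. The ages are again hypothetical; ties receive the average of the tied ranks, as in the standard procedure.

```python
# Sketch of the Mann-Whitney ranking step with hypothetical ages.
from scipy.stats import rankdata

male_ages = [23, 31, 27, 45, 52, 38, 29, 60]     # hypothetical
female_ages = [22, 26, 34, 41, 30, 28, 25, 47]   # hypothetical

combined = male_ages + female_ages
ranks = rankdata(combined)                        # rank all ages as one pooled group

male_rank_sum = ranks[:len(male_ages)].sum()      # sum of ranks for the male ages
female_rank_sum = ranks[len(male_ages):].sum()    # sum of ranks for the female ages
print(male_rank_sum, female_rank_sum)
```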
• The Mann-Whitney U test is used instead of the independent t test when assumptions about the samples are violated. Usually these violations happen with small sample sizes.
Spearman's Rank Correlation Coefficient
• Spearman's rank correlation coefficient, or Spearman's rho, is a non-parametric measure of rank correlation.
• Rank correlation: the statistical dependence between the rankings of two variables.
• It measures how well the relationship between two variables can be described using a monotonic function (a function or quantity that varies in such a way that it either never decreases or never increases).
• The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables.
• While Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not).
• If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
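As a minimal sketch (an added illustration, not from the slides), Spearman's rho can be computed in Python with scipy.stats.spearmanr; the quiz scores below are invented.

```python
# Sketch: Spearman's rho for two hypothetical sets of quiz scores.
from scipy.stats import spearmanr

quiz1 = [12, 15, 9, 20, 18, 7, 14, 16]   # hypothetical scores
quiz2 = [14, 17, 10, 25, 19, 8, 13, 18]  # hypothetical scores

rho, p_value = spearmanr(quiz1, quiz2)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")
```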
• Suppose you have two sets of quiz scores for a fairly small class. Due to a few extreme scores, you determine that a Pearson correlation isn't appropriate; instead you'll calculate a Spearman correlation. Here is the procedure (SPSS does all of this for you).
• Instead of the actual scores, the Spearman rank-order correlation uses the ranks of the scores. Ranks are derived by sorting the scores from lowest to highest and then assigning the lowest score a 1, the next lowest a 2, and so on. Here are the ranks:
• The Spearman rank-order correlation is based on the squared differences between the two sets of ranks.
• For example, the difference between the first pair of ranks is 1 − 1 = 0 and the difference between the second pair of ranks is 13 − 9 = 4.
• These differences are squared, summed, and then standardized based on the sample size.
• The resulting number, ρ, is interpreted exactly like the Pearson r: the sign of ρ identifies a direct or indirect relationship, and the size of ρ indicates the strength of the relationship.
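The "standardizing" step corresponds to the usual no-ties Spearman formula, ρ = 1 − 6Σd² / (n(n² − 1)). The sketch below applies it to hypothetical ranks; only the first two pairs, (1, 1) and (13, 9), come from the example above, and the rest are invented.

```python
# Sketch: Spearman's rho from rank differences using the no-ties formula.
# Only the first two rank pairs match the slides' example; the rest are hypothetical.
rank_x = [1, 13, 3, 5, 2, 8, 11, 6, 9, 4, 12, 7, 10]   # ranks for quiz 1
rank_y = [1, 9, 2, 6, 3, 10, 13, 4, 8, 5, 12, 7, 11]   # ranks for quiz 2

n = len(rank_x)
d_squared_sum = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
rho = 1 - (6 * d_squared_sum) / (n * (n ** 2 - 1))
print(f"rho = {rho:.3f}")
```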
