
Name the common parametric and non-parametric tests. How do they differ from each other, and what are their applications?

| Application | Parametric test | Non-parametric test |
|---|---|---|
| Compare means between two distinct/independent groups | Two-sample t-test | Wilcoxon rank-sum test |
| Compare the mean of a group with a standard mean | One-sample t-test | Wilcoxon signed-rank test |
| Compare two quantitative measurements taken from the same individual | Paired t-test | Wilcoxon signed-rank test |
| Compare means between three or more distinct/independent groups | Analysis of variance (ANOVA) | Kruskal-Wallis test |
| Estimate the degree of association between two quantitative variables | Pearson coefficient of correlation | Spearman's rank correlation |
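As an illustrative sketch (not part of the original notes; the data are invented and the SciPy usage is an assumption for demonstration), the first row of the table can be run in Python with scipy.stats, pairing the parametric two-sample t-test with its non-parametric counterpart, the Wilcoxon rank-sum test:

```python
# A minimal sketch using SciPy (the data are invented for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=120, scale=10, size=30)  # hypothetical measurements, group A
group_b = rng.normal(loc=126, scale=10, size=30)  # hypothetical measurements, group B

# Parametric: two-sample t-test (assumes approximately normal data).
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric counterpart: Wilcoxon rank-sum test (no normality assumption).
w_stat, w_p = stats.ranksums(group_a, group_b)

print(f"Two-sample t-test:      p = {t_p:.4f}")
print(f"Wilcoxon rank-sum test: p = {w_p:.4f}")
```

On roughly normal data the two tests usually agree; the rank-sum test simply trades some power for freedom from the normality assumption.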

Differences between parametric and non-parametric tests

The fundamental differences between parametric and non-parametric tests are discussed in the following points:

1. A statistical test in which specific assumptions are made about the population parameters is known as a parametric test. A statistical test used in the case of non-metric (nominal or ordinal) variables is called a non-parametric test.
2. In the parametric test, the test statistic is based on an assumed distribution. In the non-parametric test, by contrast, the test statistic does not rely on any particular distribution.
3. In the parametric test, the variables of interest are assumed to be measured on an interval or ratio scale. In the non-parametric test, the variables of interest are measured on a nominal or ordinal scale.
4. In general, the measure of central tendency used in the parametric test is the mean, while in the non-parametric test it is the median.
5. In the parametric test, there is complete information about the population. Conversely, in the non-parametric test, there is no information about the population.
6. Parametric tests apply to variables only, whereas non-parametric tests apply to both variables and attributes.
7. For measuring the degree of association between two quantitative variables, Pearson's coefficient of correlation is used in the parametric case, while Spearman's rank correlation is used in the non-parametric case.

What is correlation?
Correlation is a relationship or connection between two things based on co-
occurrence or pattern of change. It is the tendency for two values or
variables to change together, in either the same or opposite way.
Example: As blood pressure increases, the chance of stroke also increases; this is called a positive correlation.
Example: As the quantity of body fat increases, the chance of cardiovascular disease increases (positive correlation).

Example: As the intake of dietary carbohydrates is reduced, better glycemic control is achieved in diabetes (negative correlation).
A correlation coefficient
It is a number between -1 and 1 that indicates the strength and direction of a relationship between variables. In other words, it reflects how similar the measurements of two or more variables are across a dataset.
| Correlation coefficient value | Correlation type | Meaning |
|---|---|---|
| 1 | Perfect positive correlation | When one variable changes, the other variable changes in the same direction. |
| 0 | Zero correlation | There is no relationship between the variables. |
| -1 | Perfect negative correlation | When one variable changes, the other variable changes in the opposite direction. |
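As a hedged illustration (not from the original notes; the data are invented), both Pearson's and Spearman's coefficients can be computed with SciPy, echoing the parametric/non-parametric pairing from the earlier table:

```python
# Minimal sketch: Pearson vs. Spearman correlation (invented data).
from scipy import stats

body_fat_pct = [18, 22, 25, 28, 31, 35, 40]           # hypothetical values
cvd_risk_score = [2.1, 2.8, 3.0, 3.9, 4.2, 5.0, 6.3]  # hypothetical values

# Parametric: Pearson's r measures linear association.
pearson_r, pearson_p = stats.pearsonr(body_fat_pct, cvd_risk_score)

# Non-parametric: Spearman's rho measures monotonic association via ranks.
spearman_rho, spearman_p = stats.spearmanr(body_fat_pct, cvd_risk_score)

print(f"Pearson r      = {pearson_r:.3f} (p = {pearson_p:.4f})")
print(f"Spearman rho   = {spearman_rho:.3f} (p = {spearman_p:.4f})")
```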

What is probability and p-value?
A p-value, or probability value, is a number describing how likely it is that the observed data (or data more extreme) would have occurred by random chance if the null hypothesis were true. The level of statistical significance is often expressed as a p-value between 0 and 1.
The p-value quantifies the evidence against a null hypothesis. A low p-value suggests the data are inconsistent with the null hypothesis, potentially favouring the alternative hypothesis. Common significance thresholds are 0.05 and 0.01.
The significance level (alpha) is a probability threshold set in advance (often 0.05 or 0.01), while the p-value is the probability calculated from the study or analysis. A p-value less than or equal to the significance level indicates a statistically significant result, meaning the observed data provide strong evidence against the null hypothesis.
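As an illustrative sketch (the sample, reference value, and threshold are assumptions, not from the notes), a one-sample t-test in SciPy returns a p-value that is then compared against the pre-set alpha:

```python
# Minimal sketch: computing and interpreting a p-value (invented data).
from scipy import stats

alpha = 0.05  # significance level chosen before the analysis
fasting_glucose = [92, 101, 97, 105, 99, 110, 95, 103]  # hypothetical sample
standard_mean = 90  # hypothetical reference value

t_stat, p_value = stats.ttest_1samp(fasting_glucose, popmean=standard_mean)
print(f"p = {p_value:.4f}")
if p_value <= alpha:
    print("Statistically significant: reject the null hypothesis.")
else:
    print("Not significant: fail to reject the null hypothesis.")
```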

Degrees of freedom
Degrees of freedom, often represented by v or df, is the number of independent pieces of information used to calculate a statistic. The degrees of freedom of a statistic depend on the sample size:
When the sample size is small, there are only a few independent pieces of information, and therefore only a few degrees of freedom.
When the sample size is large, there are many independent pieces of information, and therefore many degrees of freedom.
The degrees of freedom of a statistic equal the sample size minus the number of restrictions. Most of the time, the restrictions are parameters that are estimated as intermediate steps in calculating the statistic.

df = n − r, where n is the sample size and r is the number of restrictions, usually the same as the number of parameters estimated.

The degrees of freedom cannot be negative. As a result, the number of parameters you estimate cannot be larger than your sample size.

What are Type I and Type II errors in statistical hypothesis testing?


Type I error (false positive): This error occurs when a null hypothesis is rejected by mistake, even though it is actually true.
Type II error (false negative): This error occurs when a null hypothesis is accepted (not rejected), even though it is actually false.
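As an illustrative simulation (entirely an assumed setup, not from the notes): if both samples are drawn from the same population, the null hypothesis is true by construction, so every rejection is a Type I error, and the observed false-positive rate should approximate alpha:

```python
# Minimal sketch: simulating the Type I error rate (assumed setup).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_trials = 0.05, 10_000

# Both groups come from the SAME population, so H0 is true by construction
# and every rejection is a false positive (Type I error).
false_positives = sum(
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue <= alpha
    for _ in range(n_trials)
)
print(f"Observed Type I error rate: {false_positives / n_trials:.3f}")  # ~= alpha
```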

What is the power of a statistical test? How is it related to sample size?


The statistical power of a study (sometimes called sensitivity) is how likely the study is to distinguish an actual effect from one due to chance. It is the probability that the test correctly rejects the null hypothesis when a real effect exists.

For example, a study with 80% power has an 80% chance of detecting an effect that is actually present, i.e., of yielding a statistically significant result.
High statistical power means that the test results are likely valid. As the power increases, the probability of making a Type II error decreases. Low statistical power means that the test results are questionable.

How is power affected by sample size?


The sample size critically affects the hypothesis test and the study design, and there is no straightforward way of calculating the effective sample size needed to reach an accurate conclusion. Use of a statistically incorrect sample size may lead to inadequate results in both clinical and laboratory studies, as well as loss of time, cost, and ethical problems. In short, an inadequate sample size will reduce the power of a study.
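As a hedged sketch (the effect size and power targets are assumptions for illustration), the statsmodels power module can solve for the per-group sample size needed to reach a desired power, showing that higher power demands more subjects:

```python
# Minimal sketch: sample size vs. power for a two-sample t-test (assumed values).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for target_power in (0.6, 0.8, 0.9):
    # effect_size is Cohen's d; 0.5 (a "medium" effect) is an assumption here.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                       power=target_power)
    print(f"power = {target_power:.1f} -> n per group ~= {n_per_group:.0f}")
```

Running this shows the required sample size growing as the target power rises, which is the relationship this section describes.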
