
Choosing the Right Statistic

June Rey S. Sulatra, Ph.D.


• One of the most difficult (and potentially fear-inducing)
parts of the research process for most research students
is choosing the correct statistical technique to analyse
their data.
• Although most statistics courses teach you how to
calculate a correlation coefficient or perform a t-test,
they typically do not spend much time helping students
learn how to choose which approach is appropriate to
address particular research questions.
• In most research projects it is likely that you will use
quite a variety of different types of statistics,
depending on the question you are addressing and
the nature of the data that you have.
• It is therefore important that you have at least a
basic understanding of the different statistical tools,
the type of questions they address and their
underlying assumptions and requirements.
Exploring Relationships
Often in survey research you will not be
interested in differences between groups, but instead in
the strength of the relationship between variables.
There are a number of different techniques that you
can use.
Exploring Relationships
Pearson Correlation or Spearman Correlation
Pearson correlation or Spearman correlation is
used when you want to explore the strength of the
relationship between two continuous variables. This
gives you an indication of both the direction (positive or
negative) and the strength of the relationship. A
positive correlation indicates that as one variable
increases, so does the other. A negative correlation
indicates that as one variable increases, the other
decreases.
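A minimal sketch in Python (scipy.stats) shows how both coefficients are obtained; the variable names and scores below are purely illustrative.

from scipy import stats

# Two continuous variables measured on the same participants (illustrative values)
optimism = [22, 25, 19, 30, 28, 21, 27, 24]
life_satisfaction = [30, 34, 26, 41, 39, 29, 36, 33]

# Pearson correlation: assumes interval data and a roughly linear relationship
r, p = stats.pearsonr(optimism, life_satisfaction)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Spearman correlation: rank-based, non-parametric alternative
rho, p_s = stats.spearmanr(optimism, life_satisfaction)
print(f"Spearman rho = {rho:.2f}, p = {p_s:.3f}")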
Exploring Relationships
Partial Correlation
Partial correlation is an extension of Pearson
correlation—it allows you to control for the possible
effects of another confounding variable. Partial
correlation ‘removes’ the effect of the confounding
variable (e.g. socially desirable responding), allowing
you to get a more accurate picture of the relationship
between your two variables of interest.
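One way to sketch a first-order partial correlation is to regress each variable of interest on the confounder and then correlate the residuals; the data below are illustrative, and the p-value shown ignores the extra degree of freedom used by the covariate.

import numpy as np
from scipy import stats

# Two variables of interest plus a possible confounder (illustrative values)
optimism = np.array([22, 25, 19, 30, 28, 21, 27, 24], dtype=float)
life_sat = np.array([30, 34, 26, 41, 39, 29, 36, 33], dtype=float)
social_desirability = np.array([5, 7, 4, 9, 8, 5, 7, 6], dtype=float)

def residuals(y, x):
    """Residuals of y after removing the linear effect of x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Correlate what remains of each variable once the confounder is 'removed'
r_partial, p = stats.pearsonr(residuals(optimism, social_desirability),
                              residuals(life_sat, social_desirability))
print(f"Partial r = {r_partial:.2f}, p = {p:.3f}")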
Exploring Relationships
Multiple Regression
Multiple regression is a more sophisticated extension
of correlation and is used when you want to explore the
predictive ability of a set of independent variables on one
continuous dependent measure. Different types of multiple
regression allow you to compare the predictive ability of
particular independent variables and to find the best
set of variables to predict a dependent variable.
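A minimal multiple regression sketch using statsmodels; the predictors, outcome and values here are illustrative only.

import pandas as pd
import statsmodels.api as sm

# Illustrative dataset: predicting a continuous outcome from several predictors
df = pd.DataFrame({
    "life_sat":    [30, 34, 26, 41, 39, 29, 36, 33],
    "optimism":    [22, 25, 19, 30, 28, 21, 27, 24],
    "self_esteem": [15, 18, 12, 22, 20, 14, 19, 17],
    "stress":      [10, 8, 14, 5, 6, 12, 7, 9],
})

X = sm.add_constant(df[["optimism", "self_esteem", "stress"]])
model = sm.OLS(df["life_sat"], X).fit()

# R-squared shows overall predictive ability; each coefficient shows a
# predictor's unique contribution with the other predictors held constant
print(model.summary())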
Exploring Relationships
Factor analysis
Factor analysis allows you to condense a large set
of variables or scale items down to a smaller, more
manageable number of dimensions or factors. It does
this by summarizing the underlying patterns of
correlation and looking for ‘clumps’ or groups of closely
related items. This technique is often used when
developing scales and measures, to identify the
underlying structure.
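A minimal sketch using scikit-learn's FactorAnalysis on simulated item responses; dedicated packages add the rotation options usually applied in scale development, and the two-factor structure below is built into the fake data.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated item responses: rows are respondents, columns are scale items
rng = np.random.default_rng(0)
items = rng.normal(size=(200, 6))
# Make items 0-2 and items 3-5 cluster together to mimic two underlying factors
items[:, 1] += items[:, 0]
items[:, 2] += items[:, 0]
items[:, 4] += items[:, 3]
items[:, 5] += items[:, 3]

fa = FactorAnalysis(n_components=2)
fa.fit(items)

# Loadings: how strongly each item relates to each extracted factor
print(np.round(fa.components_.T, 2))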
Exploring Differences between Groups
There is another family of statistics that can be
used when you want to find out whether there is a
statistically significant difference among a number of
groups. The parametric versions of these tests are suitable
when you have interval-scaled data with a normal distribution
of scores; each also has a nonparametric alternative for use
when those assumptions are not met.
Exploring Differences between Groups
T-tests
• T-tests are used when you have two groups (e.g. males and
females) or two sets of data (before and after), and you wish to
compare the mean score on some continuous variable.
• Paired sample t-tests (also called repeated measures) are used
when you are interested in changes in scores for participants
tested at Time 1, and then again at Time 2 (often after some
intervention or event). The samples are ‘related’ because they
are the same people tested each time.
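A minimal paired samples t-test sketch (scipy.stats), using illustrative before-and-after scores for the same participants.

from scipy import stats

# Scores for the same participants before and after an intervention (illustrative)
time1 = [40, 42, 37, 45, 39, 41, 38, 44]
time2 = [44, 45, 40, 47, 41, 46, 39, 48]

t, p = stats.ttest_rel(time1, time2)   # paired samples (repeated measures) t-test
print(f"t = {t:.2f}, p = {p:.3f}")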
Exploring Differences between Groups
T-tests
• Independent sample t-tests are used when you have two
different (independent) groups of people (males and females),
and you are interested in comparing their scores. In this case,
you collect information on only one occasion but from two
different sets of people.
• The non-parametric alternatives are the Mann-Whitney U Test (for independent
samples) and the Wilcoxon Signed Rank Test (for paired samples).
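A minimal sketch of the independent samples t-test together with the two non-parametric alternatives (scipy.stats); the group labels and scores are illustrative.

from scipy import stats

# Scores for two independent groups (illustrative)
males   = [35, 40, 38, 42, 37, 39]
females = [44, 41, 45, 43, 46, 42]

t, p = stats.ttest_ind(males, females)        # independent samples t-test
u, p_u = stats.mannwhitneyu(males, females)   # non-parametric alternative
print(f"t = {t:.2f} (p = {p:.3f}); U = {u:.1f} (p = {p_u:.3f})")

# For paired data the non-parametric alternative is the Wilcoxon Signed Rank Test
time1 = [40, 42, 37, 45, 39, 41, 38, 44]
time2 = [44, 45, 40, 47, 41, 46, 39, 48]
w, p_w = stats.wilcoxon(time1, time2)
print(f"Wilcoxon W = {w:.1f} (p = {p_w:.3f})")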
Exploring Differences between Groups
One-way Analysis of Variance
One-way analysis of variance is similar to a t-test, but is
used when you have more than two groups and you wish to
compare their mean scores on a continuous variable. It is called
one-way because you are looking at the impact of only one
independent variable on your dependent variable. A one-way
analysis of variance (ANOVA) will let you know whether your
groups differ, but it won’t tell you where the significant
difference lies (e.g. between group 1 and group 3, or group 2 and group 3).
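A minimal one-way between-groups ANOVA sketch (scipy.stats), with illustrative scores for three groups.

from scipy import stats

# Scores for three independent groups on one continuous variable (illustrative)
group1 = [22, 25, 19, 24, 23]
group2 = [28, 27, 30, 26, 29]
group3 = [31, 34, 33, 35, 32]

f, p = stats.f_oneway(group1, group2, group3)   # between-groups one-way ANOVA
print(f"F = {f:.2f}, p = {p:.3f}")   # tells you the groups differ, not where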
Exploring Differences between Groups
One-way Analysis of Variance
• You can conduct post-hoc comparisons to find out which groups are
significantly different from one another. You could also choose to test
differences between specific groups, rather than comparing all the
groups, by using planned comparisons.
• Similar to t-tests, there are two types of one-way ANOVAs: repeated
measures ANOVA (same people on more than two occasions), and
between-groups (or independent samples) ANOVA, where you are
comparing the mean scores of two or more different groups of people.
• The non-parametric alternatives are Kruskal-Wallis Test and Friedman
Test.
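A minimal sketch of post-hoc pairwise comparisons and the two non-parametric alternatives (scipy.stats); the Tukey HSD routine is available in scipy 1.7 or later, and all scores are illustrative.

from scipy import stats

# Scores for three independent groups (as in the one-way ANOVA above; illustrative)
group1 = [22, 25, 19, 24, 23]
group2 = [28, 27, 30, 26, 29]
group3 = [31, 34, 33, 35, 32]

# Post-hoc pairwise comparisons to find which groups differ (Tukey HSD)
print(stats.tukey_hsd(group1, group2, group3))

# Non-parametric alternative for independent groups
h, p_h = stats.kruskal(group1, group2, group3)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_h:.3f}")

# Non-parametric alternative for repeated measures (same people on three occasions)
time1 = [22, 25, 19, 24, 23]
time2 = [26, 27, 22, 28, 25]
time3 = [29, 31, 26, 32, 30]
chi2, p_f = stats.friedmanchisquare(time1, time2, time3)
print(f"Friedman chi-square = {chi2:.2f}, p = {p_f:.3f}")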
Exploring Differences between Groups
Two-way Analysis of Variance
Two-way analysis of variance allows you to test the impact of
two independent variables on one dependent variable. The
advantage of using a two-way ANOVA is that it allows you to
test for an interaction effect—that is, when the effect of one
independent variable is influenced by another; for example,
when you suspect that optimism increases with age, but only
for males.
Exploring Differences between Groups
Two-way Analysis of Variance
It also tests for ‘main effects’—that is, the overall effect
of each independent variable (e.g. sex, age). There are two
different two-way ANOVAs: between-groups ANOVA (when the
groups are different) and repeated measures ANOVA (when the
same people are tested on more than one occasion). Some
research designs combine both between-groups and repeated
measures in the one study. These are referred to as ‘Mixed
Between-Within Designs’, or ‘Split Plot’.
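A minimal two-way between-groups ANOVA sketch using a statsmodels formula with an interaction term; the factors, outcome and values are illustrative.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative between-groups data: optimism scores by sex and age group
df = pd.DataFrame({
    "optimism": [22, 25, 19, 30, 28, 21, 27, 24, 26, 23, 29, 31],
    "sex":      ["M"] * 6 + ["F"] * 6,
    "agegroup": ["young", "young", "old", "old", "young", "old"] * 2,
})

# The C(sex):C(agegroup) term tests the interaction effect
model = ols("optimism ~ C(sex) + C(agegroup) + C(sex):C(agegroup)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction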
Exploring Differences between Groups
Multivariate Analysis of Variance
Multivariate analysis of variance (MANOVA) is used when
you want to compare your groups on a number of different, but
related, dependent variables; for example, comparing the effects
of different treatments on a variety of outcome measures (e.g.
anxiety, depression). Multivariate ANOVA can be used with one-
way, two-way and higher factorial designs involving one, two or
more independent variables.
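A minimal one-way MANOVA sketch using statsmodels, with an illustrative treatment factor and two related dependent variables.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Illustrative data: one grouping variable, two related dependent variables
df = pd.DataFrame({
    "treatment":  ["A"] * 6 + ["B"] * 6,
    "anxiety":    [30, 28, 32, 29, 31, 27, 22, 24, 21, 23, 25, 20],
    "depression": [18, 17, 20, 19, 18, 16, 12, 14, 11, 13, 15, 10],
})

manova = MANOVA.from_formula("anxiety + depression ~ treatment", data=df)
print(manova.mv_test())   # Wilks' lambda, Pillai's trace and related statistics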
Exploring Differences between Groups
Analysis of Covariance
Analysis of covariance (ANCOVA) is used when you want to
statistically control for the possible effects of an additional
confounding variable (covariate). This is useful when you suspect
that your groups differ on some variable that may influence the
effect that your independent variables have on your dependent
variable. To be sure that it is the independent variable that is
doing the influencing, ANCOVA statistically removes the effect of
the covariate. Analysis of covariance can be used as part of a
one-way, two-way or multivariate design.
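A minimal one-way ANCOVA sketch using a statsmodels formula, entering the covariate alongside the group factor; the design, variable names and values are illustrative.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative one-way design with a covariate (pre-test score)
df = pd.DataFrame({
    "posttest": [55, 58, 52, 60, 63, 59, 65, 68, 64, 70, 72, 69],
    "group":    ["control"] * 6 + ["treatment"] * 6,
    "pretest":  [50, 53, 48, 54, 57, 52, 51, 55, 49, 56, 58, 53],
})

# Including the covariate removes its effect before the group factor is tested
model = ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))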
