
Welcome to PSYC 2321: Analysis of Behavioural Data

CHAPTER 11
INSTRUCTOR: NICOLE JENNI
T-test comes in 3 forms

Single sample (Ch 9)
• Compare one group mean to a known population μ when you don’t know σ
• Compare one group mean to some other null value of interest (e.g., scale midpoint)

Paired Samples (Ch 10)
• Compare mean change in a repeated measures experiment
• Compare mean differences when participants are linked (e.g., parent and child, twin siblings)

Independent Samples (Ch 11)
• Compare the difference between two group means in an independent groups experiment
Participant # | CBCL Depression Score T1 | CBCL Depression Score T2 | Difference Score
1 | 65 | 60 | -5
2 | 65 | 70 | 5
3 | 66 | 66 | 0
4 | 67 | 65 | -2
5 | 70 | 68 | -2
6 | 71 | 72 | 1
7 | 72 | 62 | -10
8 | 73 | 70 | -3
9 | 75 | 73 | -2
N = 9 | M = 69.333 | M = 67.333 | MD = -2.000
  | s = 3.708 | s = 4.444 | sD = 4.123

The Paired Samples t Test takes 2 columns of data and converts them into 1. Then, it just calculates a one-sample t test on the difference score column.
Paired-Samples t Test
Our sampling distribution is now a Distribution of Mean Differences
◦ The population used here is one based on the null hypothesis: that there is no average difference between your two groups
Paired-Samples t Test
Step 3: Determine the characteristics of the comparison distribution.
(Same CBCL depression data as in the table above: N = 9, MD = -2.000, sD = 4.123.)

Mean difference: $\bar{X}_D = -2$
Standard deviation: $s_D = 4.123$ (Note: be able to compute this by hand!)
Standard error: $s_{\bar{X}} = \frac{s_D}{\sqrt{N}} = \frac{4.123}{\sqrt{9}} = 1.3743$
Paired-Samples t Test
Step 5: Calculate the test statistic.
(Same CBCL depression data as above; $s_{\bar{X}} = 1.3743$.)

$t = \frac{\bar{X}_D - \mu_{\bar{X}}}{s_{\bar{X}}} = \frac{-2 - 0}{1.3743} = -1.46$
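To see the arithmetic end to end, here is a minimal Python sketch (not from the slides) that reproduces this paired-samples result by running a one-sample t test on the difference scores; it assumes numpy and scipy are available, and the variable names are mine.

```python
import numpy as np
from scipy import stats

# CBCL depression scores from the table above.
t1 = np.array([65, 65, 66, 67, 70, 71, 72, 73, 75])  # Time 1
t2 = np.array([60, 70, 66, 65, 68, 72, 62, 70, 73])  # Time 2

diff = t2 - t1                                  # difference scores, mean = -2.0
se = diff.std(ddof=1) / np.sqrt(len(diff))      # s_D / sqrt(N) = 4.123 / 3 = 1.374
t_obt = (diff.mean() - 0) / se                  # (-2 - 0) / 1.374 = -1.46

# Same result from scipy's built-in paired test:
t_scipy, p_scipy = stats.ttest_rel(t2, t1)
print(round(t_obt, 2), round(t_scipy, 2), round(p_scipy, 3))
```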
Chapter 11
Analyzing Paired Samples Data vs. Analyzing Independent Groups Data
[Figure: in the paired design the same participants (A, B, C, D, E) appear in both conditions; in the independent groups design each condition has its own participants.]

Independent-Samples t Tests
• Used to compare two means in a between-groups design
• Provides a situation in which each participant is assigned to only one condition
Q1. How can we express the null hypothesis for independent groups?
A. H0: |μ1 – μ2| = 0
B. H1: |μ1 – μ2| ≠ 0
C. H1: μ1 = μ2
D. A and C

Protip: We should use more informative subscripts, e.g., E for experimental and C for control condition, or use the whole words!
Ex: H0: |μexperimental – μcontrol| = 0
Ways to express the null…

Conceptually speaking… Is the difference between two group means significantly different from zero?
We tend to say… Are two independent group means significantly different from each other?

Two-tailed Null Hypothesis: H0: |μ1 – μ2| = 0, or H0: μ1 = μ2
Corresponding Research Hypothesis: H1: |μ1 – μ2| ≠ 0, or H1: μ1 ≠ μ2
Distribution of differences between means
This graph represents the beginning of a distribution of differences between means. It includes only 30 differences, whereas the actual distribution would include all possible differences.
Independent-Samples t Tests
We must consider the appropriate comparison distribution when we choose which hypothesis test to use.

Hypothesis Test | Number of Samples | Comparison Distribution
z test | One | Distribution of means
Single-sample t test | One | Distribution of means
Paired-samples t test | Two (same participants) | Distribution of mean difference scores
Independent-samples t test | Two (different participants) | Distribution of differences between means
What values are we going to need to analyze these data?
PER GROUP: STANDARD DEVIATION, MEAN, SAMPLE SIZE
THEN: STANDARD ERROR, T OBTAINED, T CRITICAL, COHEN'S D, CI
How would you change the t and d formulas for the Independent Groups Case?

ONE-SAMPLE
$t_{obt} = \frac{M - \text{null comparison value}}{s_M}$, where $s_M = \frac{s}{\sqrt{N}}$
Cohen's $d = \frac{M - \text{null comparison value}}{s}$
Confidence interval around μ, or around the difference between μ and the null value

PAIRED SAMPLES
$t_{obt} = \frac{M_D - 0}{s_M}$, where the standard error of the mean difference (scores) is $s_M = \frac{s_D}{\sqrt{N}}$
Cohen's $d = \frac{M_D - 0}{s_D}$
Confidence interval around $\mu_D$

INDEPENDENT GROUPS
$t_{obt} = \frac{M_1 - M_2}{s_{difference}}$, where $s_{difference}$ is the standard error of the difference between group means
Cohen's $d = \frac{M_1 - M_2}{s_{pooled}}$
Confidence interval around each μ, or around the difference
Q2. The sampling distribution of the difference between means needs one standard error value. What will we need to use for this calculation when we have two groups of different people?
A. Use the standard deviation for the larger group.
B. Take the average of the two standard deviations.
C. Treat everyone as one group and calculate a standard deviation.
D. Pool the two standard deviations so that the larger group is weighted more heavily than the smaller one.
Example of pooling / weighted means
We’re trying to compute the global average for all students in Psyc218.

If class sizes are even, this can be done by simply taking the average:
Section 1 (n = 100) = 69
Section 2 (n = 100) = 73
(69 + 73) / 2 = 71, which is the same as (100/200)(69) + (100/200)(73) = 71
This is ‘weighting’ by proportion of students; when n’s are equal, both sections contribute (100/200), or 50%.

If class sizes are not even, we need to compute a weighted mean:
Section 1 (n = 300) = 69
Section 2 (n = 100) = 73
(300/400)(69) + (100/400)(73) = 70
This is ‘weighting’ by proportion of students; when n’s are not equal, Section 1 contributes (300/400), or 75%.
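As a quick check of that arithmetic, here is a minimal Python sketch (the function name is mine, not from the slides):

```python
# Weighted mean of section averages, weighted by each section's share of students.
def weighted_mean(means, ns):
    total_n = sum(ns)
    return sum((n / total_n) * m for m, n in zip(means, ns))

print(weighted_mean([69, 73], [100, 100]))  # equal n's   -> 71.0
print(weighted_mean([69, 73], [300, 100]))  # unequal n's -> 70.0
```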
Pooled variance estimate
[Figure: the variance of one sample vs. a pooled variance that incorporates two sample variances.]

TO ESTIMATE THE VARIANCE OF THE POPULATION, USE INFORMATION FROM BOTH GROUPS, BUT GIVE MORE WEIGHT TO THE LARGER GROUP (IF APPLICABLE).
USE S POOLED AS THE DENOMINATOR FOR COHEN'S D, AND AS A STEP IN THE CALCULATION OF THE STANDARD ERROR OF THE DIFFERENCE BETWEEN MEANS.
Pooled variance estimate
We need to do the weighted average in “variance” form. Instead of weighting by n, we weight by degrees of freedom ($\frac{df_X}{df_T}$).

Variance for Group X: $s_X^2 = \frac{\Sigma(X - M_X)^2}{n_X - 1}$
Variance for Group Y: $s_Y^2 = \frac{\Sigma(Y - M_Y)^2}{n_Y - 1}$

$s_{pooled}^2 = \frac{n_X - 1}{N - 2}\, s_X^2 + \frac{n_Y - 1}{N - 2}\, s_Y^2$
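A minimal Python sketch of that df-weighted average (the function name is mine; the example values are the introvert/extravert variances used later in the deck):

```python
# Pooled variance: weight each sample variance by its degrees of freedom.
def pooled_variance(s2_x, n_x, s2_y, n_y):
    df_total = (n_x - 1) + (n_y - 1)   # = N - 2
    return ((n_x - 1) / df_total) * s2_x + ((n_y - 1) / df_total) * s2_y

print(pooled_variance(0.854497, 70, 0.805183, 73))  # ~0.829315, as in the survey example
```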
Standard Error of the Difference between Means

Standard error for Ch 7-9: $s_{\bar{X}} = \frac{s}{\sqrt{N}} = \sqrt{\frac{s^2}{N}}$

For two independent groups: $s_{difference} = \sqrt{\frac{s_{pooled}^2}{n_X} + \frac{s_{pooled}^2}{n_Y}}$, where $s_{pooled}^2$ is the pooled variance.

NOTE: I have combined steps c, d, and e on pages 254-255 of the textbook into one equation.
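Continuing the sketch, the combined standard-error equation looks like this in Python (again, names are illustrative):

```python
import math

# Standard error of the difference between two independent group means,
# built from the pooled variance and the two group sizes.
def se_difference(s2_pooled, n_x, n_y):
    return math.sqrt(s2_pooled / n_x + s2_pooled / n_y)

print(se_difference(0.829315, 70, 73))  # ~0.15234, as in the survey example
```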
Survey example: do introverts and extraverts differ in how determined they are to do well in stats?

Descriptive Statistics on the DV Determined
GROUP “1” INTROVERTS: n1 = 70 people, Mean Determined = 3.6357, Standard Deviation = 0.92439
GROUP “2” EXTRAVERTS: n2 = 73 people, Mean Determined = 3.5616, Standard Deviation = 0.89732

After I entered the data in SPSS, I asked for an “Independent Samples t Test”. This is the first table in the output (Introvert and Extravert rows).
Analyzing Independent Groups Data
FAMILIAR SET OF ANALYSES, ADAPTED FOR THE TWO-GROUP CONTEXT

$t_{obt} = \frac{M_1 - M_2}{s_{difference}}$

$s_{pooled}^2 = \frac{n_1 - 1}{N - 2}\, s_1^2 + \frac{n_2 - 1}{N - 2}\, s_2^2$

Standard error of the difference between group means: $s_{difference} = \sqrt{\frac{s_{pooled}^2}{n_1} + \frac{s_{pooled}^2}{n_2}}$

Cohen's $d = \frac{M_1 - M_2}{s_{pooled}}$

Confidence interval around the difference between means, $\mu_1 - \mu_2$

REMEMBER: I have combined steps c, d, and e on pages 254-255 of the textbook into one equation.
Finding your t-critical

Degrees of Freedom:
df_x = n_x - 1
df_y = n_y - 1
df_total = df_x + df_y, or equivalently df_total = N - 2
Choose a more conservative value if the exact df isn’t listed in the table.

Q3. What’s the closest critical value of t? (assume two-tailed)
A. ±1.645
B. ±1.658
C. ±1.960
D. ±1.980
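If you have software handy instead of a table, the critical value can be read straight off the t distribution; a sketch with scipy, using the df for the upcoming survey example (n1 = 70, n2 = 73). The exact value for df = 141 is about 1.977; a printed table that doesn’t list df = 141 gives the more conservative ±1.980.

```python
from scipy import stats

n1, n2 = 70, 73
df = n1 + n2 - 2                         # N - 2 = 141
t_crit = stats.t.ppf(1 - 0.05 / 2, df)   # two-tailed, alpha = .05
print(round(t_crit, 3))                  # ~1.977
```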
Independent Groups t-test: calculate $s_{pooled}^2$

$s_{introvert} = 0.92439$, so $s_{introvert}^2 = 0.854497$
$s_{extravert} = 0.89732$, so $s_{extravert}^2 = 0.805183$

$t_{obt} = \frac{M_1 - M_2}{s_{difference}}$,  $s_{difference} = \sqrt{\frac{s_{pooled}^2}{n_1} + \frac{s_{pooled}^2}{n_2}}$,  $s_{pooled}^2 = \frac{n_X - 1}{N - 2}\, s_X^2 + \frac{n_Y - 1}{N - 2}\, s_Y^2$

What is the pooled variance estimate?
A. 0.82931
B. 0.91067
C. 0.91057
D. 0.15234
Independent Groups t-test: calculate $s_{pooled}^2$

$s_{introvert}^2 = 0.854497$, $s_{extravert}^2 = 0.805183$

$s_{pooled}^2 = \frac{n_X - 1}{N - 2}\, s_X^2 + \frac{n_Y - 1}{N - 2}\, s_Y^2 = \frac{70 - 1}{143 - 2}(0.854497) + \frac{73 - 1}{143 - 2}(0.805183)$
$s_{pooled}^2 = (0.48936)(0.854497) + (0.510638)(0.805183) = 0.418158 + 0.411157 = 0.829315$
Independent Groups t-test: calculate $s_{difference}$

$s_{pooled}^2 = 0.829315$

$t_{obt} = \frac{M_1 - M_2}{s_{difference}}$,  $s_{difference} = \sqrt{\frac{s_{pooled}^2}{n_1} + \frac{s_{pooled}^2}{n_2}}$

What is the standard error estimate?
A. 0.011361
B. 0.02321
C. 0.15342
D. 0.15234
Independent Groups t-test: calculate $s_{difference}$

$s_{pooled}^2 = 0.829315$

$s_{difference} = \sqrt{\frac{s_{pooled}^2}{n_1} + \frac{s_{pooled}^2}{n_2}} = \sqrt{\frac{0.829315}{70} + \frac{0.829315}{73}}$
$s_{difference} = \sqrt{0.011847 + 0.011361} = \sqrt{0.023208} = 0.15234$
Independent Groups t-test: calculate t-obtained

$s_{difference} = 0.15234$

$t_{obt} = \frac{M_1 - M_2}{s_{difference}} = \frac{3.6357 - 3.5616}{0.1523} = 0.4864$
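The whole hand calculation for the survey example can be checked with a short Python sketch using only the summary statistics above (not SPSS; variable names are mine):

```python
import math

# Introvert vs. extravert ratings on "Determined": summary stats from the slides.
m1, s1, n1 = 3.6357, 0.92439, 70   # introverts
m2, s2, n2 = 3.5616, 0.89732, 73   # extraverts

df = n1 + n2 - 2                                                 # 141
s2_pooled = ((n1 - 1) / df) * s1**2 + ((n2 - 1) / df) * s2**2    # ~0.829315
s_diff = math.sqrt(s2_pooled / n1 + s2_pooled / n2)              # ~0.15234
t_obt = (m1 - m2) / s_diff                                       # ~0.486

print(round(s2_pooled, 6), round(s_diff, 5), round(t_obt, 4))
```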
Null Hypothesis Sampling Distribution of the Difference Between Group Means, at N - 2 degrees of freedom

[Figure: t distribution of all possible values of t, centered on μ1 – μ2 = 0, with tcrit(141) = -1.980 and tcrit(141) = +1.980 cutting off .025 in each tail.]

If our sample mean difference = 0, then t-obtained = 0.

When we reject the null hypothesis with the t sampling distribution, we’re saying we think we drew our sample from a population that has a non-zero t (i.e., a difference between group means).
Null Hypothesis Sampling Distribution of the Difference Between Group Means, at N - 2 degrees of freedom

$t_{obt} = 0.4864$, with tcrit(141) = ±1.980 (.025 in each tail)

Is our difference between means significantly different from zero?
A. Yes
B. No

t-obtained is between the critical values, not outside either of them. It looks like these two groups were drawn from the null population (no difference), p > .05.
Computing Cohen's d

Cohen's $d = \frac{M_1 - M_2}{s_{pooled}}$

$s_{pooled}^2 = 0.82931$, $s_{difference} = 0.15234$

Q7. What is the effect size?
A. 0.0741
B. 0.08935
C. 0.08137
D. 0.4864
Computing Cohen's d

Cohen's $d = \frac{M_1 - M_2}{s_{pooled}}$

$s_{pooled}^2 = 0.82931$, so $s_{pooled} = 0.910665$

$d = \frac{3.6357 - 3.5616}{0.910665} = 0.081$
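One more line of the same sketch gives the effect size:

```python
import math

# Cohen's d = (M1 - M2) / s_pooled, using the values from the slides.
d = (3.6357 - 3.5616) / math.sqrt(0.829315)
print(round(d, 3))   # ~0.081
```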
Add a Confidence Interval to identify the range of plausible values for the difference between means of whatever population our sample belongs to.

$s_{pooled}^2 = 0.82931$, $s_{difference} = 0.15234$, $t_{critical} = 1.98$

Lower Boundary: $M_{Diff\,lower} = M_{Diff} - t_{critical}(s_{diff})$
Upper Boundary: $M_{Diff\,upper} = M_{Diff} + t_{critical}(s_{diff})$

Q8. What is the 95% CI?
A. [-0.22, 0.38]
B. [-1.56, 1.72]
C. [-0.07, 0.23]
D. [-0.15, 0.38]

[Figure: number line of the difference between group means, from -.30 to .90.]
Add a Confidence Interval to identify the range of plausible values for the difference between means of whatever population our sample belongs to.

Lower Boundary: $M_{Diff\,lower} = 0.074 - 1.98(0.1523) = -0.22$
Upper Boundary: $M_{Diff\,upper} = 0.074 + 1.98(0.1523) = 0.38$

[Figure: the interval plotted on the difference-between-group-means number line, from -.30 to .90.]
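And the confidence interval, as a sketch using the slide’s rounded inputs (with these rounded values the lower bound comes out at about -0.23; the slide reports -0.22, a rounding difference only):

```python
m_diff, s_diff, t_crit = 0.074, 0.1523, 1.98   # rounded values from the slides

ci_lower = m_diff - t_crit * s_diff   # ~ -0.23
ci_upper = m_diff + t_crit * s_diff   # ~  0.38
print(round(ci_lower, 2), round(ci_upper, 2))
```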


The Plan: Analyzing Our Data

t-test: means, SDs, and n per group; difference between group means; pooled standard deviation; standard error of the difference between means
Confidence Intervals: t-critical; difference between group means; standard error of the difference between means
Cohen's d: difference between group means; pooled standard deviation
Conclusions
The 95% confidence interval for Mx – My does not include 0. If the H0 that μx – μy = 0 was being tested, the difference between Mx and My would:
A. not be significant at the 0.05 level.
B. be significant at the 0.025 level.
C. be significant at the 0.05 level.
D. not be significant at the 0.10 level.
Every statistical test relies on some assumptions

ASSUMPTIONS OF THE Z TEST / SINGLE SAMPLE T-TEST / PAIRED SAMPLES T-TEST
• DV measured using a scale variable (so we can calculate means)
• Population(s) normally distributed, or N ≥ 30 (N = n1 + n2)
• Participants randomly selected from population(s)
  (if not, be careful generalizing)

What happens if we violate these assumptions?
Every statistical test relies on some assumptions

ASSUMPTIONS OF THE INDEPENDENT SAMPLES T-TEST
• DV measured using a scale variable (so we can calculate means)
• Population(s) normally distributed, or N ≥ 30
• Subjects are randomly selected from population(s)
  (if not, be careful generalizing)
• Homogeneity of Variance (HOV)
Homogeneity of Variance (HOV) Assumption
HOMOSCEDASTIC: populations that have the same variance
HETEROSCEDASTIC: populations that have different variances

Because we are ‘pooling’ our variance estimate from the two different samples, we are assuming that the samples all come from populations with the same variances.

Independent Samples t-Tests are robust to violations of HOV ONLY if sample sizes are perfectly equal (n1 = n2). Otherwise we need to check that an HOV test is not significant.
▪ In other words, we want to check that these variances are not significantly different.
Independent samples t-test: Caffeine and Reaction Times Study
o Does caffeine improve reaction times?
o Group 1 gets real caffeine, Group 2 gets placebo
o Everybody plays a reaction time task
o Dependent variable: reaction time in milliseconds (ms)
We want to conduct a one-tailed test
o Ho: People who consume caffeine will have slower or equal reaction times compared to people who consume a placebo
  Ho: μCaffeine ≥ μPlacebo
o H1: People who consume caffeine will have faster reaction times compared to people who consume a placebo
  H1: μCaffeine < μPlacebo

Fill in the blanks:
A. Not different / Different
B. Different / Not different
C. Slower or equal / Faster
D. Faster or equal / Slower
We want to conduct a one-tailed test
o Ho: People who consume caffeine will have slower or equal reaction times compared to people who consume a placebo
  Ho: μCaffeine ≥ μPlacebo
o H1: People who consume caffeine will have faster reaction times compared to people who consume a placebo
  H1: μCaffeine < μPlacebo

Fill in the blanks:
A. ≥ / <
B. ≤ / >
C. = / ≠
D. ≠ / ≠
We want to conduct a one-tailed test
o Ho: People who consume caffeine will have slower or equal reaction times compared to people who consume a placebo
  Ho: μCaffeine ≥ μPlacebo
  ***NOTE*** “Slower” reaction time means a larger millisecond value
o H1: People who consume caffeine will have faster reaction times compared to people who consume a placebo
  H1: μCaffeine < μPlacebo
  ***NOTE*** “Faster” reaction time means a smaller millisecond value
Alpha (one-tailed) = .05
Caffeine = 7 people, Placebo = 6 people
Ho: μCaffeine ≥ μPlacebo
H1: μCaffeine < μPlacebo

What is t-critical?
A. +2.201
B. –2.201
C. +1.796
D. –1.796
Data collected

Caffeine (ms) | Placebo (ms)
40 | 45
45 | 60
55 | 55
35 | 50
40 | 55
45 | 50
40 |
Independent Groups t-test: calculate $s_{pooled}^2$
(Using the caffeine and placebo data above.)

$t_{obt} = \frac{\bar{X}_1 - \bar{X}_2}{s_{difference}}$,  $s_{difference} = \sqrt{\frac{s_{pooled}^2}{n_1} + \frac{s_{pooled}^2}{n_2}}$,  $s_{pooled}^2 = \frac{n_X - 1}{N - 2}\, s_X^2 + \frac{n_Y - 1}{N - 2}\, s_Y^2$

What is the pooled variance estimate?
A. 34.5779
B. 34.4871
C. 5.85388
D. 5.8803
Independent Groups t-test: calculate $s_{difference}$

$s_{pooled}^2 = 34.5779$

$t_{obt} = \frac{\bar{X}_1 - \bar{X}_2}{s_{difference}}$,  $s_{difference} = \sqrt{\frac{s_{pooled}^2}{n_1} + \frac{s_{pooled}^2}{n_2}}$

What is the standard error estimate?
A. 10.7027
B. 3.5607
C. 3.2715
D. 12.6786
Independent Groups t-test: calculate t-obtained

$s_{difference} = 3.2715$

$t_{obt} = \frac{\bar{X}_1 - \bar{X}_2}{s_{difference}} = \frac{42.8571 - 52.5000}{3.2715} = -2.948$
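To verify the caffeine example end to end from the raw scores, here is a sketch using scipy’s built-in independent-samples t test, which pools the variances in the same way when equal_var=True (halving the two-tailed p is valid here because t falls in the predicted, negative direction):

```python
import numpy as np
from scipy import stats

caffeine = np.array([40, 45, 55, 35, 40, 45, 40])   # reaction times, ms
placebo  = np.array([45, 60, 55, 50, 55, 50])

# Pooled-variance (Student's) independent-samples t test.
t_obt, p_two_tailed = stats.ttest_ind(caffeine, placebo, equal_var=True)
p_one_tailed = p_two_tailed / 2

print(round(t_obt, 3), round(p_one_tailed, 4))       # t ~ -2.95, p < .05 one-tailed
```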
Null Hypothesis Sampling Distribution of the Difference Between Group Means, at N - 2 degrees of freedom

$t_{obt} = -2.948$, with tcrit(11) = -1.796 (.05 in the lower tail)

[Figure: one-tailed t distribution of all possible values of t, centered on μ1 – μ2 = 0; if our sample mean difference = 0, then t-obtained = 0.]

Is our difference between means significantly different from zero? Make sure to check the direction of the effect!
A. Yes
B. No
SPSS Output

Levene’s Test is testing the assumption that our two samples have equal variances in the population (HOV). Here we can see this test is NOT SIGNIFICANT (p = .78).

Did we violate our HOV assumption? (p = 0.78)
A. YES, we violated this assumption
B. NO, we did not
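Levene’s test isn’t only an SPSS feature; here is a sketch with scipy, reusing the caffeine and placebo arrays from the previous code block (center='mean' is closer to the mean-centered version SPSS reports, so the p-value should be in the same ballpark as the .78 shown in the output):

```python
import numpy as np
from scipy import stats

caffeine = np.array([40, 45, 55, 35, 40, 45, 40])
placebo  = np.array([45, 60, 55, 50, 55, 50])

# Levene's test of equal variances; a non-significant p means no evidence
# that the homogeneity of variance (HOV) assumption is violated.
W, p = stats.levene(caffeine, placebo, center='mean')
print(round(W, 3), round(p, 3))
```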
HOV Assumption
• When we design an experiment, we generally hypothesize that our manipulation (i.e., caffeine) will cause some mean difference between our groups.
• We do NOT expect this manipulation to change the variability in our experimental groups.
• I.e., there should still be the same ‘spread’ or variability in the caffeine and non-caffeine groups.
• IF caffeine did happen to have an effect on group variance, a significant Levene’s test will tell us, and we would use the corrected test reported in row 2 of the output.
Writing a good conclusion
Participants who consumed caffeine (M = 42.86 ms, s = 6.36 ms) performed the reaction time task significantly faster than the placebo group (M = 52.50 ms, s = 5.24 ms), t(11) = -2.95, p < .05.

A good conclusion ALWAYS states your group(s), the descriptive statistics (means and standard deviations), and your test results. If you have it, you should also report Cohen’s d and your confidence intervals.
Where we’ve been…
Normal Single sample
To compare sampling z test
distribution
sample mean to a distribution of the
(when know μ (CI around mean,
population mean mean (𝜎𝑀 )
and σ) effect size)
Where we’ve been…
Normal Single sample
To compare sampling z test
distribution
sample mean to a distribution of the
(when know μ (CI around mean,
population mean mean (𝜎𝑀 )
and σ) effect size)

To compare sample t distribution Single sample


sampling t test
mean to a (when don’t know
distribution of the
population mean
mean (𝑠𝑋ത )
σ) (CI around mean,
or particular score df = N-1 effect size)
Where we’ve been…
Normal Single sample
To compare sampling z test
distribution
sample mean to a distribution of the
(when know μ (CI around mean,
population mean mean (𝜎𝑀 )
and σ) effect size)

To compare sample t distribution Single sample


sampling t test
mean to a (when don’t know
distribution of the
population mean
mean (𝑠𝑋ത )
σ) (CI around mean,
or particular score df = N-1 effect size)

Paired Samples
sampling distribution (repeated measures) t-
To compare means of t distribution
of the mean difference test
two related groups df = N-1
(𝑠𝑋ത ) (CI around mean
difference, effect size)
Where we’ve been…
Normal Single sample
To compare sampling z test
distribution
sample mean to a distribution of the
(when know μ (CI around mean,
population mean mean (𝜎𝑀 )
and σ) effect size)

To compare sample t distribution Single sample


sampling t test
mean to a (when don’t know
distribution of the
population mean
mean (𝑠𝑋ത )
σ) (CI around mean,
or particular score df = N-1 effect size)

Paired Samples
sampling distribution (repeated measures) t-
To compare means of t distribution
of the mean difference test
two related groups df = N-1
(𝑠𝑋ത ) (CI around mean
difference, effect size)

sampling Independent
To compare means distribution of the groups t-test
of two t distribution
difference between (CI around
independent two means df = N-2
groups difference between
(𝑠𝑑𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒 ) means, effect size)
Comparison Distributions
(The z-distribution is the normal curve; the t’s reference the t-distribution at their associated dfs.)

Ch 7: z-test. Sampling distribution of the mean: centered around the population mean; error = SEM, $\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{N}}$

Ch 9: Single sample t-test. Sampling distribution of the mean: centered around the population mean; error = SEM, $s_{\bar{X}} = \frac{s}{\sqrt{N}}$

Ch 10: Paired samples t-test. Sampling distribution of the mean difference: centered around 0; error = standard error of the mean difference, $s_{\bar{X}} = \frac{s_D}{\sqrt{N}}$

Ch 11: Independent samples t-test. Sampling distribution of differences between means: centered around 0; error = standard error of the difference between means, $s_{diff} = \sqrt{\frac{s_{pooled}^2}{n_1} + \frac{s_{pooled}^2}{n_2}}$
Learn to dissociate your symbols
Standard deviation = $s$
Variance = $s^2$
Standard deviation of ‘difference scores’ = $s_D$
Pooled variance = $s_{pooled}^2$
Standard error of the mean = $s_{\bar{X}}$
Standard error of the difference between means = $s_{difference}$
Which test is best?

IS OUR MEAN DIFFERENT FROM A SPECIFIC POPULATION MEAN?
Do we know the population standard deviation (σ)?
• Yes → Z test
• No → Single sample t-test

ARE THESE TWO SAMPLE MEANS DIFFERENT FROM EACH OTHER?
Are the data correlated or independent?
• Correlated → Paired samples t-test
• Independent → Independent samples t-test