
CORRELATION RESEARCH TOOLS

TYPICAL CORRELATIONAL AND COMPARATIVE RESEARCH MEASURING TOOLS

• 1. Correlations
   Pearson (parametric data) – the most commonly applied correlation method
   Kendall’s tau
   Spearman
   Chi-square (nonparametric data)
   * Parametric: interval- or ratio-scaled variables
   * Non-parametric: ordinal and nominal variables
• 2. t-Test (paired samples, independent samples t-test)
   For the independent samples t-test, the grouping variable should have two categories only.
• 3. Analysis of Variance (ANOVA)
   Post hoc tests (Tukey – equal variances assumed; Tamhane – equal variances not assumed) and the homogeneity of variances test (Levene statistic).
   For post hoc tests, the factor should have at least three categories.
• 4. Scales reliability test
   Interval- or ratio-scaled data – Cronbach’s alpha
• 5. Factor analysis
• 6. Normality analysis
• 7. Multiple regression analysis
CORRELATIONS
• Two variables are measured at a time (bivariate).
• In stating the correlations of more than two variables, the word “between” is used instead of “among” because the correlation is measured between two variables at a time (bivariate).

Example (null hypothesis):
• There are no significant correlations between Quality Education (variable 1), Qualified Faculty (variable 2) and Better School Facilities (variable 3).
PEARSON r
(Pearson Product Moment Correlation Coefficient r)

• Commonly used to test the relationship between variables that are quantitative in nature (either interval or ratio scale).

Meanwhile,
• Spearman rho and Kendall’s tau are common statistical tools used to measure the relationship between variables at the ordinal scale.
• The chi-square test is commonly used to test the significance of a relationship between nominal variables.
• Phi and Cramer’s V are alternative statistics if one wants to know not only the significance but also the strength of the relationship between two nominal variables.
THE PEARSON PRODUCT MOMENT CORRELATION COEFFICIENT
• The correlation coefficient is a single number that represents the degree of relation between two variables.
• The Pearson Product-Moment Correlation Coefficient (symbolized by r) is the most common measure of correlation; researchers calculate it when both the X variable and the Y variable are interval or ratio scale measurements. Mathematically, it can be defined as the average of the cross-products of z-scores.
• The raw score formula for r is:

r = [NΣXY − (ΣX)(ΣY)] / √{[NΣX² − (ΣX)²][NΣY² − (ΣY)²]}

(Smith/Davis © 2005 Prentice Hall)
THE RANGE OF r VALUES
• The range of r: correlation coefficients can range in value from -1.00 to +1.00.
• A correlation of -1.00 indicates a perfect negative correlation between the two variables of interest. That is, whenever there is an increase of one unit in one variable, there is always the same proportional decrease in the other variable.
• A zero correlation means there is little or no relation between the two variables. That is, as scores on one variable increase, scores on the other variable may increase, decrease, or not change at all.
• A perfect positive correlation occurs when you have a value of +1.00: for every increase of one unit in one variable, we always see a proportional increase in the other variable.
• The existence of a perfect correlation indicates there are no other factors present that influence the relation we are measuring. This situation rarely occurs in real life.
INTERPRETING CORRELATION COEFFICIENTS
• Statistically significant results mean that a research result would occur rarely by chance.
• If the correlation you calculate is sufficiently large that it would occur rarely by chance, then you have reason to believe that the two variables are related.
• The standard by which significance in psychology is determined is the .05 level.
• That is, a result is significant when it would occur by chance 5 times out of a hundred.
• Researchers who are more cautious may choose to adopt a .01 level of significance.
EFFECT SIZE
• Even though statistical significance is an important component of psychological research, it may not tell us very much about the magnitude of our results.
• Effect size refers to the size or magnitude of the effect an independent variable (IV) produced in an experiment, or the size or magnitude of a correlation.
• Effect size calculation is important because, unfortunately, a research result can be significant and yet the effect size may be quite small.
• This situation can arise because, as sample size gets larger, the critical value needed to achieve significance becomes smaller.
• To calculate the effect size for the Pearson product-moment correlation, all you have to do is square the correlation coefficient.
• r² is known as the coefficient of determination.
• Multiply the coefficient of determination by 100 and you will see what percentage of the variance is accounted for by the correlation.
• The higher r² becomes, the more variance is accounted for by the relation between the two variables under study.
• Lower r² values indicate that factors other than the two variables of interest are influencing the relation in which we are interested.
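To make the computation concrete, here is a minimal Python sketch (not part of the original slides; it assumes SciPy is installed and uses made-up scores) showing r, its p-value, and the coefficient of determination:

from scipy.stats import pearsonr

# Illustrative (hypothetical) scores on two variables
x = [2, 4, 5, 7, 9]
y = [3, 5, 4, 8, 10]

r, p_value = pearsonr(x, y)      # Pearson r and its two-tailed p-value
r_squared = r ** 2               # coefficient of determination

print(f"r = {r:.3f}, p = {p_value:.3f}")
print(f"Variance accounted for: {r_squared * 100:.1f}%")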
PEARSON r / PEARSON PRODUCT-MOMENT CORRELATION
• Direct relationship

• Ho: There is no significant relationship between score in Math and score in Physics.
• H1: There is a significant relationship between score in Math and score in Physics.
DATA

Student #   Score in Math (IV)   Score in Physics (DV)
1           90                   100
2           95                   96
3           90                   98
4           100                  100
5           65                   68
6           90                   91
7           88                   89
8           75                   76
9           90                   87
10          85                   85
11          82                   82
12          80                   80
13          92                   93
14          90                   88
15          85                   80
STEPS IN USING PEARSON r THROUGH SPSS
1. Open SPSS: File → New → Data.
2. Click Variable View and type in ScoreMath & ScorePhysics. (*** Note: in typing the variable names, there should be no space in between.)
3. Click Data View and enter the data.
4. Click Analyze → Correlate → Bivariate.
5. Highlight ScoreMath & ScorePhysics and drag them to Variables, then click Options.
6. Check Pearson, Two-tailed, and Flag significant correlations; under Options, check Mean and standard deviation.
7. Click Continue, then OK to see the result.
RESULT:
Sig. (2-tailed) = 0.000
INTERPRETATION:
• Stat test: Pearson r
• α = 0.05
• Tail = 2-tailed
• Result: Computed Pearson r = 0.913
• Sig = p-value = 0.000

Decision rules:
• Rule #1: If sig < 0.05, reject Ho.
• Rule #2: If sig = 0.05, reject Ho.
• Rule #3: If sig > 0.05, accept Ho.
DECISION:
• Reject Ho

• Conclusion:
• There is a significant relationship between score in Math and score in Physics.

• Implications:
• A better background in Mathematics would lead to better performance in Physics.
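Outside of SPSS, the same analysis can be reproduced with a short Python sketch (offered as an aside, assuming SciPy is available; the data are the Math/Physics scores tabulated above):

from scipy.stats import pearsonr

math_scores    = [90, 95, 90, 100, 65, 90, 88, 75, 90, 85, 82, 80, 92, 90, 85]
physics_scores = [100, 96, 98, 100, 68, 91, 89, 76, 87, 85, 82, 80, 93, 88, 80]

r, p = pearsonr(math_scores, physics_scores)
print(f"Pearson r = {r:.3f}, Sig. (2-tailed) = {p:.3f}")
# Should agree with the SPSS output above (r = 0.913, sig < 0.05: reject Ho)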
Pearson r (SPSS path): Analyze → Correlate → Bivariate → drag the variables → Pearson r → Options → OK
TABLE OF CORRELATIONS INTERPRETATION

Range of Coefficient (From – To)   Description
±0.81 to ±1.00                     Very Strong
±0.61 to ±0.80                     Strong
±0.41 to ±0.60                     Moderate
±0.21 to ±0.40                     Weak
±0.01 to ±0.20                     Weak to No Correlation
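For illustration (an addition, not part of the original material), a small Python helper that mirrors this interpretation table:

def describe_correlation(r: float) -> str:
    """Map a correlation coefficient to the descriptions in the table above."""
    strength = abs(r)
    if strength >= 0.81:
        return "Very Strong"
    if strength >= 0.61:
        return "Strong"
    if strength >= 0.41:
        return "Moderate"
    if strength >= 0.21:
        return "Weak"
    return "Weak to No Correlation"

print(describe_correlation(0.913))  # Very Strong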


ASSIGNMENT: DATA ON BUYING BEHAVIOR

1. What is the profile of the respondents in terms of sex, age and income?
   Stat tools: frequency & percentage distribution and graph
2. To what extent is the buying behavior of the respondents in terms of cultural, social, personal and psychological factors?
   Stat tools: mean and sd
3. Is there a significant relationship between respondents’ sex and their overall buying behavior?
   Stat tool: Pearson r
Chi-Square Test as a Statistical Tool

Chi-Square as a Statistical Test
• Chi-square test: an inferential statistics technique designed to test for significant relationships between two variables organized in a bivariate table.
• Chi-square requires no assumptions about the shape of the population distribution from which a sample is drawn.
The Chi-Square Test
◦ A statistical method used to determine goodness of fit.
◦ Goodness of fit refers to how close the observed data are to those predicted from a hypothesis.

Note:
* The chi-square test does not prove that a hypothesis is correct;
* it evaluates to what extent the data and the hypothesis have a good fit.
Limitations of the Chi-Square Test
• The chi-square test does not give us much information about the strength of the relationship or its substantive significance in the population.
• The chi-square test is sensitive to sample size. The size of the calculated chi-square is directly proportional to the size of the sample, independent of the strength of the relationship between the variables.
• The chi-square test is also sensitive to small expected frequencies in one or more of the cells in the table.
Statistical Independence
Independence (statistical): the absence of
association between two cross-tabulated variables.
The percentage distributions of the dependent
variable within each category of the independent
variable are identical.
Hypothesis Testing with Chi-Square
Chi-square follows five steps:
1. Making assumptions (random sampling)
2. Stating the research and null hypotheses
3. Selecting the sampling distribution and specifying the test statistic
4. Computing the test statistic
5. Making a decision and interpreting the results
The Assumptions
• The chi-square test requires no assumptions about the shape of the population distribution from which the sample was drawn.
• However, like all inferential techniques, it assumes random sampling.
Stating Research and Null Hypotheses
• The research hypothesis (H1) proposes that the two variables are related in the population.
• The null hypothesis (H0) states that no association exists between the two cross-tabulated variables in the population, and therefore the variables are statistically independent.

H1: The two variables are related in the population. Gender and fear of walking alone at night are statistically dependent.

Afraid   Men      Women    Total
No       83.3%    57.2%    71.1%
Yes      16.7%    42.8%    28.9%
Total    100%     100%     100%
H0: There is no association between the two variables. Gender and fear of walking alone at night are statistically independent.

Afraid   Men      Women    Total
No       71.1%    71.1%    71.1%
Yes      28.9%    28.9%    28.9%
Total    100%     100%     100%
The Concept of Expected Frequencies
• Expected frequencies (fe): the cell frequencies that would be expected in a bivariate table if the two variables were statistically independent.
• Observed frequencies (fo): the cell frequencies actually observed in a bivariate table.

Calculating Expected Frequencies

fe = (column marginal)(row marginal) / N

To obtain the expected frequency for any cell in any cross-tabulation in which the two variables are assumed independent, multiply the row and column totals for that cell and divide the product by the total number of cases in the table.
Chi-Square (Obtained)
• The test statistic that summarizes the differences between the observed (fo) and the expected (fe) frequencies in a bivariate table.

Calculating the Obtained Chi-Square

χ² = Σ (fe − fo)² / fe

where
fe = expected frequencies
fo = observed frequencies
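To illustrate both formulas together (a sketch with made-up 2×2 counts, not from the original slides), the expected frequencies can be derived from the marginals and the obtained chi-square accumulated cell by cell:

# Illustrative observed frequencies for a 2x2 bivariate table
observed = [[10, 20],
            [30, 40]]

n = sum(sum(row) for row in observed)              # total number of cases
row_totals = [sum(row) for row in observed]        # row marginals
col_totals = [sum(col) for col in zip(*observed)]  # column marginals

chi_square = 0.0
for i, row in enumerate(observed):
    for j, fo in enumerate(row):
        fe = row_totals[i] * col_totals[j] / n     # (row marginal)(column marginal) / N
        chi_square += (fe - fo) ** 2 / fe          # accumulate (fe - fo)^2 / fe

print(f"Chi-square (obtained) = {chi_square:.3f}")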
The Sampling Distribution of Chi-Square
• The sampling distribution of chi-square tells the probability of getting values of chi-square, assuming no relationship exists in the population.
• The chi-square sampling distributions depend on the degrees of freedom.
• The χ² sampling distribution is not one distribution, but a family of distributions.
• The distributions are positively skewed. The research hypothesis for the chi-square is always a one-tailed test.
• Chi-square values are always positive. The minimum possible value is zero, with no upper limit to its maximum value.
• As the number of degrees of freedom increases, the χ² distribution becomes more symmetrical.
Determining the Degrees of Freedom

df = (r – 1)(c – 1)

where
r = the number of rows
c = the number of columns

Calculating Degrees of Freedom
How many degrees of freedom would a table with 3 rows and 2 columns have?
(3 – 1)(2 – 1) = 2, i.e., 2 degrees of freedom
The Chi-Square Test
(we will cover this in lab)

The general formula is

χ² = Σ (O – E)² / E

where
– O = observed data in each category
– E = expected data in each category based on the experimenter’s hypothesis
– Σ = sum of the calculations for each category
Consider the following example in Drosophila melanogaster.

• Gene affecting wing shape:
– c+ = Normal wing
– c = Curved wing
• Gene affecting body color:
– e+ = Normal (gray)
– e = ebony
• Note:
– The wild-type allele is designated with a + sign.
– Recessive mutant alleles are designated with lowercase letters.

• The cross:
– A cross is made between two true-breeding flies (c+c+e+e+ and ccee). The flies of the F1 generation are then allowed to mate with each other to produce an F2 generation.
• The outcome:
– F1 generation: all offspring have straight wings and gray bodies.
– F2 generation:
• 193 straight wings, gray bodies
• 69 straight wings, ebony bodies
• 64 curved wings, gray bodies
• 26 curved wings, ebony bodies
• 352 total flies

• Applying the chi-square test
– Step 1: Propose a null hypothesis (Ho) that allows us to calculate the expected values based on Mendel’s laws.
• The two traits are independently assorting.
– Step 2: Calculate the expected values of the four phenotypes, based on the hypothesis.
• According to our hypothesis, there should be a 9:3:3:1 ratio in the F2 generation.

Phenotype                       Expected probability   Expected number     Observed number
straight wings, gray bodies     9/16                   9/16 × 352 = 198    193
straight wings, ebony bodies    3/16                   3/16 × 352 = 66     69
curved wings, gray bodies       3/16                   3/16 × 352 = 66     64
curved wings, ebony bodies      1/16                   1/16 × 352 = 22     26
– Step 3: Apply the chi-square formula

χ² = (O1 – E1)²/E1 + (O2 – E2)²/E2 + (O3 – E3)²/E3 + (O4 – E4)²/E4
   = (193 – 198)²/198 + (69 – 66)²/66 + (64 – 66)²/66 + (26 – 22)²/22
   ≈ 0.13 + 0.14 + 0.06 + 0.73
   = 1.06
• Step 4: Interpret the chi-square value
– The calculated chi-square value can be used to obtain probabilities, or P values, from a chi-square table.
• These probabilities allow us to determine the likelihood that the observed deviations are due to random chance alone.
– Low chi-square values indicate a high probability that the observed deviations could be due to random chance alone.
– High chi-square values indicate a low probability that the observed deviations are due to random chance alone.
– If the chi-square value results in a probability that is less than 0.05 (i.e., less than 5%), the deviation is considered statistically significant.
• The hypothesis is rejected.
• Step 4 (continued): Determine the degrees of freedom
– Before we can use the chi-square table, we have to determine the degrees of freedom (df).
• The df is a measure of the number of categories that are independent of each other.
• If you know 3 of the 4 categories, you can deduce the 4th (total number of progeny minus categories 1–3).
• df = n – 1, where n = total number of categories.
• In our experiment, there are four phenotypes/categories; therefore, df = 4 – 1 = 3.
– Refer to Table 2.1.
• Step 4 (continued): Interpret the chi-square value
– With df = 3, the chi-square value of 1.06 is slightly greater than 1.005 (which corresponds to a P-value of 0.80).
– P-value = 0.80 means that chi-square values equal to or greater than 1.005 are expected to occur 80% of the time due to random chance alone; that is, when the null hypothesis is true.
– Therefore, it is quite probable that the deviations between the observed and expected values in this experiment can be explained by random sampling error, and the null hypothesis is not rejected. (What was the null hypothesis?)
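The same goodness-of-fit test can be checked with a short Python sketch (an aside, assuming SciPy is available; the counts are the F2 data above):

from scipy.stats import chisquare

observed = [193, 69, 64, 26]                              # F2 phenotype counts
total = sum(observed)                                     # 352 flies
expected = [total * p for p in (9/16, 3/16, 3/16, 1/16)]  # 198, 66, 66, 22

stat, p_value = chisquare(observed, f_exp=expected)       # df = 4 - 1 = 3
print(f"chi-square = {stat:.2f}, P = {p_value:.2f}")      # ~1.06, P ~ 0.8: do not reject Ho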
CHI-SQUARE THROUGH SPSS

Chi-square: a test of the null hypothesis when there are 1 or 2 independent variables.

Independent variables:
Degree – Teaching (1), Non-teaching (0)
Age – 20 years old or above (1), below 20 (0)

Dependent variable:
Cholesterol – (3) High, (2) Moderate, (1) Low

Hypothesis:
H0: Degree is not associated with age in relation to a person’s cholesterol.
H1: Degree is associated with age in relation to a person’s cholesterol.
n    Degree   Age   Cholesterol       n    Degree   Age   Cholesterol
1    0        1     3                 11   0        1     2
2    1        1     2                 12   1        1     1
3    1        1     1                 13   1        1     3
4    1        1     1                 14   0        1     3
5    0        1     1                 15   1        1     1
6    0        1     2                 16   1        1     1
7    1        1     2                 17   1        1     2
8    1        0     3                 18   0        1     2
9    0        0     3                 19   1        1     1
10   0        0     1                 20   0        1     1
Required:
A. Frequency table
B. Null hypothesis
C. Test the null hypothesis
D. Conclusion
FREQUENCY TABLE

                    20 or above years old    Below 20 years old
Cholesterol level   3    2    1               3    2    1          Total
Teaching (1)        1    3    6               1    0    0          11
Non-teaching (0)    2    3    2               1    0    1          9
Total               3    6    8               2    0    1          20
NULL HYPOTHESIS: DECISION ERRORS

Decision   Ho is true      Ho is false
Reject     Type I error    No error
Accept     No error        Type II error
Test H0:
Rule #1: If sig < 0.05, reject Ho.
Rule #2: If sig = 0.05, reject Ho.
Rule #3: If sig > 0.05, accept Ho.

* If alpha is not mentioned, use α = 0.05.
How to Input Data in SPSS
1. Open SPSS: File → New → Data.
2. Click Variable View and enter the variables: Degree, Age, Cholesterol.
3. Click Data View and enter the data.
4. Click Analyze → Descriptive Statistics → Crosstabs.
5. Drag DEGREE to Rows and AGE to Columns.
6. Click Statistics and check Phi and Cramer’s V.
7. Click Cells, check Observed and Expected, then click Continue and OK to see the result.
Result (Symmetric Measures):

                                   Value    Approx. Sig.
Nominal by Nominal   Phi           0.183    .413
                     Cramer’s V    0.183    .413
N of valid cases     20

Therefore: Ho is accepted.
• Degree has nothing to do with cholesterol.
• Age has nothing to do with cholesterol.
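For comparison (an alternative to the SPSS steps, assuming SciPy is available), a Python sketch that reproduces the Degree × Age crosstab statistics; the 2×2 counts come from collapsing the frequency table above across cholesterol levels:

import math
from scipy.stats import chi2_contingency

# Degree (rows: teaching = 1, non-teaching = 0) x Age (columns: 20+ = 1, below 20 = 0)
table = [[10, 1],   # teaching:     10 respondents aged 20+, 1 below 20
         [7, 2]]    # non-teaching:  7 respondents aged 20+, 2 below 20

chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi = math.sqrt(chi2 / 20)  # phi = sqrt(chi-square / N) for a 2x2 table
print(f"chi-square = {chi2:.3f}, Approx. Sig. = {p:.3f}, Phi = {phi:.3f}")
# Should agree with the symmetric measures above (Phi = 0.183, sig = .413): Ho is accepted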
Process of computing chi-square through SPSS:
Analyze → Descriptive Statistics → Crosstabs → Statistics → Phi & Cramer’s V → Cells → Observed, Expected → Continue → OK
Analysis of Variance (ANOVA)

One-Way Analysis of Variance (ANOVA)
• Is used to determine whether there are any statistically significant differences between the means of three or more independent (unrelated) groups.
• A tool applied if the measure is about one interval- or ratio-scaled variable compared across three or more categories of a nominal variable.
• * Note: ANOVA will not be processed if there are only two categories.
One-Way Analysis of Variance (ANOVA)

Steps:
Ho: There are no significant differences among the three (3) group means.
H1: There are significant differences among the three (3) group means.

*** TM = teaching method

Test #   TM (1)   TM (2)   TM (3)
1        90       100      98
2        90       90       80
3        100      90       90
4        85       90       75
5        100      100      100
6        92       90       90
7        91       100      90
8        80       90       80
9        100      70       80
10       100      90       80
Compute the Mean (x̄) in SPSS
1. Open SPSS: File → New → Data.
2. Click Variable View and enter TM1, TM2, TM3.
3. Click Data View and enter the data.
4. Click Analyze → Descriptive Statistics → Descriptives.
5. Drag TM1, TM2, TM3 to Variables.
6. Click Options, check Mean, then click Continue and OK. The computed means appear in the output.
Compute the One-Way ANOVA (using the same data as in the table above).
1. Open SPSS: File → New → Data. In Variable View, type ScoreTM and Group.
2. Click Data View and enter the data: rows 1–10 are the TM1 scores (Group = 1), rows 11–20 the TM2 scores (Group = 2), and rows 21–30 the TM3 scores (Group = 3).
3. Click Analyze → Compare Means → One-Way ANOVA.
4. Drag ScoreTM to Dependent List and Group to Factor.
5. Click Post Hoc and check Scheffe (*** take note: significance level at 0.05), then click Continue.
6. Click Options, check Fixed and random effects, then click Continue and OK to see the result.

One-Way ANOVA Result:
Interpretation:
• Stat test: One-Way ANOVA
• α = 0.05
• Tail = 2-tailed
• Result: Computed F-value = 1.70
• Sig = p-value = 0.202

Decision rules:
• Rule #1: If sig < 0.05, reject Ho.
• Rule #2: If sig = 0.05, reject Ho.
• Rule #3: If sig > 0.05, accept Ho.
Decision:
• Accept Ho

• Conclusion:
• There are no significant differences among the three (3) group means.

• Implications:
• The three methods of teaching are equally effective.
One-Way ANOVA (SPSS path): Analyze → Compare Means → One-Way ANOVA → Score to Dependent List, Group to Factor → Post Hoc → Scheffe → Continue → Options → Fixed & random effects → Continue → OK
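The same one-way ANOVA can be reproduced in Python (an aside, assuming SciPy is available; the scores are the TM1–TM3 data above):

from scipy.stats import f_oneway

tm1 = [90, 90, 100, 85, 100, 92, 91, 80, 100, 100]
tm2 = [100, 90, 90, 90, 100, 90, 100, 90, 70, 90]
tm3 = [98, 80, 90, 75, 100, 90, 90, 80, 80, 80]

f_value, p_value = f_oneway(tm1, tm2, tm3)
print(f"F = {f_value:.2f}, Sig. = {p_value:.3f}")
# Should agree with the SPSS result above (F = 1.70, sig = 0.202 > 0.05): accept Ho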
Two-Way Analysis of Variance (ANOVA)
• A two-way ANOVA tests the effect of two independent variables on a dependent variable.
• A two-way ANOVA analyzes the effect of the independent variables on the expected outcome along with their relationship to the outcome itself.
• A two-way ANOVA reveals the results of two independent variables on a dependent variable.
• ANOVA test results can then be used in an F-test on the significance of the regression formula overall.
Two-Way ANOVA

Variables:
* Religion: Catholic (1), Non-Catholic (2)
* Sex: Male (1), Female (2)
* Performance: 5 – Excellent, 4 – Very Satisfactory, 3 – Satisfactory, 2 – Fair, 1 – Poor

Teacher Performance Ratings:

Catholic (1)              Non-Catholic (2)
Male (1)    Female (2)    Male (1)    Female (2)
5           5             5           4
4           5             5           5
3           5             4           3
3           4             3           3
5           4             3           2
4           3             4           4
3           3             5           3
Hypotheses:
Religion: Ho1: x̄C = x̄NC
Sex: Ho2: x̄M = x̄F
Interaction: Ho3: There is no significant interaction between religion and sex.
1. Open SPSS: File → New → Data. In Variable View, enter Religion, Sex, Performance.
2. Click Data View and enter the data:
   Religion: rows 1–14 Catholic (1), rows 15–28 Non-Catholic (2).
   Sex: rows 1–7 Male (1), rows 8–14 Female (2), rows 15–21 Male (1), rows 22–28 Female (2).
   Performance: the ratings shown in the table above.
3. Click Analyze → General Linear Model → Univariate.
4. Drag Performance to Dependent Variable, and Sex and Religion to Fixed Factors.
5. Click Model, choose Full Factorial and Type III sums of squares, then click Continue.
6. Click Contrasts, then Continue.
7. Drag Religion and Sex to Post Hoc Tests, check Scheffe, click Continue, then click Options.
8. Drag all variables to Display Means, check Descriptive Statistics, then click Continue.
9. Click Bootstrap if desired, then Continue and OK to see the result.
Two-Way ANOVA Result
Two-Way ANOVA (SPSS path): Analyze → General Linear Model → Univariate → Performance to Dependent Variable; Religion and Sex to Fixed Factors → Model → Full Factorial, Type III → Contrasts → Post Hoc → enter Religion & Sex → Scheffe → Continue → Options → Display Means: Religion, Sex, Overall → Descriptive Statistics → Continue → Bootstrap → Continue → OK
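A hedged Python equivalent of the full-factorial model (assuming pandas and statsmodels are installed; the ratings are the 28 observations tabulated above):

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "religion": ["Catholic"] * 14 + ["NonCatholic"] * 14,
    "sex": (["Male"] * 7 + ["Female"] * 7) * 2,
    "performance": [5, 4, 3, 3, 5, 4, 3,    # Catholic, Male
                    5, 5, 5, 4, 4, 3, 3,    # Catholic, Female
                    5, 5, 4, 3, 3, 4, 5,    # Non-Catholic, Male
                    4, 5, 3, 3, 2, 4, 3],   # Non-Catholic, Female
})

# Full factorial model: main effects of religion and sex plus their interaction
model = ols("performance ~ C(religion) * C(sex)", data=data).fit()
print(anova_lm(model, typ=3))  # Type III sums of squares, as in the SPSS steps above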
t-Test
• The t-test tells you how significant the differences between groups are; in other words, it lets you know if those differences (measured in means/averages) could have happened by chance.

• It can be applied to two parametric values (interval or ratio scaled) at a time (paired samples t-test), or to the measurement of two categories of a nominal variable (known as the grouping variable) against an interval-scaled (metric) variable. The latter is called the independent samples t-test.
Three main types of t-test:
• t-test for independent samples – compares the means of two groups.
• t-test for uncorrelated samples.
• t-test for heterogeneous samples – tests the mean of a single group against a known mean.

Opposites:
• Dependent
• Correlated/paired t-test – compares means from the same group at different times (say, one year apart).
• Homogeneous paired t-test
• Test at α = 0.05 if the group’s pretest mean score differs significantly from the group’s posttest mean score.

Step 1: State the hypotheses.
Ho: x̄Pretest = x̄Posttest
H1: x̄Pretest ≠ x̄Posttest
Step 2: Choose the statistical test.
* t-test for correlated samples
Step 3: Compute.
Step 4: Decide.
Step 5: Draw a conclusion.

*** If uncorrelated, the numbers of members in the groups are not equal.
*** If correlated, the numbers of members in the groups are equal.
t-Test for Correlated Samples

Pretest   Post-test
90        88
80        90
90        91
84        90
82        95
80        92
80        90
94        80
90        80
1. Open SPSS: File → New → Data. In Variable View, enter Pretest and Posttest.
2. Click Data View and enter the data.
3. Click Analyze → Compare Means → Paired-Samples T Test.
4. Drag Pretest & Posttest to Paired Variables, then click Options.
5. Click Continue, then OK to see the result.
Result:
• Sig. (correlation) = 0.028
• Sig. (2-tailed) = 0.404; sig > 0.05
• t = -0.881
• Therefore: Accept Ho

Conclusion:
* There is no significant difference between the group’s pretest mean scores and the group’s post-test mean scores.
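The same paired-samples test in Python (an aside, assuming SciPy is available; the scores come from the pretest/post-test table above):

from scipy.stats import ttest_rel

pretest  = [90, 80, 90, 84, 82, 80, 80, 94, 90]
posttest = [88, 90, 91, 90, 95, 92, 90, 80, 80]

t_value, p_value = ttest_rel(pretest, posttest)
print(f"t = {t_value:.3f}, Sig. (2-tailed) = {p_value:.3f}")
# Should agree with the SPSS result above (t = -0.881, sig = 0.404 > 0.05): accept Ho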
t-Test for Uncorrelated Samples
• If uncorrelated, there are unequal numbers of members in the groups.

Boys (x)   Sex (Boys)   Girls (x)   Sex (Girls)
90         1            90          0
90         1            95          0
85         1            95          0
85         1            90          0
85         1            90          0
90         1            95          0
91         1            95          0
86         1            96          0
                        97          0
                        100         0

x̄B = 87.75              x̄G = 94.30
1. Open SPSS: File → New → Data. In Variable View, enter x and Sex.
2. Click Data View and enter the data.
3. Click Analyze → Compare Means → Independent-Samples T Test.
4. Drag x to Test Variable(s) and Sex to Grouping Variable.
5. Click Define Groups and enter Group 1 = 1, Group 2 = 0, then click Continue.
6. Click Options, then Continue and OK.
RESULT:
Levene’s test: F-value = 0.013, Sig. = 0.911 (equal variances assumed)
Sig. (2-tailed) = 0.000
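And the independent-samples test in Python (an aside, assuming SciPy is available; the scores come from the boys/girls table above):

from scipy.stats import ttest_ind

boys  = [90, 90, 85, 85, 85, 90, 91, 86]
girls = [90, 95, 95, 90, 90, 95, 95, 96, 97, 100]

# Levene's sig. = 0.911 above, so equal variances are assumed
t_value, p_value = ttest_ind(boys, girls, equal_var=True)
print(f"t = {t_value:.3f}, Sig. (2-tailed) = {p_value:.4f}")
# Sig. < 0.05: reject Ho; the girls' mean (94.30) differs significantly from the boys' (87.75)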
Now, it’s your turn!!!
“Education is not the learning of facts, but the training of the mind to think!” – Albert Einstein
