Foolproof Guide to Statistics using IBM SPSS
2nd Edition
Dr Adelma Hills

Copyright © Pearson Australia (a division of Pearson Australia Group Pty Ltd) 2014 – 9781442549821 - Hills/Foolproof Guide to Statistics using IBM SPSS 2e



1 INTRODUCTION

About this book


The intention of this guide is to provide a foundation from which readers can develop their understanding of statistics, and their skill in using SPSS. It is not intended as an exhaustive text on either. The domain of statistics, in its full complexity, is vast. Many modern textbooks aim to be relatively comprehensive in their coverage, but in doing so they can overwhelm the limited information processing capabilities of the poor human user—particularly the new and/or infrequent user. SPSS, like so many software programs, is similarly vast and can do pretty much “anything” statistical, but to make full use of the program you need an extensive knowledge of its capabilities and operations. Many of these capabilities go beyond the user-friendly, point-and-click procedures and require the use of syntax (instructions written in the code language of SPSS). A basic introduction to syntax is provided at various points in this guide.

The aim, then, is to provide the essential information students and researchers need in order to gain a fundamental grasp of a range of statistical techniques, and essential practical skills in using SPSS for Windows. This guide should always be accompanied by more comprehensive texts that can be consulted for more detailed or specialised information. For example, texts1 by Field (2005), Green and Salkind (2005), Howell (2002), Pallant (2007), and Tabachnick and Fidell (2007) are all useful as alternative or more comprehensive sources of information.

Important . . .
Throughout the guide, procedures are described for getting things done in SPSS, but very often there are alternative ways of achieving the same outcome. You might find the alternatives easier or preferable to the one provided. Therefore, as you acquire confidence, it is essential that you explore all the menus and option buttons, and consult the extensive Help (F1 key) provided in SPSS, so you can develop a high level of expertise, solve any problems you encounter, and learn about advanced topics. SPSS for Windows is actually quite easy to use, but real skill only comes with practice. One learns by doing! Once you can “think” like SPSS you can wean yourself away from guidebooks and work out how to do things yourself.

It is also important to be aware that computers and their associated software programs are marvellous tools when they serve us, but they have a sinister potential when we serve them. Increasingly, software programs automate many operations, but this increases the helpless dependence of the human user. SPSS has innumerable default options, and this book relies on many of them, but as you become more experienced you should always investigate default options and decide whether or not they are appropriate. Users of SPSS can easily generate misleading results—or even utter garbage—if they mindlessly point and click without really knowing what they are doing.

Changing technology
Constant technological change is now a fact of life, and one of the most important skills to develop is the ability to adapt to change. Once you understand how computers and software programs work it is usually relatively easy to figure out changes and adapt to them yourself. SPSS upgrades frequently, so be prepared for changes that may not yet have been reflected in textbooks or guidebooks. Similarly, be prepared for differences if you are using an older version of SPSS, or even if you are using a student version. If you encounter any such changes from the material in this text, try to work them out yourself, rather than depending on an explanation from someone else.

1 Textbooks are updated all the time, so always check to see if a later edition is available.

Computers and computing


To use SPSS for Windows effectively you need to make sure you are familiar with the computer system you are using. Make this your first task. This includes knowing where to find and start programs such as SPSS, knowing where to find files on the hard disk or network, and knowing how to log on to networks if you are using one (e.g., in university computer labs).

One issue in the use of computers is document location. Windows programs control the storing of files
(documents) in various folders, but the user may have no idea where they are on the hard disk. Some
users, however, prefer to control this process themselves, keeping all their data in a personal folder
separate from programs—usually the My Documents folder. The Windows Explorer program allows
you to navigate through the folder structure, create your own folders, and exchange files among them,
as well as easily copy them to disks, CDs or DVDs, or USB flash drives. (Note that Windows Explorer is
a Windows accessory program for managing files; it is not to be confused with the internet browser
Internet Explorer.)

Users of Macintosh computers need to have a similar understanding of their system.

VERY important: Backing up your work


Always be sure to back up any work you want to keep, as a safeguard against system failure. There are automated ways of doing this, but in the interests of maintaining some sense of control, users like me prefer to do it manually—it’s easy once you develop the habit. At the end of each session (or during the session if you are particularly obsessive) you can use Windows Explorer to quickly copy files to other storage devices (e.g., USB flash drives).

Basic statistical concepts


The remainder of this book provides the fundamentals you need in order to use the various statistical techniques, and SPSS to perform the analyses. Early chapters in each section deal with the basic techniques and include manual calculations to enhance your understanding of what the analysis is actually doing. Later chapters deal with advanced techniques.

Before proceeding, make sure you are familiar with the following statistical concepts. These must be
part of the general knowledge of any graduate in disciplines that involve quantitative research.

Variables
A variable is any attribute that can vary (e.g., age, gender, religion, self-esteem, air temperature, circle diameter, etc.), as opposed to a constant that always has the same value (e.g., pi, the ratio of the circumference of a circle to its diameter). Constants are relatively rare in the behavioural sciences, although a variable can be held constant in a research study by only considering one of its values (e.g., women in the case of gender). Measures of variables are the data of quantitative research.

Population
In research, interest is in understanding the nature of variables in a large group of people—the population of interest (e.g., the ages of first year students at a particular university, or in a particular state, or in the whole country). A summary measure of some population variable (e.g., average age) is known as a parameter.


Samples and random samples


Usually it is not practicable to measure every member of the population in order to determine a population parameter. Instead we take a sample or subset of people from the population. A summary measure from a sample is known as a statistic, and it is used as an estimator of the population parameter.

For the sample statistic to be a good estimator of the population parameter the sample must be representative of the population, and not biased in any way. Would a sample of students from an evening class be likely to give an unbiased estimate of the average age of first year psychology students at a university? I hope you can see that the answer is “no”—why?—because it is likely to be biased toward older people who are working during the day.

The best way to achieve a representative sample is via random sampling. A random sample is one in
which every member of the population has an equal chance of being selected in the sample.
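Random sampling is easy to illustrate. The sketch below, in Python rather than SPSS, draws a random sample from a hypothetical population of ages and uses the sample mean (a statistic) to estimate the population mean (a parameter). All values are invented for illustration only.

```python
import random
import statistics

# Hypothetical population: ages of 1,000 first-year students (invented values)
random.seed(42)
population = [random.randint(17, 45) for _ in range(1000)]

# A random sample: every member of the population has an equal chance of selection
sample = random.sample(population, 50)

# The sample statistic (mean age) estimates the population parameter
print(round(statistics.mean(population), 1))
print(round(statistics.mean(sample), 1))
```

With a genuinely random sample, the sample mean will usually fall close to the population mean, which is exactly why a representative sample makes a good estimator.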

Descriptive and inferential statistics


There are two goals of analysis. The first is simply to describe the sample (or sometimes the population), using techniques that organise and summarise the data. This is the province of descriptive statistics.

A second goal is to use sample statistics to make inferences about population parameters, or to use relationships found in a sample to make inferences about the relationships that exist in the population. This is the province of inferential statistics.

Hypothesis
The basis of research is a clear and concise research question. Reference to extant theory then leads, wherever possible, to expression of the research question in terms of a hypothesis. This is a tentative statement about the relationship between two or more variables. The aim of the research is then to test the research hypothesis, by finding evidence that either supports or refutes it. (Note that a hypothesis is never “proved”1; it can only be supported.) Variables can be positively related (as one increases the other increases), negatively related (as one increases the other decreases), or unrelated (changes in one are not associated with any predictable change in the other). In experimental research causal relationships are investigated by looking for differences between groups treated differently. For example, if a negative causal relationship is hypothesised between test difficulty and performance, this can be tested by giving one group of research participants a difficult test and another group an easy test. If the group with the difficult test performs worse, the hypothesis is supported.
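The logic of that comparison can be sketched in a few lines of Python with invented scores (hypothetical data; a real analysis would use a t test, as covered later, to rule out chance).

```python
from statistics import mean

# Invented test scores (out of 100) for two groups of participants
difficult_test = [55, 48, 62, 51, 58, 49, 53, 60]
easy_test = [78, 85, 72, 80, 88, 75, 82, 79]

# The hypothesis (difficulty lowers performance) predicts a lower mean
# for the difficult-test group
supported = mean(difficult_test) < mean(easy_test)
print(mean(difficult_test), mean(easy_test), supported)
```

Here the difficult-test group averages lower, so the directional prediction holds in the sample; inferential statistics are then needed to decide whether the difference is bigger than chance alone would produce.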

The main types of variables in research


In research we hypothesise that one variable (e.g., X) will affect or be related to another (Y), that is, we
hypothesise that X will cause changes in Y, or we hypothesise that X will simply be related to Y. To test
this hypothesis we manipulate (in experimental research) or select (in correlational research) the
levels of X and measure participants' responses on Y as a function of the different levels of X.

The variable that is manipulated or selected in the research design is known as the independent
variable. It is usually abbreviated as the IV.

1 We are very tiny creatures inhabiting a small planet, orbiting a nondescript star, one of billions and billions and billions in the known universe. We only have access to a part of reality and can’t be absolutely certain of anything. It helps to remember this, and to continually question what we think we know.


The variable that is observed and measured in response to the independent variable is known as the dependent variable; that is, its values depend on the levels of the independent variable. It is abbreviated as the DV.

Note that in correlational research, which tests relationships rather than causation, it is actually more correct to refer to the IV as the predictor, and the DV as the criterion.

Control variables are those that are held constant in a study.

Internal and external validity


Be very careful with the term validity; it is an important term that has somewhat different meanings in different contexts—be sure you always fully understand it. In the context of research, there are two main kinds of validity.

Internal validity refers to the accuracy of any conclusions we draw about the causal relationship between the IV and DV. It is threatened to the extent that the observed relationship can be attributed to other things. Consider the test difficulty example used previously. If all the participants working on the difficult test did so in a hot, confined room, while those working on the easy test were in a comfortable room, then performance differences might have been caused by the environmental conditions and not by the difficulty of the test.

External validity refers to the extent to which research conclusions can be generalised beyond the specific research context, that is, to different people, places, and times. For example, can research findings in Australia be generalised to Inuit people living in Alaska, or research findings with 20-year-olds be generalised to 80-year-olds? The answers depend on the circumstances of the particular research study.

Quantification of variables: Levels of measurement


In order to conduct quantitative research we need to be able to quantify or measure variables. There is, of course, another type of research, namely qualitative research, which uses thematic analysis of qualitative data (e.g., interviews, texts, videos of behaviour), not statistical analysis of quantitative data; however, it is not the subject of this book. In quantifying variables there are four types of measurement scale that you must understand:

Nominal or categorical scales involve using numbers simply as codes for some attribute. For instance, we might code different religions as:
1 = Protestant   2 = Catholic   3 = Baptist   4 = Other

In nominal scales there is no mathematical relationship between the numbers (i.e., 1 is in no way bigger, smaller, better, more than, or less than 2). Nominal variables are often referred to as categorical variables, or even as qualitative variables.
Nominal variables that can have only two values (e.g., gender) are known as dichotomous variables.

Ordinal scales involve numbers that do indicate some mathematical rank order, but the intervals
between ranks are not necessarily equal. For example, when asked to list six life goals in order of their
importance to her, a participant produces this list:
1 Material Wealth
2 Pleasure
3 Security
4 Freedom
5 A World of Peace
6 Salvation


This is a rank order. It tells us, for instance, that material wealth is more important to the person than
pleasure, but we cannot say that the intervals between goals are the same. The first four could be of
near equal importance, while the fifth and sixth ones may be much further removed.

Interval scales also assign numbers to a characteristic, but in this case there is a strong mathematical relationship between the numbers, as each interval is equal. The classic example of an interval scale is the temperature scale (Fahrenheit or Centigrade).

The thing to note about an interval scale is that it does not have a true zero; 0°C or 0°F does not indicate a complete absence of heat. In the absence of a true zero it is NOT the case, for example, that 4 can be regarded as twice as much as 2. Forty degrees centigrade is not twice as hot as 20°C, although the difference between 40°C and 50°C is the same as the difference between 20°C and 30°C.

Ratio scales involve an even stronger mathematical relationship; not only are the intervals between
the numbers or scale values equal, but there is a true zero so that 4 is twice as much as 2. Distance is
a ratio scale as 0 indicates no distance, and 10 km is twice as far as 5 km.
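The interval/ratio distinction can be checked with a little arithmetic. Temperature in kelvin does have a true zero, so converting the Centigrade values shows why 40°C is nowhere near “twice as hot” as 20°C (a small Python sketch, purely for illustration):

```python
# Centigrade is an interval scale: no true zero, so ratios of values are meaningless.
# Converting to kelvin (a ratio scale with a true zero) shows why.
def to_kelvin(celsius):
    return celsius + 273.15

ratio_celsius = 40 / 20                       # looks like "twice as hot"
ratio_kelvin = to_kelvin(40) / to_kelvin(20)  # actual ratio on a true-zero scale

print(ratio_celsius)  # 2.0
print(round(ratio_kelvin, 3))
```

The kelvin ratio comes out only a little above 1, not 2, because the Centigrade zero point is arbitrary; on a genuine ratio scale such as distance, 10 km really is twice 5 km.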

Discrete and continuous variables


Nominal variables are sometimes referred to as discrete variables, because they have fixed values,
and it is not possible to have smaller values between them (e.g., you cannot really be .5 of a man).

Interval or ratio variables, on the other hand, are often continuous variables, because they can be
broken up into any number of finer divisions. Variables such as distance, for example, are continuous.
Depending on how finely we measure the distance between two points there can be anything up to an
infinite number of measures (e.g., 15 km, 14.91 km, 14.907 km, 14.9068 km etc.). However, interval or
ratio variables can also be discrete (e.g., number of children in the family, where it is not possible for a
family to have 3.65 or 3.642 children).

Levels of measurement in psychology


In psychology we often use rating scales similar to the following, where participants tick a box to
indicate where they stand:
Political advertising should not be permitted on television in the last week before an election.

☐ Strongly Disagree   ☐ Disagree   ☐ Slightly Disagree   ☐ Neutral   ☐ Slightly Agree   ☐ Agree   ☐ Strongly Agree

We then code these scales numerically (e.g., 1 to 7, or ‐3 to 3), and strictly speaking they are ordinal
scales. Nonetheless, it is common in psychology to regard them as if they are interval scales. It is
assumed that the psychological intervals are about the same.
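If such ratings are treated as interval data, recoding from 1–7 to −3–3 is just a shift that centres the scale on Neutral. A small sketch with invented responses (in SPSS you would use a Recode or Compute command for this):

```python
# Recoding a 7-point rating scale from 1..7 to -3..3 (centred on Neutral = 4)
# Hypothetical responses, invented for illustration
responses = [1, 4, 7, 5, 2]

# Subtracting the midpoint maps 1 -> -3, 4 -> 0, 7 -> 3
centred = [r - 4 for r in responses]

print(centred)  # [-3, 0, 3, 1, -2]
```

Either coding carries the same information; the −3 to 3 form simply makes the neutral point zero, which some researchers find easier to interpret.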

Be aware this is quite a controversial issue, and different researchers and textbooks can take different
points of view.

Introduction to research design and statistics


Despite the dread induced in students by research and, even worse, statistics, quantitative research
design and analysis (statistics) are really based on very simple ideas. If you understand the simple
foundations you will not be so overwhelmed by the details, although you must be sure to retain
concepts, as later ones build on earlier ones, and you will soon be overwhelmed if you forget what is
learned at each step.

In essence, quantitative research is about commonsense pattern recognition. One of the simplest forms of pattern recognition involves identifying what things occur together, so you can predict one on the basis of the other. This in essence is what correlation is about, but you need to be very careful with the interpretation. Just because things occur together does not necessarily mean that one causes the other. Perhaps the most fundamental issue of all is that of causation.

Usually, what we most want to identify are cause and effect relationships. The commonsense way to do this is to experiment. If you think X might cause Y, then manipulate X and see if Y changes accordingly, and do this repeatedly so you can rule out chance as an explanation.

Research designs
These commonsense approaches to understanding how the world works are the basis of the two main
types of research design, namely nonexperimental and experimental.

Nonexperimental or correlational designs assess relationships among variables.

Experimental designs attempt to assess cause and effect. We measure group differences on the effect
variable (the dependent variable, DV) for groups of research participants treated differently on the
hypothesised causal variable (the independent variable, IV). The defining feature of experimental
designs is that the researcher actively manipulates the IV. True experiments use randomly formed
groups of participants; quasi experiments use intact (i.e., preexisting) groups.

Two types of experimental designs are between­groups (or between­subjects) where different
groups of participants receive the different manipulations of the IV, and repeated measures (or
within­subjects) where only one group of participants receives the different manipulations of the IV
on different occasions.

When conducting and writing up research, never get confused between experimental and correlational research designs. In particular, never use causal terminology such as “effect of” or “influence of” or “impact of” etc. when you only have a correlational design. Instead, be sure to only talk about relationships. You may wish to argue on logical grounds for a causal relationship, but you must not assume it from the design. Do not ever forget this!

Analysis techniques
Even though research designs and analysis techniques are intimately related, they are not one and the same. Analysis of Variance (abbreviated as ANOVA), for instance, is the analysis technique for analysing group differences, and is the main analysis for experimental research designs when the groups receive different levels of a manipulated variable (e.g., different amounts of a drug, different room temperatures). However, it can also be used for nonexperimental correlational research designs when the groups comprise a naturally occurring variable such as gender.

Always be alert to the natural groups nonexperimental design depicted below. Arguably, the term natural groups (Shaughnessy, Zechmeister, & Zechmeister, 2000, pp. 235–238) best describes this design, but different texts often use different terms. Very often, too, the design is included under quasi experiments, which is not appropriate—it is not an experiment because there is no manipulation. Furthermore, the design is fundamentally correlational in nature; it tests relationship, not causation.

The different types of research design and analysis are illustrated below.

Copyright © Pearson Australia (a division of Pearson Australia Group Pty Ltd) 2014 – 9781442549821 - Hills/Foolproof Guide to Statistics using IBM SPSS 2e
1 Introduction 7

Types of quantitative research design and statistical analysis

Experimental designs:

True experiment: Manipulate the IV: administer different levels of the IV to different groups of participants. Groups are formed by random assignment (in between-groups designs). Other relevant variables are controlled (i.e., held constant). Causal inference can be made: we can infer from the design that the IV causes the DV.

Quasi experiment: Manipulate the IV: administer different levels of the IV to different groups of participants. Intact groups are used, often in natural settings (e.g., school classes, shifts of workers). Control of other variables is problematic. Causal inference is problematic: it depends on explicitly ruling out threats to causal inference (i.e., to internal validity).

Nonexperimental (correlational) designs:

Natural groups: The “IV” occurs naturally and is not manipulated: levels of the IV are selected and represented by discrete groups (e.g., gender, age-group, high versus low self-esteem groups, religion).

Correlational: No manipulation or selection into groups: simply measure all variables of interest and assess relationships among these naturally occurring variables.

For both nonexperimental designs, causation cannot be inferred from the design. Control of other variables is problematic, and other variables may be the cause of observed relationships; hence causal inference is much more difficult and cannot be made from the design itself. The researcher may be able to argue causation on logical grounds alone.

Significance of group differences: Analysis of Variance (ANOVA) “family” of statistics (and their nonparametric alternatives):

Between-groups designs: 2 groups: independent t test (Mann-Whitney); 3+ groups: one-way ANOVA (Kruskal-Wallis); 2+ IVs: factorial ANOVA.

Repeated measures designs: 2 levels: dependent t test (Wilcoxon); 3+ levels: one-way repeated ANOVA (Friedman); 2+ IVs: factorial repeated ANOVA.

Other ANOVA designs: mixed ANOVA (between- and repeated IVs); ANCOVA (Analysis of Covariance); MANOVA (Multivariate ANOVA, multiple DVs).

Frequency (nominal) data: one-way and two-way chi-square (nonparametric); multiway frequency analysis (nonparametric).

Relationships among variables: correlational “family” of statistics (and their nonparametric alternatives):

Two variables: Pearson bivariate correlation and regression; Spearman nonparametric alternative.

Relationship between several “IVs” (predictors) and one “DV” (outcome or criterion): multiple regression; logistic regression (nonparametric).

Structure (relationship pattern) in a set of variables: principal components or factor analysis; cluster analysis (nonparametric); multidimensional scaling (nonparametric).

Frequency (nominal) data: one-way and two-way chi-square; multiway frequency analysis.

2 ENTERING AND SAVING DATA IN SPSS


Getting things done in SPSS is usually a matter of pointing and clicking (or double-clicking) on the appropriate icon, then pointing and clicking to specify what you want to do. Many operations can also be performed by selecting from pull-down menus. There are often several different ways of doing the same thing: which technique you use is a matter of personal preference, and you may find and prefer alternative ways of doing many of the things covered in this text.

The main SPSS window appears as follows. Data are entered in this Data View window, but not before
you have defined your variables by clicking on the Variable View tab at the bottom to go to the Vari­
able View window. Note that the colour scheme and style may vary, depending on the Windows
software version you are using and the style preferences that have been selected.

[Screenshot: the main SPSS Data View window, with the following annotations.]

Menu bar: Click on words to pull down menus for performing tasks.

Icon bar: Click on icons to perform tasks. Point to each one and it will tell you what it does.

Window control buttons (top right): one minimises the window to just a title button at the bottom of the screen (then click the title button to restore); one reduces the window size, or maximises it to full screen again; one exits from SPSS.

The active tab at the bottom shows you are in Data View, where the data are entered and viewed.

To start, before we enter any data, we need to tell SPSS all about the variables. Click on the Variable View tab to go into Variable View, where variables can be named and formatted, as explained on following pages.

Saving the contents of SPSS windows


The contents of any window can be saved to a file on the hard disk, floppy disk, CD or DVD, USB flash
drive, or network etc. SPSS works with three types of windows (and files):

Data windows are where variables are defined and data are entered (as explained on the following
pages). Data window contents are saved in data documents or files, to which SPSS gives the extension
sav.


Output windows are where SPSS places the output or results of statistical analyses. The contents of
output windows are saved in SPSS viewer documents, to which it gives the extension spo.

Syntax windows are where SPSS commands or control lines can be typed, then run to perform any
statistical analysis. We only tend to use syntax for more complex analyses. The contents of syntax
windows are saved in text documents, to which SPSS gives the extension sps.

Whenever you are working with any of these windows and you want to keep the contents, remember to ask SPSS to save the document. The contents of any window can be saved as follows:

1 How to: Save window contents to documents (files)

[For example, to save the contents of a Data window, select Menu options by clicking as follows:]
File
Save As...
This calls up the Save Data As dialog box, as demonstrated below.
When you have specified the location and file name click on the Save button.

Important: If successfully saved, the window heading will change to the new file name. If this does not happen you
have done something wrong, and the file has not been saved!

Saving subsequent versions of a file:


Once a file has been named, subsequent versions can be saved by choosing the File option from the Menu bar, then simply clicking on the Save option, OR by just clicking on the save icon (second from left on the icon bar). Note that subsequent versions replace the earlier version.

Click the right arrow, then select the drive or folder where you want to save the file (e.g., to save to a USB flash drive, select the relevant drive).

In the space for File name: type the name you would like to give the file (e.g., DemoSPSS). SPSS automatically adds the .sav extension (indicating a data file).


Defining variables
Let us move on now to learn about how to enter data into SPSS. Suppose that data on age and gender
are to be recorded for a class of 10 students. We must first define these variables in Variable View.

The Variable View window is shown here, with column reference numbers added for the important
columns to cross reference to the explanations below.

[Screenshot: the Variable View window, with column reference numbers 1–6 marking the Name, Type, Decimals, Label, Values, and Measure columns, the Value Labels dialog box open (5a), and a reminder: don’t forget to click Add.]

If you click on a cell in any column except Name and Label, an arrow selector (as shown in column 3 for Decimals) or option indicator (as shown in column 5 for Values) appears to the right of the cell.
Arrow selectors: If you click on the arrows you can change the settings. In this example, none of the
variables uses any decimal places, so each has been changed to 0, as shown here for age. (To save time
you can then copy and paste from this one to the others.)

Option indicators: If you click on cells in any of the columns Type, Values, or Missing an option
indicator appears to the right of the cell, as shown here in the Values column for gender (this is ex‐
plained below).

1. Name column: Here, you type the short name of each variable. For this example, ID has been
typed in the first row of the Name column, gender in the second row, and age in the third row.
Once you name the variables in Variable View, SPSS heads the columns in Data View with
these names.


Although SPSS already has case numbers down the left hand side of the data window, it is a very good idea to include an ID variable as a unique identifier for each participant, because SPSS often rearranges the data during analysis. When this happens the numbers down the side no longer correspond to the correct participant number.

2. Type column: This can be left as the default Numeric for variables that are numbers. If, however, you have alphabetic variables (e.g., if you wish to literally type in “male” and “female” for gender) you would need to activate the Numeric dialogue box, and select the String option, which means alphabetic characters.

3. Decimals column: This is where you specify the number of decimal places you want for each
variable, by clicking on the arrow indicators accordingly.

4. Label column (optional): Where a longer, more explanatory, variable name is desired in the
printed output, type such names in the Label column, as shown here for ID.

5. Values column: Here is an example of an option indicator. When it is clicked upon, a dialog box
appears for Value Labels (as shown). In order to perform statistical analyses on a nominal (or
categorical) variable such as gender it must be assigned numerical codes (e.g., 0=Male,
1=Female). This column enables you to specify how such variables are coded.

To tell SPSS how gender is coded in this example, type in the Value Labels dialogue box as fol‐
lows (Note: these steps have already been completed in the example for the 0 value):

[In the space for] Value [type] 0 [note this is zero, not letter O]
[Hit the Tab key, then in the space for] Value Label [type] Male
[Click on the button ] Add
[Cursor moves to the space for] Value [type] 1
[Hit the Tab key, then in the space for] Value Label [type] Female
[Click on the button ] Add
[Click on the button ] OK
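Outside SPSS, the same coding scheme amounts to a lookup table from numeric codes to labels. The following Python sketch is purely illustrative (SPSS stores and applies value labels for you); the dictionary and variable names are hypothetical:

```python
# Hypothetical equivalent of the Value Labels dialogue: a lookup table
# mapping each numeric code to its label (0 = Male, 1 = Female).
value_labels = {0: "Male", 1: "Female"}

# Decoding a column of raw gender codes into readable labels:
gender_codes = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0]
gender_labels = [value_labels[code] for code in gender_codes]
print(gender_labels[:3])  # ['Male', 'Female', 'Female']
```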

6. Measure column: There are three options here for specifying the level of measurement of the
variables, namely, Scale (interval or continuous), Ordinal, and Nominal (categorical). In this
example ID and gender have been changed to nominal.

Of the remaining columns, Width only needs to be changed if your data are likely to need more than
the default of 8 characters (as may be the case if you are entering text for a string variable). Missing is
used if you want to assign codes for different types of missing data, while Columns and Align control
the physical width and alignment respectively of the data columns in the Data View window.


Entering data
Now that the variables have been defined in Variable View you can click on the Data View tab at the
bottom left to move back to the data window where the actual data (listed here) can be entered. Below
are versions of the Data View window after these data have been entered.

ID Gender Age
1 0 18
2 1 19
3 1 18
4 0 18
5 1 32
6 1 19
7 0 40
8 1 18
9 1 19
10 0 20

If you click on the View menu option, then select Value Labels in the menu that appears (so that it
is ticked), value labels, rather than numbers, are displayed for nominal variables, as shown here
for gender.

The yellow highlighted cell with the rectangle around it is the active cell where you type the data.
As you type data in the active cell, it will also appear in the cell editor space above the data grid.
When you hit the keyboard Enter or arrow keys, it will be stored in the active cell only. You use
the keyboard arrow keys to move to other cells.

Remember: After all the data have been defined and entered, save the data
file as demonstrated at the beginning of this chapter.


Using SPSS to perform a simple frequency analysis


2 How to: Perform a frequency analysis

[Select Menu options by clicking with mouse on:]


Analyze
Descriptive Statistics ►
Frequencies…
[In dialog box select required variables by clicking on variable name, then on arrow button]
age →
gender →
[Click on] OK

To deselect variables:
Click on them, then send them back to the list of variables by clicking on the arrow that will now be pointing in the
reverse (send back) direction (←).
To remove all selections and return everything to its default setting, click on the Reset button.
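The counts SPSS produces in a frequency analysis can be checked by hand. As an illustrative sketch (Python, not part of SPSS), tallying the gender and age data from this chapter with the standard library:

```python
from collections import Counter

# The ten cases entered earlier (gender coded 0 = Male, 1 = Female).
gender = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0]
age = [18, 19, 18, 18, 32, 19, 40, 18, 19, 20]

gender_freq = Counter(gender)   # Counter({1: 6, 0: 4})
age_freq = Counter(age)

percent_female = 100 * gender_freq[1] / len(gender)
print(percent_female)   # 60.0
print(age_freq[18])     # 4 (students aged 18)
```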

Examining the output


SPSS moves to the Output window where the results of this analysis (the output) appear.

Click on the up or down pointing triangles in the scrollbar at the right of the window to scroll up or
down.

This is the Navigation panel: click on any section of the output to go to it directly.

What is this output telling you? What percentage of students is female? How many students are
aged 18 years?

To exit SPSS completely double-click on the “kill” button, the very top right-hand button. Then, if
you do not want to save any of this to disk, click on No for any dialog boxes that appear. If you do
want to save any window contents, refer back to the instructions at the beginning of the chapter.


3 THE PROBLEM OF VARIABILITY: DESCRIPTIVE STATISTICS FOR CENTRAL TENDENCY AND VARIANCE

If human beings and other living organisms were like pieces of metal, copper for instance, behavioural
science research would be relatively easy. Any piece of copper behaves like any other piece of copper
of equivalent purity under the same conditions. Therefore, if you want to establish the effect on copper
of changes in temperature any individual sample will suffice.

This is not the case in the biological and behavioural sciences where there exists the problem of vari‐
ability! Living organisms are complex entities whose functioning (both physical and behavioural) is
determined by innumerable variables. Thus, for example, the effect of a given drug on one individual
may be quite different to the effect of the same drug on another individual; it may have a large effect on
one individual and little or no effect on another. Moreover, if it is a drug hypothesised to affect blood
pressure, for instance, it will be difficult to compare effects across individuals because those individu‐
als are likely to have a range of different blood pressure levels to begin with for a range of different
reasons.

The way this problem has traditionally been dealt with in disciplines such as psychology is to average
across individuals and determine how much of the entire variance in a sample of individual scores can
be explained or accounted for by a particular variable (the drug in this example). Inferential statistics
are then used to generalise from the sample to the population. Of course, what this means is that we
have an estimate of the average effect in the whole population, but we are not able to specify the effect
for any given individual, especially when the proportion of variance explained is quite small—as it very
often is in the social and behavioural sciences.

The problem of individual variability underpins the whole of behavioural research and the whole of
statistics. In fact, statistics is nothing more than an elaborate device for trying to deal with variability.

Summary statistics
In averaging across individuals, two summary descriptive statistics are needed to summarise or de‐
scribe a sample. The first is a measure of the central tendency (i.e., the average score in the sample),
and the second is a measure of the average variability or variance about that average score. Both
measures are necessary to adequately describe the sample. To know, for example, that the average
income in a country is $150,000 is not enough; you would not rush over there until you knew the
variability. You might be less enthusiastic if you discovered that there is a large variance and that
incomes actually range from $5,000 (for most of the population) to $900,000 (for the ruling elite). If,
on the other hand, the variance was small, with incomes ranging from $145,000 to $155,000, you may
well be applying to emigrate.

The remainder of this chapter reviews measures of central tendency and variance, but first a word
about statistical notation.

Statistical notation
In order to understand statistics you need to be familiar with statistical notation, that is, the set of
abbreviations used to represent statistical concepts. Among the main symbols are:
N refers to the total number of scores (participants) in a sample


n refers to the number of scores in a subset of a sample

X refers to any score

Σ means the sum of (it is the Greek capital letter, sigma)

X̄ (“X‐bar”) is the symbol used in calculations for the mean; however, when writing re‐
search reports use M for the mean and SD for the standard deviation.

Note that most symbols are in italics.

Measures of central tendency and variability


The mode
The simplest measure of central tendency is the mode, which is just the most frequently occurring
score or scores.

To find the mode, list the scores in order, then locate the score or scores with the highest fre‐
quency.

The mode can be found for nominal level data and above.

There is no specific measure of variability to accompany the mode, although with ordinal, inter‐
val or ratio level data, the range should be indicated (see below).

2 3 4 4 5 6 6 6 7 8 Mode = 6
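A minimal sketch of finding the mode programmatically (illustrative Python, not SPSS), using the scores from the example above. A frequency count also handles the case of more than one mode:

```python
from collections import Counter

scores = [2, 3, 4, 4, 5, 6, 6, 6, 7, 8]
freq = Counter(scores)

# Keep every score tied for the highest frequency, since a
# distribution can have more than one mode.
top = max(freq.values())
modes = [score for score, count in freq.items() if count == top]
print(modes)  # [6]
```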

The median, the range, and the semi-interquartile range


The median is the measure of central tendency that corresponds to the 50th percentile, that is,
the score that separates the bottom 50% of scores from the top 50% of scores.

To find the median, first list the scores in order, then locate the middle score.

The median is used with ordinal level data and above.

The abbreviation used for the median in written reports is Mdn.

2 3 4 4 5 6 6 6 7 8 Mdn = 5.5

1 1 2 2 3 4 6 6 6 7 8 Mdn = 4

3 4 5 8 8 9 Mdn = 5 + ( (8 ‐ 5) / 2 )
= 5+3/2
= 5 + 1.5
= 6.5
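As a quick check on the three examples (illustrative Python, not SPSS), the standard library's median function averages the two middle scores when the number of scores is even, exactly as in the worked examples:

```python
import statistics

# The three example score sets from the text.
m1 = statistics.median([2, 3, 4, 4, 5, 6, 6, 6, 7, 8])
m2 = statistics.median([1, 1, 2, 2, 3, 4, 6, 6, 6, 7, 8])
m3 = statistics.median([3, 4, 5, 8, 8, 9])
print(m1, m2, m3)  # 5.5 4 6.5
```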


The range is the measure of variability that accompanies the median. It is the distance between
the highest and lowest scores. Scores must be listed in order to determine the range.

2 3 4 4 5 6 6 6 7 8 Range = 8 − 2 = 6

The semi-interquartile range is half the distance between the 25th and 75th percentiles (i.e.,
half the distance between the bottom 25% of scores and the top 25% of scores; or the 50% of
scores that fall either side of the median).

The range and semi‐interquartile range are used with ordinal level data and above.

The median is the most appropriate measure of central tendency for ordinal data. The range
can be used as the measure of variability, although the presence of extreme scores or outliers
can make it misleading. Hence, some texts recommend the use of the semi-interquartile range
as the best indicator of variability in ordinal level data (accompanying the median).
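A sketch of the range and semi-interquartile range for the example scores (illustrative Python, not SPSS). Note that the exact quartile values depend on the interpolation method used, so different packages can report slightly different quartiles; `method="inclusive"` here interpolates between data points:

```python
import statistics

scores = [2, 3, 4, 4, 5, 6, 6, 6, 7, 8]

score_range = max(scores) - min(scores)  # 8 - 2 = 6

# First and third quartiles (25th and 75th percentiles).
q1, _, q3 = statistics.quantiles(scores, n=4, method="inclusive")
siqr = (q3 - q1) / 2
print(score_range, siqr)
```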

The mean, the variance, and the standard deviation


The mean is the most frequently used measure of central tendency. It is the arithmetic "average".

Note that the statistical symbol for the mean used in calculations is X̄.

The symbol for the mean used in written reports is M.

The mean is used with interval level data and above.

To demonstrate how the mean is calculated, let us use the same set of figures as for the mode
and median, but note there is no need to arrange them in order.

Participant Number (ID)   X
1                         6
2                         4
3                         3
4                         2
5                         6
6                         8
7                         7
8                         5
9                         6
10                        4
N = 10                    ΣX = 51

Using the formula for the mean, we calculate it as follows:

X̄ = ΣX / N
  = 51 / 10
  = 5.1

For this set of data note that the mean is 5.1, the median is 5.5, and the mode is 6.


This discrepancy in the three measures of central tendency is due to the negative skew1 in the
scores. In a perfectly symmetrical distribution the mode, median, and mean will be the same, but
in skewed distributions they are spread apart.
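As a check on these figures (illustrative Python, not SPSS), computing all three measures on the same ten scores shows mean < median < mode, the ordering expected with a negative skew:

```python
import statistics
from collections import Counter

scores = [6, 4, 3, 2, 6, 8, 7, 5, 6, 4]

mean = statistics.mean(scores)               # 5.1
median = statistics.median(scores)           # 5.5
mode = Counter(scores).most_common(1)[0][0]  # 6

# mean < median < mode is consistent with the negative skew noted above.
print(mean, median, mode)
```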

The standard deviation approximates the average amount by which scores deviate from the mean,
and the variance is the standard deviation squared.

For data that are interval level or above, the mean is the most appropriate measure of central
tendency, and the standard deviation is the most appropriate measure of variability.
The statistical symbol for standard deviation used in most calculations is s.
The symbol for standard deviation used in written reports is SD.
The statistical symbol for variance used in most calculations is s².
There are two ways to calculate the variance and standard deviation. The most useful way for
conceptual understanding is to use the deviation score formula, as follows (note it requires
the mean to be calculated first):

ID      X         x = X − X̄          x² = (X − X̄)²
                  (Deviation score)   (Squared deviation score)
1       7         1.2                 1.44
2       3         −2.8                7.84
3       9         3.2                 10.24
4       4         −1.8                3.24
5       6         0.2                 0.04
N = 5   ΣX = 29   Σx = 0.0            Σx² = 22.80

The sum of the deviation scores (Σx) always equals 0. The sum of the squared deviation
scores (Σx²) is called the SUM OF SQUARES (SS).

X̄ = ΣX / N
  = 29 / 5
  = 5.8

s² (variance) = Σx² / (N − 1)
              = 22.80 / 4
              = 5.7

s (SD) = √s²
       = √5.7
       = 2.3875
       ≈ 2.39

1 Skewness is explained in the next chapter.
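The deviation score formula can be traced step by step in code (an illustrative Python sketch, not SPSS, using the five scores from the example above):

```python
# The five scores from the deviation score example.
scores = [7, 3, 9, 4, 6]
n = len(scores)

mean = sum(scores) / n                            # X-bar = 5.8
deviations = [x - mean for x in scores]           # sum to (almost exactly) 0
sum_of_squares = sum(d ** 2 for d in deviations)  # SS = 22.80

variance = sum_of_squares / (n - 1)               # s squared = 5.7
sd = variance ** 0.5                              # s = 2.3875...
print(round(variance, 2), round(sd, 2))  # 5.7 2.39
```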


The most efficient formula for calculation purposes is the raw score formula, as follows:

ID      X         X²
1       7         49
2       3         9
3       9         81
4       4         16
5       6         36
N = 5   ΣX = 29   ΣX² = 191

Variance calculation:

s² = (ΣX² − (ΣX)² / N) / (N − 1)
   = (191 − 29² / 5) / (5 − 1)
   = (191 − 841 / 5) / 4
   = (191 − 168.2) / 4
   = 22.8 / 4
   = 5.7

Standard deviation calculation (square root of variance):

s (SD) = √s²
       = √5.7
       = 2.3875
       ≈ 2.39

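The raw score formula gives the same result without computing deviation scores first (illustrative Python sketch, not SPSS, same five scores):

```python
# Raw score formula: variance without computing deviations from the mean.
scores = [7, 3, 9, 4, 6]
n = len(scores)

sum_x = sum(scores)                    # sum of X = 29
sum_x_sq = sum(x * x for x in scores)  # sum of X squared = 191

variance = (sum_x_sq - sum_x ** 2 / n) / (n - 1)
sd = variance ** 0.5
print(round(variance, 1), round(sd, 2))  # 5.7 2.39
```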