Quantitative Data Analysis
Quantitative data analysis is a systematic process of both collecting and evaluating measurable and verifiable data. It involves a statistical mechanism for assessing or analyzing quantitative data (Creswell, 2007). A quantitative researcher's main purpose is to quantify a hypothetical situation. The analysis is usually carried out by scholars who are well equipped with the techniques of quantitative analysis, either manually or with the assistance of computers (Cowles, 2005). The quantitative approach to a phenomenon mostly entails two important advantages. First, it enables a researcher to systematically categorize, summarize, and illustrate observations. These mechanisms and techniques are called descriptive statistics. Second, it makes it possible for a researcher to understand and draw conclusions about a phenomenon from a sample studied in an identified, narrow group. The sample is always taken systematically from a much larger group in such a way that the derived conclusions may be generalized to the whole population (Cowles, 2005). In more precise terms, this process enables a researcher to draw conclusions through inductive reasoning. These processes, techniques, findings, and conclusions are known as inferential statistics. The two types of data analysis, descriptive statistics and inferential statistics (James and Simister, 2020), are discussed here:
Descriptive statistics, a type of quantitative data analysis, is used to describe or present data in an easily accessible, quantitative form (James and Simister, 2020). In other words, this analytical process helps researchers illustrate and summarize observations. Researchers choose this statistical technique because it helps them establish a rationale grounded in quantification. Statistical measurement is a preliminary phase of quantitative research, as it converts observations into numerical figures. In broader terms (Peller, 1967), measurement is the assignment of numbers to items or events according to rules. Peller (1967) has systematically categorized measurement scales into four types. The first type is nominal scales, which help organize observations into restricted groups. The second type is ordinal scales, which are employed to arrange research variables in accordance with their respective position in a group. The third type is interval scales; through balanced (equal) intervals, these scales not only measure but also signify the degree of a quality that a variable, an individual, or an object possesses. The fourth type is ratio scales, which also make use of balanced intervals but register measurements from a well-defined zero point. Furthermore, researchers quantify their observations in an organized way through the use of frequency distributions or graphs.
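As a minimal sketch of the four scale types described above, consider the following Python illustration; all variable names and sample values are hypothetical:

```python
# Nominal scale: labels that sort observations into restricted groups;
# only equality comparisons are meaningful.
first_language = ["Sindhi", "Urdu", "Sindhi", "English"]

# Ordinal scale: ranks that order variables by their position in a group;
# order is meaningful, but distances between ranks are not.
class_rank = [1, 2, 3, 4]

# Interval scale: balanced (equal) intervals but no true zero point,
# so differences are meaningful while ratios are not (e.g. test scores).
test_score = [65, 70, 75, 80]

# Ratio scale: equal intervals measured from a well-defined zero point,
# so ratios are meaningful (e.g. reading time in seconds).
reading_time_sec = [30.0, 45.0, 60.0, 90.0]

# Only equality makes sense for nominal data:
print(first_language[0] == first_language[2])     # True

# Ratios are meaningful only on a ratio scale:
print(reading_time_sec[3] / reading_time_sec[0])  # 3.0 -- three times longer
```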
Scholars who carry out quantitative research usually collect a large amount of data for the purpose of statistical investigation. Before conducting the statistical analysis, researchers always arrange the data into well-organized categories. They do so either through frequency distributions or graphs.
In a frequency distribution, researchers logically arrange the measurements from the highest to the lowest. As an initial step, the researcher lists every score value in a column, with the highest value at the top and the lowest value at the bottom. The column also includes all the intermediate values, even those with a frequency of zero; otherwise, the frequency distribution would appear much more compressed than it in fact is (Fallon, 2016). Thus, organizing the data in a frequency distribution aids the calculations for statistical analysis.
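A brief sketch of this tabulation in Python might look as follows; the scores are hypothetical:

```python
from collections import Counter

# Hypothetical test scores collected by a researcher.
scores = [85, 82, 85, 78, 90, 82, 85, 78, 74, 90]

counts = Counter(scores)

# List every value from the highest to the lowest, including the
# intermediate values that have a frequency of zero; omitting them
# would make the distribution look more compressed than it is.
print("Score  Frequency")
for value in range(max(scores), min(scores) - 1, -1):
    print(f"{value:5d}  {counts.get(value, 0):9d}")
```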
A graph is a diagram that displays data. Graphs in quantitative research usually show the relationship between two or more quantities, measurements, or indicative numbers (Creswell, 2005). Organizing the collected research data into graphs is very helpful for researchers. There are many kinds of graphs, but researchers mostly employ the frequency polygon and the histogram. The early stages of making a frequency polygon and a histogram are identical, and they are discussed briefly here:
1. Lay out the score values along the horizontal axis, starting with the lowest value on the left and ending with the highest value on the right. Sufficient space must be left to include values at both ends of the distribution.
2. Mark the frequencies of the score values along the vertical axis.
3. Above the midpoint of each score value, place a mark at the height corresponding to its frequency.
In this way, the researcher can easily create both the frequency polygon and the histogram.
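To make these steps concrete, here is a minimal Python sketch that draws both plots from the same frequency counts; it assumes the matplotlib library is available, and the scores are hypothetical:

```python
import matplotlib.pyplot as plt
from collections import Counter

# Hypothetical scores; both plots share the same early steps:
# score values on the horizontal axis (low on the left, high on
# the right) and frequencies on the vertical axis.
scores = [3, 4, 4, 5, 5, 5, 6, 6, 7, 8]
counts = Counter(scores)

# Include one empty value at each end so the polygon returns to zero.
values = list(range(min(scores) - 1, max(scores) + 2))
freqs = [counts.get(v, 0) for v in values]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))

# Histogram: one bar per score value.
ax1.bar(values, freqs, width=1.0, edgecolor="black")
ax1.set(title="Histogram", xlabel="Score", ylabel="Frequency")

# Frequency polygon: a point above the midpoint of each score value,
# at a height equal to its frequency, joined by straight lines.
ax2.plot(values, freqs, marker="o")
ax2.set(title="Frequency polygon", xlabel="Score", ylabel="Frequency")

plt.tight_layout()
plt.show()
```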
The mode, median, and mean are measures of central tendency. Each provides a single index that represents the typical score of a whole set of measurements. The mode is a nominal statistical measure: it identifies the value that occurs most frequently. Even so, it is seldom useful in educational research (Kothari, 2004). The median represents an ordinal scale of statistical measurement; it takes into account the rank order of the scores in a distribution rather than the magnitude of the individual scores. Lastly, the mean is a ratio-scale statistic and is the most stable of the three measures. The mean also indicates the central tendency of a distribution.
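A short sketch of the three measures, using Python's standard statistics module on a hypothetical set of measurements:

```python
import statistics

# Hypothetical set of measurements.
scores = [4, 7, 7, 8, 9, 10, 12]

# Mode: the value that occurs most frequently (a nominal-scale measure).
print(statistics.mode(scores))    # 7

# Median: the middle score; it reflects the rank order of the scores,
# not their magnitude (an ordinal-scale measure).
print(statistics.median(scores))  # 8

# Mean: the arithmetic average; the most stable of the three measures.
print(statistics.mean(scores))    # 8.142857...
```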
Inferential statistics is inductive in its approach and technique. It allows researchers to generalize findings drawn from a sample to the whole population. In other words, researchers can generalize their findings about a specific group to the entire population. The generalizations are reliable only if the samples under investigation truly represent the population from which they are taken. For this reason, researchers divide the process of sampling into two types: probability sampling and non-probability sampling (Kothari, 2004).
Probability sampling is a random selection from the population, and it is subdivided into four procedures: simple random sampling, cluster sampling, systematic sampling, and stratified sampling.
In simple random sampling, each element of the population has an equal chance of being selected. In cluster sampling, the researcher randomly chooses groups, or clusters, from a large population and employs them as the sample. Stratified sampling involves selecting a sample from different subgroups, or strata, of the population. Finally, in systematic sampling, the researcher chooses every kth case from the directory of the population.
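The four procedures can be sketched with Python's standard random module; the population of 100 numbered cases, the cluster size, and the strata boundaries are all hypothetical choices made for illustration:

```python
import random

random.seed(42)  # reproducible illustration

# A hypothetical population of 100 numbered cases.
population = list(range(1, 101))

# Simple random sampling: every element has an equal chance of selection.
simple = random.sample(population, k=10)

# Systematic sampling: choose every kth case from the population list,
# starting from a random point within the first interval.
k = 10
start = random.randrange(k)
systematic = population[start::k]

# Cluster sampling: randomly choose whole groups (here, blocks of 10)
# and use every case inside the chosen clusters.
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
chosen = random.sample(clusters, k=2)
cluster_sample = [case for block in chosen for case in block]

# Stratified sampling: draw separately from each subgroup (stratum),
# here the lower and upper halves of the population.
strata = [population[:50], population[50:]]
stratified = [case for stratum in strata
              for case in random.sample(stratum, k=5)]

print(simple, systematic, cluster_sample, stratified, sep="\n")
```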
The indexes usually employed in inferential statistics are the t test and the chi-square test. The t test is very helpful in establishing the statistical significance of the difference between the means of two samples. The t test is categorized into three types. First, the t test for independent groups is used to compare two samples or groups that are drawn independently from a population. Second, the t test for dependent groups is employed when the two samples consist of matched objects or of repeated measurements taken on the same objects. Third, the t test for the Pearson correlation coefficient is employed to test the significance of a correlation (Leavy, 2017).
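A brief sketch of the three variants, assuming the scipy library is available; the two groups of scores are hypothetical:

```python
from scipy import stats

# Hypothetical scores for two groups.
group_a = [72, 75, 78, 80, 83, 85, 88]
group_b = [70, 72, 74, 77, 79, 81, 84]

# 1. t test for independent groups: two samples drawn independently.
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# 2. t test for dependent groups: repeated measurements on the same
#    objects (the two lists must therefore be the same length).
t_rel, p_rel = stats.ttest_rel(group_a, group_b)

# 3. t test for the Pearson correlation: pearsonr reports the
#    coefficient together with its significance (p-value).
r, p_corr = stats.pearsonr(group_a, group_b)

print(f"independent: t={t_ind:.3f}, p={p_ind:.3f}")
print(f"dependent:   t={t_rel:.3f}, p={p_rel:.3f}")
print(f"correlation: r={r:.3f}, p={p_corr:.3f}")
```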
The chi-square test looks for variations between subjects, objects, and events classified into groups of nominal data by comparing the observed frequencies with the frequencies expected under a null hypothesis.
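A minimal sketch, again assuming scipy; the observed counts and the even-split null hypothesis are hypothetical:

```python
from scipy import stats

# Hypothetical nominal data: 90 cases classified into three groups.
observed = [40, 30, 20]

# Under the null hypothesis, the cases fall evenly into the groups.
expected = [30, 30, 30]

# The chi-square statistic compares observed with expected frequencies;
# a small p-value suggests the observed distribution departs from the null.
chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2={chi2:.3f}, p={p:.3f}")
```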
The notions of validity and reliability are very important in the field of quantitative research. These two main concepts concern measurement. For instance, suppose a research scholar wants to study the history of failure among female students; in this case, failure or academic demotion would be the conceptual notion that the researcher studies quantitatively. Moreover, the researcher will have to use a portfolio or a test to derive measurements. In other words, the researcher will have to use an instrument that can measure even the abstract realities associated with a variable. Measurement provides a researcher with numbers that can be used to carry out quantitative, statistical analyses (Kumar, 2011). The important issue that emerges afterwards is how consistent and valid the measurement is. If the aim of a researcher is to measure height, the researcher must avoid measuring weight or width. The researcher will also have to make sure that the measurement does not give inconsistent results with dissimilar values from one occasion to another. This can only be achieved by evaluating the validity and reliability of a research work.
5.2 Validity:
The types of validity are criterion validity, construct validity, and content validity. Construct validity, for instance, measures the internal structure of an instrument and the concepts it is supposed to measure. It relates to the theoretical understanding of the concept being measured and reflects the fact that human concepts have different dimensions.
5.3 Reliability:
After validity, reliability is the second most important dimension that verifies the quality of measuring instruments. It indicates the extent to which measurement is free of error (Muijs, 2010). At the level of measurement, reliability involves three elements. First, the true score is the error-free score that a researcher wants to measure. Second, systematic error is an error that occurs constantly as a researcher moves from one measurement to another. Third, random error, also known as unsystematic error, varies from measurement to measurement and is quite irregular.
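A small simulation can make the three elements concrete; this is a minimal sketch in which the true score, the constant bias, and the spread of the random error are all hypothetical values:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

true_score = 50.0        # the error-free score the researcher wants
systematic_error = 2.0   # a constant bias repeated at every measurement

# Random (unsystematic) error varies irregularly between measurements.
observed = [true_score + systematic_error + random.gauss(0, 3)
            for _ in range(1000)]

# Averaging many measurements cancels the random error but not the
# systematic error: the mean settles near 52, not near the true 50.
print(statistics.mean(observed))
```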
References
https://imotions.com/blog/qualitative-vs-quantitative-research/
content/uploads/2017/01/Quantitative-analysis.pdf
Peller, S. (1967). Quantitative Research in Human Biology and Medicine, 10-19. doi:10.1016/b978-1-4832-3256-0.50007-4