Case Study 143
b. Situation B. A study of 250 patients admitted to a hospital during the past year
revealed that, on the average, the patients lived 20 kilometers from the hospital.
Answer:
Descriptive statistics are brief descriptive coefficients that summarize a given data set,
which can represent either an entire population or a sample of one. They help describe
and understand the features of a specific data set by giving short summaries about the sample and
measures of the data. People use descriptive statistics to distill hard-to-understand
quantitative insights from a large data set into bite-sized descriptions. A student's grade point
average (GPA), for example, provides a good understanding of descriptive statistics. The idea of
a GPA is that it takes data points from a wide range of exams, classes, and grades, and averages
them together to provide a general understanding of a student's overall academic abilities. A
student's personal GPA reflects his or her mean academic performance (Kenton, 2019).
Meanwhile, inferential statistics is a statistical method that deduces the characteristics of
a larger population from a small but representative sample. In other words, it allows the
researcher to make assumptions about a wider group, using a smaller portion of that group as a
guideline. The goal of this tool is to provide measurements that can describe the overall
population of a research project by studying a smaller sample of it. For example, a company called
Pizza Palace Co. is currently performing market research on its customers' behavior when it
comes to eating pizza. The company is trying to understand its customers' favorite flavors in
order to redesign the menu. The company gathered a group of 50 people of different ages and
genders, all residents of neighborhoods adjacent to where the store is located. By applying
inferential statistics tools to the study, the company could understand with a high degree of
confidence which flavors were the favorites of the population it currently serves (Kenton, 2019).
Answer:
A measure of central tendency is a single value that attempts to describe a set of data by
identifying the central position within that set of data. As such, measures of central tendency are
sometimes called measures of central location. They are also classed as summary statistics. The
mean (often called the average) is most likely the measure of central tendency you are most
familiar with, but there are others, such as the median and the mode. The mean, median, and
mode are all valid measures of central tendency, but under different conditions some measures
of central tendency become more appropriate to use than others.
Variability, meanwhile, refers to how spread out a group of data is. In other words, variability
measures how much your scores differ from each other. Variability is also referred to as
dispersion or spread. Data sets with similar values are said to have little variability, while data
sets whose values are spread out have high variability.
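The contrast between central tendency and variability can be sketched with Python's standard `statistics` module; the two sets of exam scores below are made up for illustration.

```python
# Illustrating central tendency and variability with Python's statistics
# module, using two made-up sets of exam scores.
import statistics

scores_tight = [70, 72, 71, 73, 72]   # values close together: low variability
scores_spread = [40, 95, 60, 88, 72]  # values spread out: high variability

print(statistics.mean(scores_tight))    # central tendency: the average
print(statistics.median(scores_tight))  # middle value when sorted
print(statistics.stdev(scores_tight))   # small spread for the tight set
print(statistics.stdev(scores_spread))  # much larger spread
```

Both sets can share a similar center, yet their standard deviations differ sharply, which is exactly what "variability" captures.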
Answer:
Parametric tests are designed for idealized data. In contrast, nonparametric tests are
designed for real data: skewed, lumpy, with a few warts, outliers, and gaps scattered about.
Nonparametric methods are workhorses of modern science and should be part of every
scientist's competence. Beyond that, they are very valuable for building data literacy because
they encourage the student to gain a tangible “feel” for the data they are examining. Carrying out
nonparametric tests may involve reordering the datapoints into ranks, pairing them up across
groups, flipping coins to determine outcomes, or shuffling subjects or samples among different
groups. Parametric tests assume a normal distribution of values, or a “bell-shaped curve.” For
example, height is roughly a normal distribution in that if you were to graph height from a group
of people, one would see a typical bell-shaped curve. This distribution is also called a Gaussian
distribution. Parametric tests are in general more powerful (require a smaller sample size)
compared to nonparametric tests.
Nonparametric tests are used in cases where parametric tests are not appropriate. Most
nonparametric tests use some way of ranking the measurements and testing for weirdness of the
distribution. Typically, a parametric test is preferred because it has better ability to distinguish
between the two arms. In other words, it is better at highlighting the weirdness of the
distribution. Nonparametric tests are about 95% as powerful as parametric tests, yet they are
often necessary. Some common situations for using nonparametric tests are when the
distribution is not normal (the distribution is skewed), the distribution is not known, or the
sample size is too small (<30) to assume a normal distribution. Also, if there are extreme values
or values that are clearly “out of range,” nonparametric tests should be used.
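The ranking idea behind many nonparametric tests can be sketched by computing the Mann-Whitney U statistic by hand; the two groups below are made up, with one deliberate extreme value to show how ranking tames outliers.

```python
# A minimal sketch of the rank-based idea behind nonparametric tests:
# the Mann-Whitney U statistic counts, over all cross-group pairs, how
# often a value from group A exceeds a value from group B (ties count half).

def mann_whitney_u(group_a, group_b):
    """U statistic for group_a versus group_b."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

a = [3, 4, 2, 6]        # small sample, no outliers
b = [9, 7, 5, 10, 200]  # contains an extreme value (200)
print(mann_whitney_u(a, b))
```

Notice that the outlier 200 only counts as "one loss per pair", the same as any other larger value, which is why such tests are robust to values that are clearly out of range.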
3. Define.
Answer:
Answer:
All hypotheses are tested using a four-step process: The first step is for the analyst to
state the two hypotheses so that only one can be right. The next step is to formulate an analysis
plan, which outlines how the data will be evaluated. The third step is to carry out the plan and
physically analyze the sample data. The fourth and final step is to analyze the results and either
reject the null hypothesis, or state that the null hypothesis is plausible, given the data.
A random sample of 100 coin flips is taken, and the null hypothesis is then tested. If it is
found that the 100 coin flips were distributed as 40 heads and 60 tails, the analyst would
conclude that a penny does not have a 50% chance of landing on heads, reject the null
hypothesis, and accept the alternative hypothesis (Majaski, 2020).
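The coin-flip example above can be sketched as an exact two-sided binomial test in Python; the sum-of-unlikelier-outcomes convention used here is one common way to define the two-sided p-value.

```python
# An exact two-sided binomial test of the null hypothesis "P(heads) = 0.5"
# after observing 40 heads in 100 flips.
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k, n, p=0.5):
    """Sum the probabilities of every outcome at least as unlikely as k."""
    observed = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= observed + 1e-12)

p_value = two_sided_p(40, 100)
print(round(p_value, 4))  # compare against the chosen significance level
```

The analyst would compare this p-value against the significance level chosen in the analysis plan before deciding whether to reject the null hypothesis.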
a. Excel
Answer:
These are the significant tests you can perform using Excel Statistical Analysis
(Srivastava, 2018):
1. Descriptive Analysis. It is the most basic set of analyses that can be performed on any data set.
It gives you the general behavior and pattern of the data. It is helpful when you have a set of
data and want a summary of that dataset. It reports the mean, standard error, median, mode,
standard deviation, sample variance, kurtosis, skewness, range, minimum, maximum, sum, and
count.
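Most of the measures this tool reports can be reproduced with Python's standard `statistics` module; the data values below are made up.

```python
# A sketch of the descriptive measures the Excel tool reports,
# computed on a small made-up data set.
import statistics

data = [12, 15, 11, 15, 18, 20, 15, 14]

print("mean:", statistics.mean(data))
print("median:", statistics.median(data))
print("mode:", statistics.mode(data))
print("sample variance:", statistics.variance(data))
print("standard deviation:", statistics.stdev(data))
print("range:", max(data) - min(data))
print("sum:", sum(data), "count:", len(data))
```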
2. ANOVA (Analysis of Variance). It is a data analysis method which shows whether the means
of two or more data sets are significantly different from each other or not. In other words, it
analyzes two or more groups simultaneously and finds out whether any relationship exists
among the groups of the data set.
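At its core, one-way ANOVA computes an F statistic: the ratio of between-group variance to within-group variance. A minimal sketch, with made-up group values:

```python
# Computing the one-way ANOVA F statistic by hand: a large F suggests
# the group means differ more than within-group noise would explain.
import statistics

def one_way_f(groups):
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_values)
    # Between-group sum of squares (one term per group)
    ssb = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (each value versus its own group mean)
    ssw = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ssb / df_between) / (ssw / df_within)

groups = [[5, 6, 7], [8, 9, 10], [14, 15, 16]]
print(one_way_f(groups))
```

A full ANOVA would then compare this F statistic against the F distribution to obtain a p-value, which is what Excel's tool reports.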
3. Moving Average is usually applicable to time-series data such as stock prices, weather
reports, class attendance, etc.
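A simple moving average can be sketched in a few lines; the attendance figures below are made up.

```python
# A simple moving average over made-up daily attendance figures:
# each output value is the mean of `window` consecutive inputs.

def moving_average(values, window):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

attendance = [30, 32, 35, 31, 40, 38, 36]
print(moving_average(attendance, 3))
```

Smoothing this way removes short-term noise so longer-term trends in the series are easier to see.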
4. Rank and Percentile calculates the ranking and percentile of values in the data set. For
example, if you manage a business with several products and want to find out which product
contributes the most revenue, you can use this rank method in Excel.
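The product-revenue example can be sketched as follows; the product names and figures are made up, and the percentile formula shown (rank position rescaled to 0–100) is one common convention.

```python
# Ranking made-up products by revenue and assigning each a percentile
# position, highest revenue first.

def rank_and_percentile(items):
    """Return (name, rank, percentile) sorted from highest to lowest value."""
    ordered = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ordered)
    return [(name, rank, round(100 * (n - rank) / (n - 1)))
            for rank, (name, value) in enumerate(ordered, start=1)]

revenue = {"pizza": 5000, "pasta": 3200, "salad": 1500, "drinks": 2600}
for name, rank, pct in rank_and_percentile(revenue):
    print(rank, name, f"{pct}%")
```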
6. Random Number Generator- Although you can find a simple function to generate a series of
random numbers, this option in data analysis gives you more flexibility and control over the
random number generation process.
7. Sampling is the data analysis tool used for creating samples from a large population. You
can randomly select data from the dataset or select every nth item from the set.
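Both sampling modes, plus the controlled random generation mentioned in item 6, can be sketched with Python's standard `random` module; the population of 100 records is made up.

```python
# Random sampling and periodic (every nth item) sampling from a made-up
# population. Seeding the generator makes the random draw reproducible,
# giving the extra control the text describes.
import random

population = list(range(1, 101))  # a made-up population of 100 records

random.seed(42)                                # fixed seed for reproducibility
random_sample = random.sample(population, 10)  # 10 records without replacement
periodic_sample = population[4::5]             # every 5th item

print(random_sample)
print(periodic_sample)  # [5, 10, 15, ..., 100]
```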
b. Minitab
Answer:
1. Summarize the data- Descriptive statistics summarize and describe the prominent features of
data. Use Display Descriptive Statistics to determine how many book orders were delivered on
time, how many were late, and how many were initially back ordered for each shipping center.
2. Interpret the results- The Session window displays each center’s results separately. Within
each center, you can see the number of back orders, late orders, and on-time orders in the Total
Count column.
3. Compare two or more means- One of the most common methods used in statistical analysis is
hypothesis testing. Minitab offers many hypothesis tests, including t-tests and ANOVA (analysis
of variance). Usually, when you perform a hypothesis test, you assume an initial claim to be true,
and then test this claim using sample data.
4. Interpret the ANOVA graphs- Minitab produced the following graphs: Four-in-one residual
plot, Interval plot, Individual value plot, Boxplot, and Tukey 95% confidence interval plot.
5. Access Key Result- Suppose you want more information about how to interpret a one-way
ANOVA, specifically Tukey’s multiple comparison method. Minitab provides detailed
information about the Session window output and graphs for most statistical commands.
c. SPSS
Answer:
SPSS (Statistical Package for the Social Sciences) is a set of software programs combined
in a single package. The basic application of this program is to analyze scientific data
related to the social sciences. This data can be used for market research, surveys,
data mining, etc. With the help of the obtained statistical information, researchers can easily
understand the demand for a product in the market and can change their strategy accordingly.
Basically, SPSS first stores and organizes the provided data, then compiles the data set to
produce suitable output. SPSS is designed in such a way that it can handle a large set of variable
data formats (Thomes, 2018). SPSS plays a significant role in the following:
1. Data Transformation: This technique is used to convert the format of the data. After changing
the data type, it integrates data of the same type in one place, making it easy to manage.
5. T-tests: It is used to understand the difference between two sample types, and researchers
apply this method to find out the difference in the interests of two kinds of groups. This test can
also show whether the produced output is meaningful or not.
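The idea behind the two-sample t-test can be sketched by computing Welch's t statistic by hand; the two groups of interest scores below are made up, and a full test would also look up a p-value from the t distribution.

```python
# Welch's two-sample t statistic: the difference between group means,
# scaled by the combined standard error of those means.
import statistics

def welch_t(group_a, group_b):
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    return (ma - mb) / (va / len(group_a) + vb / len(group_b)) ** 0.5

group_a = [24, 25, 28, 30, 27]  # e.g. interest scores of group A (made up)
group_b = [18, 20, 19, 22, 21]  # e.g. interest scores of group B (made up)
print(welch_t(group_a, group_b))  # a large |t| suggests a real difference
```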
5. What insights have you gained from your group research project?
Answer:
Numbers, formulas, and even equations automatically sent chills down my spine whenever I
was introduced to them. I have never been fond of solving problems, which is why I took English
as my specialization back in college. Suffice it to say, I disliked every subject except English.
When I first attended my class in Statistics, I was a bit hesitant because numbers and I don't get
along with each other. But as time went by, I developed a sense of appreciation for numbers
and equations. It made me realize that if you just put profound concentration and a bit more
focus into the subject, it isn't as hard as you think it is, plus our Professor really knows
how to discuss the subject with us without overcomplicating things.
Conducting research gives students a way to learn about and explore a topic of
interest. It has become my most exciting and real learning experience. These are the things I
learned from doing my thesis and working at the same time:
I enjoy researching. I already knew that I love reading and writing, but researching is another
thing. It's reading and writing with one purpose only: to have a topic in mind, try to find the
sources, read them, find the bits of information in them, and put it all together into one
paragraph that makes sense. That's researching.
The research world is another world, full of people trying to figure out what's going on in the
world we live in. What any one person can find and research may be little, but it contributes to
the whole book of knowledge we have as a human race.
On a more serious note, I find this research project helpful, especially for our comprehensive
exam (soon). It helped us with the quantitative part of research, which is a bit
complicated and tricky.